text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
#include <iostream> int main() { using namespace std; cout << "This program may reformat your hard disk\n" "and destroy all your data.\n" "Do you wish to continue? <y/n> "; char ch; cin >> ch; if (ch == 'y' || 'Y') cout << "You were warned!\a\a\n"; else if (ch == 'n' || 'N') cout << "A wise choice ... bye\n"; else cout << "That wasn't a y or n, so I guess I'll " "trash your disk anyway.\a\a\a\n"; return 0; } The problem is, no matter what the input will be, the program will always execute the first if-loop. So an input of 'n' or 'a' will result in the output "You were warned!". I have encountered this problem with the or operator in my own programs (On two different computers actually), and often thought I was just misusing it, but somehow I wonder if it's not me, and maybe some kind of bug? I suspect the OR operator since, if I remove it in all of the loops and leave only one char for each loop ('y' and 'N' for instance), the problem will disappear. I am using Code::Blocks 10.05 (rev 6283) with the GNU GCC version 0.99 compiler (Which came with the Code::Blocks bundle). Hope someone can explain what is happening here. Best regards, Boooke
http://www.gamedev.net/topic/620698-logical-or-operator/
CC-MAIN-2014-52
refinedweb
221
82.24
Fast characteristic polynomial algorithm over cyclotomic fields UPDATE: A first real version of this is now done. See the first two patches at The timings for this new code on sage.math are given in the table below under "Sage Multimod". Also a theoretical thing to do is give an a priori provable bound on the number of primes needed to get the charpoly via CRT. Right now it just waits until 3 successive charpolys stabilize. Or rather, adding in 3 primes gives the same charpoly. That will in practice always work, but isn't proved. And, it is probably slower than just starting with the right bound. Craig: I thought for a minute about a bound for the size of coefficients in the charpoly. I came up with a pretty trivial seeming bound (what I would now call a "Hadamard-type" bound), so I looked it up. I found this paper: I don't know anything about the people in the world of computational linear algebra, but this guy has written papers with Clement, and the FFLAS website is maintained by him, so I figured it was probably the right thing to be looking at. So there are two lemmas in the paper that provide a bound for the size of the largest coefficient in the charpoly. I implemented the first of them, and it looked pretty good. Then I implemented the second, and it was wildly worse! It's presented as though it's a sharpening, but I couldn't figure out how, at least in the examples I tried out. I think they may be doing it for a different reason, but in any event, it seemed like we were better off with the first bound. I left the code for both in, with the second in comments, if you want to try it out. I also put in the naive estimates for n = 1,2,3 (the result in the paper is for n > 3): if there's something better, let me know. Maybe we should just directly compute the charpoly in those cases, since it's easy to write down? Using this code with 23 replaced by 23, 67, and 199 I benchmarked both Sage and Magma on various machines. Here Sage is just using PARI. For 23, Magma is 10 times faster. For 67, Magma is 5-10 times faster. For 199, Magma and Sage are almost the same. So probably they are using exactly the same algorithm, but Magma is more optimized for the small cases. ModularSymbols_clear_cache() eps = DirichletGroup(23*3, CyclotomicField(11)).1^2 M = ModularSymbols(eps); M t = M.hecke_matrix(2) time t.charpoly() Magma code to make the matrix and benchmark charpoly: eps := DirichletGroup(67*3, CyclotomicField(11)).2^2; M := ModularSymbols(eps,2,0); t := HeckeOperator(M,2); time f := CharacteristicPolynomial(t); The timings themselves are as follows, all on an Opteron 1.8Ghz. The 331 matrix is 220x220, and our new algorithm is already 448 times faster than Magma in that case. By the way, here are similar timings for computing the square of the matrix T_2, i.e., for matrix multiplication: So Magma's matrix multiply is vastly superior to what is in Sage, but only because Sage's multiply is very stupid and generic. Regarding charpoly above, the answer isn't very big. Each coefficient looks like this in size: -13699065951748748504444162373*zeta_11^9 - 30666629423224882851453031398*zeta_11^8 - 17759717829637529333530323750*zeta_11^7 + 18956836030606298117040309088*zeta_11^6 - 17759717829637529333530323750*zeta_11^5 - 30666629423224882851453031398*zeta_11^4 - 13699065951748748504444162373*zeta_11^3 - 21360349014330060044744277916*zeta_11 - 21360349014330060044744277916 Probably that should take at most 350/5 = 70 primes to get using a multimodular algorithm. In magma, charpoly of a random 132x132 matrix over GF(10007) done 70 times takes 0.63 seconds. The same in Sage takes 2.46 seconds. This time plus the overhead of CRT should be about the time it takes to do the charpoly. (On OS X Sage takes 1.07 seconds and Magma 0.43 seconds.) So I don't see why Sage shouldn't be able to do this charpoly in about 10 seconds. Here is a little demo proof-of-concept illustrating computing the 67 charpoly more quickly by doing it over ZZ and using p-adic reconstruction. I do not think this is the way to go -- I think straight multimodular is -- but this already illustrates that both PARI and Magma are pretty stupid right now. def padic_cyclotomic_reconstruction(K, w, p, prec, phi): n = K.degree() zeta = K.gen() X = [zeta^i for i in range(n)] + [w] A = matrix(ZZ, n + 2, n + 2) for i in range(len(X)): A[i,i] = 1 A[i,n+1] = Mod(phi(X[i]), p^prec).lift() A[n+1, n+1] = p^(prec-1) L = A.LLL() #print L rr = L[1].copy() rr[0] -= rr[-1] alpha = -1/rr[-2] lin_comb = rr[:-2]*alpha return K(lin_comb.list()) def charpoly_cyclo(A, prec=20, pstart=389): # Compute charpoly of A using p-adic cyclotomic algorithm. K = A.base_ring() n = K.number_of_roots_of_unity() p = pstart while p % n != 1: p = next_prime(p) print "p = ", p f = K.defining_polynomial() C = pAdicField(p, prec) R = f.roots(C) phi = K.hom(QQ(R[0][0].lift()), check=False) pp = p^prec B = matrix(QQ, A.nrows(), A.ncols(), [phi(w)%pp for w in A.list()]) time f = B.charpoly() return [padic_cyclotomic_reconstruction(K, w, p, prec, phi) for w in f.list()] /// ModularSymbols_clear_cache() eps = DirichletGroup(67*3, CyclotomicField(11)).1^2 M = ModularSymbols(eps); M t = M.hecke_matrix(2) time f = t.charpoly() /// Time: CPU 8.30 s, Wall: 8.78 s time B = charpoly_cyclo(t, 30, next_prime(10000)) /// p = 10099 Time: CPU 2.26 s, Wall: 2.64 s CPU time: 3.56 s, Wall time: 4.14 s
http://www.sagemath.org:9001/cyclo/charpoly
crawl-003
refinedweb
964
76.11
NOTE: This is a revised version of this blog that reflects much better ways to do some of the tensor algebra in the first example below. Google has recently released some very interesting new tools to the open source community. First came Kubernetes, their container microservice framework, and that was followed by two new programming systems based on dataflow concepts. Dataflow is a very old idea that first appeared in the computer architecture community in the 1970 and 80s. Dataflow was created as an alternative to the classical von-Neumann computer design. It was hoped that it would have a performance advantage because it would exploit much greater levels of parallelism than thought possible with classical computers[1]. In dataflow systems computation can be visualized as a directed graph where the vertices of the graph are operators and data “flows” through the system along the edges of the graph. As soon as data is available on all the input edges of an operator node the operation is carried out and new data is put on the output edges of the node. While only a few actual data flow computers were built, the concept has been fundamental to distributed and parallel computing for a long time. It shows up in applications like complex event processing, stream analytics and systems like the Microsoft AzureML programming model I described earlier. Google’s newly released Cloud Dataflow is a programming system for scalable stream and batch computing. I will return to Cloud Dataflow in another post, but for now, I will focus on the other dataflow system they released. Called TensorFlow, it is a programming system designed to help researchers build deep neural networks[2] and it appears to be unrelated to Cloud Dataflow. All the documentation and downloadable code for TensorFlow are on-line. TensorFlow is also similar to Microsoft Research’s Computational Network Toolkit (CNTK) in several ways that I will describe later. TensorFlow is designed allow the programmer to easily “script” a dataflow computation where the basic units of computing are very large multi-dimensional arrays. The scripts are written in Python or C++ and it works very well with IPython/jupyter notebooks. In the following pages I will give a very light introduction to TensorFlow programming and illustrate it by building a bare-bones k-means clustering algorithm. I will also briefly describe one of their examples of a convolutional neural network. TensorFlow can be installed and run on your laptop, but as we shall show below, it really shines on bigger more powerful hardware. An interesting thing happened in the recent evolution of deep neural networks. The most impressive early work on really large deep neural network was done on large cloud-scale clusters. However, the type of parallelism needed for doing deep network computation was really very large array math, which is really better suited to execution on a GPU. Or a bunch of GPUs on a massive memory machine or a cluster of massive memory machines with a bunch of GPUs each. For example, the Microsoft CNTK has achieved some remarkable results on 8 GPU systems and it will soon be available on the Azure GPU Lab. (I also suspect that supercomputers such as the SDSC Comet with large memory, multi-GPU, multi-core nodes would be ideal.) TensorFlow: a shallow introduction. There is a really nice white paper by the folks at Google Research with far more details about the TensorFlow system architecture than I give here. What follows is a shallow introduction and an experiment. There are two main concepts in TensorFlow. The first is the idea of computing on objects that are very large multi-dimensional arrays called tensors. The second is the fact that the computation you build with tensors are compiled into graphs that are executed in a “dataflow” style. We need to unpack both of these concepts. Tensors Let’s start with tensors. First, these are not your great-great grandfather’s tensors. Those tensors were introduced by Carl Friedrich Gauss for differential geometry where they were needed to provide metrics that could be used to describe things like the curvature of surfaces. Einstein “popularized” tensors in his general theory of relativity. Those tensors have a very formal algebra of covariant and contravariant forms. Fortunately, we don’t have to go there to understand the use of tensors in TensorFlow’s machine learning applications. In fact, if you understand Numpy arrays you are off to a good start and Numpy arrays are just really efficient multidimensional arrays. TensorFlow can be programmed in Python or C++ and I will use the Python binding in the discussion below In TensorFlow tensors are created and stored in container objects that are one of three types: variables, placeholders and constants. Let’s start with constants. As the name implies constant tensors are initialized once and never changed. Here are two different ways to create constant arrays that we will use in an example below. One is a tensor of size Nx2 filled with zeros and the other is a tensor of the same shape filled with the value 1. import numpy as np import tensorflow as tf N = 10000 X = np.zeros(shape=(N,2)) Xv = tf.constant(X, name="points") dones = tf.fill([N,2], np.float64(1.)) We have used a Numpy array of values to create the TensorFlow constant. Notice that we have given our tensor a “name”. Naming tensors is optional but it comes in very handy when debugging. Variables are holders of multidimensional arrays that persist across sessions and may be modified and even saved to disk. The concept of a “session” is important in TensorFlow. You can think of it as a context where actual TensorFlow calculations take place. The first calculation involving a variable is to load it with initial values. For example, let’s create a 2 by 3 tensor that, when initialized contains the constant 1.0 in every element. Then let’s convert that back to a Numpy array and print it. To do that we need a session and we will call the variable initializer in that session. When working in the IPython notebook it is easiest to use an “InteractiveSession” so that we can easily edit and redo computations. sess = tf.InteractiveSession() myarray = tf.Variable(tf.constant(1.0, shape = [2,3])) init =tf.initialize_all_variables() sess.run(init) mynumpy =myarray.eval() print(mynumpy) [[ 1. 1. 1.] [ 1. 1. 1.]] As shown above, the standard way to initialize variables is to initialize them all at once. The process of converting a tensor back to a Numpy array requires evaluating the tensor with the “eval” method. As we shall see this is an important operator that we will describe in more detail below. The final tensor container is a “placeholder”. Creating a placeholder object only requires specifying a type and some information about its shape. We don’t initialize a placeholder like a variable because it’s initial values will be provided later in an eval()-like operation. Here is a placeholder we will use later. x = tf.placeholder(tf.float32, [None, 784], name="x") Notice that in this case the placeholder x is two dimensional but the first dimension is left unbound. This allows us to supply it with a value that is any number of rows of vectors of length 784. As we shall see, this turns out to be very handy for training neural nets. Dataflow The real magic of TensorFlow is in the dataflow execution of computations. A critical idea behind TensorFlow is to keep the slow world of Python away from the fast world of parallel tensor algebra as much as possible. Python is used only to describe the dataflow graph of the computation. TensorFlow has a large library of very useful tensor operators. Let’s describe a computation we will use in the k-means example. Suppose we have 8 points in the plane in a vector called Xv and 4 special points in an array called kPoints. I would like to label each of the 8 points with the index of the special point nearest to it. This should give me 4 clusters of points. I now want to find the centroid of each of these clusters. Assume we have a tensor called “blocked” where row i gives the distance from each of our 8 points to the ith special point. For example, if “blocked” is as shown below then what we are looking for is the index of the smallest element in each column. To find the centroids I use this min index vector to select the elements of Xv in each cluster. We just add them up to and divide by the number of points in that cluster. The TensorFlow code to do this is shown below. TensorFlow has a large and handy library of tensor operators many of which are similar to Numpy counterparts. For example, argmin computes the index of the smallest element in an array given the dimension over which to search. Unsorted_segment_sum will compute a sum using another vector to define the segments. The min index vector nicely partitions the Xv vector into the appropriate “segments” for the unsorted_segment_sum to work. The same unsorted_segment_sum operator can be used to count the number of elements in each cluster by adding up 1s. mins = tf.argmin(blocked, 0, name="mins") sums = tf.unsorted_segment_sum(Xv, mins, 4) totals = tf.unsorted_segment_sum(dones, mins, 4, name="sums") centroids = tf.div(sums, totals, name = "newcents") The key idea is this. When this python code is executed it builds a data flow graph with three inputs (blocked, Xv, dones) and outputs a tensor called centroids as shown below[3].[4] The computation does not start until we attempt to evaluate the result with a call to centroids.eval(). This sort of lazy evaluation means that we can string together as many tensor operators as needed that will all be executed outside the Python REPL. Figure 1. Computational flow graph TensorFlow has a very sophisticated implementation. The computing environment upon which it is running is a collection of computational devices. These devices may be cpu cores, GPUs or other processing engines. The TensorFlow compilation and runtime system maps subgraphs of the flow graph to these devises and manages the communication of tensor values between the devices. The communication may be through shared memory buffers or send-receive message pairs inserted in the graph at strategically located points. Just as the dataflow computer architects learned, dataflow by itself is not always efficient. Sometimes control flow is needed to simplify execution. The TensorFlow system also inserts needed control flow edges as part of its optimizations. As was noted above, this dataflow style of computation is also at the heart of CNTK. Another very important property of these dataflow graphs is that it is relatively easy to automatically compute derivatives by using the chain-rule working backwards through the graph. This makes it possible to automate the construction of gradients that are important for finding optimal parameters for learning algorithms. I won’t get into this here but it is very important. A TensorFlow K-means clustering algorithm With the kernel above we now have the basis for a simple k-means clustering algorithm shown in the code below. Our initial centroid array is a 4 by 2 Numpy array we shall call kPoints. What is missing is the construction of the distance array blocked. Xv holds all the N points as a constant tensor. My original version of this program used python code to construct blocked. But there is an excellent improvement on computation of “mins” published in GitHub by Shawn Simister at. Shawn’s version is better documented and about 20% faster when N is in the millions. Simister’s computation does not need my blocked array and instead expands the Xv and centroids vector and used a reduction to get the distances. Very nice (This is the revision to the blog post that I referred to above.) N = 10000000 k = 4 #X is a numpy array initialized with N (x,y) points in the plane Xv = tf.constant(X, name="points") kPoints = [[2., 1.], [0., 2.], [-1., 0], [0, -1.]] dones = tf.fill([N,2], np.float64(1.)) centroids = tf.Variable(kPoints, name="centroids") oldcents = tf.Variable(kPoints) initvals = tf.constant(kPoints, tf.float64) for i in range(20): oldcents = centroids #This is the Simister mins computation expanded_vectors = tf.expand_dims(Xv, 0) expanded_centroids = tf.expand_dims(centroids, 1) distances = tf.reduce_sum( tf.square( tf.sub(expanded_vectors, expanded_centroids)), 2) mins = tf.argmin(distances, 0) #compute the new centroids as the mean of the points nearest sums = tf.unsorted_segment_sum(Xv, mins, k) totals = tf.unsorted_segment_sum(dones, mins, k, name="sums") centroids = tf.div(sums, totals, name = "newcents") #compute the distance the centroids have moved sense the last iteration dist = centroids-oldcents sqdist = tf.reduce_mean(dist*dist, name="accuracy") print np.sqrt(sqdist.eval()) kPoints = centroids.eval() However, this version is still very inefficient. Notice that we construct a new execution graph for each iteration. A better solution is to pull the graph construction out of the loop and construct it once and reuse it. This is nicely illustrated in Simister’s version of K-means illustrated on his Github site. It is worth asking how fast Tensor flow is compared with a standard Numpy version of the same algorithm. Unfortunately, I do not have a big machine with fancy GPUs, but I do have a virtual machine in the cloud with 16 processors. I wrote a version using Numpy and executed them both from an IPython notebook. The speed-up results are in Figure 2. Comparing these we see that simple Python Numpy is faster than TensorFlow for values of N less than about 20,000. But for very large N we see that TensorFlow can make excellent use of the extra cores available and exploits the parallelism in the tensor operators very well. Figure 2. speed-up of TensorFlow on 16 cores over a simple Numpy implementation. The axis is log10(N) I have placed the source for this notebook in Github. Also an improved version based on Simister’s formulation can be found there. TensorFlow also has a very interesting web tool call TensorBoard that lets you look at various aspects of the logs of your execution. One of the more interesting TensorBoard displays is the dataflow graph for your computation. Figure 3 below shows the display generated for the k-means computational graph. Figure 3. TensorBoard graph displace of the k-means algorithm. As previously mentioned this is far more complex than my diagram in Figure 1. Without magnification this is almost impossible to read. Figure 4 contains a close-up of the top of the graph where the variable newcents is created. Because this is an iteration the dataflow actually contains multiple instances and by clicking on the bubble for the variable it will show you details as shown in the close-up. Figure 4. A close-up of part of the graph showing expanded view of the multiple instances of the computation of the variable newcents. A brief look at a neural network example The k-means example above is really just an exercise in tensor gymnastics. TensorFlow is really about building and training neural networks. The TensorFlow documents have a number of great example, but to give you the basic idea I’ll give you a brief look at their convolutional deep net for the MNIST handwritten digit example. I’ll try to keep this short because I will not be adding much new here. The example is the well-known “hello world” world of machine learning image recognition. The setup is as follows. You have thousands of 28 by 28 black and white images of hand written digits and the job of the system you build is to identify each. The TensorFlow example uses a two-layer convolutional neural net followed by a large, densely connected layer and a readout layer. If you are not familiar with convolutional neural nets, here is the basic idea. Images are strings of bits but they also have a lot of local 2d structure such as edges or holes or other patterns. What we are going to do is look at 5×5 windows to try to “find” these patterns. To do this we will train the system to build a 5×5 template W array (and a scalar offset b) that will reduce each 5×5 window to a point in a new array conv by the formula (the image is padded near the boundary points in the formula above so none of the indices are out of bounds) We next modify the conv array by applying the “ReLU” function max(0,x) to each x in the conv array so it has no negative values. The final step is to do “max pooling”. This step simply computes the maximum value in a 2×2 block and assigns it to a smaller 14×14 array. The most interesting part of the convolutional network is the fact that we do not use one 5×5 template but 32 of them in parallel producing 32 14×14 result “images” as illustrated in Figure 5 below. Figure 5. The first layer convolutional network. When the network is fully trained each of the 32 5×5 templates in W is somehow different and each selects for a different set of features in the original image. One can think of the resulting stack of 32 14×14 arrays (called h_pool1) as a type of transform of the original image much as a Fourier transform can separate a signal in space and time and transform it into frequency space. This is not what is going on here but I find the analogy helpful. We next apply a second convolutional layer to the h_pool1 tensor but this time we apply 64 sets of 5×5 filters to each the 32 h_pool1 layers (and adding up the results) to give us 64 new 14×14 arrays which we reduce with max pooling to 64 7×7 arrays called h_pool2. Rather than provide the whole code, which is in the TensorFlow tutorial, I will skip the training step and show you how you can load the variables from a previous training session and use them to make predictions from the test set. . (The code below is modified versions of the Google code found at GitHub and subject to their Apache copyright License.) Let’s start by creating some placeholders and variables. We start with a few functions to initialize weights. Next we create a placeholder for our image variable x which is assumed to be a list of 28×28=784 floating point vectors. As described above, we don’t know how long this list is in advance. We also define all the weights and biases described above. def weight_variable(shape, names): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial, name=names) def bias_variable(shape, names): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial, name=names) x = tf.placeholder(tf.float32, [None, 784], name="x") sess = tf.InteractiveSession() W_conv1 = weight_variable([5, 5, 1, 32], "wconv") b_conv1 = bias_variable([32], "bconv") W_conv2 = weight_variable([5, 5, 32, 64], "wconv2") b_conv2 = bias_variable([64], "bconv2") W_fc1 = weight_variable([7 * 7 * 64, 1024], "wfc1") b_fc1 = bias_variable([1024], "bfcl") W_fc2 = weight_variable([1024, 10], "wfc2") b_fc2 = bias_variable([10], "bfc2") Next we will do the initialization by loading all the weight and bias variable that were saved in the training step. We have saved these values using TensorFlow’s save state method previously in a temp file. saver = tf.train.Saver() init =tf.initialize_all_variables() sess.run(init) saver.restore(sess, "/tmp/model.ckpt") We can now construct our deep neural network with a few lines of code. We start with two functions to give us a bit of shorthand to define the 2D convolution and max_pooling operators. We first reshape the image vectors into the image array shape that is needed. The construction of the flow graph is now straight forward. The final result is the tensor we have called y_conv. def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #first convolutional layer x_image = tf.reshape(x, [-1,28,28,1]) h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) h_pool1 = max_pool_2x2(h_conv1) #second convolutional layer h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_2x2(h_conv2) #final layer h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) y_conv=tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2) Notice that we have not evaluated the flow graph even though we have the values for all of the weight and bias variables already loaded. We need a value for our placeholder x. Suppose tim is an image from the test set. To see if our trained network can recognize it all we need to do is supply it to the network and evaluate. To supply the value to the network we use the eval() function with a special “feed_dict” dictionary argument. This just lists the name of the placeholder and the value you wish to give it. tim.shape = ((1,28*28)) y = y_conv.eval(feed_dict={x: tim}) label = np.argmax(y) print(label) You can read the TensorFlow tutorial to learn about the network training step. They use a special function that does the stochastic backpropagation that makes it all look very simple. How does this thing work? Developing an intuition for how the convolutional neural network actually works is a real challenge. The fact that it works at all is amazing. The convolutional steps provide averaging to help create a bit of location and scale invariance, but there is more going on here. Note that given a 28*28 image the output of the second convolutional step is 64 7×7 array that represent feature activations generated by the weight templates. It is tempting to look at these as images to see if one can detect anything. Obviously the last fully connected layer can do a great job with these. But it is easy to see what they look like. If we apply h_pool2.eval(feed_dict={x: image}) we can look at the result as a set of 64 images. Figure 6 does just that. I picked two random 9 images, two 0s, two 7s and three 6s. Each column in the figure represents depicts the first 9 of the 64 images generated by each. If you stare at this long enough you can see some patterns there. (Such as the diagonals in the 7s and the cup shape of the 4s.) But very little else. On the other hand, taken together each (complete) column is a sufficient enough signature for the last layer of the network to identify the figure with over 99% accuracy. That is impressive. Figure 6. Images of h_pool2 given various images of handwritten digits. I have put the code to generate these images on Github along with the code to train the network and save the learned weights. In both cases these are ipython notebooks. A much better way to represent the data learned in the convolutional networks is to “de-convolve” the images back to the pixel layers. In Visualizing and Understanding Convolutional Networks by Matthew Zeiler and Rob Fergus do this and the results are fascinating. Final Notes I intended this note to be a gentle introduction to the programming style of TensorFlow and not an introduction to deep neural networks, but I confess I got carried away towards the end. I will follow up this post with another that look at recurrent networks and text processing provided I can think of something original to say that is not already in their tutorial. I should also note that the construction of the deep network is, in some ways, similar to Theano, the other great Python package for building deep networks. I have not looked at how Theano scales or how well it can be used for tasks beyond deep neural networks, so don’t take this blog as an endorsement of TensorFlow over Theano or Torch. On a final note I must say that there is another interesting approach to image recognition problem. Antonio Criminisi has led a team that has developed Deep Neural Decision Forests that combine ideas from decision forests and CNNs. [1] This was before the explosion of massively parallel systems based on microprocessors changed the high-end computing world. [2] If you are new to neural networks and deep learning, you may want to look at Michael Nielsen’s excellent new on-line book. There are many books that introduce deep neural networks, but Nielsen’s mathematical treatment is very clear and approachable. [3] This is not a picture of the actual graph that is generated. We will use another tool to visualize that later in this note. [4] The observant reader will note that we have a possible divide by zero here if one of the special points is a serious outlier. As I said, this is a bare-bones implementation.
https://esciencegroup.com/2016/01/05/an-encounter-with-googles-tensorflow/
CC-MAIN-2017-04
refinedweb
4,226
64
The. nobru @nobru Posts made by nobru - RE: Why does instantiating QPixmap in a second thread block/slow main? The documentation is here Threads and QObjects - RE: Why does instantiating QPixmap in a second thread block/slow main?? - Why_()) - RE: Why does QScrollArea scroll on multi-touch even if nobody told it to? Finally I've solved this: it was gestures! I brutally disabled them with something like import sys from PyQt5 import Qt qapp = Qt.QApplication(sys.argv) picture = Qt.QPixmap('/home/bruno/Pictures/ABCDEFGH_red_on_green.png') qlab = Qt.QLabel() qlab.setPixmap(picture) MyScrollArea = Qt.QScrollArea def event(self, event): print('DEBUG: event type {}'.format(event.type())) if event.type() in (Qt.QEvent.Gesture, Qt.QEvent.GestureOverride): event.accept() return True else: return super(MyScrollArea, self).event(event) MyScrollArea.event = event qsa = MyScrollArea() qsa.setWidget(qlab) qsa.setGeometry(10, 30, 200, 200) qsa.show() sys.exit(qapp.exec_()) - Why does QScrollArea scroll on multi-touch even if nobody told it to? I'm using a multi-touch monitor (EIZO DuraVision FDF2382WT) with Lubuntu 18.04. I have a QScrollArea with a QLabel as widget. The QLabel has a QPixmap. Touching it with two or more fingers scrolls the picture. Why does this happen? According to my understanding of the documentation QTouchEvent handling is opt-in, except for QWindows which always receive them. I've tried setting AA_SynthesizeMouseForUnhandledTouchEvents to False, setting WA_AcceptTouchEvents for the QScrollArea and QLabel to both True and False. I saw no changes. I've tried reimplementing QScrollArea.event, QScrollArea.scroll, QLabel.event and the scroll bar setValue to log the calls, but nothing is printed. import sys from PyQt5 import Qt qapp = Qt.QApplication(sys.argv) picture = Qt.QPixmap('picture.png') qlab = Qt.QLabel() qlab.setPixmap(picture) qsa = Qt.QScrollArea() qsa.setWidget(qlab) qsa.setGeometry(10, 30, 200, 200) qsa.show() sys.exit(qapp.exec_()) I was expecting the picture to stay in place, instead it is dragged. Nothing happens when dragging with the mouse or with just one finger.
https://forum.qt.io/user/nobru
CC-MAIN-2019-39
refinedweb
333
63.05
This first assignment I didnt have any problems with. The following assignment that is linked to this first assignment is the one i could use some help with. Create a small program based on the following (calculate area of a circle and a rectangle): //use integers for the length, width, areaR, and radius, //use double for areaC 1. Declare variables: length, width, areaR, radius, and areaC. 2. Ask the user to enter the length and width of the rectangle and the radius of the circle. 3. Get the length, width, and radius from the user. 4. Calculate the area of the rectangle. 5. Display the length, width, and area of the rectangle onto the screen. 6. Calculate the area of the circle (use 3.14). 7. Display the radius and area of the circle onto the screen. #Code: include<stdio.h>; //defines PI at 3.14 #define PI 3.14 int main(void){ // five declared variables int length = 0; int width = 0; int areaR = 0; int radius = 0; double areaC = 0.0; //Ask the user the length and width of the rectangle printf("Please enter the legnth and width of the rectangle: \n"); //Get the length and width from the user scanf("%d%d", &length, &width); //Calculate the area of the rectangle areaR = length * width; //Print the area of the rectangle printf("Area of the rectangle is: %d\n", areaR); //Ask the useer for the radius of the circle printf("Please enter the radius of the circle: \n"); //Get the radius of the circle scanf("%d", &radius); //Calculate and are of the circle areaC = PI * radius * radius; //Print the area of the circle printf("The area of the circle is: %lf\n", areaC); // %lf is for double return 0; } This is the assignment I need some help with. I have attached what i have written so far for this second assignment and am not sure what steps to take next, or if i have written it properly so far. Break up the program (areas) from last week into one main function and 4 user-defined functions: // gets an integer from the user and returns it // make 3 calls to this function: // get the length of the rectangle from the user and return it to main // get the width of the rectangle from the user and return it to main // get the radius of the circle from the user and return it to main int GetNum(void); // takes two arguments, the length and width of the rectangle and returns the area int CalculateAreaR(int length, int width); // takes one argument, the radius of the circle and returns the area double CalculateAreaC(int radius); Code: #include <stdio.h> #define PI 3.14 int GetNum(void); int CalculateAreaR (int length, int width); double CalculateAreaC(int radius); int main() { int length; int width; double radius; printf( "Please enter the length of the rectangle: \n"); scanf("%d", &length); printf( "Please enter the width of the rectangle: \n"); scanf("%d", &width); printf("The area of the rectangle is %d\n", CalculateAreaR(length, width)); getchar(); printf("PLease enter the radius of the circle: \n"); scanf("%lf", &radius); printf("The area of the circle is %lf\n", CalculateAreaC(radius)); getchar(); } int CalculateAreaR (int length, int width){ return length * width; } double CalculateAreaC(int radius){ return PI * radius * radius; }
http://cboard.cprogramming.com/c-programming/154037-multi-function-assignment-printable-thread.html
CC-MAIN-2014-23
refinedweb
549
59.37
Hello everyone, I’m in this exercise of Numpy. And I was experimenting with the code. I tried this : import numpy as np my_list = [1, 2, 3, 4, 5, 6,9.45,'foo'] my_array = np.array(my_list) print(my_array) If you remove the string, then all the elements become a floating-point number, and with the string, the elements change into a string. But the lesson mentioned that arrays can be of any type. But Numpy makes sure that everything is of the same type. In general, aren’t arrays supposed to be of the same type ?? And I’m also confused between a list and an array.
https://discuss.codecademy.com/t/numpys-array/543724
CC-MAIN-2020-50
refinedweb
108
77.33
Google Genomics is now Cloud Life Sciences. This page explains how to migrate from the v2alpha1 version of Google Genomics to the v2beta version of Cloud Life Sciences. Cloud Life Sciences as a regionalized service Google Genomics was a global service that could not run in specific Google Cloud locations. The Cloud Life Sciences API is a regionalized service that lets you align with locality needs for your data. When you select the location where the Cloud Life Sciences API runs, the metadata for the operation you run is stored in that location. For information on how to make requests to the Cloud Life Sciences API and specify a location, see REST and RPC paths. REST and RPC paths The following changes were made to the REST and RPC paths to the Cloud Life Sciences API: - All paths now use lifesciences.googleapis.cominstead of genomics.googleapis.com. - All paths now require you to specify a Google Cloud location, such as us-central1, when calling the Cloud Life Sciences API. For example: v2alpha1 Google Genomics: GET v2beta Cloud Life Sciences API: GET Batch requests must be made to regional endpoints: Google Cloud CLI changes The Cloud Life Sciences gcloud CLI commands now use gcloud beta lifesciences instead of gcloud alpha genomics. For example: v2alpha1 Google Genomics: gcloud alpha genomics operations describe OPERATION_ID v2beta Cloud Life Sciences API: gcloud beta lifesciences operations describe OPERATION_ID The --cpu and --memory flags have been removed. Instead, use the --machine-type flag. When you choose a machine type, you can specify the amount of memory and the number of CPU cores. IAM changes Cloud Life Sciences uses the following Identity and Access Management (IAM) roles and permissions in the lifesciences namespace instead of the genomics namespace used by Cloud Genomics v2alpha1. In addition to the namespace change, the role roles/lifesciences.workflowsRunner replaced roles/genomics.pipelinesRunner, and the permission lifesciences.workflows.run replaced genomics.pipelines.run. v2alpha1 Google Genomics: v2beta Cloud Life Sciences API: Migrate requests and responses The process of migrating Google Genomics v2alpha1 requests to Cloud Life Sciences API v2beta requests primarily consists of replacing field names and changing field structures within requests and responses. Each of the following sections contains information about a Cloud Life Sciences API object and any differences that the object has between the Google Genomics v2alpha1 and Cloud Life Sciences API v2beta. Action The name field has changed to containerName: Previously, the flags field was an enum that let you specify values in a Flag object. These values are now fields in the Action object. The following sample shows how to migrate your request when setting the ignoreExitStatus flag in Action: Event Machine-readable event details are now stored as a specific message type inside a oneof rather than a protobuf.Any typed message. The underlying message types have not changed. The following sample shows how to migrate your request when configuring a DelayedEvent: Network The name field has changed to network: Resources The Resources object no longer takes a projectId field; the operation detects the Google Cloud project ID from the request URL instead. operations.get and operations.list responses In v2alpha1, the projects.locations.operations.get and projects.locations.operations.list methods returned a response containing a Google Cloud project ID in operation names. In v2beta, the operation name in the response contains a Google Cloud project number instead of a project ID. See Creating and managing projects for information on the differences between a project ID and a project number. When submitting a request to the Cloud Life Sciences API, you can use either the project ID or the project number. However, the response always contains the project number.
https://cloud.google.com/life-sciences/docs/how-tos/migration
CC-MAIN-2022-40
refinedweb
613
53.92
Command Line tool for i18next A simple command line and gulp plugin that lets you parse your code and extract the translations keys in it. The idea is to parse code files to retrieve the translation keys and create a catalog. You can use the command line or run in the background with Gulp while coding. It removes the pain of maintaining your translation catalog. namespace_old.jsonfile. It is useful to avoid losing translations you may want to reuse. _oldfile if the one in the translation file is empty. key_context key_pluraland key_plural_0 npm install i18next-parser -g mocha --reporter nyan test/test.js Any contribution is welcome. Just follow those quick guidelines: git checkout -b feature/my-feature(use feature/ or hotfix/ branch prefixes accordingly) feature/my-featurebranch to this repository master. Do not create both an issue ticket and a Pull Request. Thanks a lot to all the previous contributors. i18next /path/to/file/or/dir [-orapfnl] t data-i18n translation : . en,fr Gulp defines itself as the streaming build system. Put simply, it is like Grunt, but performant and elegant. var i18next = require'i18next-parser';gulptask'i18next'gulpsrc'app/**'pipei18nextlocales: 'en' 'de'functions: '__' '_e'output: '../locales'pipegulpdest'locales';; app/). Defaults to locales ['t'] ['data-i18n'] translation : . ['en','fr'] true .json You can inject the locale tag in either the prefix, suffix or extension using the $LOCALE variable. The way gulp works, it take a src(), applies some transformations to the files matched and then render the transformation using the dest() command to a path of your choice. With i18next-parser, the src() takes the path to the files to parse and the dest() takes the path where you want the catalogs of translation keys. The problem is that the i18next() transform doesn't know about the path you specify in dest(). So it doesn't know where the catalogs are. So it can't merge the result of the parsing with the existing catalogs you may have there. gulp.src('app/**').pipe(i18next()).pipe(gulp.dest('custom/path')); If you consider the code above, any file that match the app/** pattern will have of base set to app/. As per the vinyl-fs documentation (which powers gulp), the base is the folder relative to the cwd and defaults is where the glob begins. Bare with me, the output option isn't defined, it defaults to locales. So the i18next() transform will look for files in the app/locales directory (the base plus the output directory). But in reality they are located in custom/path. So for the i18next-parser to find your catalogs, you need the output option: gulp.src('app/**').pipe(i18next({output: '../custom/path'})).pipe(gulp.dest('custom/path')); The output option is relative to the base. In our case, we have app/ as a base and we want custom/path. So the output option must be ../custom/path. The transform emit a reading event for each file it parses: .pipe( i18next().on('reading', function(path) { }) ) The transform emit a writing event for each file it passes to the stream: .pipe( i18next().on('reading', function(path) { }) ) The transform emit a json_error event if the JSON.parse on json files fail. It is passed the error name (like SyntaxError) and the error message (like Unexpected token }): .pipe( i18next().on('reading', function(name, message) { }) ) i18next-parser does not support variables in translation functions, use a string literal: i18next-parser can't parse keys if you use variables like t(variable), you need to use strings like t('string'). Change the output directory (cli and gulp) Command line (the options are identical): i18next /path/to/file/or/dir -o /output/directory i18next /path/to/file/or/dir:/output/directory Gulp: .pipe(i18next({output: 'translations'})) It will create the file in the specified folder (in case of gulp it doesn't actually create the files until you call dest()): /output/directory/en/translation.json... Change the locales (cli and gulp) Command line: i18next /path/to/file/or/dir -l en,de,sp Gulp: .pipe(i18next({locales: ['en', 'de', 'sp']})) This will create a directory per locale in the output folder: locales/en/...locales/de/...locales/sp/... Change the default namespace (cli and gulp) Command line: i18next /path/to/file/or/dir -n my_default_namespace Gulp: .pipe(i18next({namespace: 'my_default_namespace'})) This will add all the translation from the default namespace in the following file: locales/en/my_default_namespace.json... Change the namespace and key separators (cli and gulp) Command line: i18next /path/to/file/or/dir -s '?' -k '_' Gulp: .pipe(i18next({namespaceSeparator: '?', keySeparator: '_'})) This parse the translation keys as follow: namespace?key_subkeynamespace.json{key: {subkey: ''}}... Change the translation functions (cli and gulp) Command line: i18next /path/to/file/or/dir -f __,_e Gulp: .pipe(i18next({functions: ['__', '_e']})) This will parse any of the following function calls in your code and extract the key: __('key'__ 'key'__("key"__ "key"_e('key'_e 'key'_e("key"_e "key" Note1: we don't match the closing parenthesis as you may want to pass arguments to your translation function. Note2: the parser is smart about escaped single or double quotes you may have in your key. Change the regex (cli and gulp) Command line: i18next /path/to/file/or/dir -p "(.*)" Gulp: .pipe(i18next({parser: '(.*)'})) If you use a custom regex, the functions option will be ignored. You need to write you regex to parse the functions you want parsed. You must pass the regex as a string. That means that you will have to properly escape it. Also, the parser will consider the translation key to be the first truthy match of the regex; it means that you must use non capturing blocks (?:) for anything that is not the translation key. The regex used by default is: [^a-zA-Z0-9_](?:t)(?:\\(|\\s)\\s*(?:(?:\'((?:(?:\\\\\')?[^\']+)+[^\\\\])\')|(?:"((?:(?:\\\\")?[^"]+)+[^\\\\])"))/g Filter files and folders (cli) i18next /path/to/file/or/dir --fileFilter '*.hbs,*.js' --directoryFilter '!.git' In recursive mode, it will parse *.hbs and *.js files and skip .git folder. This options is passed to readdirp. To learn more, read their documentation. Work with Meteor TAP-i18N (gulp) .pipe(i18next({ output: "i18n", locales: ['en', 'de', 'fr', 'es'], functions: ['_'], namespace: 'client', suffix: '.$LOCALE', extension: ".i18n.json", writeOld: false })) This will output your files in the format $LOCALE/client.$LOCALE.i18n.json in a i18n/ directory.
https://www.npmjs.com/package/i18next-parser
CC-MAIN-2016-22
refinedweb
1,062
66.13
Popularity 5.9 Stable Activity 0.0 Stable 81 5 4 Monthly Downloads: 10 Programming language: Elixir License: MIT License river alternatives and similar packages Based on the "HTTP" category. Alternatively, view river alternatives based on common mentions on social networks and blogs. mint9.6 5.8 river VS mintFunctional HTTP client for Elixir with support for HTTP/1 and HTTP/2. gun9.6 4.5 river VS gunHTTP/1.1, HTTP/2 and Websocket client for Erlang/OTP. Crawler9.4 4.5 river VS CrawlerA high performance web crawler / scraper in Elixir. finch9.4 8.8 river VS finchElixir HTTP client, focused on performance Crawly9.1 4.6 river VS CrawlyCrawly, a high-level web crawling & scraping framework for Elixir. scrape8.5 0.0 river VS scrapeScrape any website, article or RSS/Atom Feed with ease! PlugAttack8.3 3.1 river VS PlugAttackA plug building toolkit for blocking and throttling abusive requests Ace8.2 0.0 river VS AceHTTP web server and client, supports and neuron8.2 0.8 river VS neuronA GraphQL client for Elixir webdriver7.3 0.0 river VS webdriverWebDriver client for Elixir. spell6.4 0.0 river VS spellSpell is a Web Application Messaging Protocol (WAMP) client implementation in Elixir. WAMP is an open standard WebSocket subprotocol that provides two application messaging patterns in one unified protocol: Remote Procedure Calls + Publish & Subscribe: web_socket6.3 0.0 river VS web_socketAn exploration into a stand-alone library for Plug applications to easily adopt WebSockets. cauldron6.0 0.0 river VS cauldronI wonder what kind of Elixir is boiling in there. 0.0 river VS proxy with Elixir. wait request with multi port and forward to each URIs bolt4.3 0.0 river VS boltSimple and fast http proxy living in the Erlang VM explode4.2 0.0 river VS explodeAn easy utility for responding with standard HTTP/JSON error payloads in Plug- and Phoenix-based applications sparql_client4.0 2.7 river VS sparql_clientA SPARQL client for Elixir Tube3.2 0.0 river VS TubeWebSocket client library written in pure Elixir uri_template3.2 1.3 river VS uri_templateRFC 6570 compliant URI template processor for Elixir ivar3.1 0.0 river VS ivarIvar is an adapter based HTTP client that provides the ability to build composable HTTP requests. etag_plug3.1 1.0 river VS etag_plugA simple to use shallow ETag plug fuzzyurl3.0 0.0 river VS fuzzyurlAn Elixir library for non-strict parsing, manipulation, and wildcard matching of URLs. mnemonic_slugs2.6 0.0 river VS mnemonic_slugsAn Elixir library for generating memorable slugs. 0.0 river VS prot prot. uri_query2.4 0.0 river VS uri_queryURI encode nested GET parameters and array values in Elixir yuri2.1 0.0 river VS yuriElixir module for easier URI manipulation. l 0.0 river VS l used to be here -- this is a backwards-compat user and repo m( plug_wait11.1 0.0 river VS plug_wait1Plug adapter for the wait1 protocol Ralitobu.Plug1.1 0.0 river VS Ralitobu.PlugElixir Plug for Ralitobu, the Rate Limiter with Token Bucket algorithm 0.0 river VS Digest Auth Library to create auth header to be used with HTTP Digest Authentication Bot detection and prevention for Elixir/Phoenix applications Paraxial.io is bot detection and prevention for Elixir/Phoenix applications. Dealing with scrapers, card cracking, or credential stuffing? We take care of that. Promo paraxial.io Do you think we are missing an alternative of river or a related project? Popular Comparisons README River NOTE: River is a work in progress and should be considered extremely beta. River is a general-purpose HTTP client with eventual hopes of full HTTP/2 support (along with support for HTTP/1.1). It is built from the ground up with three major goals: - be fully compliant with RFC 7540 - be simple and straightforward to use, in the vein of HTTPoison - be awesome, in the same way that Go's http library (which has built-in, transparent support for HTTP/2) is awesome. Installation Add River to your list of dependencies in mix.exs: def deps do [{:river, "~> 0.0.6"}] end Ensure River is started before your application: def application do [applications: [:river]] end Caveats - Currently, River only knows how to make HTTP/2requests to Soon, I'll add the ability to make a request via the Upgrade header so that requests to will work as well. - River doesn't currently speak HTTP/1.x. Once I finish up basic HTTP/2support, HTTP1.xis next on the roadmap. The goal when using River in your project is that you should not need to know whether the underlying connection is using HTTP/2or HTTP/1.x. - River is as beta as it gets, and under active development with no promises of anything being backwards compatible 😬 (until we hit v1.0, of course) Goals - [x] Basic HTTP/2 support - [ ] HTTP/1 --> HTTP/2 upgrading - [ ] Full HTTP/2 support - [ ] Full HTTP/1.x support Basic Usage Simple GET River.get(" => {:ok, %River.Response{__status: :ok, body: "<html>\n<body>\n<h1>Go...", closed: true, code: 200, content_type: "text/html; charset=utf-8", headers: headers: [{":status", "200"}, {"content-type", "text/html; charset=utf-8"}, {"content-length", "1708"}, {"date", "Fri, 30 Sep 2016 04:26:34 GMT"}]}} Simple PUT River.put(" "hello world") => {:ok, %River.Response{...}} Request with timeout # timeout unit is milliseconds River.get(" timeout: 10) => {:error, :timeout}
https://elixir.libhunt.com/river-alternatives
CC-MAIN-2022-21
refinedweb
896
58.38
Grails UI Dependency: compile ":grails-ui:1.2.3"Custom repositories: mavenRepo "" Summary Provides a standard UI tag library for ajaxy widgets using YUI. Installation This plugin is deprecated and no longer maintained. It will see no further development. It is recommended that you use the native API of each widget toolkit to make the most out of them whether it be jQuery-UI, Dojo etc.GUI depends on the installation of the YUI Plugin (version 2.6.0) and the Bubbling Plugin (version 1.5.0). For Grails 1.1: For Grails 1.0.X: grails install-plugin grails-ui grails install-plugin yui grails install-plugin bubbling grails install-plugin grails-ui Description GrailsUIGrailsUI (GUI) is a plugin that provides an extensive tag library of rich ajax components based on the Yahoo! UI (YUI) JavaScript library and a YUI extension called the Bubbling Library. It has been designed for ease-of-use as well as configurability. Quick StartJust want some working code to get a quick start on using GUI? There is a GUI demo project on Google Code that shows working examples of every tag in the library. To check it out: This example project does not have the necessary plugins installed, so you'll have to follow the simple instructions in the "Installation" section below to install the GUI plugin and its dependencies. svn checkout guidemo-read-only ScreencastsA series of screencasts is under construction: - Part 1 - GrailsUI Basics - Part 2 - AutoComplete Basics - Part 3 - Advanced AutoComplete - Part 4 - DataTable (coming soon) - Part 5 - Advanced Concepts (coming soon) Key Concepts Configuration Pass-ThroughMost YUI widgets accept a configuration object during creation that helps to set up the widget, and most GUI tags map directly to a YUI component. Any attributes passed into a GUI tag that the tag library doesn't consume are assumed to be configuration settings for the underlying YUI widget. This allows the user to specify YUI config options directly within the tag description as attributes. For example, the YUI Dialog component accepts a config option called 'modal'. If modal="true", the dialog will not allow interaction with any other component until it is dismissed. Because of GUI's configuration pass-through, the user can specify an attribute of 'modal' when defining the <gui:dialog/> tag: Both the 'title' and 'modal' attributes values are passed directly into the YUI Dialog config upon construction. The 'triggers' attributed is processed and consumed by the tag library, and will be explained below in the Dialog component section. All attributes specified that are not recognized by the GUI tag processing logic are assumed to be YUI config values and will be passed along to the underlying YUI component. If the YUI component is constructed with config values it doesn't understand, this will not affect the component's behavior. <gui:dialog This message will appear in a modal dialog. </gui:dialog> Component AccessibilityEach YUI component that is created can be referenced via JavaScript in other areas of the page. This allows users to create event listeners to trigger custom actions. If an 'id' attribute is passed along to the component tag, this can be used to access the YUI component. Here is an example of attaching a custom select handler to a GUI datePicker component: Because an id attribute was specified on the datePicker component, the YUI Calendar object that was created by GUI is accessible through the GRAILSUI namespace. <gui:datePicker <script> YAHOO.util.Event.onDOMReady(function() { function selectHandler(){ alert('date selected on myDatePicker!'); } GRAILSUI.myDatePicker.selectEvent.subscribe(selectHandler); }); </script> Simple Dependency MappingYUI dependencies include JavaScript and CSS files, and can sometimes be complicated. YUI has a dependency configurator on their website to help developers define the proper includes in the correct order. GUI is designed to make dependency declaration as easy as possible by providing one resources tag for users to define what will be used within each GSP. In this manner, users can declare what components are active in a page in the HEAD once, and use their tags throughout the page. Any additions or changes to dependencies should only require a simple update of the resources tag in HEAD. More information is below in the resources section. The resources tag also makes it easy to include a specific CSS or JavaScript file existing within the YUI or Bubbling Library. Even when multiple components and individual files are marked for inclusion, any redundant dependencies are filtered out. UsageThere are two steps to using a GUI component. First, users must specify what components will be used within a GSP in the Resources tag in the HEAD of the page. Users can then declare the tags anywhere else within the page. Resources tagEach GSP page that uses any GUI component must declare what components are used on the page within the HEAD tag. This is done with the <gui:resources/> tag. Very simply, the components uses on the page must be declared within a 'components' attribute in order for GUI to know what YUI and Bubbling Library files to include with the page. For example, for a page that will use a richEditor and a dialog: or <gui:resources Either array or comma-separated lists will work within the components attribute. With most YUI JavaScript source files, there are different modes of file, like 'debug' or 'minimal'. The 'minimal' mode is a compressed (unreadable) version of the source code, while 'debug' is handy for stepping through JavaScript in a debugging tool. To include the debug versions of component JavaScript source files: <gui:resources The 'minimal' mode has the smallest footprint and is the default mode. <gui:resources Component tagsComponent tags provide UI widgets for users. Each tag, if not specifically given an 'id' attribute, will have a unique id generated for it. As noted in the section on Configuration Pass-Through above, any attributes not consumed by the tag library will be passed on as configuration options to the YUI component that is created. Some components, like the dataTable, also allow specific attributes that will pass on configuration to additional supporting YUI components that are created to support it (see the section below on DataTable Paginator Configuration). Styling ComponentsAll GUI component styles can be overridden with local CSS. For more information on skinning YUI, see here.If you want to use the built-in skin of the components, then you need to attach the class "yui-skin-sam" to the HTML element that contains the GrailsUI components (such as a "div" container). For instance, if you want to use the skin on all components in the entire application, then you can modify the "body" element in the "grails-app/views/layouts/main.gsp" as follows: <body class="yui-skin-sam"> List of Component tags Dialog Attribute definitions Dialog attribute: triggersAllows user to define how the dialog will be triggered to show or hide. This can be used to declare existing components as triggers, or to have GUI create new triggers (currently only links or buttons). If there is already a component with id on the page, it can be specified by id as a trigger. Here is a code example of using the triggers attribute to create a new button to trigger the dialog to show: The 'triggers' attribute is a map, with each key representing a method that will be called on the dialog, and each value representing a trigger definition. In this case, we want a button with the text 'Press for Dialog', that will be a trigger when it is clicked. Valid values for type are currently 'link' and 'button'. Valid trigger keys are any public methods of the YUI Dialog object, but mainly 'show' and 'hide'. Valid 'on' values include 'click' and 'mouseover'.To use an existing HTML element within the page as the dialog trigger, use the triggers attribute in this manner: triggers="[show:[type:'button', text:'Press for Dialog', on:'click']]" Only the id of the triggering element is required, as well as the 'on' trigger. If there is no element with the specified id in the trigger, the trigger will not exist. triggers="[ show:[id:'show4', on:'click'], hide:[id:'hide4', on:'click'] ]" Dialog attribute: formIf the form attribute is true, the dialog will set itself up to contain a form. Default buttons are created with the proper event handling to remotely submit any form data that is contained within the form. If form="true", there should be enough information within the rest of the attributes to create a URL (either controller and action, or url attribute). The attributes are sent into createLink to resolve to a URL. There should also be an 'update' attribute set if there is data passed back from the asynchronous call to the form's URL. Dialog attributes: controller, action, params, urlIf form="true", there must be enough data within the remaining attributes to resolve to a URL if sent to the createLink Grails method. Dialog attribute: updateThis should be the id of an element to update with response text from the server. This element should contain HTML, as its innerHTML value is set. If any <script/> tags are contained in the response, they are executed after the innerHTML is loaded. Dialog Usage Examples Modal confirm dialog (local)This script provides an event handler and passes it into the dialog tag attached to a button. This dialog does not make a server call, but transfers the confirmation back to a local script. <script> var yesHandler = function(o) { alert('You clicked "Yes"'); this.cancel(); } </script> <gui:dialog Are you sure? </gui:dialog> Modal confirm dialog (remote)Here is an example of a confirm dialog that calls a URL when the 'Submit' button is clicked. This dialog doesn't bother passing control back to the page, but rather creates itself as a form, with the submit button calling a URL. On cancel, the dialog dismisses itself. <gui:dialog Are you sure? </gui:dialog> Remote form dialogAny form elements can be placed inside the dialog tag without defining a form. When form="true", the dialog creates its own form. <gui:dialog <input type="text" id="input1" name="input1"/> </gui:dialog> Other Useful Dialog Configuration Pass-Throughs Tab View Defining tabsTabs can be defined either by declaring static content within a tab element, or by defining enough attributes in the tab to resolve to a URL that will load the tab. A tab has only one required attribute, a 'label' that will define what is displayed on the tab marker. Setting the 'active' attribute to true will render the tab open initially. You may also specify a CSS class attribute for tabs that will be applied to the underlying 'li' element. TabView Usage Examples Static TabsAny HTML, JavaScript, or GUI tags can be placed inside a tabView tab. Here is an example of several different components being rendered in a tabView. <gui:tabView> <gui:tab <h1>Inside Tab 1</h1> <p/>You can put whatever markup you want here. </gui:tab> <gui:tab <h1>Inside Tab 2</h2> <gui:richEditor </gui:tab> </gui:tabView> Dynamic TabsIf enough data is provided within the tab attributes to create a URL, the tab will be loaded by the URL.If the dataSrc attribute is set, it will be used explicitly to render the tab content Or a controller and action can be passed. <gui:tab <gui:tab TabView Resources AutoComplete Setting up your controllerProper controller setup is essential to use the autoComplete effectively. The autoComplete will send along a query string attached to the HTTP request that defines what the user has input into the text box.Currently, the GUI autoComplete is only configured to use JSON data format. This means that your controller must use the Grails JSON translation to convert autoComplete data into a usable format. Following is an example of a dummy controller action that provides data for an autoComplete. The list must contain an id as well as a label value, because the GUI autoComplete renders hidden input elements that contain the id. This allows the selection label and id to be passed along with form data on submission. import grails.converters.JSON def autoCompleteJSON = { def list = DummyDomain.list(params.query) def jsonList = list.collect { [ id: it.id, name: it.name ] } def jsonResult = [ result: jsonList ] render jsonResult as JSON } Local AutoCompleteAn autoComplete can be setup using only local data. <gui:autoComplete Remote AutoCompleteOnce an autoComplete is setup to use a server to provide the data, the configuration gets more complex. The 'resultName' attribute refers to the root node of the JSON data that will contain the autoComplete list to render to the view. This value defaults to 'result', so if the controller is set up like the one above, this attribute is not necessary to override.The autoComplete expects an array of maps containing one key for an id, and one for a label. The default map keys for these are 'id' and 'name', but they can be changed as shown above by the 'labelField' and 'idField' attributes. <gui:autoComplete Filtering AutoCompleteThere are some cases where the autoComplete data may need additional filtering at the server. This filtering data must be used by the controller to filter the data. Following are two examples of ways to specify additional filtering data for the server. <gui:autoComplete <gui:autoComplete AutoComplete DependencyThere are many cases where the value from another form input element needs to be send along with the autoComplete request. In this manner, an autoComplete can be dependent on the value of another form input. The value of the 'dependedOn' text input will be retrieved and sent along with the autoComplete request as the 'dependedValue'. The controller that populates the autoComplete can now use this data as part of its process to filter the results. <input id="dependedOn" type="text" value="my value"/> <gui:autoComplete AutoComplete Common ConfigurationsYou probably don't want your autoComplete querying the server every few milliseconds (queryDelay), or maybe you want to restrict it from querying until a certain number of keys have been typed (minQueryLength). Use these attributes to control this: <gui:autoComplete minQueryLength='3' queryDelay='1.5' … </gui:autoComplete> Accordion Accordion attributes - multiple: allows multiple panels open at once - bounce: provides a bounce effect on open and close - persistent: panels stay open - slow: slows down the open/close animation - fade: provides a fade effect on open and close Standard Accordion <gui:accordion> <gui:accordionElement Accordion element 1 content </gui:accordionElement> <gui:accordionElement <h3>Markup is fine in here</h3> </gui:accordionElement> </gui:accordion> Accordion with Options <gui:accordion <gui:accordionElement Accordion element 1 content </gui:accordionElement> <gui:accordionElement <g:eachHello ${i}<br/></g:each> </gui:accordionElement> </gui:accordion> Expandable Panel Expandable Panel Examples To allow a close action and icon, specify closable="true". By default, this is false. <gui:expandablePanel I am expanded. Close me. </gui:expandablePanel><br/><br/><gui:expandablePanel I was not expanded at first, but if you are reading this, I must be expanded now. </gui:expandablePanel> Data Table Column DefinitionsTo create a dataTable, GUI needs to know what the columns are going to look like, so it needs a map of data for each column. This list of maps representing the columns to be displayed is the column definitions. You can see a table of all the possible values of column definitions here in the instantiating section. Here is example of what a columnDefs attribute on a dataTable might look like: NOTE: The 'type' value is currently not being used, but a JIRA issue is logged to enable type formatting for GrailsUI 1.1.It is essential that these column definition keys match the JSON data returned by the controller supplying the DataTable with data. See the section below on setting up the controller. ConfigurationA default Paginator is created, so no paginator config is necessary, but users may specify the config object for a custom Paginator. More information on Paginator configuration attributes can be found here. Following is an example of a custom Paginator configuration: This specifies a template the Paginator to use, and also adds total records. paginatorConfig="[ template:'{PreviousPageLink} {PageLinks} {NextPageLink} {CurrentPageReport}', pageReportTemplate:'{totalRecords} total records' ]" Setting up the ControllerFollowing is an example of a controller action that prepares data for a dataTable: In order for the dataTable's Paginator to know how many pages there are, the totalRecords must be passed back within the JSON data for the table to consume. import grails.converters.JSON def dataTableDataAsJSON = { def list = [] response.setHeader("Cache-Control", "no-store") def data = [ totalRecords: Demo.count(), results: Demo.list(params) ] render data as JSON } Row Click NavigationIf you want your dataTable to navigate to a certain URL when a row is clicked, you need to ensure a 'dataUrl' field is included in your JSON data that defines the URL. Then you just need to add rowClickNavigation='true' to your dataTable attributes, and each row click will navigate to the dataUrl specified in the row. Row ExpansionThe dataTable provides a row expansion feature. When configured properly, each row will expand on screen to provide more data about the row. This is done by inspecting the data for the clicked row and looking for a dataUrl.If the JSON data received by the dataTable contains a row value called 'dataUrl', it assumes that this URL should be called and rendered on row click. Row expansion is turned off by default, so the rowExpansion attribute must be set to true as well.Following is an example of how a controller might attach this dataUrl within the JSON return value: Here, instead of transforming domain objects into JSON, each object is manually broken down into a map, including the proper URL to use for the dataUrl value.If rowExpansion="true", and the JSON data loaded into the table provides a dataUrl for each row, every row click will expand and populate a new panel just below the clicked row. One second click, pagination, or resort, the expanded row is contracted. import grails.converters.JSON def dataTableDataAsJSON = { def list = [] def demoList = Demo.list(params) response.setHeader("Cache-Control", "no-store") demoList.each { list << [ id: it.id, name: it.name, birthDate: it.birthDate.toString(), age: it.age, netWorth: it.netWorth, dataUrl: g.createLink(action: 'dataDrillDown') + "/$it.id" ] } def data = [ totalRecords: Demo.count(), results: list ] render data as JSON } DataTable ExampleThe following markup defines a dataTable and expects the controller to provide dataUrl values for row expansion. It also sends an additional parameter to the controller to use for filtering (maxAge=52). <gui:dataTable Other Useful Dialog Configuration Pass-Throughs Other DataTable Resources Rich Editor The richEditor uses a YUI SimpleEditor, which has a wealth of config options. <gui:richEditor Draggable List When formReady="true", hidden input elements will be rendered that represent the orders of the draggableLists, so if the lists are within a form, this data will be submitted with the form. <gui:draggableListWorkArea <gui:draggableList <li id="a1">A:1</li> <li id="a2">A:2</li> <li id="a3">A:3</li> <li id="a4">A:4</li> </gui:draggableList> <gui:draggableList <li>B:1</li> <li>B:2</li> <li>B:3</li> <li>B:4</li> </gui:draggableList> </gui:draggableListWorkArea> Tooltip For simple tooltips, it is only necessary to wrap the content that deservers a tooltip within the <gui:toolTip/> tags, providing a 'text' attribute, like this: <gui:resources This works well for tool tips that contain a small amount of plain text. For more complex, dynamic tool tips, the toolTip tag can refer to a controller to populate the tool tip text: <gui:toolTip <img src="/myImg.png"/> </gui:toolTip> The controller can be set up to pitch HTML directly into the tool tip: <gui:toolTip <div style="width: 200px; height: 200px; background:#8EC3E2; padding: 10px; border: 1px solid black"> Hover over this box for a server rendred tooltip. </div> </gui:toolTip> def toolTipLoader = { render """ <h3>Tool tip markup from the server!</h3> <p/>Here are my params: ${params}. """ } Date Picker This will allow form access to the selected date by the id passed into it. A date value can also be passed into the date picker, in the form of a Date or Calendar object: <gui:datePicker The format string takes the same format as the Java SimpleDateFormat. If time input is needed, inputs will be added to the bottom of the datePicker with the 'includeTime' attribute: <gui:datePicker <gui:datePicker <gui:datePicker
http://grails.org/plugin/grails-ui
CC-MAIN-2015-35
refinedweb
3,403
52.7
Debugging During Testing When writing tests, use the PyCharm "visual debugger" to interactively poke around in the context of a problem. Primary guardian, great! But what happens if there are no guardians yet? In this step we tackle that problem by combining two great features: visual testing with visual debugging. Note: In the next tutorial step we do some coding for this situation. Guardian-less Let's imagine we were working on our test_primary_guardian test and did something like this: def test_primary_guardian(player_one):assert player_one.primary_guardian It raises an error: This error message is quite helpful, but let's imagine you're confused about it. "What do you mean, there's no primary guardian?" You'd like to explore a little to see what's going wrong. Your print() Will Not Help You Here What's the number one debugger in Python? Alas, the humble def test_primary_guardian(player_one):print(player_one.guardians)assert player_one.primary_guardian Your tests run, but...nada. pytest is capturing output. Besides, you've now changed your code as part of investigating a problem. Not smart. You could learn how to generate console output in pytest, but that would be the same issue -- writing debugging statements into your code which you might accidentally check in. The debugger was meant for this. Fortunately PyCharm has a great "visual" debugger, tightly integrated with "visual testing." Let's use that. Breakpoint, Step...Cha-Cha-Cha Remove the assert player_one.primary_guardian: Let's now run that one test, but under the debugger, which itself will then run pytest. Click the green triangle in the gutter to the left of test_primary_guardian and choose `Debug 'pytest for test_play...'. This brings up a new tool window in the bottom, running the debugger: Our test execution has stopped on the line with the assertion. Our Variables pane shows that player_one exists in the scope and we can take a look at it: Aha, that's the problem! But let's say we still couldn't spot it. We want to poke around interactively. Highlight player_one.primary_guardian in that line of code -- the one triggering the error -- then right click and choose Evaluate Expression. When you click the Evaluate button, you see the output: You can now poke around interactively in the state at that point by typing in the Expression: field. You can even overwrite the value of a variable in the scope or create new variables. You're currently poking around in the code for the test, but perhaps you want to poke around in the implementation. Click the Step Into button and the debugger will put you on the return self.guardians[0] line in the primary_guardian property/method. With this we quickly see what is the problem: there is no item zero in self.guardians. Finish up by removing the assert player_one.primary_guardian line and clearing the breakpoint by clicking on the red circle. Close the debugger tool window by clicking on its (x). The debugger is a fantastic tool when doing TDD -- not just when you have a problem, but whenever you want to explore.
https://www.jetbrains.com/pycharm/guide/tutorials/visual_pytest/debugging_during_testing/
CC-MAIN-2021-43
refinedweb
513
66.44
Introduction:. Since I have been fooling around with a Nokia 5110 display for quite a while, I have decided to write my fourth blog on interfacing this display with the Arduino MicroController board. Step 1: Grab the Requisites! For this instructable, you'd need: 1.A Nokia 5110 LCD display. You could salvage it from an old Nokia 5110, or you could buy it online. Here's a link which would help you purchase it: Arduino 5110 display-Ebay 2.An Arduino board. [I've used NANO in this case] 3. (5 x [1000 ohm resistors]) 4.( 1 x [330 ohm resistors]) 5.A 10kohm potentiometer. 6.A bunch of jumper wires. 7.A breadboard. Step 2: Wire It Up! Since I couldn't find the Nokia 5110 component in the Fritzing library, I decided to sketch out the schematic myself. Remember to use the 1000 ohm resistors while connecting the RST, CE, DC, Din and CLK pin to the arduino board and the 380 ohm resistor with the potentiometer. Step 3: Installing the Nokia 5110 Library You'd need to install the Nokia 5110 library first. Here's the link to the library. Download it, unzip it and move it into the Arduino Libraries folder. Nokia 5110 Library for Arduino Step 4: Converting the Image You'd need to convert the image you want to display into a bitmap file. Also you'd need to change the resolution of the image to 84*48, to suite the displays resolution. Here's a site that would help you to do that: Step 5: Converting the Bitmap Image to C Array Now, you'd have to use a software to convert the .bmp image to a C array. Windows users can use LCDAssistant(File attached) and MacBook users can use LCDCreator(File Attached). Once you convert the image, copy the array. I'll explain in further steps how exactly the array would be used. Step 6: The Code. #include <LCD5110_Graph.h> //Including library LCD5110 myGLCD(8,9,10,12,11); //Creating LCD object extern uint8_t graphic[]; //Including the graphics void setup() { myGLCD.InitLCD(); //Intializing LCD } void loop() { myGLCD.clrScr(); //Clearing screen myGLCD.drawBitmap(0, 0, graphic, 84, 48); //Drawing out bitmap myGLCD.update(); //Updating the LCD } Create a seperate tab and name it as Graphics.C Now for the custom graphic part. We will save the C code in the program emory instead of the SRAM as we always need small RAM usage. To do this we will have to include a library and a word like the pictures or like this: (We include the PROGMEM word and the library)[Enter this in the Graphics.C part]; #include const unsigned char graphic [] PROGMEM= { //Enter the C array you copied earlier over here //Else, insert my graphics.c file which i have included } Step 7: You're All Done! Upload the program to the arduino via usb cable, and you're good to go. Here's another example where I have displayed StoneSour's logo on the Nokia 5110 display. I'm open to criticism, so please feel free to comment your views on this instructable. Do message me if you have any problems related to this project. Facebook- Moksh Jadhav Discussions
https://www.instructables.com/id/Custom-Graphics-on-Nokia-5110-Display/
CC-MAIN-2018-34
refinedweb
540
76.01
ASP.NET MVC 2 first preview is released to public now and you can download it from Microsoft download site. Here is short overview of technical requirements and main new features of ASP.NET MVC 2 Preview 1. References:: If you have suggestions then please feel free to drop a comment here. Classes for people and organizations are often modeled incorrectly in object models. These faults cause heavy implementation problems. In this posting I will analyze some legacy models, introduce class Party as generalization of people and organizations and provide some implementation details. Information here is based on my own experience, like always. Pretty often we can see something like these in legacy object models. I added associations with address classes too because this way you get better picture of problems I will describe. Both of these models introduce some pretty bad problems when we are going to implement these models. Major problem is that we have two classes – Person and Company – and we want to use them in same contexts. Their separation introduces additional association on the left model and one additional class with association on the right model. Let’s see one scenarios where problems occur. Handling Person and Company as one It is pretty common requirement. We may sell products to persons and companies, so they can be both our customers. We need simple dropdown list on order form to let sales person select customer. This is easy thing to do (although the solution looks awful to me): This SQL contains something I don’t like. Take a look at these ID-s. These ID-s must contain something we can use when we save customer data. We have to know which table has this ID so we can ask person or company and then assign it to appropriate property of order. Well… we got it done but taste of something awful is in mouth. People and organizations can be handled as parties of different deals. Ordering, buying, signing a contract – all these activities are made by different parties. We can generalize people and organizations using class named Party. You can read more about it and other related models from The Data Model Resource Book – Universal Data Models. I suggest you to buy these books because they help you a lot when you analyze or model your applications. Here you can see class diagram where Person and Category are generalized using Party. Class Party helps us a lot. All associations that are common for people and organizations we can define between Party and other classes. This way we can handle people and organizations as they are same. You maybe notices that Party has property called DisplayName. DisplayName is important design detail. Besides unique identifier we need also something to show as name when we list parties (we want to select customer from dropdown). Database Here is the example of tables in database that support our generalization. Tables shown here are simple and give you the point how to do it. Table party is joined to other tables using one-to-one relationship. Party defines the value of primary key, other tables just use it. We can also see that display name is field of table party. Why? I have found it very convenient if we can see the name of party when we need to manage data in tables manually. DBA is able to solve different issues faster when he or she has all the information that describes rows in table. Of course, not using the display name in database level is also allowed. Just make your decision considering your current context and needs. Code On the code level we have to do some tricks. We define DisplayName as abstract property in Party. Person and Company both have override for this property. Setter of overridden properties is empty because we don’t want to let users change DisplayName manually. public abstract class Party public virtual int Id { get; set; } public abstract string DisplayName { get; internal set; } public class Person : Party public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public override string DisplayName { get { return LastName + ", " + FirstName; } internal set { } } public class Company : Party public virtual string Name { get; set; } return Name; Now we have successfully implemented basic structure of people and organizations in our code. I have used the same solution actively over past couple of years and I am pretty happy with it. Visual! Preface.. I Ayende @ Rahien just announces NHibernate.Linq 1.0. Cite: .” Let’s go and try it! :).).
http://weblogs.asp.net/gunnarpeipman/archive/2009/07.aspx
CC-MAIN-2014-10
refinedweb
758
56.66
When I recently told someone that I often use awk when I write shell scripts, the comment, together with a raised eyebrow, was "So, anyone's still using this?!" I felt a bit old-fashioned, and when today I caught myself writing a one-liner in awk I decided to give Python a chance (yes, I know, there is also Perl, but I never give Perl another chance :-/). The problem I wanted to solve was splitting a comma-separated list into a whitespace separated list of words that I could use for iteration in the shell. The awk one-liner I came up with after 1 minute was this: echo a,b,c | awk '{split($0,parts,",");for (i in parts) {print parts[i]}}' I imagined the corresponding Python program would look something like this: import sys for part in sys.stdin.read().split(","): print part I spent maybe half an hour trying to figure out how to convert this into a one-liner, then I gave up. Incredible but true: The issue here is Python's indentation-defines-scope syntax. This syntax makes it virtually impossible to write any kind of control structure (i.e. something that uses if, for or while) in a single line. The bad (or sad) thing about this is that it makes Python a lot less useful for quick-and-dirty shell scripting. The good thing is... awk lives on, and I feel a lot less old-fashioned :-) one liner from sys import stdin; print('\n'.join(a for a in stdin.read().split(',')) hah! Thanks for that! Not that I am too happy about the syntax, but that's probably me, I am just not used to Python. Two remarks (in case someone else is reading this): Add new comment
https://www.herzbube.ch/blog/2009/10/one-liners-python-not-chance
CC-MAIN-2019-18
refinedweb
297
70.23
RC2 block cipher. More... #include "core/crypto.h" #include "cipher/rc2.h" #include "debug.h" Go to the source code of this file. Detailed Description RC2 block cipher. RC2 is a block encryption algorithm, which may be considered as a proposal for a DES replacement. The input and output block sizes are 64 bits each. The key size is variable, from one byte up to 128 bytes. Refer to RFC 2268 for more details - Version - 1.9.6 Definition in file rc2.c. Macro Definition Documentation ◆ TRACE_LEVEL Function Documentation ◆ rc2DecryptBlock() ◆ rc2EncryptBlock() ◆ rc2Init() ◆ rc2InitEx() Variable Documentation ◆ rc2CipherAlgo Definition at line 71 of file rc2.c.
https://oryx-embedded.com/doc/rc2_8c.html
CC-MAIN-2020-34
refinedweb
104
54.29
- Type: Bug - Status: Closed - Priority: P2 - Resolution: Fixed - Affects Version/s: 2.16.0, 2.18.0, 2.19.0 - - Component/s: sdk-py-core - Labels:None A user reported following issue. ------------------------------------------------- I have a set of tfrecord files, obtained by converting parquet files with Spark. Each file is roughly 1GB and I have 11 of those. I would expect simple statistics gathering (ie counting number of items of all files) to scale linearly with respect to the number of cores on my system. I am able to reproduce the issue with the minimal snippet below import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions from apache_beam.runners.portability import fn_api_runner from apache_beam.portability.api import beam_runner_api_pb2 from apache_beam.portability import python_urns import sys pipeline_options = PipelineOptions(['--direct_num_workers', '4'])> beam.io.tfrecordio.ReadFromTFRecord(file_pattern) | beam.combiners.Count.Globally() | beam.io.WriteToText('/tmp/output')) p.run() Only one combination of apache_beam revision / worker type seems to work (I refer to for the worker types) - beam 2.16; neither multithread nor multiprocess achieve high cpu usage on multiple cores - beam 2.17: able to achieve high cpu usage on all 4 cores - beam 2.18: not tested the mulithreaded mode but the multiprocess mode fails when trying to serialize the Environment instance most likely because of a change from 2.17 to 2.18. I also tried briefly SparkRunner with version 2.16 but was no able to achieve any throughput. What is the recommnended way to achieve what I am trying to ? How can I troubleshoot ? ---------------------------------------------------------------------------------------------------------------------------------------------- This is caused by this PR. A workaround is tried, which is rolling back iobase.py not to use _SDFBoundedSourceWrapper. This confirmed that data is distributed to multiple workers, however, there are some regressions with SDF wrapper tests. - links to -
https://issues.apache.org/jira/browse/BEAM-9228
CC-MAIN-2020-24
refinedweb
295
60.11
One Challenge With 10 Solutions One Challenge With 10 Solutions Get a blast from the past with these data analysis tools. Join the DZone community and get the full member experience.Join For Free Technologies we use for Data Analytics have evolved a lot, recently. Good old relational database systems become less popular every day. Now, we have to find our way through several new technologies, which can handle big (and streaming) data, preferably on distributed environments. Python : Together they represent the last 30+ years ! Using these technologies, we'll list the 10 most popular movies, using the two CSV datasets provided by Grouplens website. You may also like: How-To: The PGExercises PostgreSQL Tutorial Running on a Distributed SQL Database. The Dataset We'll use MovieLens 100K Dataset. Actually, only the following two files from the archive: u.data is tab delimited file, which keeps the ratings, and contains four columns: user_id (int), movie_id (int), rating (int), time (int) u.item is a pipe (|) delimited file. We only need to fetch movie titles from here, but there are several columns: Our Goal We'll aggregate the ratings data (u.data) to calculate an average rating per movie_id and find the ten movies with the highest average rating. We'll ignore the movies, which have less than a hundred ratings. Otherwise, we'll find lots of 5-star movies, which are rated by only one or two users. So, we'll filter them out. Then, we'll use a join with to the movie data (u.item) to fetch the movie title. The result will contain the movie_id, movieTitle, and averageRating, as seen below. Now, we are ready to go. 1. AWK AWK is almost as old as me, but it's still the most powerful tool around for text processing under *nix. You shall not necessarily think of it as a replacement for your favorite programming language, but it's definitely worth giving it a try, especially when you deal with huge text processing tasks. A lot of people use AWK in combination with other technologies to make use of its great capabilities in text-processing. Here's the AWK solution for our challenge. And that's a one-liner --- No uploads, no temporary files, we don't even need a script file for that! join -1 2 -2 1 <(sort -n -k2 u.data) <(sort -n -k1 u.item | tr '|' '\t' | tr ' ' '~') | sort -n -k1 | cut -d' ' -f1,2,3,4,5 | tr ' ' '|' | awk 'BEGIN{FS="|";lastMovie=0;totalRating=0;countRating=0;lastMovieTitle=""} {if(lastMovie==$1) {totalRating=totalRating+$3;countRating=countRating+1} else {if(lastMovie!=0) {print(lastMovie " " lastMovieTitle " " totalRating " " countRating " " (totalRating/countRating))};lastMovie=$1;lastMovieTitle=$5;countRating=1;totalRating=$3}} END{print(lastMovie " " lastMovieTitle " " totalRating " " countRating " " (totalRating/countRating))}' | awk '{if($4>=100) print}' | sort -r -k5 | head -10 | tr ' ' '\t' | tr '~' ' ' This might look like line noise to those who are not familiar with Linux. So, let's give some explanation here. Step 1: Join the Datasets Join receives the two datasets, u.data and u.item, and joins them to produce one single dataset. It uses the second column from the first dataset (-1 2) to match the first column in the second dataset (-2 1) as a join condition. u.data is sorted on second column (-k2) before the join operation. u.item is pipe-delimited, but we'd better change it to a tab-delimited format before the join. I do it with tr. I use a second tr to replace spaces (in movie titles) with tilde (~) characters. It's because join command output is space delimited by default, and it's buggy when you customize the separator. That's why we get rid of the spaces here. We'll change them back to spaces later. Step 2: Sort, Cut and TR The joined dataset is sorted on movie id. That's a numeric sort on the first column. (-n -k1). Then, I use cut to fetch the first five columns. I don't need all those columns from the u.item file. Finally, tr converts the space delimited file to a pipe-delimited file. Step 3: AWK AWK loops through the joined dataset, which is sorted by movieid. I have two variables to keep movieID and movieTitle, and if they are different that what I read from the current row, awk prints an output line. The output of this first awk command has one row per movie, with average ratings. Step 4: Again AWK The second awk is used to filter out movies with less than a hundred ratings. Step 5: Sort, Head and TR Then we sort the movies by their ratings, and use head to fetch the top 10 movies. Finally, we use tr to convert the output to a tab-delimited format, and to convert tilde's back to spaces. 2. PERL Why so many people hate Perl is beyond my understanding. It's a cute programming language in which you don't need to encrypt your code. That's because, as Keith Bostic already stated, Perl is "the only language that looks the same before and after RSA encryption." Recently, Perl's decreased popularity triggered discussions on Perl slowly fading away. No doubt, it's far less popular than it used to be in the 90's. But still.. It's much faster than Bash... It's pre-installed in most of the linux distributions... Also, Perl's focus was on report processing from the very beginning. So why not? Let's see how we process our this top-10 movies report using Perl. #!/usr/bin/perl use strict; use warnings; open (fle_ratings, '<', 'u.data') or die "Could not open u.data: $!"; my %hash1 = (); while () { chomp $_; my ($user_id, $movie_id, $rating) = split(/\t/, $_); if(exists($hash1{$movie_id})){ $hash1{$movie_id}[0]+=$rating; $hash1{$movie_id}[1]+=1; } else { $hash1{$movie_id}[0]=$rating; $hash1{$movie_id}[1]=1; } #print "$hash1{$movie_id}[0] *** $hash1{$movie_id}[1] \n" } my %hash2 = (); foreach my $key (keys %hash1) { if ($hash1{$key}[1] >= 100) { $hash2{$key}=$hash1{$key}[0] / $hash1{$key}[1]; } } close fle_ratings; my $counter=0; foreach my $movid (sort { $hash2{$b} <=> $hash2{$a} or $a cmp $b } keys %hash2) { my $movie=''; open(fle_movies, '<', 'u.item') or die "Could not open u.item: $!"; while () { chomp $_; my ($movie_id, $movie_title) = split(/\|/, $_); if($movid==$movie_id){ $movie=$movie_title; last; } } print "$movid $movie $hash2{$movid}\n"; last if ++$counter == 10; } Ok, this was a Perl script, and I don't see a good reason to hate that. Maybe that weird parameter for sort, but I can live with it. I'll put it in a text file, make it executable, and directly execute it. You might think all these loops are performance-killers, but it's not the case: Perl returns the results in no time. After the one-liner in AWK, this script looks over-sized, doesn't it ? Let's dive into this code a little bit. While Loop We loop through ratings dataset, and populate a hash named hash1. Hash1 will keep the sum and count of ratings. The First Foreach Loop Now we process each member of hash1, and populate a new hash named hash2 with the average values. The Second Foreach Loop We process each member of hash2, but after applying a descending sort on values. Then, we iterate through this loop only until our counter hits 10. So these are the top 10 rated movies. For each of these movies, we search for the movie title in movies dataset. This is a while loop inside our foreach loop. The moment we find our movie, we break the loop. 3. BASH The most popular Linux shell does not need any introduction. I'll directly jump into the BASH script solution. fle="u.data" declare -a ratings for movid in $(cut -f2 $fle | sort | uniq) do countLines=$(grep "^[0-9]*\s$movid\s" $fle | cut -f3 | wc -l) sumRatings=$(grep "^[0-9]*\s$movid\s" $fle | cut -f3 | paste -sd+ | bc) avgRating=$(eval "echo 'scale=6; $sumRatings/$countLines' | bc") if [ $countLines -gt 100 ] then ratings[$movid]=$avgRating fi done for k in "${!ratings[@]}" do echo $k'|'${ratings["$k"]}'|'$(grep ""^$k\|"" u.item | cut -d"|" -f2) done | sort -r -t'|' -k2 | head -10 This time it's a different approach. cut -f2 $fle | sort | uniq It will give me the sorted distinct list of movie ids. I loop through each movie id and calculate the count of lines, which is, the count of ratings given for that movie. The regular expression ^[0-9]*\s$movid\s gives me the lines that contain a specific movie id in the second column. ^ stands for line beginning, [0-9]* will match any number of integers, and \s is for the tab characters. I also calculate the sum of ratings here. cut -f3 after grep will return all the rating values for a specific movie. paste will help me produce a single text combining these rating values, with the delimiter "+", and bc will calculate the result of this summation. Then I'll loop through my ratings array, find movie titles for each, and print the 10 values with the highest rating. Although it looks like a simpler solution, it takes up to 30 seconds to finish. The ugly Perl easily outperformed Bash! 4. SQL (PostgreSQL) The easiest for most of us, would be loading the data into our favorite RDBMS and writing a SQL query to generate the results. I'll use PostgreSQL. First, I'll change the encoding of u.item file (you may need this if you encounter encoding issues with movie titles): iconv -f ISO-8859-1 -t UTF-8 u.item > movie_def.txt Then, let's create tables and load data into them: postgres=# \c olric You are now connected to database "olric" as user "postgres". olric=# create table ratings (userId int, movieId int, rating int, timestamp int); CREATE TABLE olric=# create table movies ); CREATE TABLE olric=# COPY ratings FROM '/home/oguz/10_Solutions/u.data'; COPY 100000 olric=# COPY movies FROM '/home/oguz/10_Solutions/movie_def.txt' with (format csv, DELIMITER '|', force_null(videoReleaseDate)); COPY 1682 And here's the SQL to give the results : olric=# WITH avgRatings AS (SELECT movieId, AVG(rating) AS avgRating FROM ratings GROUP BY movieId HAVING COUNT(*) >= 100) SELECT m.movieId, m.movieTitle, a.avgRating FROM movies m JOIN avgRatings a ON m.movieId=a.movieId ORDER BY a.avgRating DESC LIMIT 10; 408 | Close Shave, A (1995) | 4.4910714285714286 318 | Schindler's List (1993) | 4.4664429530201342 169 | Wrong Trousers, The (1993) | 4.4661016949152542 483 | Casablanca (1942) | 4.4567901234567901 64 | Shawshank Redemption, The (1994) | 4.4452296819787986 603 | Rear Window (1954) | 4.3875598086124402 12 | Usual Suspects, The (1995) | 4.3857677902621723 50 | Star Wars (1977) | 4.3584905660377358 178 | 12 Angry Men (1957) | 4.3440000000000000 134 | Citizen Kane (1941) | 4.2929292929292929 5. Python with Pandas Python is already extremely popular as a choice for Data Science. If it keeps the pace, Python can probably become the most popular programmig language of the world, in couple of years. Currently Python holds the third place, after Java and C. The following Python solution uses pandas library, which makes data analytics tasks so easy. import pandas as pd ratings = pd.read_csv('u.data', delimiter='\t', names = ['userId', 'movieId', 'rating', 'ratingTime']) movies = pd.read_csv('u.item', delimiter='|', usecols=[0,1], names = ['movieId', 'movieTitle']) joined=pd.merge(ratings, movies, how='inner', on='movieId') averages=joined.groupby(['movieId','movieTitle']).agg({'rating':'mean', 'userId':'count'}) averages.columns=['avgRating', 'countRating'] print(averages[averages.countRating>=100].sort_values(by=['avgRating'], ascending=False).head(10)) So this is even more readable code than a SQL query, isn't it? 6. MapReduce With MRJob in Python You'd probably better use less complex tools like Pig, Hive, or Spark, but MapReduce is the quintessential way of processing data under Apache Hadoop. Let's take a look at how we deal with our challenge, using MapReduce. For this purpose, I'll again use Python, but this time, with the MRJob library. from mrjob.job import MRJob from mrjob.step import MRStep import csv class RatingsBreakdown(MRJob): def movie_title(self, movid): with open("/home/oguz/10_Solutions/u.item", "r") as infile: reader = csv.reader(infile, delimiter='|') next(reader) for line in reader: if int(movid) == int(line[0]): return line[1] def steps(self): return [ MRStep(mapper=self.mapper1, reducer=self.reducer1), MRStep(mapper=self.mapper2, reducer=self.reducer2) ] def mapper1(self, _, line): (userID, movieID, rating, timestamp) = line.split('\t') yield movieID, rating def reducer1(self, key, values): i,totalRating,cnt=0,0,0 for i in values: totalRating += int(i) cnt += 1 if cnt>=100: yield key, totalRating/float(cnt) def mapper2(self, key, values): yield None, (values, key) def reducer2(self, _, values): i=0 for rating, key in sorted(values, reverse=True): i+=1 if i<=10: yield (key,rating), self.movie_title(int(key)) if __name__ == '__main__': RatingsBreakdown.run() I think here we need some explanations. steps(self) gives an overline of our mapreduce job. There are two steps defined in our case. Each step can consist of a mapper, a combiner, and a reducer. Though they are all optional, a step will consist of at least one of them. Both of our steps consist of one mapper and one reducer. Mapper of our first step (mapper1) splits all lines of the u.data file using tab as the delimiter. We now have all four columns in hand, but we are interested only in movie id's and ratings, so the mapper returns these two values. Mappers don't aggregate data. So, if there are n rows coming in, mapper output is also n rows. Reducer of our first step (reducer1) is used to calculate average rating per movie id. The reducer receives movie id as key and rating as values. Aggregation is by default on key value. We'll just need to calculate the aggregated value and return it using yield. All mappers and reducers return key and value pairs. The return value of reducer1 gives movie id's as keys, and average ratings as values. Now the data is aggregated, output of reducer1 has one (and only one) row per movie id. Mapper of our second step (mapper2) moves the movie id out of the key. key becomes a null value (None) and value is now a list of movie id and average ratings. That's because we want to find the highest rated movies. The next reducer shall scan the entire data set and find the top rated movies. To make sure all the data is scanned, we have to empty the key - otherwise reducer will operate for all keys separately. reducer1 sorts the data on values. Values is a list, and its first member is the average rating, so our reverse ordered loop will begin with the highest rated movie, and stops at the tenth row. 7. Pig Latin Pig Latin gives us the chance to use a much simpler notation than MapReduce itself. So, it's a high-level tool that can execute jobs in MapReduce (or Tez or Spark) The Pig Latin solution to our challenge is here : ratings = LOAD '/user/hdpuser/movies/u.data' AS (userid:int, movieid:int, rating:int, time:int); grp_ratings = GROUP ratings BY movieid; avg_rat = FOREACH grp_ratings GENERATE group as movid, AVG(ratings.rating) as avgRating , COUNT(ratings.movieid) as cnt_rat; avg_ratings = FILTER avg_rat BY cnt_rat >= 100; movies = LOAD '/user/hdpuser/movies/u.item' USING PigStorage('|') AS (movieid:int, moviename:chararray); joined = JOIN avg_ratings BY movid, movies BY movieid; dataset = FOREACH joined GENERATE movies::moviename as movnam, avg_ratings::avgRating as avgRating; ordered = ORDER dataset BY avgRating desc; top10 = LIMIT ordered 10; DUMP top10; The code itself is pretty self-explanatory, so I'll skip the explanations here. 8. Hive Just like Pig, Hive provides an easier platform to deal with data on Apache Hadoop. Unlike Pig, Hive is Data warehouse Infrastructure. So we'll create tables under Hive console, and physically store our data under Hive. create database olric; CREATE EXTERNAL TABLE IF NOT EXISTS olric.ratings_temp (userId INT, movieId INT, rating INT, ratingTime INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE LOCATION 'hdfs://dikanka:8020/user/oguz/MovieData/ratings'; select * from olric.ratings_temp limit 10; CREATE EXTERNAL TABLE IF NOT EXISTS olric.movies_temp ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE LOCATION 'hdfs://dikanka:8020/user/oguz/MovieData/movies'; select movieId, movieTitle from movies_temp limit 10; I created a database named olric in Hive console. I created two external tables, which point to my u.data and u.item files on Hadoop. We still didn't store the data physically under Hive. We'll do that now: CREATE TABLE IF NOT EXISTS olric.ratings (userId INT, movieId INT, rating INT) STORED AS ORC; INSERT OVERWRITE TABLE olric.ratings SELECT userId, movieId, rating FROM olric.ratings_temp; select count(*) from olric.ratings; CREATE TABLE IF NOT EXISTS olric.movies (movieId int, movieTitle varchar(200)) STORED AS ORC; INSERT OVERWRITE TABLE olric.movies SELECT movieId, movieTitle FROM olric.movies_temp; select count(*) from olric.movies; Now that we have our Hive tables, we can use good-old SQL skills, to write the following HiveQL: with rat as (select movieId, avg(rating) as avgRating, count(*) as cnt from olric.ratings GROUP BY movieId) select rat.movieId, mov.movieTitle, rat.avgRating from rat join olric.movies mov on rat.movieId=mov.movieId where cnt >= 100 order by avgRating desc limit 10; INFO : OK +--------------+-----------------------------------+---------------------+ | rat.movieid | mov.movietitle | rat.avgrating | +--------------+-----------------------------------+---------------------+ | 408 | Close Shave, A (1995) | 4.491071428571429 | | 318 | Schindler's List (1993) | 4.466442953020135 | | 169 | Wrong Trousers, The (1993) | 4.466101694915254 | | 483 | Casablanca (1942) | 4.45679012345679 | | 64 | Shawshank Redemption, The (1994) | 4.445229681978798 | | 603 | Rear Window (1954) | 4.3875598086124405 | | 12 | Usual Suspects, The (1995) | 4.385767790262173 | | 50 | Star Wars (1977) | 4.3584905660377355 | | 178 | 12 Angry Men (1957) | 4.344 | | 134 | Citizen Kane (1941) | 4.292929292929293 | +--------------+-----------------------------------+---------------------+ 9. Spark with Scala According to Tiobe index listings, Scala is still not as popular as Cobol :) — But you can easily see that the hype continues on Scala. It's a functional programming language, and is the other language which runs on JVM (Java virtual machine). Spark, itself, is written in Scala. If you want to learn Spark, this is the popular reason to prefer Scala over Python. Spark introduces RDDs, (resilient distributed dataset). See our solution below, to get an idea. package com.olric.samplePackage01 import org.apache.spark._ import org.apache.spark.rdd.RDD object top10Movies extends App { val sc = new SparkContext("local[*]", "WordCount") val moviesFile = sc.textFile("hdfs://dikanka:8020/user/oguz/MovieData/u.item") val movies: RDD[(Int, String)] = moviesFile.map { line => val col = line.split('|') (col(0).toInt, col(1)) }.sortByKey() val ratingsFile = sc.textFile("hdfs://dikanka:8020/user/oguz/MovieData/u.data") val ratings: RDD[(Int, Int)] = ratingsFile.map { line => val col = line.split("\t") (col(1).toInt, col(2).toInt) } val ratingsPerMovieId = ratings.mapValues(x => (x,1)).reduceByKey((x,y) => (x._1 + y._1, x._2 + y._2)).filter(x => x._2._2 >= 100) val avgRatings=ratingsPerMovieId.map(x => (x._1, (x._2._1 / x._2._2.toFloat))) val joinedDataset=avgRatings.join(movies) joinedDataset.sortBy(_._2, false).take(10).foreach(println) } So, we read the movies file from Hadoop, and populate an RDD named movies. We do the same for ratings. Movies RDD contains movie id and movie title, whereas ratings RDD has movie id and rating. Up to now it's simple. The line where ratingsPerMovieId is populated might be bit complex for those who are not familiar. We begin with ratings RDD. Each row here is a list of two values : (movieId, rating) The expression: x => (x,1) is a shortcut to write a function. It is actually a function which takes x as a parameter, and returns (x, 1) value list as a return value. X, here, represents the line read from input, ratings RDD. Therefore, output of mapvalue is as follows: (movieID, (rating, 1)) Then we use reduceByKey, which needs to know how to reduce multiple rows with the same key value. X and y represents two rows with same key value, and we give the following function to reduceByKey, so that it knows how to reduce these rows: (x,y) => (x._1 + y._1, x._2 + y._2) x._1 stands for the first value of input row x, which is the rating. Similarly, x._2 points to the second value, which is always one. So, the first values and second values are summed up here, to find the total rating and count of ratings, per movie id. Then we use another function, x => x._2._2 >= 100 to filter our data set. x._2 is an (Int, Int) list which holds our Rating Total and Rating Counts. x._2._2 is an Int value for the rating count. So this function will get rid of the moves with less than 100 ratings. The rest of the code is easier. We join two RDD's, sort the result based on ratings, take first 10 rows and list them. 10. MongoDB This post could be incomplete without a NoSQL database. So here is mongodb, a document-oriented NoSQL database from 2009. MongoDB stores data as JSON documents. So I'll upload my CSV files now as collection of documents. First, let's create our database, using mongodb command-line interface. > use olric switched to db olric The use command creates the database if it doesn't already exist. So now, we have a database. Now let's get back to BASH, and use mongoimport utility to upload our CSV files. oguz@dikanka:~/moviedata$ cat /home/oguz/moviedata/u.data | mongoimport --db olric --collection "ratings" --drop --type tsv --fields userId,movieId,rating,ratingTime --host "127.0.0.1:27017" 2019-10-09T16:24:24.477+0200 connected to: 127.0.0.1:27017 2019-10-09T16:24:24.478+0200 dropping: olric.ratings 2019-10-09T16:24:25.294+0200 imported 100000 documents oguz@dikanka:~/moviedata$ cut -d"|" -f 1,2 /home/oguz/moviedata/u.item | tr "|" "\t" | mongoimport --db olric --collection "movies" --drop --type tsv --fields movieId,movieTitle --host "127.0.0.1:27017" 2019-10-09T16:26:00.812+0200 connected to: 127.0.0.1:27017 2019-10-09T16:26:00.812+0200 dropping: olric.movies 2019-10-09T16:26:01.118+0200 imported 1682 documents TSV stands for tab-delimited CSV files. Since u.item was pipe delimited, I use tr to convert it to a tab delimited format, and cut to fetch only the first two columns. Back inside mongodb console, to confrm the uploads. > use olric switched to db olric > db.ratings.find() { "_id" : ObjectId("5d9ded98c233e200b842a850"), "userId" : 253, "movieId" : 465, "rating" : 5, "ratingTime" : 891628467 } { "_id" : ObjectId("5d9ded98c233e200b842a851"), "userId" : 305, "movieId" : 451, "rating" : 3, "ratingTime" : 886324817 } ... > db.movies.find() { "_id" : ObjectId("5d9dedf8c233e200b8442f66"), "movieId" : 7, "movieTitle" : "Twelve Monkeys (1995)" } { "_id" : ObjectId("5d9dedf8c233e200b8442f67"), "movieId" : 8, "movieTitle" : "Babe (1995)" } { "_id" : ObjectId("5d9dedf8c233e200b8442f68"), "movieId" : 9, "movieTitle" : "Dead Man Walking (1995)" } ... And here is the mongodb solution to our challenge: > db.ratings.aggregate([{$group: {_id: "$movieId", avgRating: {$avg : "$rating"}, count: {$sum : 1} }}, {$match : {count : {$gte : 100}}}, {$sort : {avgRating : -1}}, {$limit : 10}, {$lookup : {from: "movies", localField: "_id", foreignField: "movieId", as: "movieDef"}}, {$unwind : "$movieDef"}]).forEach(function(output) {print(output._id + "\t" + output.avgRating + "\t" + output.movieDef.movieTitle) }) You may get lost with all the brackets used here. We use the method aggregate on our collection named ratings. Aggregate is a collection method of mongodb; it accepts several pipeline stages, as a list, such as $group, $match, $sort, $limit, $lookup and $unwind. You can see these are the ones I used. Stage $group grouped the collection of documents by movieID and adds a couple of computed fields into these documents. These are named avgRating and count. Stage $match filters the documents out, which have count less than 100. Guess what stage $sort and $limit do? Ok, I'll skip these ones. $lookup does a lookup to another collection matching our _id with the field movieId from the lookup data. It brings us the entire matched row into an array named movieDef. $unwind gets rid of this array, and each field from the lookup field becomes a separate field in our collection of documents. forEach loops through the documents, now only 10, and sorted by rating. We use function(output) to print the results. It was a long post, I know, but we covered ten different technologies, to prepare an aggregated report from two datasets. I hope it helped to take a quick glance over these technologies. Further Reading Published at DZone with permission of Oguz Eren . See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/one-challenge-with-10-solutions
CC-MAIN-2019-47
refinedweb
4,137
67.35
loader.loadModel m = loader.loadModel("mymodel.egg") The path name specified in the loadModel can be an absolute path, or a relative path. Relative is recommended. If a relative path is used, then Panda3D will search its model path to find the egg file. The model path is controlled by panda's configuration file. Do not forget that loading the model does not, by itself, cause the model to be visible. To cause Panda3D to render the model, you must insert it into the scene graph: m.reparentTo(render) You can read more about The Scene Graph. The path used in the loadModel call must abide by Panda3D's filename conventions. For easier portability, Panda3D uses Unix-style pathnames, even on Microsoft Windows. This means that the directory separator character is always a forward slash, not the Windows backslash character, and there is no leading drive letter prefix. (Instead of a leading drive letter, Panda uses an initial one-letter directory name to represent the drive.) There is a fairly straightforward conversion from Windows filenames to panda filenames. Always be sure to use Panda filename syntax when using a Panda3D library function, or one of the panda utility programs: # WRONG: loader.loadModel("c:\\Program Files\\My Game\\Models\\Model1.egg") # RIGHT: loader.loadModel("/c/Program Files/My Game/Models/Model1.egg") Panda uses the Filename class to store Panda-style filenames; many Panda functions expect a Filename object as a parameter. The Filename class also contains several useful methods for path manipulation and file access, as well as for converting between Windows-style filenames and Panda-style filenames; see the API reference for a more complete list. Filename To convert a Windows filename to a Panda pathname, use code similar to the following: from panda3d.core import Filename winfile = "c:\\MyGame\\Model1.egg" pandafile = Filename.fromOsSpecific(winfile) print pandafile To convert a Panda filename into a Windows filename, use code not unlike this: from panda3d.core import Filename pandafile = Filename("/c/MyGame/Model1.egg") winfile = pandafile.toOsSpecific() print winfile The Filename class can also be used in combination with python's built-in path manipulation mechanisms. Let's say, for instance, that you want to load a model, and the model is in the "model" directory that is in the same directory as the main program's "py" file. Here is how you would load the model: import sys,os import direct.directbase.DirectStart from panda3d.core import Filename # Get the location of the 'py' file I'm running: mydir = os.path.abspath(sys.path[0]) # Convert that to panda's unix-style notation. mydir = Filename.fromOsSpecific(mydir).getFullpath() # Now load the model: model = loader.loadModel(mydir + "/models/mymodel.egg") You need to keep in mind that the standard python functions (like os.remove()) work with OS specific paths. So do not forget to convert your OS Generic paths back to OS Specific paths when using built-in functions. In cases that Panda offers equivalent functions through the Filename class, it is recommended to use that instead.
http://www.panda3d.org/manual/index.php/Loading_Models
CC-MAIN-2018-39
refinedweb
507
50.53
Prev C++ VC ATL STL Set Code Index Headers Your browser does not support iframes. Specializing std::less without an operator < From: Kevin McCarty <kmccarty@gmail.com> Newsgroups: comp.lang.c++.moderated Date: Mon, 5 Dec 2011 15:52:28 -0800 (PST) Message-ID: <f3e1857d-1fd3-4dfa-8ad3-b1ea93fcd2b0@l24g2000yqm.googlegroups.com>. 1) Providing an operator < () would be misleading, for the following reasons. (For explicative purposes please presume the existence of a constructor Unit::Unit(const char *) that does the right thing when used as in the code fragments below.). 2) However, given the function unsigned Unit::get() const, which simply returns the wrapped unsigned value, writing an override of std::less<Unit> to provide a well-defined (but in general arbitrary) total ordering is trivial, as below. And since no one writes "std::less<T>()(a, b)" when what they really mean is to write "a < b" -- that is, std::less is seldom used explicitly aside from in the context of needing a total ordering -- I feel it is not misleading. #include <functional> .... namespace std { template <> struct less<Unit> : binary_function <Unit, Unit, bool> { bool operator() (const Unit & x, const Unit & y) const { return x.get() < y.get(); } }; } Should I nevertheless feel morally bound by the advice of _Effective STL_, and require that all users of this class use their own functor (even if a default one is provided), despite the excess verbiage? //? Thanks in advance for any responses, - Kevin B. McCarty -- [ See for info about ] [ comp.lang.c++.moderated. First time posters: Do.
https://preciseinfo.org/Convert/Articles_CPP/Set_Code/C++-VC-ATL-STL-Set-Code-111206015228.html
CC-MAIN-2022-05
refinedweb
254
54.12
scala runs on the jvm and the syntax should be quite familiar to java developers. hello world in scala looks like this: object HelloWorld { def main(args: Array[String]) { println("Hello, world!") } } there are some interesting things about this. first, the object declaration means that HelloWorld is a singleton object. that is also why the main 'method' is not declared static. as HelloWorld is a singleton main is 'globally available' and doesn't need to be static. static members (methods) or fields do not exist in scala. second, main doesn't declare a return type. it's called a procedure method. as said, scala runs on the jvm. because of that scala classes can interact with existing java classes. for example you can import and use java classes: import java.util.{Date, Locale} import java.text.DateFormat import java.text.DateFormat._ these import statements show that you can import multiple classes from one packge with that curly braces notation. and you can import all names from a package or class with the underscore notation. the above import makes members of DateFormat available, so you can call the static method getDateInstance() directly. Why not use the * notation for the imports? * is a valid identifier in scala. you'll see why, soon. Methods with a single argument can be called in infix syntax in scala that means the method format of DateFormat can be called like in the following example: val now = new Date val df = getDateInstance() val str = df format now this call is equivalent to df.format(now). at the first look this is not quite an interesting feature but it offers some possibilities, one of which i'll show next. in scala 'everything is an object', e.g. primitive types as well. then remember that * is a valid identifier and the infix syntax for methods. then look at this code and try to figure out how it works: 2 + 3 * 4 / 2 this is equivalent to 2.+(3.*(4./(2))) all the integers are objects and the operators are methods of these objects. operators are method calls. this was my first look at scala. more looks coming soon, for sure ...
http://stronglytypedblog.blogspot.com/2008/06/first-look-at-scala_05.html
CC-MAIN-2017-17
refinedweb
362
68.36
Hi all, I have a project that I've been trying to do where you need to create a triangle using String objects as well as loops. The end result has to look like this: (the triangle needs to be complete at the bottom line but I can't get it to turn out right on this message board) Code : * * * * * * * ****** We are given the example using this code: Code : public class Triangle { public static void main(String[] args) { int length = 16; String border = new String(""); //declaring string object named border for(int n = 1; n <= length; n++) { border+='*'; } //*************************************** String spaces = new String(""); for(int n = 1; n <= border.length()-2; n++) { spaces+=' '; } //**************************************** int slength = 2; String sides = new String(""); for(int n = 1; n <= slength; n++) { sides+='*' + spaces + "*\n"; } String box = border + '\n' + sides + border + '\n'; System.out.println(box); } } I have no clue how get each row to descend by one space. It's really confusing me logically especially since for loops are involved. This is what I have so far... even though it's not much. Code : public class Triangle2 { public static void main(String[] args) { int length = 1; String tip = new String(""); for(int n=1; n<=1; n++) { tip+='*'; } //****************************************** System.out.println(tip); String spaces = new String(""); for(int n=1; n<=1; n++) { } } } Any help would be appreciated!
http://www.javaprogrammingforums.com/%20loops-control-statements/16135-building-triangle-using-strings-loops-program-trouble-printingthethread.html
CC-MAIN-2015-27
refinedweb
223
74.32
I cannot lie, but I just learned to distinguish an African elephant from an Asian elephant not long ago. -- EleaidCharity On the other hand, the state of the art ImageNet classification model can detect 1000 classes of objects at an accuracy of 82.7% including those two types of elephants of course. ImageNet models are trained over 14 millions of images and can figure out the differences between objects. Have you wondered where does the model focusing on when looking at images, or shall we ask should we trust the model instead? There are two approaches we can take to solve the puzzle. The hard way. Crack open the state of the art ImageNet model by studying the paper, figuring out the math, implementing the model and hopefully, in the end, understand how it works. Or The easy way. Become model agnostic, and we treat the model as a black box. We have control of the input image, so we tweak it. We change or hide some parts of the image that make sense to us. Then we feed the tweaked image to the model and see what it think of it. The second approach is what we will be experimenting Install it with pip as usual, pip install lime Let's jump right in. There are many Keras models for image classification with weights pre-trained on ImageNet. You can pick one here at Available models. I am going to try my luck with InceptionV3. It might take a while to download the pre-trained weight for the first time. from keras.applications import inception_v3 as inc_net inet_model = inc_net.InceptionV3() We choose the photo with two elephants walking side by side, it's a great example to test our model with. The code pre-processes the image for the model, and the model makes the prediction. images = transform_img_fn([os.path.join('data','asian-african-elephants.jpg')]) preds = inet_model.predict(images) for x in decode_predictions(preds)[0]: print(x) The output is not so surprising, the Asian a.k.a the India elephant is standing in front, taking quite a lot of space of the image, no wonder it gets the highest score. ('n02504013', 'Indian_elephant', 0.9683744) ('n02504458', 'African_elephant', 0.01700191) ('n01871265', 'tusker', 0.003533815) ('n06359193', 'web_site', 0.0007669711) ('n01694178', 'African_chameleon', 0.00036488983) The model is making the right prediction, now let's ask it to give us an explanation. We do this by first creating a LIME explainer, it only asks for our test image model.predict import lime from lime import lime_image explainer = lime_image.LimeImageExplainer() explanation = explainer.explain_instance(images[0], inet_model.predict, top_labels=5, hide_color=0, num_samples=1000) Let's see what the model is looking at to predict the Indian elephant. model.predict And the LIME explainer need to know which class by class index we want an explanation. I wrote a simple function to make it easy Indian_elephant = get_class_index("Indian_elephant") It turns out the class index of "Indian_elephant" equals to 385. Let's ask the explainer to show the magic where the model is forcing on. from skimage.segmentation import mark_boundaries temp, mask = explanation.get_image_and_mask(Indian_elephant, positive_only=True, num_features=5, hide_rest=True) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) This is interesting, the model is also paying attention to the small ear of the Indian elephant. What about the African elephant? African_elephant = get_class_index("African_elephant") temp, mask = explanation.get_image_and_mask(African_elephant, positive_only=True, num_features=5, hide_rest=True) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) Cool, the model is also looking at the big ear of the African elephant when predicting it. It is also looking at part of the text "AFRICAN ELEPHANT". Could it be a coincidence or the model is smart enough to figure out clue by reading annotations on the image? And Finally, let's take a look at what are the 'pros and cons' when the model is predicting an Indian elephant. temp, mask = explanation.get_image_and_mask(Indian_elephant, positive_only=False, num_features=10, hide_rest=False) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) (pros in green, cons in red) It looks like the model is focusing on what we do as well. So far, you might be still agnostic about how the model works, but at least developed some trust towards it. That might not be a terrible thing since now you have one more tool to help you distinguish a good model from a poor one. An example bad model might be focusing on the non-sense background when predicting an object. I am bearly scratching the surface of what LIME can do, I encourage you to explore on other applications like a text model where LIME will tell you what part of the text the model is focusing on when making a decision. Now go ahead, re-exam some deep learning models before they betray you. Introduction to Local Interpretable Model-Agnostic Explanations (LIME) My full source code for this experiment is available here in my GitHub repository.Share on Twitter Share on Facebook
https://www.dlology.com/blog/can-you-trust-keras-to-tell-african-from-asian-elephant/
CC-MAIN-2020-24
refinedweb
831
67.15
The author selected Girls Who Code to receive a donation as part of the Write for DOnations program. Introduction Keras is a neural network API that is written in Python. It runs on top of TensorFlow, CNTK, or Theano. It is a high-level abstraction of these deep learning frameworks and therefore makes experimentation faster and easier. Keras is modular, which means implementation is seamless as developers can quickly extend models by adding modules. TensorFlow is an open-source software library for machine learning. It works efficiently with computation involving arrays; so it’s a great choice for the model you’ll build in this tutorial. Furthermore, TensorFlow allows for the execution of code on either CPU or GPU, which is a useful feature especially when you’re working with a massive dataset.. Prerequisites Before you begin this tutorial you’ll need the following: Step 1 — Data Pre-processing Data Pre-processing is necessary to prepare your data in a manner that a deep learning model can accept. If there are categorical variables in your data, you have to convert them to numbers because the algorithm only accepts numerical figures. A categorical variable represents quantitive data represented by names. In this step, you’ll load in your dataset using pandas, which is a data manipulation Python library. Before you begin data pre-processing, you’ll activate your environment and ensure you have all the necessary packages installed to your machine. It’s advantageous to use conda to install keras and tensorflow since it will handle the installation of any necessary dependencies for these packages, and ensure they are compatible with keras and tensorflow. In this way, using the Anaconda Python distribution is a good choice for data science related projects. Move into the environment you created in the prerequisite tutorial: Run the following command to install keras and tensorflow: - conda install tensorflow keras Now, open Jupyter Notebook to get started. Jupyter Notebook is opened by typing the following command on your terminal: Note: If you're working from a remote server, you'll need to use SSH tunneling to access your notebook. Please revisit step 2 of the prerequisite tutorial for detailed on instructions on setting up SSH tunneling. You can use the following command from your local machine to initiate your SSH tunnel: - ssh -L 8888:localhost:8888 your_username@your_server_ip After accessing Jupyter Notebook, click on the anaconda3 file, and then click New at the top of the screen, and select Python 3 to load a new notebook. Now, you'll import the required modules for the project and then load the dataset in a notebook cell. You'll load in the pandas module for manipulating your data and numpy for converting the data into numpy arrays. You'll also convert all the columns that are in string format to numerical values for your computer to process. Insert the following code into a notebook cell and then click Run: import pandas as pd import numpy as np df = pd.read_csv("") You've imported numpy and pandas. You then used pandas to load in the dataset for the model. You can get a glimpse at the dataset you're working with by using head(). This is a useful function from pandas that allows you to view the first five records of your dataframe. Add the following code to a notebook cell and then run it: df.head() You'll now proceed to convert the categorical columns to numbers. You do this by converting them to dummy variables. Dummy variables are usually ones and zeros that indicate the presence or absence of a categorical feature. In this kind of situation, you also avoid the dummy variable trap by dropping the first dummy. Note: The dummy variable trap is a situation whereby two or more variables are highly correlated. This leads to your model performing poorly. You, therefore, drop one dummy variable to always remain with N-1 dummy variables. Any of the dummy variables can be dropped because there is no preference as long as you remain with N-1 dummy variables. An example of this is if you were to have an on/off switch. When you create the dummy variable you shall get two columns: an on column and an off column. You can drop one of the columns because if the switch isn't on, then it is off. Insert this code in the next notebook cell and execute it: feats = ['department','salary'] df_final = pd.get_dummies(df,columns=feats,drop_first=True) feats = ['department','salary'] defines the two columns for which you want to create dummy variables. pd.get_dummies(df,columns=feats,drop_first=True) will generate the numerical variables that your employee retention model requires. It does this by converting the feats that you define from categorical to numerical variables. You've loaded in the dataset and converted the salary and department columns into a format the keras deep learning model can accept. In the next step, you will split the dataset into a training and testing set. Step 2 — Separating Your Training and Testing Datasets You'll use scikit-learn to split your dataset into a training and a testing set. This is necessary so you can use part of the employee data to train the model and a part of it to test its performance. Splitting a dataset in this way is a common practice when building deep learning models. It is important to implement this split in the dataset so the model you build doesn't have access to the testing data during the training process. This ensures that the model learns only from the training data, and you can then test its performance with the testing data. If you exposed your model to testing data during the training process then it would memorize the expected outcomes. Consequently, it would fail to give accurate predictions on data that it hasn't seen. You'll start by importing the train_test_split module from the scikit-learn package. This is the module that will provide the splitting functionality. Insert this code in the next notebook cell and run: from sklearn.model_selection import train_test_split With the train_test_split module imported, you'll use the left column in your dataset to predict if an employee will leave the company. Therefore, it is essential that your deep learning model doesn't come into contact with this column. Insert the following into a cell to drop the left column: X = df_final.drop(['left'],axis=1).values y = df_final['left'].values Your deep learning model expects to get the data as arrays. Therefore you use numpy to convert the data to numpy arrays with the .values attribute. You're now ready to convert the dataset into a testing and training set. You'll use 70% of the data for training and 30% for testing. The training ratio is more than the testing ratio because you'll need to use most of the data for the training process. If desired, you can also experiment with a ratio of 80% for the training set and 20% for the testing set. Now add this code to the next cell and run to split your training and testing data to the specified ratio: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) You have now converted the data into the type that Keras expects it to be in ( numpy arrays), and your data is split into a training and testing set. You'll pass this data to the keras model later in the tutorial. Beforehand you need to transform the data, which you'll complete in the next step. Step 3 — Transforming the Data When building deep learning models it is usually good practice to scale your dataset in order to make the computations more efficient. In this step, you'll scale the data using the StandardScaler; this will ensure that your dataset values have a mean of zero and a unit variable. This transforms the dataset to be normally distributed. You'll use the scikit-learn StandardScaler to scale the features to be within the same range. This will transform the values to have a mean of 0 and a standard deviation of 1. This step is important because you're comparing features that have different measurements; so it is typically required in machine learning. To scale the training set and the test set, add this code to the notebook cell and run it: from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) Here, you start by importing the StandardScaler and calling an instance of it. You then use its fit_transform method to scale the training and testing set. You have scaled all your dataset features to be within the same range. You can start building the artificial neural network in the next step. Step 4 — Building the Artificial Neural Network Now you will use keras to build the deep learning model. To do this, you'll import keras, which will use tensorflow as the backend by default. From keras, you'll then import the Sequential module to initialize the artificial neural network. An artificial neural network is a computational model that is built using inspiration from the workings of the human brain. You'll import the Dense module as well, which will add layers to your deep learning model. When building a deep learning model you usually specify three layer types: - The input layer is the layer to which you'll pass the features of your dataset. There is no computation that occurs in this layer. It serves to pass features to the hidden layers. - The hidden layers are usually the layers between the input layer and the output layer—and there can be more than one. These layers perform the computations and pass the information to the output layer. - The output layer represents the layer of your neural network that will give you the results after training your model. It is responsible for producing the output variables. To import the Keras, Sequential, and Dense modules, run the following code in your notebook cell: import keras from keras.models import Sequential from keras.layers import Dense You'll use Sequential to initialize a linear stack of layers. Since this is a classification problem, you'll create a classifier variable. A classification problem is a task where you have labeled data and would like to make some predictions based on the labeled data. Add this code to your notebook to create a classifier variable: classifier = Sequential() You've used Sequential to initialize the classifier. You can now start adding layers to your network. Run this code in your next cell: classifier.add(Dense(9, kernel_initializer = "uniform",activation = "relu", input_dim=18)) You add layers using the .add() function on your classifier and specify some parameters: The first parameter is the number of nodes that your network should have. The connection between different nodes is what forms the neural network. One of the strategies to determine the number of nodes is to take the average of the nodes in the input layer and the output layer. The second parameter is the kernel_initializer.When you fit your deep learning model the weights will be initialized to numbers close to zero, but not zero. To achieve this you use the uniform distribution initializer. kernel_initializeris the function that initializes the weights. The third parameter is the activationfunction. Your deep learning model will learn through this function. There are usually linear and non-linear activation functions. You use the reluactivation function because it generalizes well on your data. Linear functions are not good for problems like these because they form a straight line. The last parameter is input_dim, which represents the number of features in your dataset. Now you'll add the output layer that will give you the predictions: classifier.add(Dense(1, kernel_initializer = "uniform",activation = "sigmoid")) The output layer takes the following parameters: The number of output nodes. You expect to get one output: if an employee leaves the company. Therefore you specify one output node. For kernel_initializeryou use the sigmoidactivation function so that you can get the probability that an employee will leave. In the event that you were dealing with more than two categories, you would use the softmaxactivation function, which is a variant of the sigmoidactivation function. Next, you'll apply a gradient descent to the neural network. This is an optimization strategy that works to reduce errors during the training process. Gradient descent is how randomly assigned weights in a neural network are adjusted by reducing the cost function, which is a measure of how well a neural network performs based on the output expected from it. The aim of a gradient descent is to get the point where the error is at its least. This is done by finding where the cost function is at its minimum, which is referred to as a local minimum. In gradient descent, you differentiate to find the slope at a specific point and find out if the slope is negative or positive—you're descending into the minimum of the cost function. There are several types of optimization strategies, but you'll use a popular one known as adam in this tutorial. Add this code to your notebook cell and run it: classifier.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"]) Applying gradient descent is done via the compile function that takes the following parameters: optimizeris the gradient descent. lossis a function that you'll use in the gradient descent. Since this is a binary classification problem you use the binary_crossentropy lossfunction. - The last parameter is the metricthat you'll use to evaluate your model. In this case, you'd like to evaluate it based on its accuracy when making predictions. You're ready to fit your classifier to your dataset. Keras makes this possible via the .fit() method. To do this, insert the following code into your notebook and run it in order to fit the model to your dataset: classifier.fit(X_train, y_train, batch_size = 10, epochs = 1) The .fit() method takes a couple of parameters: The first parameter is the training set with the features. The second parameter is the column that you're making the predictions on. The batch_sizerepresents the number of samples that will go through the neural network at each training round. epochsrepresents the number of times that the dataset will be passed via the neural network. The more epochs the longer it will take to run your model, which also gives you better results. You've created your deep learning model, compiled it, and fitted it to your dataset. You're ready to make some predictions using the deep learning model. In the next step, you'll start making predictions with the dataset that the model hasn't yet seen. Step 5 — Running Predictions on the Test Set To start making predictions, you'll use the testing dataset in the model that you've created. Keras enables you to make predictions by using the .predict() function. Insert the following code in the next notebook cell to begin making predictions: y_pred = classifier.predict(X_test) Since you've already trained the classifier with the training set, this code will use the learning from the training process to make predictions on the test set. This will give you the probabilities of an employee leaving. You'll work with a probability of 50% and above to indicate a high chance of the employee leaving the company. Enter the following line of code in your notebook cell in order to set this threshold: y_pred = (y_pred > 0.5) You've created predictions using the predict method and set the threshold for determining if an employee is likely to leave. To evaluate how well the model performed on the predictions, you will next use a confusion matrix. Step 6 — Checking the Confusion Matrix In this step, you will use a confusion matrix to check the number of correct and incorrect predictions. A confusion matrix, also known as an error matrix, is a square matrix that reports the number of true positives(tp), false positives(fp), true negatives(tn), and false negatives(fn) of a classifier. - A true positive is an outcome where the model correctly predicts the positive class (also known as sensitivity or recall). - A true negative is an outcome where the model correctly predicts the negative class. - A false positive is an outcome where the model incorrectly predicts the positive class. - A false negative is an outcome where the model incorrectly predicts the negative class. To achieve this you'll use a confusion matrix that scikit-learn provides. Insert this code in the next notebook cell to import the scikit-learn confusion matrix: from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm The confusion matrix output means that your deep learning model made 3305 + 375 correct predictions and 106 + 714 wrong predictions. You can calculate the accuracy with: (3305 + 375) / 4500. The total number of observations in your dataset is 4500. This gives you an accuracy of 81.7%. This is a very good accuracy rate since you can achieve at least 81% correct predictions from your model. Outputarray([[3305, 106], [ 714, 375]]) You've evaluated your model using the confusion matrix. Next, you'll work on making a single prediction using the model that you have developed. Step 7 — Making a Single Prediction In this step you'll make a single prediction given the details of one employee with your model. You will achieve this by predicting the probability of a single employee leaving the company. You'll pass this employee's features to the predict method. As you did earlier, you'll scale the features as well and convert them to a numpy array. To pass the employee's features, run the following code in a cell: new_pred = classifier.predict(sc.transform(np.array([[0.26,0.7 ,3., 238., 6., 0.,0.,0.,0., 0.,0.,0.,0.,0.,1.,0., 0.,1.]]))) These features represent the features of a single employee. As shown in the dataset in step 1, these features represent: satisfaction level, last evaluation, number of projects, and so on. As you did in step 3, you have to transform the features in a manner that the deep learning model can accept. Add a threshold of 50% with the following code: new_pred = (new_pred > 0.5) new_pred This threshold indicates that where the probability is above 50% an employee will leave the company. You can see in your output that the employee won't leave the company: Outputarray([[False]]) You might decide to set a lower or higher threshold for your model. For example, you can set the threshold to be 60%: new_pred = (new_pred > 0.6) new_pred This new threshold still shows that the employee won't leave the company: Outputarray([[False]]) In this step, you have seen how to make a single prediction given the features of a single employee. In the next step, you will work on improving the accuracy of your model. Step 8 — Improving the Model Accuracy If you train your model many times you'll keep getting different results. The accuracies for each training have a high variance. In order to solve this problem, you'll use K-fold cross-validation. Usually, K is set to 10. In this technique, the model is trained on the first 9 folds and tested on the last fold. This iteration continues until all folds have been used. Each of the iterations gives its own accuracy. The accuracy of the model becomes the average of all these accuracies. keras enables you to implement K-fold cross-validation via the KerasClassifier wrapper. This wrapper is from scikit-learn cross-validation. You'll start by importing the cross_val_score cross-validation function and the KerasClassifier. To do this, insert and run the following code in your notebook cell: from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import cross_val_score To create the function that you will pass to the KerasClassifier, add this code to the next cell: def make_classifier(): classifier = Sequential() classifier.add(Dense(9, kernel_initializer = "uniform", activation = "relu", input_dim=18)) classifier.add(Dense(1, kernel_initializer = "uniform", activation = "sigmoid")) classifier.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"]) return classifier Here, you create a function that you'll pass to the KerasClassifier—the function is one of the arguments that the classifier expects. The function is a wrapper of the neural network design that you used earlier. The passed parameters are also similar to the ones used earlier in the tutorial. In the function, you first initialize the classifier using Sequential(), you then use Dense to add the input and output layer. Finally, you compile the classifier and return it. To pass the function you've built to the KerasClassifier, add this line of code to your notebook: classifier = KerasClassifier(build_fn = make_classifier, batch_size=10, nb_epoch=1) The KerasClassifier takes three arguments: build_fn: the function with the neural network design batch_size: the number of samples to be passed via the network in each iteration nb_epoch: the number of epochs the network will run Next, you apply the cross-validation using Scikit-learn's cross_val_score. Add the following code to your notebook cell and run it: accuracies = cross_val_score(estimator = classifier,X = X_train,y = y_train,cv = 10,n_jobs = -1) This function will give you ten accuracies since you have specified the number of folds as 10. Therefore, you assign it to the accuracies variable and later use it to compute the mean accuracy. It takes the following arguments: estimator: the classifier that you've just defined X: the training set features y: the value to be predicted in the training set cv: the number of folds n_jobs: the number of CPUs to use (specifying it as -1 will make use of all the available CPUs) Now you have applied the cross-validation, you can compute the mean and variance of the accuracies. To achieve this, insert the following code into your notebook: mean = accuracies.mean() mean In your output you'll see that the mean is 83%: Output0.8343617910685696 To compute the variance of the accuracies, add this code to the next notebook cell: variance = accuracies.var() variance You see that the variance is 0.00109. Since the variance is very low, it means that your model is performing very well. Output0.0010935021002275425 You've improved your model's accuracy by using K-Fold cross-validation. In the next step, you'll work on the overfitting problem. Step 9 — Adding Dropout Regularization to Fight Over-Fitting Predictive models are prone to a problem known as overfitting. This is a scenario whereby the model memorizes the results in the training set and isn't able to generalize on data that it hasn't seen. Typically you observe overfitting when you have a very high variance on accuracies. To help fight over-fitting in your model, you will add a layer to your model. In neural networks, dropout regularization is the technique that fights overfitting by adding a Dropout layer in your neural network. It has a rate parameter that indicates the number of neurons that will deactivate at each iteration. The process of deactivating nerurons is usually random. In this case, you specify 0.1 as the rate meaning that 1% of the neurons will deactivate during the training process. The network design remains the same. To add your Dropout layer, add the following code to the next cell: from keras.layers import Dropout classifier = Sequential() classifier.add(Dense(9, kernel_initializer = "uniform", activation = "relu", input_dim=18)) classifier.add(Dropout(rate = 0.1)) classifier.add(Dense(1, kernel_initializer = "uniform", activation = "sigmoid")) classifier.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"]) You have added a Dropout layer between the input and output layer. Having set a dropout rate of 0.1 means that during the training process 15 of the neurons will deactivate so that the classifier doesn't overfit on the training set. After adding the Dropout and output layers you then compiled the classifier as you have done previously. You worked to fight over-fitting in this step with a Dropout layer. Next, you'll work on further improving the model by tuning the parameters you used while creating the model. Step 10 — Hyperparameter Tuning Grid search is a technique that you can use to experiment with different model parameters in order to obtain the ones that give you the best accuracy. The technique does this by trying different parameters and returning those that give the best results. You'll use grid search to search for the best parameters for your deep learning model. This will help in improving model accuracy. scikit-learn provides the GridSearchCV function to enable this functionality. You will now proceed to modify the make_classifier function to try out different parameters. Add this code to your notebook to modify the make_classifier function so you can test out different optimizer functions: from sklearn.model_selection import GridSearchCV def make_classifier(optimizer): classifier = Sequential() classifier.add(Dense(9, kernel_initializer = "uniform", activation = "relu", input_dim=18)) classifier.add(Dense(1, kernel_initializer = "uniform", activation = "sigmoid")) classifier.compile(optimizer= optimizer,loss = "binary_crossentropy",metrics = ["accuracy"]) return classifier You have started by importing GridSearchCV. You have then made changes to the make_classifier function so that you can try different optimizers. You've initialized the classifier, added the input and output layer, and then compiled the classifier. Finally, you have returned the classifier so you can use it. Like in step 4, insert this line of code to define the classifier: classifier = KerasClassifier(build_fn = make_classifier) You've defined the classifier using the KerasClassifier, which expects a function through the build_fn parameter. You have called the KerasClassifier and passed the make_classifier function that you created earlier. You will now proceed to set a couple of parameters that you wish to experiment with. Enter this code into a cell and run: params = { 'batch_size':[20,35], 'epochs':[2,3], 'optimizer':['adam','rmsprop'] } Here you have added different batch sizes, number of epochs, and different types of optimizer functions. For a small dataset like yours, a batch size of between 20–35 is good. For large datasets its important to experiment with larger batch sizes. Using low numbers for the number of epochs ensures that you get results within a short period. However, you can experiment with bigger numbers that will take a while to complete depending on the processing speed of your server. The adam and rmsprop optimizers from keras are a good choice for this type of neural network. Now you're going to use the different parameters you have defined to search for the best parameters using the GridSearchCV function. Enter this into the next cell and run it: grid_search = GridSearchCV(estimator=classifier, param_grid=params, scoring="accuracy", cv=2) The grid search function expects the following parameters: estimator: the classifier that you're using. param_grid: the set of parameters that you're going to test. scoring: the metric you're using. cv: the number of folds you'll test on. Next, you fit this grid_search to your training dataset: grid_search = grid_search.fit(X_train,y_train) Your output will be similar to the following, wait a moment for it to complete: OutputEpoch 1/2 5249/5249 [==============================] - 1s 228us/step - loss: 0.5958 - acc: 0.7645 Epoch 2/2 5249/5249 [==============================] - 0s 82us/step - loss: 0.3962 - acc: 0.8510 Epoch 1/2 5250/5250 [==============================] - 1s 222us/step - loss: 0.5935 - acc: 0.7596 Epoch 2/2 5250/5250 [==============================] - 0s 85us/step - loss: 0.4080 - acc: 0.8029 Epoch 1/2 5249/5249 [==============================] - 1s 214us/step - loss: 0.5929 - acc: 0.7676 Epoch 2/2 5249/5249 [==============================] - 0s 82us/step - loss: 0.4261 - acc: 0.7864 Add the following code to a notebook cell to obtain the best parameters from this search using the best_params_ attribute: best_param = grid_search.best_params_ best_accuracy = grid_search.best_score_ You can now check the best parameters for your model with the following code: best_param Your output shows that the best batch size is 20, the best number of epochs is 2, and the adam optimizer is the best for your model: Output{'batch_size': 20, 'epochs': 2, 'optimizer': 'adam'} You can check the best accuracy for your model. The best_accuracy number represents the highest accuracy you obtain from the best parameters after running the grid search: best_accuracy Your output will be similar to the following: Output0.8533193637489285 You've used GridSearch to figure out the best parameters for your classifier. You have seen that the best batch_size is 20, the best optimizer is the adam optimizer and the best number of epochs is 2. You have also obtained the best accuracy for your classifier as being 85%. You've built an employee retention model that is able to predict if an employee stays or leaves with an accuracy of up to 85%. Conclusion In this tutorial, you've used Keras to build an artificial neural network that predicts the probability that an employee will leave a company. You combined your previous knowledge in machine learning using scikit-learn to achieve this. To further improve your model, you can try different activation functions or optimizer functions from keras. You could also experiment with a different number of folds, or, even build a model with a different dataset. For other tutorials in the machine learning field or using TensorFlow, you can try building a neural network to recognize handwritten digits or other DigitalOcean machine learning tutorials.
https://www.xpresservers.com/tag/keras/
CC-MAIN-2019-22
refinedweb
4,909
55.44
Comparing Artificial Artists Last Wednesday, “A Neural Algorithm of Artistic Style” was posted to ArXiv, featuring some of the most compelling imagery generated by deep convolutional neural networks (DCNNs) since Google Research’s “DeepDream” post. On Sunday, Kai Sheng Tai posted the first public implementation. I immediately stopped working on my implementation and started playing with his. Unfortunately, his results don’t quite match the paper, and it’s unclear why. I’m just getting started with this topic, so as I learn I want to share my understanding of the algorithm here, along with some results I got from testing his code. In two parts, the paper describes an algorithm for rendering a photo in the style of a given painting: - Run an image through a DCNN trained for image classification. Stop at one of the convolutional layers, and extract the activations of every filter in that layer. Now run an image of noise through the net, and check its activations at that layer. Make small changes to the noisy input image until the activations match, and you will eventually construct a similar image. They call this “content reconstruction”, and depending what layer you do it at you get varying accuracy. 2. Instead of trying to match the activations exactly, try to match the correlation of the activations. They call this “style reconstruction”, and depending on the layer you reconstruct you get varying levels of abstraction. The correlation feature they use is called a Gram matrix: the dot product between the vectorized feature activation matrix and its transpose. If this sounds confusing, see the footnotes. Finally, instead of optimizing for just one of these things, they optimize for both simultaneously: the style of one image, and the content of another image. Here is an attempt to recreate the results from the paper using Kai’s implementation: Not quite the same, and possibly explained by a few differences between Kai’s implementation and the original paper: - Using SGD , while the original paper does not specify what optimization technique is used. In an earlier texture synthesis paper the authors use L-BFGS. - Initializing with the content image rather than noise. - Using the Inception network instead of VGG-19. - To balance the content reconstruction with the style reconstruction, the paper uses a weighting of 1:10e1 or 1:10e2, while Kai uses 1:5e9, which is a huge and unexplained difference. Running even slightly lower around 1:10e8 it converges mainly on the content reconstruction, only vaguely matching the palette of the style image: - As I was writing this, Kai added total variational smoothing. This certainly helps with the high frequency noise, but the fact that the original paper does not mention any similar regularization makes me wonder if they achieve this another way. As a final comparison, consider the images Andrej Karpathy posted from his own implementation. The same large-scale, high-level features are missing here, just like in the style reconstruction of “Seated Nude” above. Beside’s Kai’s, I’ve seen one more implementation from a PhD student named Satoshi: a brief example in Python with Chainer. I haven’t spent as much time with it, as I had to adapt it to run on my CPU due to lack of memory. But I did notice: - It uses content to style ratios in a more similar range to the original paper. Changing this by an order of magnitude doesn’t seem to have as big an effect. - It tries to initialize with noise. - It uses VGG-19. After running Tübingen in the style of The Starry Night with a 1:10e3 ratio and 100 iterations, it seems to converge on something matching the general structure but lacking the overall palette: I’d like to understand this algorithm well enough to generalize it to other media (mainly thinking about sound right now), so if you have an insights or other implementations please share them in the comments! Update I’ve started testing another implementation that popped up this morning from Justin Johnson. His follow the original paper very closely, except for using unequal weights when balancing different layers used for style reconstruction. All the following examples were run for 100 iterations with the default ratio of 1:10e0. Final Update Justin switched his implementation to use L-BFGS and equally weighted layers, and to my eyes this matches the results in the original paper. Here are his results for one of the harder content/style pairs: Other implementations that look great, but I haven’t tested enough: - neural_artistic_style by Anders Boesen Lindbo Larsen, in Python using DeepPy and cuDNN. Example images look great, and are very high resolution. - styletransfer by Eben Olson, in Python using Lasagne. Replicates both style transfer and style reconstruction from noise. Footnotes The definition of the Gram matrix confused me at first, so I wrote it out as code. Using a literal translation of equation 3 in the paper, you would write in Python, with numpy: def gram(layer): N = layer.shape[1] F = layer.reshape(N, -1) M = F.shape[1] G = np.zeros((N, N)) for i in range(N): for j in range(N): for k in range(M): G[i,j] += F[i,k] * F[j,k] return G It turns out that the original description is computed more efficiently than this literal translation. For example, Kai writes in Lua, with Torch: function gram(input) local k = input:size(2) local flat = input:view(k, -1) local gram = torch.mm(flat, flat:t()) return gram end Satoshi computes it for all the layers simultaneously in Python with Chainer: conv1_1F,conv2_1F,conv3_1F,conv4_1F,conv5_1F, = [ reshape2(x) for x in [conv1_1,conv2_1, conv3_1, conv4_1,conv5_1]] conv1_1G,conv2_1G,conv3_1G,conv4_1G,conv5_1G, = [ Fu.matmul(x, x, transa=False, transb=True) for x in [conv1_1F,conv2_1F, conv3_1F, conv4_1F,conv5_1F]] Or again in Python, with numpy and Caffe layers: def gram(x): F = layer.reshape(layer.shape[1], -1) return np.dot(F, F.T)
https://medium.com/@kcimc/comparing-artificial-artists-7d889428fce4?source=post_page---------------------------
CC-MAIN-2019-35
refinedweb
998
52.39
Trey Nash here again, and I would like to discuss a scenario we are all too familiar with. You’ve worked your tail off for the past year, and for the past couple of months you even worked evenings and weekends. Management rewarded your team with two weeks off in appreciation of your efforts. But now that you’re back in the office, you’re hearing rumors percolating up from your tech support department that there are some cases where your application is crashing in the field for mysterious reasons. What do you do? The application happens to have been built with an unhandled exception filter registered via the AppDomain.UnhandledException event. Therefore, you at least know that the application is failing with an InvalidCastException, but you cannot imagine why this is happening. Wouldn’t it be nice if you could live debug on the affected system? Unless you work on-site for your customer or your software is on a laptop and your customer is willing to send it to you, then I doubt you will get this opportunity. What you need is a tool to capture the state of your application while it is failing. Then the customer could capture this information and send it to you. Enter ADPlus. ADPlus is a free tool in the Debugging Tools for Windows package that scripts the CDB debugger allowing you to capture dumps for one or multiple processes on a system. It also offers the following advantages: · ADPlus can monitor desktop applications, services, etc. · ADPlus can monitor multiple processes on the system. When it collects a dump of those processes, it freezes and dumps them simultaneously. This is essential for tracking down problems with inter-process communications. · ADPlus supports xcopy deployment meaning the customer does not need to install anything via Windows Installer, etc. This minimizes configuration changes on the machine, and that is music to customers’ ears. Note: Although ADPlus is xcopy installable, you still have to install the Debugging Tools for Windows package via Windows Installer as that is the only way Microsoft distributes it. However, once you have installed Debugging Tools for Windows once, you can xcopy deploy ADPlus or the entire Debugging Tools for Windows package to another machine. In fact, during development, I find it extremely handy to check in the development tools into the source repository. Debugging Tools for Windows supports this by virtue of the fact that it is xcopy installable.For those of you that are savvy with Windows Installer, you can perform an admin install using the msi for Debugging Tools for Windows and that will allow you to extract the files without actually installing the package on the machine, for example:msiexec /a dbg_x86_6.11.1.404.msi With all of that said, let’s see how ADPlus can help you diagnose problems with .NET applications. In the rest of this post, I will reference the C# 3.0 sample application that I have put together to illustrate using ADPlus to capture an unhandled exception in a .NET application. The code is listed below: using System; using System.Linq; using System.Runtime.Serialization; class A { public void SaySomething() { Console.WriteLine( "Yeah, Peter...." ); throw new BadDesignException(); } } class B : A class C class EntryPoint static void Main() { DoSomething(); static void DoSomething() { Func<int, object> generatorFunc = (x) => { if( x == 7 ) { return new C(); } else { return new B(); } }; var collection = from i in Enumerable.Range( 0, 10 ) select generatorFunc(i); // Let's iterate over each of the items in the collection // // ASSUMING THEY ARE ALL DERIVED FROM A !!!! foreach( var item in collection ) { A a = (A) item; try { a.SaySomething(); catch( BadDesignException ) { // Swallow these here. The programmer chose to // use exceptions for normal control flow which // is *very* poor design. } public class BadDesignException : Exception public BadDesignException() { } public BadDesignException( string msg ) : base( msg ) { } public BadDesignException( string msg, Exception x ) : base( msg, x ) { } protected BadDesignException( SerializationInfo si, StreamingContext ctx ) :base( si, ctx ) { } You can compile this sample code easily by putting it in a file, such as test.cs, and then from either a Visual Studio Command Prompt or from a Windows SDK Command Shell, execute the following: csc /debug+ test.cs Note: The code is contrived and there are many bad things about this code from a design/coding standpoint, but that is intentional for the sake of illustration. For example, one may want to re-think introducing a collection that contains references to System.Object.The code above also uses features that are new to C# 3.0 (for brevity) including lambda expressions and LINQ. If you would like to become more familiar with them, visit the C# website on MSDN or reference one of the excellent books on C# such as Pro LINQ or Accelerated C# 2008. The notable section of the code to focus on is the foreach loop within the EntryPoint.Main() method. In that foreach loop, we are iterating over a collection of objects where we are assuming that they all derive from type A. Thus the code attempts to cast all instances to a reference of type A and since I have intentionally put an instance of type C within that collection, it will fail at some point with an exception of type System.InvalidCastException. Using ADPlus in crash mode, we can capture the exception in the customer’s environment. To illustrate this in action, let’s launch ADPlus in crash mode and monitor the test.exe example application built above by using the following command line: adplus -crash -o c:\temp\test -FullOnFirst -sc c:\temp\test\test.exe Note: I built and tested this code in a directory named c:\temp\test on my machine as I wrote this. Therefore, you will see references to it throughout this post. Incidentally, I have found ADPlus is a lot easier to use if you feed it fully qualified paths. If you get mysterious errors or behavior, and you are using relative paths with ADPlus, try fully qualifying the paths before you beat your head against the wall for too long trying to figure out what could be going wrong. ADPlus is easier to launch if you add the directory where the Debugging Tools for Windows was installed to your PATH environment variable. The command line above assumes this has been done. Now, let me explain the command line options that I used above. I highly recommend that you become familiar with all of the ADPlus command line options. · -crash launches ADPlus in crash mode. This is the mode you want to use if your application is failing because of an unhandled exception. · -o c:\temp\test tells ADPlus that I want the output to be placed in c:\temp\test, which is the directory in which I built the test.exe application. · -FullOnFirst is very important for managed applications.. This tells ADPlus to grab a full process dump on first chance exceptions. It is essential that you capture full dumps for managed applications, otherwise, all of the necessary data regarding the execution engine and the managed heap will be absent from the dump making it impossible to debug effectively. · -sc c:\temp\test.exe is only one of the ways you can point ADPlus to an application to monitor. In this case, we’re instructing ADPlus to tell the debugger to explicitly launch the application. Had it been a service, or if the application you want to monitor is already running, we would probably have used –p to attach to its PID or –pn to attach to the process by name. Notice that I provided the full path to the application. After you launch ADPlus, you are presented with the following dialog unless you use the –quiet option. Once the application is finished executing, go to the directory that you specified in the ADPlus –o command line option and you should see a subdirectory named similarly to what you see in the previous dialog snapshot. For the instance of test.exe I just executed, that directory is named Crash_Mode__Date_05-11-2009__Time_21-12-54PM. Under that directory, there are quite a few files that I have listed below: C:\temp\test\Crash_Mode__Date_05-11-2009__Time_21-12-54PM>dir /bADPlus_report.txtCDBScriptsPID-0__Spawned0__1st_chance_CPlusPlusEH__full_15ac_2009-05-11_21-13-01-332_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-12-55-482_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-12-56-527_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-12-57-370_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-12-58-103_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-12-58-867_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-12-59-663_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-13-00-505_0eac.dmpPID-0__Spawned0__1st_chance_NET_CLR__full_15ac_2009-05-11_21-13-02-689_0eac.dmpPID-0__Spawned0__1st_chance_Process_Shut_Down__full_15ac_2009-05-11_21-13-27-743_0eac.dmpPID-0__Spawned0__2nd_chance_NET_CLR__full_15ac_2009-05-11_21-13-04-140_0eac.dmpPID-0__Spawned0__Date_05-11-2009__Time_21-12-54PM.logProcess_List.txt For each first chance exception, a dump file (.dmp) has been collected. Notice that the dump files can tend to be very large because of the –FullOnFirst option. You can subsequently load these dump files into either Windbg (or its variants) or the Visual Studio Debugger. Personally, I prefer Windbg because I can then load the SOS extension along with the SOSEX extension and dig into the state of the application and the CLR. Using the ADPlus default configuration as we have above, you can see that ADPlus generated dumps for one first-chance C++ exception, eight first-chance CLR exceptions, one second-chance CLR exception and one dump collected during process shut down. If you look in your Debugging Tools for Windows directory, you’ll notice that ADPlus is really just a very complex VBScript. It generates quite a bit of useful information along with any problem dump files. ADPlus_report.txt reports the configuration for ADPlus for this run. This is handy if you need to know what it will do for a specific type of exception. Process_list.txt is generated from executing tlist.exe, another tool that comes with Debugging Tools for Windows. And finally, the CDBScripts subdirectory contains a .cfg file which is the exact debugger script ADPlus generated and subsequently fed to CDB to get the job done. On my machine, when running against the sample application using the command line from the previous section, this file is named PID‑0__Spawned0.cfg. If you’re ever curious or need to know exactly what ADPlus instructed the debugger to do, this file is the source. Note: The reason the name contains a zero PID is because we used the –sc option to launch the application. Had we used the –p or –pn option, the PID in the filename would not be zero. I don’t recommend executing this debugger script in a live debugger as doing so could overwrite the data that you already have collected. Instead, if you need to couple ADPlus with live debugging, you should use the –gs option which I will describe shortly. You’ll notice that in the previous run of ADPlus, quite a few dump files were generated and each one was fairly large. In reality, there are times where a first chance exception does not always indicate a fatal situation. In Accelerated C# 2008, I go into detail regarding how it is poor design practice to implement any sort of control flow or otherwise reasonably expected behavior using exceptions. This is because exceptions are supposed to be truly exceptional events! For more juicy details on how expensive exceptions are, I invite you to read an excellent blog post by Tess Ferrandez on the topic. At any rate, you may encounter situations where you get a lot of dumps for first chance exceptions you are not interested in as this example shows.. To alleviate this situation, you can create an ADPlus configuration file coupled with the !StopOnException command provided by the SOS extension to instruct ADPlus to filter out only the exceptions you’re interested in. To do this, I created a configuration file named filter.config with the following contents: <ADPlus> <!-- Configuring ADPlus to log only exceptions we're interested in --> <Exceptions> <Config> <!-- This is for the CLR exception --> <Code> clr </Code> <Actions1> Log </Actions1> <CustomActions1> .loadby sos mscorwks; !StopOnException System.InvalidCastException 1; j ($t1 = 1) '.dump /ma /u c:\dumps\InvalidCastException.dmp; gn' ; 'gn' </CustomActions1> <ReturnAction1> VOID </ReturnAction1> <Actions2> Log </Actions2> </Config> </Exceptions> </ADPlus> Note: The <CustomActions1> element above is meant to be in one line in the config file. The <CustomActions1> element is the element of interest in this configuration file. This element allows you to specify which commands the debugger should execute on first chance exceptions. Within this element, you can put any valid debugger commands (except windbg GUI-related commands). If you need to execute multiple commands, as I have above, you simply separate them with semicolons. In the <CustomActions1> element shown above, I first load the SOS extension using the .loadby command. Then, I use the !StopOnException command in predicate mode to set the $t1 pseudo-register to 1 if the exception is of type System.InvalidCastException and 0 otherwise. Then, the following j command is used to create a full dump if $t1 is 1 and do nothing otherwise. The gn command in both paths of the j command tells the debugger to go unhandled so that the exception is propagated up rather than swallowed by the debugger. If you were to go handled, thus swallowing the exception, the exception handlers in the code would never see it and the debugger would alter the behavior of the application. And finally, note that the .dump command used to create the dump indicates the path where the dump will be stored. In this case, I am placing it in the c:\dumps directory on my machine. Now you can provide this configuration file to ADPlus using the following command: adplus -crash -o c:\temp\test -FullOnFirst -c c:\temp\test\filter.config -sc c:\temp\test\test.exe Notice that there are fewer dumps collected. Along with the dump for the InvalidCastException, it also captures a C++ exception as well as when the application shuts down. If you open the C++ exception dump in the debugger and inspect the stack, it shows that the CLR is in the process of generating the InvalidCastException. The CLR catches the C++ exception and converts it into a managed exception. The C++ exception dump was generated because I left the –FullOnFirst option in the previous command line. You can eliminate the C++ exception dump by removing -FullOnFirst. I already hinted at the –gs ADPlus command line option earlier in this post. This option allows you to create the debugger scripts ADPlus creates without actually running them in the debugger. For example, if you execute the following command (I executed mine from my c:\temp\test directory): adplus -crash -o c:\temp\test -FullOnFirst -c c:\temp\test\filter.config -gs livedebug You will notice that ADPlus does not actually launch any debugger or your application. Rather, it creates a subdirectory named livedebug. When you go into that directory, you’ll notice it looks similar in layout to the crash directories created in the previous demonstration. On my machine, I end up with the following two files: C:\temp\test\livedebug\ADPlus_report.txtC:\temp\test\livedebug\CDBScripts\PID-0__livedebug.cfg The PID-0__livedebug.cfg file is actually a debugger script file containing debugger commands. All we have to do now is launch our test application in the debugger and then execute this script. Within my c:\temp\test directory, I can launch the debugger using the following command: windbg test.exe Once inside the debugger, I can invoke the ADPlus debugger script by executing the following $$< command: $$<C:\temp\test\livedebug\CDBScripts\PID-0__livedebug.cfg Once you execute the $$< command, you will notice that the script takes over and performs the same actions as ADPlus does when run conventionally from the command line. As a further experiment, edit the filter.config file and remove the gn commands. Now the debugger will wait for user input after executing the custom commands rather than continuing execution of the application. This could be handy if you want the opportunity to perform debugging by hand if ADPlus encounters a certain situation. Throughout this post, I have been focusing on first chance exceptions for the sake of illustration. However, many times, you are only interested in the truly unhandled exceptions. In that case, you would want to capture only second chance exceptions. You certainly would want to capture first chance exceptions if you needed to capture the state at the exact point the exception was thrown and before the OS searches for any handlers. Additionally, if you suspect that the application you are debugging may be handling an exception inappropriately (maybe even blindly swallowing it), then you certainly want to catch first chance exceptions in that case. In this blog post I have introduced you to ADPlus and the utility it provides when troubleshooting problems in the field. ADPlus lends itself well to capturing exceptional situations in the field since it requires no configuration changes on the affected machine thus making it an easy pill for your customers to swallow. You may also find this very useful when working with your Quality Assurance team during the development phase. For example, how many times have you encountered a situation where a problem only presents itself on a dusty old rarely-used machine in the back corner of a random QA engineer’s office? How many times, in such situations, have you felt like the only way to trouble shoot the problem effectively is to install the Visual Studio debugger and start working on that machine? Furthermore, what if the problem only happens on that dusty old machine about once a week. ADPlus can help you avoid that madness by providing an easy mechanism for capturing full process dumps on the troubled machine so you can then take those dumps to your trusty development machine for further debugging and analysis. Have fun out there! You've sold me on ADPlus! Thank you for this informative article. BTW, any ideas on why it's named "ADPlus"? .NET What's New in the BCL in .NET 4 Beta 1 How CLR maps SEH exceptions to managed exception types
http://blogs.msdn.com/b/ntdebugging/archive/2009/05/18/capturing-adplus-clr-crashes.aspx
CC-MAIN-2015-18
refinedweb
3,031
54.12
Source pyramid_sqla / docs / usage.rst Usage and API The pyramid_sqla package contains three things: a SQLAlchemy scoped session, a place for registering SQLAlchemy engines by name, and a declarative base object. You can use all of these together in your application, or just use some of them and ignore the others, as you prefer. Installation Install pyramid_sqla like any Python package, using either "pip install pyramid_sqla" or "easy_install pyramid_sqla". To check out the development repository: "hg clone". Usage Create an application: $ paster create -t pyramid_sqla MyApp It should work out of the box: $ cd MyApp $ python setup.py egg_info $ paster serve development.ini The default application doesn't define any tables or models so it doesn't actually do anything except display some help links. In development.ini change the default database URL to your database: sqlalchemy.url = sqlite:///%(here)s/db.sqlite The default creates a SQLite database in the application directory. If you're using SQLite, you'll have to install it. (If you're still using Python 2.4, you'll also have to install the pysqlite package.) You can add other SQLALchemy engine options such as: sqlalchemy.pool_recycle = 3600 sqlalchemy.convert_unicode = true Engine options are listed under Engine Configuration in the SQLAlchemy manual, and in the Dialects section for particular databases. If you want your engine to always convert String and Text columns to unicode regardless of what the INI file says, edit myapp/__init__.py and change the line: pyramid_sqla.add_engine(settings, prefix="sqlalchemy.") to: pyramid_sqla.add_engine(settings, prefix="sqlalchemy.", convert_unicode=True) In models or views or wherever you need them, access the database session, engine, and declarative base this way: import pyramid_sqla Session = pyramid_sqla.get_session() engine = pyramid_sqla.get_dbengine() Base = pyramid_sqla.get_base() Note that get_session() returns a SQLAlchemy scoped session. This is traditionally assigned to Session with a capital S to remind us it's not a plain session. (Don't confuse SQLAlchmey sessions with HTTP sessions, which are completely different things.) If the application needs to create the database and add initial data, customize myapp/scripts/create_db.py and run it: $ python -m myapp.scripts.create_db development.ini See model examples for examples of model code, and application templates for a detailed description of the differences between this application template and a basic Pyramid template. API (.add_static_route is explained on the Non-Database Features page.) Managed transactions pyramid_sqla has managed transactions. After the view is called, it will automatically commit all database changes unless an uncaught exception occurs, in which case it will roll back the changes. It will also clear the Session for the next request. You can still commit and roll back explicitly in your view, but you'll have to use the transaction module instead of calling the Session methods directly: import transaction transaction.commit() # Or: transaction.abort() You may want to do this if you want to commit a lot of data a little bit at a time. You can also poison the transaction to prevent any database writes during this request, including those performed by other parts of the application or middleware. To do this, call: transaction.doom() Of course, this doesn't affect changes that have already been committed. The implementation is a combination of three packages that work together. transaction is a generic transaction manager. zope.sqlalchemy applies this to SQLAlchemy by exposing a ZopeTransactionExtension, which is a SQLAlchemy session extension (a class that enhances the session's behavior). The repoze.tm2 middleware takes care of the commit or rollback at the end of the request processing. Starting with pyramid_sqla 1.0rc1 and repoze.tm2 1.0b1, the transaction will also be rolled back if the application returns a 4xx or 5xx status, or if the response header 'X-TM-Abort' is present. This is done by a "commit veto" callback in repoze.tm2. You can customize the veto criteria by overriding the callback function in development.ini; see Using a Commit Veto for an example. Disabling the transaction manager If you don't want managed transactions, reconfigure the Session to not have the extension: Session.config(extension=None) and also delete the "egg:repoze.tm2#tm" line in the "[pipeline:main]" section in development.ini. If you disable the manager, you'll have to call Session.commit() or Session.rollback() yourself in your views. You'll also have to configure the application to remove the session at the end of the request. This would be in an event subscriber but I'm not sure which one. Caveat: adding your own session extensions If you modify the extension session option in any way you'll lose the transaction extension unless you re-add it. The extension lives in the semi-private _zte variable in the library. Here's how to add your own extension while keeping the transaction extension: Session.configure(extension=[MyWonderfulExtension(), pyramid_sqla._zte]) Bypassing the transaction manager without disabling it In special circumstances you may want to do a particular database write while allowing the transaction manager to roll back all other writes. For instance, if you have a separate access log database and you want to log all responses, even failures. In that case you can create a second SQLAlchemy session using sqlalchemy.orm.sessionmaker -- one that does not use the transaction extension -- and use that session with that engine to insert and commit the log record. Multiple databases The default configuration in myapp/__init__.py configures one database: import pyramid_sqla as psa psa.add_engine(settings, prefix="sqlalchemy.") To connect to multiple databases, list them all in development.ini under distinct prefixes. For instance: Or: Then modify myapp/__init__.py and put an add_engine() call for each database. The examples below elaborate on the API docs. A default engine plus other engines In this scenario, the default engine is used for most operations, but two other engines are also used occasionally: # Initialize the default engine. pyramid_sqla.add_engine(settings, prefix="sqlalchemy.") # Initialize the other engines. pyramid_sqla.add_engine(settings, name="engine1", prefix="engine1.") pyramid_sqla.add_engine(settings, name="engine2", prefix="engine2.") Queries will use the default engine by default. To use a different engine you have to use the bind= argument the method that executes the query, engine.execute(sql) to run a SQL SELECT or command in a particular engine. Two engines, but no default engine In this scenario, two engines are equally important, and neither is predominent enough to deserve being the default engine. This is useful in applications whose main job is to copy data from one database to another. pyramid_sqla.init_dbsession() pyramid_sqla.add_engine(settings, name="engine1", prefix="engine1.") pyramid_sqla.add_engine(settings, name="engine2", prefix="engine2.") Because there is no default engine, queries will fail unless you specify an engine every time using the bind= argument or engine.execute(sql). Different tables bound to different engines It's possible to bind different ORM classes to different engines in the same database session. Configure your application with no default engine, and then call the Session's .configure method with the binds= argument to specify which classes go to which engines. For instance: pyramid_sqla.add_engine(settings, name="engine1", prefix="engine1.") pyramid_sqla.add_engine(settings, name="engine2", prefix="engine2.") Session = pyramid_sqla.get_dbsession() import myapp.models as models binds = {models.Person: engine1, models.Score: engine2} Session.configure(binds=binds) The keys in the binds dict can be SQLAlchemy ORM classes, table objects, or mapper objects. Logging The default application template is configured to log SQL queries. To change this, adjust the "level" line in the "[logger_sqlalchemy]" section. [logger_sqlalchemy] level = INFO handlers = qualname = sqlalchemy.engine # "level = INFO" logs SQL queries. # "level = DEBUG" logs SQL queries and results. # "level = WARN" logs neither. (Recommended for production systems.) SQLAlchemy has many other loggers; e.g., to show connection pool activity or ORM operations. For details see Configuring Logging in the SQLAlchemy manual. Caution: Don't set the 'echo' engine option (i.e., don't do "sqlalchemy.echo = true"). This sets up a duplicate logger which may cause double logging. Declarative base The library includes a declarative base for convenience, but some people may choose to define their own declarative base in their model instead. And there's one case where you have to create your own declarative base; namely, if you want to modify its constructor args. The cls argument is the most common: it specifies a superclass which all ORM object should inherit. This allows you to define class methods and other methods which are available in all your ORM classes. Reflected tables Reflected tables pose a dilemma because it depends on a live database connection in order to be initialized. But the engine may not be configured yet when the model is imported. pyramid_sqla does not address this issue directly. Pylons 1 models traditionally have an init_model(engine) function which performs any initialization that requires a live connection. Pyramid applications typically do not need this function because the Session, engines, and base are initialized in the pyramid_sqla library before the model is imported. But in the case of reflection, you may need an init_model function. When not using declarative, the ORM classes can be defined at module level in the model, but the table definitions and mappers will have to be set up inside the init_model function using a global statement to set the module globals. When using declarative, we think the entire ORM class must be defined inside the function, again using a global statement to project the values into the module scope. That's unfortunate but we can't think of a way around it. If you can, please tell us.
https://bitbucket.org/sluggo/pyramid_sqla/src/bbcc561358a7/docs/usage.rst
CC-MAIN-2014-23
refinedweb
1,592
50.63
Speech recognition, as the name suggests, refers to automatic recognition of human speech. Speech recognition is one of the most important tasks in the domain of human computer interaction. If you have ever interacted with Alexa or have ever ordered Siri to complete a task, you have already experienced the power of speech recognition. Speech recognition has various applications ranging from automatic transcription of speech data (like voicemails) to interacting with robots via speech. In this tutorial, you will see how we can develop a very simple speech recognition application that is capable of recognizing speech from audio files, as well as live from a microphone. So, let's begin without further ado. Several speech recognition libraries have been developed in Python. However we will be using the SpeechRecognition library, which is the simplest of all the libraries. Installing SpeechRecognition Library Execute the following command to install the library: $ pip install SpeechRecognition Speech Recognition from Audio Files In this section, you will see how we can translate speech from an audio file to text. The audio file that we will be using as input can be downloaded from this link. Download the file to your local file system. The first step, as always, is to import the required libraries. In this case, we only need to import the speech_recognition library that we just downloaded. import speech_recognition as speech_recog To convert speech to text the one and only class we need is the Recognizer class from the speech_recognition module. Depending upon the underlying API used to convert speech to text, the Recognizer class has following methods: recognize_bing(): Uses Microsoft Bing Speech API recognize_google(): Uses Google Speech API recognize_google_cloud(): Uses Google Cloud Speech API recognize_houndify(): Uses Houndify API by SoundHound recognize_ibm(): Uses IBM Speech to Text API recognize_sphinx(): Uses PocketSphinx API Among all of the above methods, the recognize_sphinx() method can be used offline to translate speech to text. To recognize speech from an audio file, we have to create an object of the AudioFile class of the speech_recognition module. The path of the audio file that you want to translate to text is passed to the constructor of the AudioFile class. Execute the following script: sample_audio = speech_recog.AudioFile('E:/Datasets/my_audio.wav') In the above code, update the path to the audio file that you want to transcribe. We will be using the recognize_google() method to transcribe our audio files. However, the recognize_google() method requires the AudioData object of the speech_recognition module as a parameter. To convert our audio file to an AudioData object, we can use the record() method of the Recognizer class. We need to pass the AudioFile object to the record() method, as shown below: with sample_audio as audio_file: audio_content = recog.record(audio_file) Now if you check the type of the audio_content variable, you will see that it has the type speech_recognition.AudioData. type(audio_content) Output: speech_recognition.AudioData Now we can simply pass the audio_content object to the recognize_google() method of the Recognizer() class object and the audio file will be converted to text. Execute the following script: recog.recognize_google(audio_content) Output: 'Bristol O2 left shoulder take the winding path to reach the lake no closely the size of the gas tank degrees office 30 face before you go out the race was badly strained and hung them the stray cat gave birth to kittens the young girl gave no clear response the meal was called before the bells ring what weather is in living' The above output shows the text of the audio file. You can see that the file has not been 100% correctly transcribed, yet the accuracy is pretty reasonable. Setting Duration and Offset Values Instead of transcribing the complete speech, you can also transcribe a particular segment of the audio file. For instance, if you want to transcribe only the first 10 seconds of the audio file, you need to pass 10 as the value for the duration parameter of the record() method. Look at the following script: sample_audio = speech_recog.AudioFile('E:/Datasets/my_audio.wav') with sample_audio as audio_file: audio_content = recog.record(audio_file, duration=10) recog.recognize_google(audio_content) Output: 'Bristol O2 left shoulder take the winding path to reach the lake no closely the size of the gas' In the same way, you can skip some part of the audio file from the beginning using the offset parameter. For instance, if you do not want to transcribe the first 4 seconds of the audio, pass 4 as the value for the offset attribute. As an example, the following script skips the first 4 seconds of the audio file and then transcribes the audio file for 10 seconds. sample_audio = speech_recog.AudioFile('E:/Datasets/my_audio.wav') with sample_audio as audio_file: audio_content = recog.record(audio_file, offset=4, duration=10) recog.recognize_google(audio_content) Output: 'take the winding path to reach the lake no closely the size of the gas tank web degrees office dirty face' Handling Noise An audio file can contain noise due to several reasons. Noise can actually affect the quality of speech to text translation. To reduce noise, the Recognizer class contains adjust_for_ambient_noise() method, which takes the AudioData object as a parameter. The following script shows how you can improve transcription quality by removing noise from the audio file: sample_audio = speech_recog.AudioFile('E:/Datasets/my_audio.wav') with sample_audio as audio_file: recog.adjust_for_ambient_noise(audio_file) audio_content = recog.record(audio_file) recog.recognize_google(audio_content) Output: 'Bristol O2 left shoulder take the winding path to reach the lake no closely the size of the gas tank web degrees office 30 face before you go out the race was badly strained and hung them the stray cat gave birth to kittens the younger again no clear response the mail was called before the bells ring what weather is in living' The output is quite similar to what we got earlier; this is due to the fact that the audio file had very little noise already. Speech Recognition from Live Microphone In this section you will see how you can transcribe live audio received via a microphone on your system. There are several ways to process audio input received via microphone, and various libraries have been developed to do so. One such library is PyAudio. Execute the following script to install the PyAudio library: $ pip install PyAudio Now the source for the audio to be transcribed is a microphone. To capture the audio from a microphone, we need to first create an object of the Microphone class of the Speach_Recogniton module, as shown here: mic = speech_recog.Microphone() To see the list of all the microphones in your system, you can use the list_microphone_names() method: speech_recog.Microphone.list_microphone_names() Output: ['Microsoft Sound Mapper - Input', 'Microphone (Realtek High Defini', 'Microsoft Sound Mapper - Output', 'Speakers (Realtek High Definiti', 'Microphone Array (Realtek HD Audio Mic input)', 'Speakers (Realtek HD Audio output)', 'Stereo Mix (Realtek HD Audio Stereo input)'] This is a list of microphones available in my system. Keep in mind that your list will likely look different. The next step is to capture the audio from the microphone. To do so, you need to call the listen() method of the Recognizer() class. Like the record() method, the listen() method also returns the speech_recognition.AudioData object, which can then be passed to the recognize_google() method. The following script prompts the user to say something in the microphone and then prints whatever the user has said: with mic as audio_file: print("Speak Please") recog.adjust_for_ambient_noise(audio_file) audio = recog.listen(audio_file) print("Converting Speech to Text...") print("You said: " + recog.recognize_google(audio)) Once you execute the above script, you will see the following message: Please say something At this point of time, say whatever you want and then pause. Once you have paused, you will see the transcription of whatever you said. Here is the output I got: Converting Speech to Text... You said: hello this is normally from stack abuse abuse this is an article on speech recognition I hope you will like it and this is just a test speech and when I will stop speaking are you in today thank you for Reading It is important to mention that if recognize_google() method is not able to match the words you speak with any of the words in its repository, an exception is thrown. You can test this by saying some unintelligible words. You should see the following exception: Speak Please Converting Speech to Text... --------------------------------------------------------------------------- UnknownValueError Traceback (most recent call last) <ipython-input-27-41218bc8a239> in <module> 8 print("Converting Speech to Text...") 9 ---> 10 print("You said: " + recog.recognize_google(audio)) 11 12 ~\Anaconda3\lib\site-packages\speech_recognition\__init__.py in recognize_google(self, audio_data, key, language, show_all) 856 # return results 857 if show_all: return actual_result --> 858 if not isinstance(actual_result, dict) or len(actual_result.get("alternative", [])) == 0: raise UnknownValueError() 859 860 if "confidence" in actual_result["alternative"]: UnknownValueError: A better approach is to use the try block when the recognize_google() method is called as shown below: with mic as audio_file: print("Speak Please") recog.adjust_for_ambient_noise(audio_file) audio = recog.listen(audio_file) print("Converting Speech to Text...") try: print("You said: " + recog.recognize_google(audio)) except Exception as e: print("Error: " + str(e)) Conclusion Speech recognition has various useful applications in the domain of human computer interaction and automatic speech transcription. This article briefly explains the process of speech transcription in Python via speech_recognition library and explains how to translate speech to text when the audio source is an audio file or live microphone.
https://stackabuse.com/introduction-to-speech-recognition-with-python/
CC-MAIN-2021-10
refinedweb
1,573
52.19
07 September 2012 15:22 [Source: ICIS news] LONDON (ICIS)--NYMEX light sweet crude futures fell by more than $1.00/bbl on Friday after the US Labor Depart published a report presenting disappointing employment growth in the country. By 13:56 GMT, the front-month October NYMEX WTI contract had touched an intra-day low of $94.08/bbl, a loss of $1.45/bbl compared with Thursday's settlement. The contract then edged a little higher to trade around $94.35/bbl. At the same time the front-month October ICE Brent contract was trading around $112.70/bbl, having hit an intra-day low of $112.48/bbl, a loss of $1.01/bbl against Thursday’s close. The Labor Department said on Friday that the ?xml:namespace> Economists had been predicting employment growth of around 130,000
http://www.icis.com/Articles/2012/09/07/9593841/crude-prices-fall-on-disappointing-employment-data.html
CC-MAIN-2014-41
refinedweb
141
70.09
PyGSL: Python interface for GNU Scientific Library Status of GSL-Library: The gsl-library is since version 1.0 stable and for general use. Read more about it at the [1]GSL homepage. Status of this interface: We are collecting implementations of parts of gsl. So the interface is not complete. We are looking forward to contributions of new submodules, while maintaining the awailabel code. Requirements: To build the interface, you will need - [2]gsl-1.2 or better, - [3]python2.2 or better and - [4]NumPy, and - a ANSI C compiler (i.e. gcc). Retrieving the Interface: You can download pygsl at [5] Installing GSL interface: Uninstall the old version of pygsl gsl-config must be on your path gzip -d -c pygsl-x.y.z.tar.gz|tar xvf- cd pygsl-x.y.z #do this with your prefered python version #to set the gsl location explicitly use setup.py --gsl-prefix=/path/to/gsl #If your are using cvs, remove your build directory. python setup.py build # Running only python setup.py # can result in an error. So if you see an error runing setup.py please run python setup.py build # change to a id, that is allowed to do installation python setup.py install Ready.... Using pygsl Do NOT test the interface in the distribution root directory! -- please install it first and then change to the tests directory and execute python run_test.py If you want to execute it in the distribution root directory, please run python setup.py build_ext -i first! It will put the requried binary files into the pygsl directory. Just write in python import pygsl.sf print "%g+/-%g"%pygsl.sf.erf(1) or import pygsl.rng rng=pygsl.rng.rng() print rng.gaussian(1.0) You may set the environment variable LD_LIBRARY_PATH to find the gsl shared object. Uninstall GSL interface: rm -r "python install path"/lib/python"version"/site-packages/pygsl Supported Platforms: - Linux with python2.* and gsl-1.x - Mac OS X with python2.* and gsl-1.x - Win32 with python2.* and gsl-1.x but is supposed to compile and run on any posix platform Testing: The directory test will contain several testsuites, based on python unittest. Change to this directory to run the tests. What is implemented (versus gsl1.10): - Blas - Chebyshev - Combination - Const - Diff - Eigen - Fit - Ieee - Integrate - Interpolation - Linalg - Math - Minimize - Multifit - Multifit_nlin - Multimin - Multiroots - Odeiv - Permutation - Poly - Qrng - Rng - Roots - Siman - Sf - Spline - Statistics What is not implemented yet (versus gsl1.10): - GSL Capabilites not yet wrapped by PyGSL - Sorting - N-tuples - GSL Capabilites partly implemented - Discrete Hankel Transforms See also the [6]TODO file in the distribution. For the exact function volume for a module, please type: import pygsl.sf dir(pygsl.sf) You can do this with the other modules, too. The naming scheme is the same as in gsl. Documentation: There will be a small reference, but the function reference can be found in the doc strings and at the gsl-reference. See also the "examples" directory. Support: Please send mails to [7]the pygsl mailinglist. Developement: You can browse our [8]cvs tree. Type this to check out the actual version: cvs -d:pserver:anonymous@cvs.pygsl.sourceforge.net:/cvsroot/pygsl login Hit return for no password. cvs -z3 -d:pserver:anonymous@cvs.pygsl.sourceforge.net:/cvsroot/pygsl co pygsl The script tools/extract_tool.py generates most of the special function code. ToDo: See TODO. History: - a gsl-interface for python was needed for a project at [9]center for applied informatics cologne. - pygsl-0.0.3 was released at May 23, 2001 - pygsl-0.0.4 was released at January 8, 2002 - pygsl-0.1 was released at August 28, 2002 - pygsl-0.1b was released at May 17, 2003 - oygsl-0.3.0 was released - oygsl-0.3.1 was released - oygsl-0.3.3 was released - oygsl-0.9.0 was released - oygsl-0.9.3 was released on 15. June 2008 - oygsl-0.9.4 will be released soon Thanks: - Fabian Jakobs for blas, linalg and eigen - [10]Jochen Kuepper for pygsl.statistics functions - Leonardo Milano for rpm build support and test - Michael Forbes for Series Acceleration Maintainers: PyGSL is currently maintained by [11]Achim Gaedke and [12]Pierre Schnizer. Thanks to: [13]SourceForge Logo References Visible links 1. 2. 3. 4. 5. 6. 7. mailto:pygsl-discuss@lists.sourceforge.net 8. 9.
https://bitbucket.org/mforbes/pygsl/src/71b97a98a5ce/?at=default
CC-MAIN-2015-27
refinedweb
738
61.73
Crystal will be instantly recognisable as a function to calculate the nth Fibonacci number. It’s so recognisable that if you were to run this with Ruby the program would run correctly. Of course you can run it with Crystal too. Pretty amazing, right? And Crystal is orders of magnitude quicker than Ruby in this test. Of course, the Fibonacci sequence is a terrible benchmark and this isn’t exactly a fair test either, but it’s interesting to see the similarities. As you start to dig into Crystal, however, the differences between this language and Ruby start to emerge. Let’s do that now and make some calls to the Twilio API using Crystal. Getting started with Crystal To use Crystal, you’ll first need to install the compiler. For Linux, follow the installation instructions in the Crystal docs. If you’re on a Mac you can install Crystal with homebrew with the following: $ brew update $ brew install crystal-lang Sadly, Crystal doesn’t currently compile on Windows operating systems. You can check that Crystal is successfully installed by running: $ crystal --version Crystal 0.19.4 (2016-10-21) We need a Twilio account for the next part, so if you haven’t got an account, sign up for free here. We will also need a Twilio number that can send SMS messages. Time to make some API calls with Crystal! Sending an SMS message with Crystal Normally when I write about sending an SMS I would point you towards our official helper libraries. As Crystal is still relatively new, there is no such library, so we’re going to have to roll up our sleeves and investigate the HTTP module. Create a new Crystal file named sms.cr: $ touch sms.cr Open the file and start by requiring the http/client class from the standard library. Start a new function for sending SMS messages using Ruby’s familiar def syntax. We’ll need three arguments; a number to send the message to, the number we’re sending from and the body of the message. require "http/client" def send_sms(to, from, body) end Setting up the HTTP Client Initialise an HTTP::Client with the base URL of the API the port number and whether the application uses TLS. We can then pass a block to this initialiser and the client will be closed once we are done with it. require "http/client" def send_sms(to, from, body) HTTP::Client.new("api.twilio.com", 443, true) do |client| end end Now that we have our client, we need to add our authentication header using our Twilio Account SID and Auth Token as the username and password. I’m keeping my credentials as environment variables so that I don’t accidentally share them publicly. You can set environment variables on the command line with the export command. $ export TWILIO_ACCOUNT_SID=AC123456789abcdef $ export TWILIO_AUTH_TOKEN=1q2w3e4r5t6y7u8i9op0 Use the basic_auth method on the HTTP::Client instance to authenticate the client. require "http/client" def send_sms(to, from, body) HTTP::Client.new("api.twilio.com", 443, true) do |client| client.basic_auth(ENV["TWILIO_ACCOUNT_SID"], ENV["TWILIO_AUTH_TOKEN"]) end end Making the HTTP request Now we can make our request. To send an SMS message we need to make a POST request to the Messages resource. We build the URL using our Account SID. We need to send the properties of our message as application/x-www-form-urlencoded parameters, and we can use Crystal’s built in post_form method for this: require "http/client", }) end end We could now call this function and send our first SMS message from Crystal. But we would only know whether it was a success once we received the message. Let’s do a little more work to check whether the API call was successful and if so print out the message SID otherwise print out the error message. Handling the response Include the JSON module from the standard library and parse the response body. If it’s a success we can look for the SID otherwise the error response will have a message field to tell us what went wrong. require "http/client" require "json", }) result = JSON.parse(response.body) if response.success? puts result["sid"] else puts result["message"] end end end Now add one final line to sms.cr to call the method with your own phone number in the to position, a Twilio number in the from position and your SMS message. # sms.cr def send_sms(to, from, body) # function code end send_sms(ENV["MY_NUMBER"], ENV["MY_TWILIO_NUMBER"], "Hello from Crystal!") Compiling and running Compile and run the function and celebrate your first SMS sent from Crystal! $ crystal sms.cr SMadca071a5bab4848892d9f24863e99e6 The crystal command compiles and runs the file you pass to it. You can also build a fully optimised executable that’s ready for production with the build command. $ crystal build sms.cr --release $ ./sms SMadca071a5bab4848892d9f24863e99e6 We’ve sent our first message, but there’s more to learn before we finish this chapter. JSON parsing using the JSON.parse method is fine, but you’ll remember that I described Crystal as type-inferred at the start of the post. Parsing arbitrary data structures like JSON in a typed language can be awkward and require you to cast your properties to the expected type before using them. As it happens, the type of each property parsed using JSON.parse is JSON::Any and when we call puts with the object it is cast under the hood to a string with to_s. Improved JSON parsing Instead of using this JSON::Any type, we can actually tell Crystal’s JSON parser what to look for with a JSON mapping. This will have the effect of being “easy, type-safe and efficient” according to the documentation. Open up sms.cr again and add simple mappings for our message and error objects. # sms.cr class Message JSON.mapping( sid: String, body: String ) end class Error JSON.mapping( message: String, status: Int32 ) end There are more fields available on each of those objects, but we are only interested in those for now. Now, instead of using JSON.parse we can use the from_json constructor for each of our Message and Error classes. The mapping creates instance variables, getters and setters for the fields we define so we can use dot notation to access them. def send_message(to, from, body) client = HTTP::Client.new("api.twilio.com", 443, true) do |client| client.basic_auth(ENV["TWILIO_ACCOUNT_SID"], ENV["TWILIO_AUTH_TOKEN"]) response = client.post_form("/2010-04-01/Accounts/#{ENV["TWILIO_ACCOUNT_SID"]}/Messages.json", { "To" => to, "From" => from, "Body" => body, }) if response.success? message = Message.from_json(response.body) puts message.sid, message.body else error = Error.from_json(response.body) puts error.status, error.message end end end If you run the file with crystal sms.cr again you will receive your message and be safe in the knowledge that your Message object is now correctly typed and efficiently parsed from the JSON response. Crystal is pretty cool In my early explorations with Crystal I found it as pleasurable to write as Ruby with the addition of type safety and speed. In this post we’ve seen how easy it is to use Crystal to make HTTP requests against the Twilio API and send SMS messages. If you want to see the full code, check out this repository on GitHub. If you like the look of Crystal you can learn more about it at the official site and by checking out the documentation. There are a bunch of interesting projects listed on the awesome-crystal repo and if you’re coming at this from Ruby, like me, take a read through the Crystal for Rubyists book. What do you think about Crystal? Does it look like a promising language to you? Let me know your thoughts in the comments below or drop me an email or message on Twitter.
https://www.twilio.com/blog/2016/11/send-sms-messages-crystal.html
CC-MAIN-2020-50
refinedweb
1,317
64.91
dnsop W. Kumari Internet-Draft Google Intended status: Informational A. Sullivan Expires: May 3,August 5, 2017 Dyn October 30, 2016February 1, 2017 The ALT Special Use Top Level Domain draft-ietf-dnsop-alt-tld-06draft-ietf-dnsop-alt-tld-07 Abstract This document reserves a string (ALT) to be used as a TLD label in non-DNS contexts, or for names that have no meaning in a global context. It also provides advice and guidance to developers developing alternate namespaces. [ Ed. NOTE: This document livesis currently a parked WG document -- as such, all changes are being handled in GitHub at: . Issuesand pull requests happily accepted.a new version will be posted once unparked. It had been suggested (off-list) that the draft should contain <TBD> instead of .ALT, and then make the WG choose a string before publication. A version of the draft like this was published on GitHub ( tree/7988fcf06100f7a17f21e6993b781690b5774472) (and generated no feedback). This version reverts to .ALT -- the chairs stated that the document was adopted with the string .alt, it has been discussed as .alt. IMO, it is more readable as .alt; it would also be a difficult consensus call, boiling down to beauty contests. If the WG selects a different string ("not-dns" had been suggested in the past), the editors will, of course, replace,August 5, . . . . . . . . . . . . . . . . . . 23 1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 23 2. Background . . . . . . . . . . . . . . . . . . . . . . . . . 3 3. The ALT namespace . . . . . . . . . . . . . . . . . . . . . . 4 3.1. Choice of the ALT Name . . . . . . . . . . . . . . . . . 5 4. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 5 4.1. Domain Name Reservation Considerations . . . . . . . . . 5 5. Security Considerations . . . . . . . . . . . . . . . . . . . 6 6. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 67 7. References . . . . . . . . . . . . . . . . . . . . . . . . . 7 7.1. Normative References . . . . . . . . . . . . . . . . . . 7 7.2. Informative References . . 6. . . . . . . . . . . . . . . 7 Appendix A. Changes / Author Notes. . . . . . . . . . . . . . . 78 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 910 "Alternate") as a Special Use Domain ([RFC6761]). This label is intended to be used as the final (rightmost)]. 1.2. Terminology This document assumes familiarity with DNS terms and concepts. Please see [RFC1034] for background and concepts, and [RFC7719] for terminology. Readers are also expected to be familiar with the discussions in [I-D.tldr-sutld-ps][I-D.ietf-dnsop-sutld-ps] (alternate) namespace. o pseudo-TLD: A label that appears in a fully-qualified domain name in the position of a TLD, but which is not registered in the global DNS. This term is in no way intended to be pejorative.. Thesuccess of the DNS makes it a natural starting point for systems that need to name entities in a non-DNS context, or that have no unique meaning in a globalcontext. These name resolutions occur in a namespace distinct from the DNS.In many cases, these systems build a DNS-style tree parallel to, but separate from, the global DNS. They often use a pseudo-TLD to cause resolution in the alternate namespace, using browser plugins, shims in the name resolution process, or simply applications that perform special handling of this particular alternate namespace. An example of such a system is the Tor network's [Dingledine2004] use of the ".onion" Special-Use Top-Level Domain Name (see [RFC7686]). In many cases, the creators of these alternatealternative namespaces have chosen a convenient or descriptive string and started using it. These strings are not registered anywhere nor are they part of the DNS. However they appearHowever, to users and to some applications they appear to be TLDs,TLDs; and issues may arise if they are looked up in the DNS.to the operators of root name servers andat a minimum as well as to any entity viewing the DNS lookups going to the root nameservers. The techniques in this document are primarily intended to address the "Experimental Squatting Problem", the "Land Rush Problem" and "Name Collisions" issues discussed in [I-D.ietf-dnsop-sutld-ps] (whiich contains much additional background, etc). 3. The ALT namespace This document reserves the ALT label, using the [RFC6761] process, for use as a pseudo-TLD. This createsan unmanaged sandboxpseudo-TLD namespace. The ALT label MAY be used in any domain name as a pseudo- TLDpseudo-TLD to signify that this is an alternate (non-DNS) namespace. Alternatenamespace, and should not be looked up in a DNS context. Alternativealternative namespace, they have no significance in the regular DNS context and so should not be looked up in the DNS context. Some of these requests will inevitably leak into the DNS context (for example, because of clicks on a link in a browser that does not have a extension installed that implements the alternate namespace resolution), and so the ALT TLD has been added to the "Locally Served DNS Zones" ( [RFC6303]) registry to limit how far these flow. Groups wishing to create new alternate namespaces MAY create their alternate namespace under a label that names their namespace under the ALT label. They SHOULD choose a label that they expect to be unique and, ideally, descriptive. There is no IANA controlled seems to bewas decidechoose to move under the ALT TLD, but this is not a requirement. Rather, the ALT TLD is being reserved so that current and future projects of a similar nature have a designated place to create alternatealternative resolution namespaces that will not conflict with the regular DNS context. 3.1. Choice of the ALT Name A number of names other than "ALT" were considered and discarded. In order for this technique to be effective the names need to continue to follow both the DNS format and conventions (a prime consideration for alternatealternativeWriters of name resolution APIs and libraries dowhich operate in the DNS context should not needattempt to perform special handing oflook these names.names up in the DNS... 5. Security Considerations One of the motivations for the creation of the alt.alt pseudo-TLD is that unmanaged labels in the managed root name space are subject to unexpected takeovertakeover. This could occur if the manager of the root name space decides to delegate the unmanaged label. Another motivation tofor implementing the .alt namespace to increase useuser privacy for those userswho do use alternate name resolution systems, by limitingsystems; it would limit how far these queries leak (e.g if used on a system which does not implement the alternate resolution system.system). The unmanaged and "registration not required" nature of labels beneath .alt provides the opportunity for an attacker to re-use the chosen label and thereby possibly compromise applications dependent on the special host name. 6. Acknowledgements We would also like to thank Joe Abley, Mark Andrews, Marc Blanchet, John Bond, Stephane Bortzmeyer, David Cake, David Conrad, Patrik Faltstrom, Olafur Gudmundsson, Bob Harold, Paul Hoffman, Joel Jaeggli, Ted Lemon, Edward Lewis, George Michaelson, Ed Pascoe, Arturo Servin, and Paul Vixie for feedback. Christian Grothoff was also very helpful. 7. References 7.1. Normative References [I-D.tldr-sutld-ps] Lemon, T., Droms, R., and W. Kumari, "Special-Use Names Problem Statement", draft-tldr-sutld-ps-04 (work in progress), September 2016., <>. 7.2. Informative References [Dingledine2004] Dingledine, R., Mathewson, N., and P. Syverson, "Tor: The Second-Generation Onion Router", , 8 2004, << tor-design.html>>. [I-D.ietf-dnsop-sutld-ps] Lemon, T., Droms, R., and W. Kumari, "Special-Use Names Problem Statement", draft-ietf-dnsop-sutld-ps-00 (work in progress), October 2016. Appendix A. Changes / Author Notes. [RFC Editor: Please remove this section before publication ] From -06 to -07: o Rolled up the GItHub releases in to a full release. From -07.2 to -07.3 (GitHub point release): Dyn 150 Dow Street Manchester, NH 03101 US Email: asullivan@dyn.com
http://zinfandel.tools.ietf.org/wg/dnsop/draft-ietf-dnsop-alt-tld/draft-ietf-dnsop-alt-tld-07-from-06.wdiff.html
CC-MAIN-2020-34
refinedweb
1,246
55.84
libssh2_base64_decode - decode a base64 encoded string #include <libssh2.h> int libssh2_base64_decode(LIBSSH2_SESSION *session, char **dest, unsigned int *dest_len, const char *src, unsigned int src_len); This function is deemed DEPRECATED and will be removed from libssh2 in a future version. Don't use it! Decode a base64 chunk and store it into a newly allocated buffer. 'dest_len' will be set to hold the length of the returned buffer that '*dest' will point to. The returned buffer is allocated by this function, but it is not clear how to free that memory! The memory that *dest points to is allocated by the malloc function libssh2 uses, but there's no way for an application to free this data in a safe and reliable way! 0 if successful, -1 if any error occurred. This HTML page was made with roffit.
https://libssh2.org/libssh2_base64_decode.html
CC-MAIN-2021-43
refinedweb
137
64.91
Science Fair: How Accurate Is the AC Line Frequency? In the developed world, electricity is delivered via AC - alternating current. One of the specifications of that electric current is the frequency at which it oscillates. Most of the world is tied into electric grids running at either 50 or 60 Hz. In the early part of the 20th century, electric clocks driven by synchronous motors were developed and became popular. Because of that, an impetus existed to insure that the long term stability of the grid frequency was maintained. As technology moved into the electronics age, the idea of synchronizing a clock to the electric grid's frequency was often designed into clocks even if they no longer used synchronous motors. There are three distinct electric grids in the continental United States: the East, the West, and Texas (it really is like a whole 'nother country). The grid frequency is not synchronized between the three grids, but it is independently maintained and synchronized within each region. But how good is it? There are nominally 60 cycles per second, but there are variations due to the loads imposed on the grid. Since the grid is so large, instantaneous control of the frequency is not feasible. But it can be generally directed, and measured. For periods of time when the frequency drifts low, it can be countered with a period of higher frequency. If it were perfectly accurate, there would be 5,184,000 cycles per day on average. We can certainly count them, but we need an unquestioned yardstick against which to compare. GPS is an excellent option. Its stability and accuracy makes it a phenomenal value for stable and accurate time comparisons. The AdaFruit Ultimate GPS module, for example, quotes a 10 ns accuracy of the PPS output. Moreover, since that output is continuously disciplined by the GPS system, it should have (for our purposes), perfect long-term stability. So if we count the number of cycles of the AC frequency that occur between two PPS leading edges, we should get 60 every time. Of course, we won't. The AC frequency is not very well disciplined on a second-by-second basis. So we're sure to see 61 or 59 every once in a while. It's not outside the realm of possibility, in fact, to get 62 or 58. But the hypothesis of the experiment is that over a long enough term, every period of 61 cycles should be balanced by a period of 59. Step 1: The Experimental Apparatus For our experiment, we will be using an Arduino UNO as the main data collector. You'll also need a breadboard, an AdaFruit Ultimate GPS module breakout board, an LM358 Op Amp, a 1N4001 diode, three 20k resistors and a 9 VAC "wall wart" power supply. WARNING DO NOT attempt this experiment without using a commercially manufactured low-voltage AC power supply! AC line voltage is very dangerous, and it's only by using a transformer to reduce it to safe levels can we contemplate performing this experiment. The circuit has two basic pieces. The first simply feeds the GPS PPS signal into digital pin 2 of the Uno, and applies +5 volt power and ground to the Vin and GND pins respectively of the GPS module. The second is more complex. We need to turn the 9 volt AC signal from the AC power supply into a 60 Hz square wave compatible with the Uno. To do that, we anchor one side of the transformer to ground. The other will swing between positive and negative 14 volts or so (remember, the 9 VAC is an RMS value. The actual peak-to-peak voltage is higher than that). That's ok - the operational amplifier we're using has a maximum input voltage of 32 volts. We'll stay well below that. But we cannot feed the negative half of the AC cycle into our amp. We need to use a 1N4001 diode to 'throw away' the negative portion. Since our op amp has high impedance inputs, however, we must not simply allow the input to float when the diode is switched off. A 20k pull-down resistor to ground will take care of that. The output of the diode will be 14 volt "humps" separated by periods of zero volts. To turn that into a logic square wave, we'll use our LM358 wired as a comparator. We'll use two 20k resistors to make a voltage divider and feed 2.5 volts into the inverting pin of one side of the amp. The diode output from the transformer will be the non-inverting input. As a comparator, the op amp will output a high voltage whenever the non-inverting input is higher than the inverting input. So whenever the line voltage is higher than 2.5 volts, the op amp will output high. All other times, it will output low. This is enough for our purposes. We will be counting rising edges. They will occur when the AC voltage climbs above 2.5 volts, which will happen once every cycle. Step 2: The Sketch The uno will be in charge of doing the counting. We will use the serial connection (over USB) from the Uno back to a host computer. This sketch will perform counts of AC cycles on a (GPS) second-by-second basis. Any time that a second has other than 60 cycles, a line with the delta will be printed on the serial port. The host can watch the serial port and gather the deltas at its leisure. We'll also indicate any loss of GPS PPS signaling, which can happen if the GPS loses its fix. #include <Serial.h> #define PPS_PIN 2 #define AC_PIN 3 #define PPS_IRQ 0 #define AC_IRQ 1 #define NOMINAL_FREQUENCY 60 #define PPS_COMPLAINT_RATE 1500 unsigned long last_pps_complaint; unsigned int ac_cycle_count, last_ac_cycle_count; boolean pps_occurred; void pps_isr() { last_ac_cycle_count = ac_cycle_count; ac_cycle_count = 0; pps_occurred = true; } void ac_isr() { ac_cycle_count++; } void setup() { pinMode(PPS_PIN, INPUT); pinMode(AC_PIN, INPUT_PULLUP); Serial.begin(9600); pps_occurred = false; ac_cycle_count = 0; last_pps_complaint = 0; attachInterrupt(PPS_IRQ, pps_isr, RISING); attachInterrupt(AC_IRQ, ac_isr, RISING); } void loop() { if (pps_occurred) { pps_occurred = false; last_pps_complaint = millis(); // no excuse to complain, actually. int delta = NOMINAL_FREQUENCY - last_ac_cycle_count; if (delta) { Serial.print(delta); Serial.print("\r\n"); } } else { if (millis() - last_pps_complaint > PPS_COMPLAINT_RATE) { last_pps_complaint = millis(); Serial.print("Missed PPS\r\n"); } } } Step 3: The Host Program I plugged the USB cable into a Raspberry Pi, but in principle any *nix computer with the appropriate USB-to-serial driver will work. The host program is very simple. It is merely a conduit from the serial port directly to syslog. syslog is an excellent choice, since it will timestamp the lines and keep them in rotated log files. You can just look for the output lines in /var/log/syslog and aggregate the output as desired. #include <stdio.h> #include <string.h> #include <syslog.h> #include <termios.h> #include <unistd.h> int main(int argc, char **argv) { daemon(0, 0); openlog(argv[0], LOG_PERROR, LOG_USER); FILE *port = fopen("/dev/ttyUSB0", "r+"); if (port == NULL) { perror("Error opening port"); return 1; } struct termios t; if (tcgetattr(fileno(port), &t)) { perror("Error getting termios"); return 1; } cfsetspeed(&t, B9600); cfmakeraw(&t); if (tcsetattr(fileno(port), 0, &t)) { perror("Error setting speed"); return 1; } while(1) { char buf[1024]; if (fgets(buf, sizeof(buf), port) == NULL) break; while (buf[strlen(buf) - 1] == '\015' || buf[strlen(buf) - 1] == '\012') buf[strlen(buf) - 1] = 0; syslog(LOG_INFO, "Line monitor reports %s", buf); } } Step 4: Some Helpful Data Mining Script-lets A typical syslog line might look like this: Aug 4 17:16:48 rpi herzmon: Line monitor reports -1 The *nix 'awk' command is very helpful for deconstructing this in preparation for graphing. I like to make CSV files and import those into Excel for graphing. For each line, we want the time and the accumulated cycle debt, which we can get by adding the offsets to a running total. This works pretty well for me: grep herzmon /var/log/syslog | awk '{total+=$9; print $3,",",total}' In the above, $9 refers to the 9th word. If you count it all, you'll find that that's "-1" in the above example. $3 is the third word, which is the "17:16:48" in the example above. The result of this will be lines with the time and the cycle debt at that particular moment, assuming that it started at 0 at the beginning of the syslog file. Excel has a nice CSV import wizard. You can tell it that your file is comma separated and it will fill column A with times and B with numbers. Select column A and B and ask for a scatter plot.
http://www.instructables.com/id/Science-fair-How-accurate-is-the-AC-line-frequency/
CC-MAIN-2017-17
refinedweb
1,459
63.59
Import C++ files directly from Python! Import C or C++ files directly from Python! Let’s try it out. First, if you’re on Linux or OS X, install with the terminal command pip install cppimport. Most cppimport users combine it with pybind11, but you can use a range of methods to create your Python extensions. Raw C extensions, Boost.Python, SWIG all work. Let’s look at a simple C++ extension: /*cppimport <% setup_pybind11(cfg) %> */ #include <pybind11/pybind11.h> namespace py = pybind11; int square(int x) { return x * x; } PYBIND11_PLUGIN(somecode) { pybind11::module m("somecode", "auto-compiled c++ extension"); m.def("square", &square); return m.ptr(); } Save this code as somecode.cpp. Open a python interpreter and run these lines [1]: >>> import cppimport >>> somecode = cppimport.imp("somecode") #This will pause for a moment to compile the module >>> somecode.square(9) 81 I’m a big fan of the workflow that this enables, where you can edit both C++ files and Python and recompilation happens transparently. I want things to be even easier! (Python import hook) Modify the first section of the .cpp file and add “cppimport” on the first line of the file. This MUST be on the first line. This is explained further down. /*cppimport <% setup_pybind11(cfg) %> */ Then import the file using the import hook: >>> import cppimport.import_hook >>> import somecode #This will pause for a moment to compile the module >>> somecode.square(9) 81 What’s actually going on? The technical description: cppimport looks for a C or C++ source file that matches the requested module. If such a file exists, the file is first run through the Mako templating system. The compilation options produced by the Mako pass are then use to compile the file as a Python extension. The extension (shared library) that is produced is placed in the same folder as the C++ source file. Then, the extension is loaded. Simpler language please: Sometimes Python just isn’t fast enough. Or you have existing code in a C++ library. So, you write a Python extension module, a library of compiled code. I recommend pybind11 for C++ to Python bindings or cffi for C to Python bindings. I’ve done this a lot over the years. But, I discovered that my productivity goes through the floor when my development process goes from Edit -> Test in just Python to Edit -> Compile -> Test in Python plus C++. So, cppimport combines the process of compiling and importing an extension in Python so that you can type modulename = cppimport.imp("modulename") and not have to worry about multiple steps. Internally, cppimport looks for a file modulename.cpp. If one is found, it’s run through the Mako templating system to gather compiler options, then it’s compiled and loaded as an extension module. Note that because of the Mako pre-processing, the comments around the configuration block may be omitted. Recompilation only happens when necessary: Compilation should only happen the first time the module is imported. The C++ source is compared with a checksum on each import to determine if the file has changed. Additional dependencies (header files!) can be tracked by adding to the Mako header: cfg['dependencies'] = ['file1.h', 'file2.h'] I need to set the compiler or linker args! cfg['linker_args'] = ['...'] cfg['compiler_args'] = ['...'] cfg['libraries'] = ['...'] cfg['include_dirs'] = ['...'] For example, to use C++11, add: <% cfg['compiler_args'] = ['-std=c++11'] %> I want multiple source files for one extension! cfg['sources'] = ['...'] I need more output! Calling cppimport.set_quiet(False) will result in output that will be helpful in debugging compile errors. Sometimes I need to force a rebuild even when the checksum matches Call cppimport.force_rebuild() before running cppimport.imp(...). I want incremental compiles on extensions with multiple sources. (For the uninitiated, incremental compilation involves only recompiling those source files that have changed or include headers that have changed.) cppimport is built on top of the setuptools and distutils, the standard library for python packaging and distribution. Unfortunately, setuptools does not support incremental compilation. I recommend following the suggestions on this SO answer. That is: - Use ccache to (massively) reduce the cost of rebuilds - Enable parallel compilation. This can be done with cfg['parallel'] = True in the C++ file’s configuration header. I need information about filepaths in my module configuration code! The module name is available as the fullname variable and the C++ module file is available as filepath. For example, <% module_dir = os.path.dirname(filepath) %> Why does the import hook need “cppimport” on the first line of the .cpp file? Modifying the Python import system is a global modification and thus affects all imports from any other package. As a result, to avoid accidentally breaking another package, the import hook uses an “opt in” system where C and C++ files can specify they are meant to be used with cppimport by having a comment including the phrase “cppimport” on the first line of the file. Windows? I don’t know if cppimport works on Windows. If you’re on Windows, try it out and I’ll happily accept a pull request for any issues that you fix. I have reports that cppimport works on Windows with Python 3.6 and Visual C++ 2015 Build Tools. cppimport uses the MIT License Release History Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cppimport/
CC-MAIN-2018-05
refinedweb
895
67.25
I: Pretty easy, huh? You can manually write the episode file (if you look at a few of them, you'll find that it's pretty easy to do so), or you can run schemagen just to get the episode file and discard all the schemas. So that's what' we are planning to do in 2.1. Let us know what you think, while we can still change things... Maybe I'm missing something here. Do you see my error? Using latest snapshot, trying this out. here is common.xsd: <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="" targetNamespace="" xmlns: <xs:complexType </xs:complexType> </xs:schema> and here is carrotlist.xsd: <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="" targetNamespace="" xmlns: <xs:include <xs:element <xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType> </xs:schema> here I go: $ xjc.sh -d ./test/ -episode ./test/common.episode common.xsd parsing a schema... compiling a schema... bar/foo/CarrotType.java bar/foo/ObjectFactory.java bar/foo/package-info.java $ xjc.sh -d ./test/ ./carrotlist.xsd ./test/common.episode parsing a schema... [ERROR] s4s-elt-schema-ns: The namespace of element 'bindings' must be from the schema namespace, ''. line 2 of file:/home/me/test/common.episode [ERROR] s4s-elt-invalid: Element 'bindings' is not a valid element in a schema document. line 2 of file:/home/me/test/common.episode [ERROR] schema_reference.4: Failed to read schema document 'file:/home/me/test/common.episode', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not . line 2 of file:/home/me/test/common.episode Failed to parse a schema. $ Posted by: grizel on October 05, 2006 at 03:18 PM Posted by: kohsuke on October 05, 2006 at 08:56 PM Posted by: seanl on November 15, 2006 at 07:58 AM Posted by: seanl on November 30, 2006 at 12:12 PM Posted by: seanl on November 30, 2006 at 12:49 PM ~/jaxb-ri-20061115/bin/xjc.sh -d examples/jaxb2/src resources/hello.xsd example.episode parsing a schema... [ERROR] s4s-elt-schema-ns: The namespace of element 'bindings' must be from the schema namespace, ''. line 2 of file:/home/slandis/workspace/RestFramework/example.episode [ERROR] s4s-elt-invalid: Element 'bindings' is not a valid element in a schema document. line 2 of file:/home/slandis/workspace/RestFramework/example.episode [ERROR] schema_reference.4: Failed to read schema document 'file:/home/slandis/workspace/RestFramework/example.episode', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not . line 2 of file:/home/slandis/workspace/RestFramework/example.episode Failed to parse a schema. Posted by: seanl on November 30, 2006 at 12:51 PM Posted by: kohsuke on December 11, 2006 at 01:32 PM Posted by: sjd00d on June 28, 2007 at 12:14 PM "...unfortunately this version uses SCD, which makes it non-portable (there are ways to make this portable --- more work ahead.)" Posted by: rogerp on July 19, 2007 at 09:53 AM <xjc schema="src/foo.xsd" package="com.foo.gen" destdir="src/java" removeOldOutput="true" extension="true"> <arg value="-episode"/> <arg value="foo.episode"/> </xjc> Posted by: cliffwd on September 21, 2007 at 10:09 AM Thanks, Sahoo Posted by: ss141213 on October 24, 2007 at 09:58 AM Posted by: lexi on December 09, 2007 at 05:51 AM Posted by: anuragpaliwal on January 25, 2008 at 05:58 AM Posted by: kohsuke on January 25, 2008 at 11:29 AM Posted by: ryokota on March 26, 2008 at 11:30 AM Posted by: kohsuke on March 31, 2008 at 11:36 AM [ERROR] SCD "x-schema::tns" didnt match any schema component Posted by: willemsrb on May 07, 2008 at 06:42 AM
http://weblogs.java.net/blog/kohsuke/archive/2006/09/separate_compil.html
crawl-001
refinedweb
653
58.28
Disable window maximizing/showing in QML I have an application in the QML/C++ that shouldn't be able to be maximized/showed by user. It should stay minimized in taskbar whole time and when it receives a message from a server then it should maximize itself. Is it possible to do it in the QML? I was looking everywhere and I was not able to find anything similar to my issue. - p3c0 Moderators Hi, If you are using Window then set Window visibility to Maximized and then call requestActivate() when messaged is received from server. Hey, like p3c0 already said; the window visibility should do the trick. Example code with a Window-type Component: @import QtQuick 2.2 import QtQuick.Window 2.1 Window { visible: true visibility: "Minimized" }@ For the cpp part where a user shouldn't be able to maximize your application: @void MyApplication::changeEvent(QEvent *e) { if(e->type() == QEvent::WindowStateChange) { e->ignore(); } else { e->accept(); } }@ This should ignore the event by default, i haven't tested it out so you might play around with it a little bit to get it work properly.
https://forum.qt.io/topic/48600/disable-window-maximizing-showing-in-qml
CC-MAIN-2019-04
refinedweb
187
52.29
Below are the Steps to import the XSD file in BODS - Open the local object library and go to the Formats tab. - Right-click XML Schema and click New. The Import XML schema format dialog box opens. - In the Format Name box, name the XML Schema. - For the File name/URL, click Browse to navigate to the XML schema file and open it. - For Namespace, click on the drop down to select the namespace . - In the Root element name list, click on the root element. - Click OK. - The XML schema is imported and we can see it in the local object library. Some tips to avoid XML parse errors at run-time: - The order of the elements in the XML file should have the same order as in XSD. - All mandatory fields specified in the XSD must be available in XML File - The datatype of the elements in the XML file must match with the specification in XSD Thanks a lot …Very informative… 🙂
https://blogs.sap.com/2013/07/08/how-to-import-xml-schema/
CC-MAIN-2018-13
refinedweb
163
80.62
wcrtomb() Convert a wide-character code to a character Synopsis: #include <wchar.h> size_t wcrtomb( char * s, wchar_t wc, mbstate_t * ps); Since: BlackBerry 10.0.0 Arguments: - s - NULL, or a pointer to a location where the function can store the multibyte character. - wc - The wide character that you want to convert. - ps - An internal pointer that lets wcrtomb() be a restartable version of wctomb(); if ps is NULL, wcrtomb() uses its own internal variable. You can call mbsinit() to determine the status of this variable. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The wcrtomb() function determines the number of bytes needed to represent the wide character wc as a multibyte character and stores the multibyte character in the location pointed to by s, to a maximum of MB_CUR_MAX bytes. This function is affected by LC_CTYPE. Returns: The number of bytes stored, or (size_t)-1 if the variable wc is an invalid wide-character code. Errors: - EILSEQ - Invalid wide-character code. - EINVAL - The variable ps points to an invalid conversion state. Classification: Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/w/wcrtomb.html
CC-MAIN-2019-35
refinedweb
207
58.99
Created on 2019-10-16 16:17 by fabioz, last changed 2019-11-11 12:04 by Mark.Shannon. In CPython 3.7 it was possible to do: #include "pystate.h" ... PyThreadState *ts = PyThreadState_Get(); PyInterpreterState *interp = ts->interp; interp->eval_frame = my_frame_eval_func; This is no longer possible because in 3.8 the PyInterpreterState is opaque, so, Py_BUILD_CORE_MODULE needs to be defined defined and "internal/pycore_pystate.h" must be included to set PyInterpreterState.eval_frame. This works but isn't ideal -- maybe there could be a function to set PyInterpreterState.eval_frame? It depends on how you look at the degree to which you are interacting with the runtime. This is a fairly low-level hook into the runtime. So arguably if you are using this API then you should specify being a "core" extension. That said, getting that clever about it is a bit too much. The authors or PEP 523 can correct me if I'm wrong, but it seems like there isn't a good reason to restrict access. So basically, I agree with you. :) How about one of the following? * _PyInterpreterState_SetEvalFrame(_PyFrameEvalFunction eval_frame) * _PyInterpreterState_SetFrameEval(_PyFrameEvalFunction eval_frame) The underscore basically says "don't use this unless you know what you are doing". Or perhaps that is overkill too? "_PyFrameEvalFunction" has an underscore, so perhaps not. Also, it would make sense to have a matching getter. Adding the getter/setters seems perfectly reasonable to me, and I agree they should be underscore prefixed as well. I'm strongly opposed to this change. PEP 523 does not specify what the semantics of changing the interpreter frame evaluator actually is: Is the VM obliged to call the new interpreter? What happens if the custom evaluator leaves the VM in a inconsistent state? Does the VM have to roll back any speculative optimisations it has made? What happens if it the evaluator is changed multiple times by different modules? What if the evaluator is changed when a coroutine or generator is suspended, or in another thread? I could go on... IMO this punches a big hole in the Python execution model, but provides no benefit. Fabio, OOI what are you trying achieve by changing the frame evaluator? @Mark Shannon what I do is change the code object of the frame about to be evaluated to add a programmatic breakpoint, to avoid the need to have the trace function set at contexts that would need to be traced (after changing the frame.f_code it goes on to call the regular _PyEval_EvalFrameDefault), so, the user-code runs at full speed on all contexts (there's still added overhead on a function call to decide if the code object needs to be changed, but that'd happen on the regular tracing code too). Note that this does not change the semantics of anything as it calls the regular _PyEval_EvalFrameDefault, so, the worries you're listing shouldn't be a concern in this particular use-case. Also note that until Python 3.7 this was easy to change, and that's still possible in Python 3.8 (the only thing is that now it's less straightforward). Note that my use is much simpler that the original intent of the frame evaluator -- my use case could be solved by having a callback to change the code object before the frame execution -- but as far as I know, right now, the way to do that is through the frame evaluation API. @Mark are you strongly opposed because we're providing an API for changing the eval function in the CPython API and you think it should be in the private API? Or you objecting to PEP 523 all-up (based on your list of objections)? Either way the PEP was accepted and implemented a while ago and so I'm not quite sure what you are expecting as an outcome short of a repeal of PEP 523 which would require a separate PEP. > IMO this punches a big hole in the Python execution model, but provides no benefit. This PEP is about fixing a Python 3.8 regression. In Python 3.7, it was possible to get and set frame_eval. In Python 3.8, it's no longer possible. One option to fix the regression would be to again expose PyInterpreterState structure... but we are trying to do the opposite: hide more structures, not expose more structures :-/ IMHO private getter and setter functions are perfectly fine. Please ensure that the setter can report an issue. We have too many setters which cannot report an error which is very painful :-( Victor, I don't think this is a regression. `PyThreadState` is an internal opaque data structure, which means we are free to change it. That the `eval_frame` is hard to access is a feature not a bug, as it discourages misuse and enables us to remove it easily, when a better approach becomes available. PEP 523 is quite vague, but the rationale indicates that exposing `eval_frame` is for "a method-level JIT". PEP 523 did not suggest adding an API. If it had (and I had had the time) I would have opposed it more vigorously. IMO, the correct way to change the code object is to set `function.__code__` which can be done easily from either Python or C code. @Fabioz, is there anything preventing you from doing that? @Mark I don't want to change the original function code, I just want to change the code to be executed in the frame (i.e.: as breakpoints change things may be different). Changing the actual function code is a no-go since changing the real function code can break valid user code. By the way, it's still possible to access directly ts->interp if you include "pycore_pystate.h" header, which requires to define Py_BUILD_CORE_MODULE. This header is an internal header. PEP 523 was to give user code the ability to change the eval function. While the work was motivated by our JIT work, supporting debugging was another motivating factor:. There's no C API because at the time there was no need as PyInterpreterState was publicly exposed. I don't think anyone is suggesting to add something to the stable ABI, so this comes down to whether this should be exposed as part of the CPython API or the publicly accessible internal API. Since there is no distinction of "you probably don't want to use this but we won't yank it out from underneath" you I think this belongs in the CPython API somehow. Otherwise someone should propose withdrawing PEP 523 as I think that shifts what the PEP was aiming for. You can also ask on python-dev or the steering council (which I will abstain myself from due to being a co-author of PEP 523). I heard that for debuggers, the PEP 523 is really useful. I recall a huge speedup thanks to this PEP in Jetbrain: It sounds to me like `PyInterpreterState.eval_frame` is being used to lazily modify the bytecode to support breakpoints. I can see no reason why changing the bytecode can't be done via `function.__code__`. Suppose the code-object with the breakpoint add is `bcode`, then to turn on the breakpoint: original_code = f.__code__ f.__code__ = bcode and to turn it off f.__code__ = original_code The JVM supports bytecode instrumentation (via class loaders). It works well, as it provides a clear way for third party tools to modify the behaviour of a particular piece of code without violating any of the invariants of the interpreter. We don't really advertise setting `function.__code__` as a way to add low-impact breakpoints or profiling, but it is there. If this use case is important, and it sounds like it is, then a better option would be to offer library support for adding and removing breakpoints/instrumentation. This would have the advantage of being composable in a way that changing `PyInterpreterState.eval_frame` is not; in other words, it would be possible for one tool to add profiling and another to add breakpoints and have both work correctly. I can write up a PEP if necessary. @Mark I can think of many use-cases which may break if the function code is changed (users can change the code in real-use cases and when they do that they'd loose debugging). So, as long as the function code is part of the public API of Python, the debugger can't really change it for breakpoints (which is a bit different from the frame code, which the user can't really swap and it's not so common to change). Fabio, If the user changes the `__code__` attribute of a function then, AFAICT, your debugger does the wrong thing, but bytecode modification does the right thing. Suppose we have two functions `spam` and `eggs`. Set a break point in `eggs`, set `spam.__code__ = eggs.__code__`, then call `spam`. With bytecode modification, we get the correct result. That is, execution breaks in the source code of `eggs` when `spam` is run. I think your debugger will do the wrong thing as it will execute the original code of `spam`. Could you confirm what it does? But that's not the main issue, IMO. The big problem is that changing out the interpreter is not composable, unlike bytecode modification. Suppose we have MyProfiler and YourDebugger. MyProfiler wants to record calls and YourDebugger wants to support breakpoints. With bytecode modification, and some care, we can do both. Swapping out the interpreter is likely to cause all sorts of errors and confusion.
https://bugs.python.org/issue38500
CC-MAIN-2019-47
refinedweb
1,596
72.87
Introduction to Rollup.js Roll: export function sayHello () { return "Hello!"; } export function sayHi () { return "Hi!"; } And we imported all of these functions in app.js, then we use only sayHello by logging it to the console: import * as Greetings from './greetings.js'; console.log( Greetings.sayHello() ); // Hello! Notice how the output will be, by utilizing Tree-shaking, Rollup will exclude sayHi function because it’s not used: function sayHello () { return "Hello!"; } console: npm install laravel-elixir-rollup-official --save-dev Then, use the Rollup task in your gulp file like this: elixir(mix => { mix.rollup('app.js'); });: elixir(function(mix) { mix.rollup( './resources/assets/js/app.js', './public/dist' ); });. Newsletter Join the weekly newsletter and never miss out on new tips, tutorials, and more. Burnout in Tech The story usually goes something like this: there’s a huge project at work. Maybe it’s the company’s first big client… Laracon 2017 – Location and dates have been announced Laracon 2017 has just launched its new site for the 2017 event and it will be held in NYC on July 25th and 26th. Both…
https://laravel-news.com/introduction-to-rollup-js/
CC-MAIN-2018-39
refinedweb
183
65.42
On behalf of the SDK Team, it is my pleasure to introduce the next big release for the Java SDK, the 2.2 version. The team has been working hard to bring you new awesome features, and today we think it’s time to shed some light on them by releasing a first developer preview. 2.2 brings a host of new features: - Improved N1QL support - Simple Entity Mapping - Various API enhancements - Helper classes for simplified retry handling - Version bumps of some dependencies Jump to the conclusion if you can’t wait to get your hands on the code 🙂 N1QL-related changes Recently, Couchbase Server v4.0 Developer Preview was released. Among the many features of this new version, the integration of our new query language “SQL for Documents” (codenamed N1QL) is a major one. In previous versions of the Java SDK, we already had support for N1QL (especially for the 4th separate Developer Preview that was released by the Query Team earlier this year), and we continued in this way. Since N1QL DP4, we’ve extended the DSL, added a DSL for index management and added bucket password support. DSL Extension Various parts of the DSL have been extended and improved. We’ve made it so the whole N1QL Tutorial can be implemented through the Java DSL. The FROM clause can refer to Document IDs via the USE KEYS clause, while other clauses (like JOIN or NEST) on the other hand refer to Document IDs via ON KEYS clause. The existing DSL for that was just producing KEYS keyword, which is not enough, so both variants were re-implemented as useKeys/ useKeysValues and onKeys/ onKeysValues in the DSL. In N1QL, we have the concept of collection predicates (or comprehensions) with the ANY, EVERY, ARRAY and FIRST operators. We’ve introduced a mini-DSL, Collections, to build such predicates and produce an Expression that represents them, to be used in the rest of the DSL. In some cases, N1QL offers syntax alternatives and defaults. Three of them come to mind as we’ve tackled both a bit differently: - the ASaliasing syntax can sometimes omit the keyword altogether, like in FROM default AS srcvs FROM default src. We elected to only produce the explicit form with the ASkeyword (since the user has to call the as(String alias)method anyway). USE KEYSand ON KEYShave optional alternative syntax USER PRIMARY KEYSand ON PRIMARY KEYS. These are semanticaly equivalent and we only produce the first form. ORDER BYhas a default ordering direction where one doesn’t explicitely ask for DESCnor ASC. We’ve implemented this option in the Sortoperator with the def(Expression onWhatToSort)method. A few functions, some of which were previously implemented in the Functions class, have been added/moved to the ...query.dsl.functions package, in separate helper classes that match the category of the functions inside (eg. StringFunctions). In Expression, we added a few factory methods to construct Expressions from a Statement (like doing a sub-query) and constructing a path: We also added arithmetic operators add, subtract, multiply and divide: Finally, we added another mini-DSL for the CASE statement. CASE can be used to produce alternative results depending on a condition. Either the condition is made explicit in each alternative WHEN clause (the expression there is a conditional one, and this form of CASE is called a “search case”), or the condition is just equality with an identifier/expression given at the beginning of the CASE (like CASE user.gender WHEN "male" THEN 1). Here is an example of the DSL in action: Index Management DSL If you have already played with N1QL, you know that you can create indexes to significantly speed up your queries. Up until now, the only way to create them programatically was to issue a raw String statement. Not any more! The index management DSL offers support for the various operations that relate to indexes, namely: Creating an index: Creating a primary index: Dropping indexes: Building indexes that were deferred upon creation: Notice how most identifiers (index names, namespace/keyspace) are automatically escaped by backticks. Bucket Password Support When querying the N1QL service, we now automatically enrich the request headers with authentication information obtained from the calling Bucket, which allows to query a password-protected bucket transparently (as was already the case for ViewQuerys). Entity Mapping This feature has been requested a lot in the past: the ability to easily map a domain object to a Couchbase document, and vice-versa. We have started exploring this Entity Mapping feature (or ODM, Object Document Mapping) in 2.2. The approach is to provide an API similar to Bucket dedicated to ODM. This API is described in the Repository interface, and one can obtain a repository from a bucket by calling bucket.repository(). Methods in the repository API deal with a new type of Document, the EntityDocument. It sticks to the Document semantic that separate metadata from content (your domain object in this case). The domain object is expected to be annotated (see below) for mapping to work. EntityDocument can be used to explicitely give an id, to be used as primary key in Couchbase, but contrary to the other Document implementations you’re used to, this is optional. As an alternative, you can annotate a String attribute in your domain class with @Id to use its as document id. @Id attribute is never stored into the JSON content in Couchbase… Only attributes that are marked with the @Field annotation are taken into account and made part of the JSON content in database/injected back into the object upon get. This annotation can also bear an alias for the field name: Current limitations (some of which may be lifted in the future) are: @Fieldis only supported on basic type attributes (the ones that can be added to a JsonObject), and in general this ODM is for simple cases. Support for Maps, Lists and custom Converters is in the works. - only classes with zero-arg constructors are supported for ODM. - mapping is done via reflection to get/set values of the attributes the first time (it is internally cached after). Enhancements on the API The existing Bucket API has seen a couple of enhancements. First, a new operation has been introduced to easily check for the existance of a key without paying the cost of actually getting the content of the document. This is the exist(String key) method: Secondly, a feature related to views that was present in the 1.x generation of the SDK but not the 2.x one is the ability to request a bulk get of the documents when doing a view query (the includeDocs option). This is not really hard to do in 2.x when using the async API (using the bulk pattern that relies on RxJava), but it is lacking when using sync API. Indeed, the only way to get each row’s corresponding document in sync mode is to call row.document(), which will block and fire one request per row, serially. This is pretty inefficient! So we’ve reintroduced the includeDocs option in the view querying API. This will trigger efficient async loading of each row’s document in the background, from which the blocking API benefits as well (the rows will already have the Document cached when calling document() on them). You can also use it on the asynchronous API, but there it is more or less a “noop” when you call .document() on the AsyncViewRow. Retry helper This is actually something that was part of release 2.1.2, but it deserves mention in a blog post. One of the benefits of using RxJava is that you can meaningfuly compose Rx operators and benefit from advanced error handling primitives. Among them is the retry and retryWhen variants, that allow to retry an asynchronous flow on various conditions. For example, you may want to retry a get request when receiving BackpressureException (the rate at which you request is too high for the SDK/server to cope) or TemporaryFailureException (the server notified that it was too busy and that requests should be delayed a little bit). You may only want to retry up to 4 times, and wait a bit before doing so… Maybe you even want this delay to grow between each attempt (Exponential Backoff)? Previously this required a little bit of Rx knowledge to implement, but since this is such a common use case we’ve brought you helper classes to easily do that, in the com.couchbase.client.java.util.retry package. The helper can either produce a Function that can be directly passed to an Observable‘s retryWhen operator, or it can also wrap an existing Observable intro a retrying one. To wrap an Observable, use Retry.wrapForRetry(...) static methods. These will allow you to cover basic requirements like retrying for a maximum number of attempts, describing the kind of Delay you want between attempts. For even more advanced use cases, there’s the RetryBuilder (which produces a function to be passed to Rx’s retryWhen operator). This builder allows you to: - choose on which kind of errors to retry (either any(), anyOf(...), allBut(...)). - choose how many times to retry (either once()or max(n)times). - customize the Delayand on which Schedulerto wait ( delay(...)methods). Here is a complete example that matches the requirement expressed above: Dependency updates In this version, RxJava has been bumped to version 1.0.9. We’ve also upgraded the internal dependencies (which are repackaged, so it’s transparent to your application) to the latest bugfix releases. Conclusion We hope that you will enjoy these new features of the 2.2 release. As always, we welcome feedback, especially on the Entity Mapping feature and direction. So go play with it, and tell us what you think on the forums for instance! To get your hands on the 2.2-dp, use the following in a Maven pom.xml (or directly download the jars for core and client): Of course, we also fixed bugs in this developer preview, that will also be part of 2.1.3 official release scheduled next week. We’re not done for 2.2 yet! We’ll have plenty of features upcoming, including more N1QL support, and built-in profiling and metrics. Happy Coding! – The Java SDK Team
https://blog.couchbase.com/javasdk-2-2-dp/
CC-MAIN-2018-34
refinedweb
1,726
51.78
If you are new to YUI3, you might want to visit the YUI3 website before reading. Most of us know how much of a nightmare client-side code can be. We often come into a project that has a long list of included JavaScript and CSS files in the <head> tag of a page (or several pages). Code that should be logically grouped together is often scattered across many files with not so meaningful names. In recent years, client-side debugging tools, such as Firebug, have helped us wade through such messes at runtime. Which of course means that you need to be able build, deploy, and run the application first. Let’s just say that is a less than ideal way to go about navigating and learning the client-side code for a project, and leave it at that. Using the YUI3 global object and loader combined with our own modules and appropriate use of namespacing, we can spare ourselves (and others) from having a massive headache caused by client-side spaghetti (that’s probably been in the basement refrigerator for years and has coagulated into a solid mass held together by dried sauce and moldy Parmesan cheese). The YUI global object is the root dependency for all YUI3 implementations. It is required for any page that uses YUI3, but it is the only dependency that is required. Additional dependencies are fetched by the YUI loader. A good use of the loader is to bundle your own code as custom modules and add them to the global YUI instance. The loader can then be used to fetch your own modules and their dependencies when needed. Following this model is conducive to writing clean, modular, and well thought out code. Those familiar with YUI2 are probably used to including all of the necessary files. Let’s use the TabView widget as an example. YUI2 TabView var myTabs = new YAHOO.widget.TabView("demo"); myTabs.addTab( new YAHOO.widget.Tab({ label: 'Tab One Label', content: ' Tab One Content ', active: true })); myTabs.addTab( new YAHOO.widget.Tab({ label: 'Tab Two Label', content: ' Tab Two Content ' })); myTabs.addTab( new YAHOO.widget.Tab({ label: 'Tab Three Label', content: ' Tab Three Content ' })); myTabs.appendTo(document.body); </script> OK, so that’s 5 files that we need to include for one widget. Now let’s look at YUI3. YUI3 TabView <script type=”text/JavaScript” src=””></script> <script> YUI().use('tabview', function(Y) { var tabview = new Y.TabView({ children: [{ label: 'foo', content: ' foo content ' }, { label: 'bar', content: ' bar content ' }, { label: 'baz', content: ' baz content ' }] }); tabview.render('#demo'); }); </script> The biggest obvious difference here is that in YUI3, the only file we need to include is the core file: yui-min.js. In the use() method, we tell the loader what modules we want. In this case, we only need the ‘tabview’ module. If you’ve worked with YUI2, you know that the list of included files can become quite large (easily up to 15 or more). In addition, you will probably have a lot of your own JavaScript that needs to be included. In YUI3, you can package your code as custom modules, which can then be added to the YUI global object. You can then access your custom modules via the use() method. A major difference you will notice with YUI3 is that it doesn’t have nearly as many widgets out of the box as YUI2. Developers will most likely need to use the YUI3 gallery, or other libraries, in order to find some good widgets that are easy to instantiate and use. This can add many more dependencies. The configuration for such widgets is often fairly complex, but they are added to the global object in the same manner as your own custom modules. Let’s assume that we have five custom modules, each within their own JavaScript file: foo.js, bar.js, app-config.js, app-validation.js, and app-events.js. First, let’s look at a custom module in detail. Let’s say that the example code here is all within the file app-events.js First, we use the YUI.add() method to effectively add this module to the global YUI instance. YUI.add('app-events', function(Y) { //all JavaScript code goes here },'0.0.1', { requires: ['base', 'node', 'event-delegate', 'event', 'node-base', 'app-config', 'app-validation', 'foo', 'bar'] }); The first parameter of the add() method is the module name, i.e., ‘app-events’. Anytime we want to use this module, we will pass ‘app-events’ to the use() method. The second parameter is the body of the module (the executable JavaScript code). The next parameter is the version number of the module. And the last parameter states the dependencies for the module. This tells the YUI loader that it needs to load all of these other modules before executing the code within this module. In the example above, in addition to several YUI3 modules, we are stating that the custom modules ‘app-config’, ‘app-validation’, ‘foo’, and ‘bar’ are needed for the code within ‘app-events’ to be executed. Now, we’ll add a namespace so that we can specify functions and/or variables that can be accessed from external modules. YUI.add('app-events', function(Y) { Y.namespace('com.opi.app.events'); //other JavaScript code here },'0.0.1', { requires: ['base', 'node', 'event-delegate', 'event', 'node-base', 'app-config', 'app-validation', 'foo', 'bar'] }); Then we might create something like this, which is publicly available, via the ‘com.opi.app.events’ namespace. Y.com.opi.app.events.handleRadioButtonClick = function() { Y.com.opi.app.validation.checkInputs(); }; Here we are calling a function that exists within the ‘com.opi.app.validation’ namespace. In this example, that is the namespace for the code within the ‘app-validation’ module. The handleRadioButtonClick() function can now be referenced externally, like so. <input id=”button1″ onclick=”handleClick();” name=”button1″ type=”radio” value=”1″ /> function handleClick() { YUI().use("app-events", function(Y) { Y.com.opi.app.events.handleRadioButtonClick(); }); } However, a better practice would be to leave all of that script out of the main HTML body, and handle it with the YUI ‘event’ module, like so. YUI.add('app-events', function(Y) { Y.namespace('com.opi.app.events'); Y.com.opi.app.events.handleRadioButtonClick = function(e) { Y.com.opi.app.validation.checkInputs(); }; Y.on("click", Y.com.opi.app.events.handleRadioButtonClick, "#button1"); //more JavaScript here },'0.0.1', { requires: ['base', 'node', 'event-delegate', 'event', 'node-base', 'app-config', 'app-validation', 'foo', 'bar'] }); Let’s say we had some links to other pages, and we wanted to do some validation on form inputs before allowing the user to proceed. <div id=”myDiv1″> <a href=”page2.html”>Go to page 2</a> <a href=”page3.html”>Go to page 3</a> </div> We could use the YUI ‘event-delegate’ module and do something like this: YUI.add('app-events', function(Y) { Y.namespace('com.opi.app.events'); Y.com.opi.app.events.handleRadioButtonClick = function(e) { Y.com.opi.app.validation.checkInputs(); }; Y.delegate("click", function(e) { var isValid = Y.com.opi.app.validation.isFormValid(); if (!isValid) { Y.com.opi.app.foo.showDialog("Some fields have invalid data, please correct them before continuing to the next page."); e.halt(); //prevent the click event from continuing, do not allow user to proceed } }, "#myDiv1", "a"); //more JavaScript here },'0.0.1', { requires: ['base', 'node', 'event-delegate', 'event', 'node-base', 'app-config', 'app-validation', 'foo', 'bar'] }); In the code above, we’re referencing another dependency of ‘app-events’, the ‘foo’ module, which contains the showDialog() function. The final important step is to configure the YUI global object to use our custom modules. We are basically telling it which modules we want to be available, and giving the path to the files for each module. In this example, assume the code below is in the <head> tag after all of the included files. YUI({ modules: { foo: { fullpath: '../scripts/foo.js' }, bar: { fullpath: '../scripts/bar.js' }, appConfig: { fullpath: '../scripts/app-config.js' }, appValidation: { fullpath: '../scripts/app-validation.js' }, appEvents: { fullpath: '../scripts/app-events.js' } } }).use('foo', 'bar', 'appConfig', 'appValidation', 'appEvents', function(Y){ }); Now these modules can be accessed at any time via the use() method. Note: In the use() method above, you can see that the parameter values match the name of the configuration parameters that we have specified (i.e., ‘appEvents’). When it comes time to actually use these modules, we will need to use the name that we provided in the YUI.add() method for each module. In the example above, we used ‘app-events’ as the name of the module that we created in app-events.js. Therefore, when it comes time to use that module, we will pass ‘app-events’ to the use() method and not ‘appEvents’ as shown above. If the extent of our behavioral requirements is to react to events, assuming we put all of our event handling within the app-events module, we don’t need to put any JavaScript in the <body>. That keeps things nice and tidy. To summarize, the YUI3 global object and loader gives us a good foundation for client-side code management, while providing easy and efficient means for loading dependencies. It also imposes a sense of structure that hopefully makes us more aware of how we can better organize our code into logical, purposeful, and reusable modules. For more info:
https://objectpartners.com/2011/05/19/keeping-it-clean-making-good-use-of-the-yui3-global-object-and-loader/
CC-MAIN-2019-13
refinedweb
1,568
57.27
Spooky Scary PHP Haunted Array Once upon a time in a development shop not so far away, Arthur was up late at night hacking out some code. Little did he know the array he was about to use was haunted! He felt a chill run down his spine as he typed out his statements, keystroke by keystroke, but foolishly brushed off his subtle premonition. <?php $spell = array("double", "toil", "trouble", "cauldron", "bubble"); foreach ($spell as &$word) { $word = ucfirst($word); } foreach ($spell as $word) { echo $word . "n"; } Alright, so the array wasn’t really haunted, but the output certainly was unexpected: Double Toil Trouble Cauldron Cauldron The reason for this spooky behavior lies in how PHP persists the reference outside of the first foreach loop. $word was still a reference pointing to the last element of the array when the second loop began. The first iteration of the second loop assigned “double” to $word, which overwrote the last element. The second iteration assigned “toil” to $word, overwriting the last element again. By the time the loop read the value of the last element, it had already been trampled several times. For an in-depth explanation of this behavior, I recommend reading Johannes Schlüter’s blog post on the topic, References and foreach. You can also run this slightly modified version and examine its output for better insight into what PHP is doing: <?php $spell = array("double", "toil", "trouble", "cauldron", "bubble"); foreach ($spell as &$word) { $word = ucfirst($word); } var_dump($spell); foreach ($spell as $word) { echo join(" ", $spell) . "n"; } Arthur learned a very important lesson that night and fixed his code using the array’s keys to assign the string back. <?php foreach ($spell as $key => $word) { $spell[$key] = ucfirst($word); } The Phantom Database Connection More and more, PHP is asked to do more than just generate web pages on a daily basis. The number of shell scripts written in PHP are on the rise, and the duties such scripts perform are increasingly more complex, as developers see merit in consolidating development languages. Often times the performance of these scripts are acceptable and the trade off for convenience can be justified. And so Susan was writing a parallel-processing task which resembled the following code: #! /usr/bin/env php <?php $pids = array(); foreach (range(0, 4) as $i) { $pid = pcntl_fork(); if ($pid > 0) { echo "Fork child $pid.n"; // record PIDs in reverse lookup array $pids[$pid] = true; } else if ($pid == 0) { echo "Child " . posix_getpid() . " working...n"; sleep(5); exit; } } // wait for children to finish while (count($pids)) { $pid = pcntl_wait($status); echo "Child $pid finished.n"; unset($pids[$pid]); } echo "Tasks complete.n"; Her code forked children processes to do some long-running work in parallel while the parent process continued on to monitor the children, reporting back when all of them have terminated. Fork child 1634. Fork child 1635. Fork child 1636. Child 1634 working... Fork child 1637. Child 1635 working... Child 1636 working... Fork child 1638. Child 1637 working... Child 1638 working... Child 1637 finished. Child 1636 finished. Child 1638 finished. Child 1635 finished. Child 1634 finished. Tasks complete. Instead of outputting status messages to stdout though, Susan’s supervisor asked her to log the times when processing started and when all of the children finished. Susan extended her code using the Singleton-ized PDO database connection mechanism that was already part of the company’s codebase. #! /usr/bin/env php <?php $db = Db::connection(); $db->query("UPDATE timings SET tstamp=NOW() WHERE name='start time'"); $pids = array(); foreach (range(0, 4) as $i) { ... } while (count($pids)) { ... } $db->query("UPDATE timings SET tstamp=NOW() WHERE name='stop time'"); class Db { protected static $db; public static function connection() { if (!isset(self::$db)) { self::$db = new PDO("mysql:host=localhost;dbname=test", "dbuser", "dbpass"); self::$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } return self::$db; } } Susan expected to see the rows in the timings table updated; the “start time” row should have listed the timestamp when the whole process was kicked off, and the “stop time” should have when everything finished up. Unfortunately, execution threw an exception and the database didn’t mirror her expectations. PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2006 MySQL server has gone away' in /home/susanbrown/test.php:21 Stack trace: #0 /home/susanbrown/test.php(21): PDO->query('UPDATE timers S...') #1 {main} +------------+---------------------+ | name | tstamp | +------------+---------------------+ | start time | 2012-10-13 01:11:37 | | stop time | 0000-00-00 00:00:00 | +------------+---------------------+ Like Arthur’s array, had Susan’s database become haunted? Well, see if you can piece together the puzzle if I give you the following clues: - When a process forks, the parent process is copied as the child. These duplicate processes then continue executing from that point onward side by site. - Static members are shared between all instances of a class. The PDO connection was wrapped as a Singleton, so any references to it across the application all pointed to the same resource in memory. DB::connection() first returned the object reference, the parent forked, the children continued processing while the parent waited, the children processes terminated and PHP cleaned up used resources, and then the parent tried to use the database object again. The connection to MySQL has been closed in a child process, so the final call failed. Naively trying to obtain the connection again before the end logging query wouldn’t help Susan as the same defunct PDO instance would be returned because it’s a Singleton. I recommend avoiding Singletons – they’re really nothing more than fancy OOP’ized global variables which can make debugging difficult. And even though the connection would still be closed by a child process in our example, at least DB::connection() would return a fresh connection if you invoked it before the second query if Singletons weren’t used. But still better would be to understand how the execution environment is cloned when forking and how various resources can be affected across all the processes. In this case, it’s wiser to connect to the database in the parent thread after the children have been forked, and the children would connect themselves if they needed to. The connection shouldn’t be shared. #! /usr/bin/env php <?php $pids = array(); foreach (range(0, 4) as $i) { ... } $db = Db::connection(); $db->query("UPDATE timings SET tstamp=NOW() WHERE name='start time'"); while (count($pids)) { ... } $db->query("UPDATE timings SET tstamp=NOW() WHERE name='stop time'"); An API Worthy of Dr. Frankenstein Mary Shelley’s Frankenstein is a story of a scientist who creates life, but is so repulsed by its ugliness that he abandons it. After a bit of gratuitous death and destruction, Dr. Frankenstein pursues his creation literally to the ends of the earth seeking its destruction. Many of us have breathed life into code so hideous that we later wish we could just run away from it – code so ugly, so obtuse, so tangled that it makes us want to retch, but it only wants love and understanding. Years ago I was toying around with an idea focused on database interfaces and how might they look if they adhered more closely to Unix’s “everything is a file” philosophy: the query would be written to the “file”, the result set would be read from the “file.” One thing lead to another, and after some death and destruction coding of my own, I had written the following class which has little relevance to my original idea: <?php class DBQuery implements Iterator { protected $db; protected $query; protected $result; protected $index; protected $numRows; public function __construct($host, $dbname, $username, $password) { $this->db = new PDO("mysql:dbname=$dbname;host=$host", $username, $password); } public function __get($query) { $this->query = $query; $this->result = $this->db->query($query); return $this->numRows = $this->result->rowCount(); } public function __call($query, $values) { $this->query = $query; $this->result = $this->db->prepare($this->query); $this->result->execute($values); return $this->numRows = $this->result->rowCount(); } public function clear() { $this->index = 0; $this->numRows = 0; $this->query = ""; $this->result->closeCursor(); } public function rewind() { $this->index = 0; } public function current() { return $this->result->fetch(PDO::FETCH_ASSOC, PDO::FETCH_ORI_ABS, $this->index); } public function key() { return $this->index; } public function next() { $this->index++; } public function valid() { return ($this->index < $this->numRows); } public function __toString() { return $this->query; } } The result was genius, but repulsive: an instance which looked like an object (with no real API methods), or an array, or a string… <?php $dbq = new DBQuery("localhost", "test", "dbuser", "dbpassword"); // query the database if the user is authorized // (instance behaves like an object) $username = "administrator"; $password = sha1("password"); if (!$dbq->{"SELECT * FROM admin_user WHERE username=? " . "AND password=?"}($username, $password)) { die("Unauthorized."); } // query the database and display some records // (instance is iterable like an array) $dbq->{"SELECT id, first_name, last_name FROM employee"}; foreach ($dbq as $result) { print_r($result); } // casting the object string yields the query echo "Query: $dbq"; I blogged about it shortly thereafter and branded it as evil. Friends and colleagues who saw pretty much reacted the the same: “Brilliant! Now kill it… kill it with fire.” But over the years since, I admit I’ve softened on it. The only rule it really bends is that of the programmer’s expectation of blandly named methods like query() and result(). Instead it uses the query string itself as the querying method, the object is the interface and the result set. Certainly it’s no worse than an overly-generalized ORM interface with select() and where() methods chained together to look like an SQL query but with more ->‘s. Maybe my class really isn’t so evil after all? Maybe it just wants to be loved? I certainly don’t want to die in the Arctic! In Closing I hope you enjoyed the article and the examples won’t give you (too many) nightmares! I’m sure you’ve got your own tails of haunted or monstrous code, and there’s no need to let the holiday fun fizzle out regardless of where you are in the world, so feel free to share your scary PHP stories in the comments below! - Omer Sabic - Ignatius Teo - Manuel Herrera - Joe - James Lee - Corey Ballou - FDisk
http://www.sitepoint.com/spooky-scary-php/
CC-MAIN-2014-35
refinedweb
1,700
53.1
Story #4764 As a user, I can sync a list of Collections from galaxy. 100% Description The user should be able to sync down one or more {namespace}.{name} Collections similar to what mazer installs here: Solution¶ Add a CollectionRemote and give it a 'white_list' attribute. This attribute accepts the same argument value as the `mazer install` command. Specifically it accepts one or more collections separated by spaces. Each collection is specified by namespace.name and an optional version string. If version string is unspecified, the latest is downloaded. If a version string is specified, that version is downloaded. So for example: foo.bar,1.2.3 foo.baz,2.3.4 foo.zab would sync down and create 3 collections: 1. foo.bar version 1.2.3 (even if newer was available) 2. foo.baz version 2.3.4 (even if newer was available) 3. foo.zab it will discover whatever the latest is and use that. Syncing without a version of time should give you newer and newer collections. Associated revisions History #1 Updated by bmbouter over 1 year ago Adding some details into seeing how Mazer does it. #2 Updated by bmbouter over 1 year ago #3 Updated by daviddavis over 1 year ago - Groomed changed from No to Yes - Sprint Candidate changed from No to Yes #4 Updated by bmbouter over 1 year ago - Status changed from NEW to ASSIGNED - Assignee set to bmbouter #5 Updated by bmbouter over 1 year ago - Status changed from ASSIGNED to POST PR available at: #6 Updated by bmbouter over 1 year ago - Status changed from POST to MODIFIED - % Done changed from 0 to 100 Applied in changeset pulp_ansible|fc72facd0fdfb6033bf8b5247888f30198fb2734. Please register to edit this issue Also available in: Atom PDF Add mazer whitelist sync Adds a CollectionRemote which users can use to sync a whitelist of collections. With the sync a repsository is required and a new respoitory version is created with the sync'd content downloaded. If mazer downloads content that already exists it will not add additional Artifacts or Collection units to pulp_ansible. If sync discovers no new collections it does not create a RepositoryVersion. closes #4764
https://pulp.plan.io/issues/4764
CC-MAIN-2021-04
refinedweb
360
56.45
In many arenas of computing, speed is the name of the game. Being able to compute quickly, generate results immediately, and fetch data instantly are key objectives for mission-critical software products. As code bases grow in size, it is often difficult to identify bottlenecks, the places in software where the most CPU time is spent, and optimize them in order to get that little extra bit of speed. As algorithms get more complex, it is often difficult to determine at a glance which is better suited in a production environment. Fortunately, Komodo IDE is here to help. Komodo IDE comes with a built-in code profiler for many of its supported languages. Invoking it is as simple as these three steps: - Load up your application’s source code entry point (typically the main method) - Open the “Debug” menu - Select “Start Code Profiling” You will be prompted with the usual Debug/Run dialog, only this time, after your process finishes, Komodo will present you with a dialog that shows a detailed account of how much time was spent in each chunk of your code. Using the drill-down interface, you can quickly identify the bottlenecks that need attention and fix them. Similarly, you can benchmark different algorithms and figure out which one performs the best. Identifying Bottlenecks Komodo’s profiler makes it incredibly easy to spot bottlenecks with its graph view. Consider this profiler result: It is quite clear that in this small script where work is being distributed over two methods, the `work2()` method is consuming an inordinate amount of CPU time compared to `work1()`. Therefore attention should be focused on optimizing the routines in `work2()` as opposed to the routines in `work1()`. After the optimization, Komodo’s profiler might report something like this: Much better. Even though `work2()` is still slower than `work1()`, the difference in compute time between them has been drastically reduced. Benchmarking Algorithms Komodo’s profiler can also be used to quickly benchmark algorithms. This may be quite helpful in a number of situations, such as when you are not sure which algorithm to put into a production system. Back in your early days of coding, you were told that repeated string concatenation is expensive. But just how expensive is it? We can leverage Komodo to help determine the answer. Consider the following Python code: def inefficient(strs): result = "" for s in strs: result += s return result def more_efficient(strs): return "".join(strs) strs = [str(i) for i in xrange(1000000)] inefficient(strs) more_efficient(strs) Komodo’s profiler allows us to visualize how much more expensive `inefficient()` is than `more_efficient()`: Clearly the `more_efficient()` algorithm is better-suited for production. Conclusion When application speed is critical, Komodo IDE’s profiling tools rise to the challenge, aiding developers in easily identifying bottlenecks in their existing code and helping them determine which algorithms are best-suited for primetime. Not only can you use Komodo to profile code in real-time, but you can also run your own profiling code outside of Komodo, save that data to a file, and then have Komodo load, process, and visualize the results. For more information, please visit.
https://www.activestate.com/blog/how-speed-your-code-komodo-ide/
CC-MAIN-2022-27
refinedweb
525
50.16
This action might not be possible to undo. Are you sure you want to continue? 08/07/2014 text original Domain-driven design in action Designing an identity provider Klaus Byskov Homann iii Abstract. In many scientic disciplines complexity is one of the most exciting current topics, as researchers attempt to tackle the messiness of the real world. a software developer has that same prospect when facing a complicated domain that has never been formalized. In this thesis the principles of domain-driven design are used to model a realworld business problem, namely a framework for an extensible identity provider. A specication for the software is presented and based on this specication a complete model is created, using the principles of domain-driven design which are also presented. The thesis also includes an implementation of a generic domain-driven design framework that leverages object-relational mappers. It is then showed how this framework can be used to implement the designed model. Finally, the quality and completeness of the model is validated through a series of reviews and interviews. The work shows that applying the principles of domain-driven design is a good approach to modelling a complex business domain. Resumé. I mange videnskabelige discipliner er kompleksitet et af mest spændende aktuelle emner, idet forskere forsøger at takle den virkelige verdens rod. En software udvikler har den samme udsigt når han står overfor et kompliceret domæne der aldrig er blevet formaliseret. I dette speciale bruges principperne fra domænedrevet design til at modellere et forretningsproblem fra den virkelige verden, nemlig et rammeværk for en udvidbar identity provider. En specikation for softwaren præsenteres, og baseret på denne specikation udvikles en komplet model ved brug af principperne fra domænedrevet design, der ligeledes præsenteres. Specialet indeholder også en implementering af et generisk domænedrevet design rammeværk der gør brug af objeckt-relationelle oversættere. Herefter bliver det vist hvordan dette rammeværk kan bruges til at implementere den designede model. Slutteligt valideres modellens kvalitet og komplethed igennem en række reviews og interviews. Arbejdet viser at det at benytte principperne fra domænedrevet design er en god tilgang til at modellere komplekse forretningsdomæner. .............................................................................................................Contents Abstract Resumé Contents Preface .................................... iii iii iv 1 4 5 8 21 44 88 103 118 133 136 139 147 151 ...... .................. ........................ ..... .................................................................... ........................ ....................................... 5 Object relational mappers 7 Implementation 9 Conclusion B Interviews Bibliography 6 A domain-neutral component 8 Design validation ..................................... Acknowledgements 1 Identity provider 2 Specication .......................................................... ............ 3 Domain-driven design 4 Dening the model ......................... ......... ....................................................... ...................... ....................................................................................................... A C# language elements .......................................... iv ... 1 . and everyone involved in the programming solves their tasks from a varying understanding of the problem domain. If the design is not thought through from the beginning. is not how many programmers you hire to code it but how well you design it. I alone am responsible for the development of the software. design patterns and best practices in order to achieve faster development of software dealing with a complex business domain. Domain driven design is not a technology or a method. The project does of course have other stakeholders. scient degree in Computer Science at the University of Copenhagen. and extend. it is important to me that the discussion is based on a real life problem. which makes the project well suited for this one-man thesis. Therefore. On this project. In order to discuss how domain-driven design can be used to design complex software. this thesis is based on a project that I am working on with my current employer. thus breaking down the language barriers that often exist between them. When focusing on the model it is easier to achieve a shared terminology between the domain experts and the programmers. The thesis was written in the period from September 1st 2008 to May 29th 2009. then chances are that you will end up with a cluttered code base which is both hard to understand. The premises for domain driven design are that each software project should be based on a model and that focus should be on the domain and the domain logic instead of on the technology that is being used. such as the technical director and the CEO of Safewhere. and not just on ctitious examples.Preface This master's thesis concludes my cand. but a mind set and a set of priorities. maintain. a company called Safewhere. Domain-driven designdeals with choosing a set of design principles. while at the same time achieving a code base that is maintainable and extensible. Introduction The thing that determines how complex a problem that can be solved by a piece of software. Chapter overview The rst three chapters contain a general introduction to the problem domain and to domain-driven design. Thus. but does include a complete design and implementation of the core functionality. what an Iden- tity Provider (IdP) is. Implementation: In this chapter I will showcase the source code for some of the most interesting parts of the implementation of the system. as described in Chapter 2. I have interviewed each person and had them assess the quality of the design and the benets of using domain-driven design in general. Domain-driven design: This chapter contains an introduction to domaindriven design and the concepts it uses. in general terms. The thesis does not include a fully functional version of the product. The last six chapters contain my main contribution. these chapters mainly contain material from books and papers and my contribution to the contents found herein is merely that of passing on knowledge in a suitable and more compact format. To make sure that I have achieved making an easily maintainable and extendible model I have asked a few of my peers to review the model. Identity provider: This chapter explains. and its purpose is to let me assess the quality of the model and the software presented herein. . where I take the theory from the rst chapters and put it into practical use.2 The project at hand concerns the development of a framework needed to implement a multi-protocol identity provider. Design validation: This chapter is based on interviews with some of my peers. A domain neutral component: In this chapter I will present the implementation of a domain neutral component that implements a lot of basis functionality that is useful for implementing a system based on domain-driven design. Specication: This chapter contains the feature specication of the software that is going to be developed. Model: This chapter contains the design model of the software using domaindriven design. Object-relational mappers: In this chapter I discuss how object-relational mappers can be leveraged in domain-driven design. Tips for the reader This thesis is written in British English. The reader should be familiar with object oriented programming and design. . int j) { //Do nothing with the input data and return return. since most of the chapters build on theory explained in previous chapters. } If you are not familiar with C# you may refer to the Appendix A. The thesis is intended to be read from beginning to end. however each code le can be inspected in any text editor. The developed code can be found on the enclosed disk.NET TM family. Having Microsoft Visual Studio 2008 installed is a prerequisite for opening the solution le.3 Conclusion: This chapter contains a conclusion on what has been learned from working with domain driven design. All code samples in this report are in C#. a language of the Microsoft . A code sample may look like this: 1 2 3 4 5 A code sample public void DoNothing(int i. which contains a short introduction to the language. 4 . professor Jyrki Katajainen for his support and thorough dedication to the project. Peter Haastrup. Finally I wish to thank my beautiful wife Gitte Byskov Homann for bearing with me on the late nights and weekends where a substantial part of my time has been spent studying for and writing on this thesis. for letting me write my thesis in collaboration with the company. Mikkel Christensen and Peter Haastrup for taking time to review the model and participate in interviews. I also wish to thank the CEO at Safewhere. and Mark Seemann. for shielding me from things I did not need to bothered with. for his sincere interest in the project. and for keeping encouraging me throughout the entire process. Niels Flensted-Jensen. I also want to thank the technical director at Safewhere.Acknowledgements First of all I wish to thank my supervisor. The IdP identies the user based on some credentials and the statements that it makes about the user are known as claims. In general terms it can be said that an IdP performs the function f( ) = ζ : ζ ∈ φ where is a given set of credentials and ζ is a set of claims from the complete set this thesis I will refer to token. Ajax. An identity provider. will be as identity provider claims φ of claims that can be issued by the IdP. ζ will be referred to as referred to as user credentials and nally I will refer to φ 1. Figure 1 shows the primary work of an identity Figure 1. Throughout f ( ) as token issuance. provider. such as the users name. according to [Daarbak 2008].1. email or CPR number. new browser capabilities and higher internet bandwidth has made it feasible for ISV's to provide software that is hosted in the cloud.Identity provider 1 An identity provider (IdP) is a centralized identity service whose primary responsibility is identifying a user and stating something about that user. the analyst company Gartner predicts that 30 percent of all CRM systems will be running on the software as a service model in the year 2012. Software as a service solutions are hosted in large data centers and provide the ability 5 . IdP use case example The advent of a range of new technologies such as SOAP web services. Buying software as a service is becoming a more and more popular for a number of reasons. In fact. self-hosted software tends to be lower. Primarily because the total cost of ownership for software as a service vs. This type of software is referred to as software as a service. Since this is the claim required by the bookkeeping application. since the identity provider will remember him. The bookkeeping application uses the SAML2. When user 1 wants to log in to the bookkeeping application.0 identity protocol and the shipping application uses the OpenId protocol. Secondly. the same thing happens.6 to quickly scale the application for a given customer should the need arise. This setup has several benets. namely OpenId. Figure 2 below shows a ctitious company that uses two SaaS applications in the cloud. When user 1 later wants to go and use the shipping application. the shoe sales company has one single . Figure 2. user 1 only has to log in once within a session. he is asked to go and get a security token from an identity provider that is trusted by the bookkeeping application. The shoe sales company uses a shipping application and a bookkeeping application in the cloud. User 1 is now allowed to use its features. The only dierence is that the shipping application uses another identity protocol. IdP use case. ISV's that sell software as a service solutions may choose to use claims based access control in their applications. The identity provider now issues a security token containing a claim that the holder is a SalesPerson. First of all. This extra exibility is another selling point for software as a service solutions. Please note that the IdP shown here is actually a software as a service product itself. the IdP allows the shoe sales company to easily integrate with software as a service products that use dierent identity protocols. This means that it would be possible to revoke user 1's access to both the shipping and bookkeeping application by revoking his SalesPerson claim in the IdP. Last. .7 place where it can manage it's users and the claims that are issued about them. although conceptually it could just as well be a stand-alone application in the shoe sales company. It is the design and implementation of an identity provider such as the one in Figure 2 that is the goal of this thesis. The chapter will contain requirements to both the software and the platform on which the software runs. from a business perspective. and therefore they are included here for completeness. thus allowing for seamless leverage of your existing investments in user directories. 8 . Before we go into the specic details. the software. to give you an idea of what the project is about. a term that I will throughout the remainder of the thesis. In the following text the product is referred to as Safewhere*Identify.509 as well as various identity federation mechanisms. For external users not ready for federated authentication Safewhere*identify provides integration to local user databases. User identities are seamlessly and securely transferred from their origin the place where the user rst logged in to your infrastructure. not use Safewhere*identify is a new kind of user identication solution providing for seamless and heterogeneous authentication across the supply chain of web applications and web services. Safewhere*identify supports any kind of authentication including traditional methods such as username/password and X. however. Applications and services move all authentication to the Safewhere*identify identity broker. The main focus of this thesis is. Selected benets and advantages include: Externalization of authentication. considerations about the platform are important for a complete understanding of the system. taken from a marketing brochure made by Safewhere. I want to present the following text. of course. thereby removing the need for operation and administration of a local extranet user database Traditional user authentication.Specication 2 This chapter includes the feature specication of the identity provider software that is going to be designed and implemented. With Safe- where*identify an organization may handle user identication centrally and outside of all web applications and web services. which in turn integrates with any and all authentication mechanisms Federated identities. Safewhere*identify provides a rich set of features with the aim to remove entirely all need for local administration and authentication . Safewhere*identify automatically provisions a user database per organization for authentication purposes and maintenance of other user attributes. Whichever is chosen has no eect on feature set or security.9 Figure 3. one organization. Due to the unique and standards compliant architecture and communication patterns. and is administered by. Logical identity ow from users to web applications Delegated user administration. Safewhere*identify provides identity conversion/mapping to successfully transfer identities between applications Provided both as a service and as traditional on premise software. Service provider integration. Each database belongs to. As more and more of your applications leverage the new identity solution. Safewhere*identify may equally well be leveraged as an external service (Software as a Service) or as software installed in your own infrastructure. The key capabilities include: Browser based federation. 2. Designing a multitenant system requires extra considerations regarding data storage layout and data security. each of your extranet partners) providing for full separation of data. Systems that host multiple instances of the same software for dierent customers are often called multitenant systems. (E. "active" federation through WS-Trust and possibly WS-Federation One Identity Provider instance per organization (e.1.g.10 of users. IdP The main dierence between this IdP and standard IdP implementations is that it must support more than one instance. Self registration of organizations and users. Safewhere*identify implements a number of federation protocols including SAML 2. and often with slightly dierent names and formatting. Safewhere*identify provides delegated administration of attribute mappings to transparently and correctly transfer identity between applications. a role to one application is a group to another).g. Workows support the signing of new organizations e.g. The rst requires review and approval of you whereas the latter leverages the distributed nature of user administration and leaves it up to the user's home organization. An instance is an iso- lated IdP that runs side by side on the same server as other instances. Another key feature of the IdP is that it must be extensible in a way that it can support more than one identity exchange protocol and more than one authentication mechanism. . new business partners as well new uses of each organization. but without being coupled to the other instances in any way other than sharing storage. Applications leveraging Safewhere*identify potentially all need dierent kinds of user attributes. or claims.0 and WS-Federation for browser based authentication Federated authentication for Web services aka. Separate IIS Application Pools under dierent service accounts for each instance ensure very tight security around each organizations data all the way to the le system and database level Claims mapping. The IdP will be running on Windows Server 2003/2008 on IIS 7. Technical requirements The IdP must be developed in C# and Asp. 2. User: An end user who uses an IdP instance to identify himself to a service provider.NET. Personal data store: All changes made to IdP congurations through the admin website are written to the personal data store. There is no implementation dierence between the system IdP instance and the customer's IdP instances. and relies on a set of claims issued by the system IdP to determine which IdP instance the administrator belongs to. Each IdP instance belongs to a customer. The gure shows the following concepts: IdP instances: The SaaS IdP contains a number of IdP instances. the system instance. 2. Administrator: A person who administrates a single IdP instance.11 2. The system IdP instance is used to authenticate administrators when they log in to administrate their own IdP instance through the Admin web. Registration website: The registration website is where new customers can sign up for the services provided by the SaaS IdP. On the admin website an administrator must be able to congure his IdP instance in a number of ways which will be discussed in further detail below. Admin website: The admin website acts as a service provider against the system IdP. except for one instance. Conguration data will be stored in SqlServer 2008.4. Terms This chapter makes use of the following terms: Customer: An organization that has bought an IdP instance. Overview Figure 4 shows an overview of the SaaS IdP. Each IdP instance .2.3. retrieve claims. 2. Protocols: Which protocols does the IdP support. . An IdP instance is driven by its conguration.12 Figure 4. IdP overview. The conguration mainly species the following ve things: Credentials: Which types of credentials are accepted.5. authenticate users etc. The personal data store is either a separate database or a separate database schema. IdP instance This section contains the specication of which features a single IdP instance contains. uses the personal data store to read it's conguration. The IdP instance may be congured to accept a large range of dierent credentials. The same requirement for being pluggable applies to the protocols that the IdP supports. but it is important that checking credentials is done in way such that further types of credentials can be added later. however this must be done in a way that does not impede future support for other protocols such as OpenId or WSFederation. In windows. Initially. Figure 5. Most identity protocols use certicates for encryption and signing of messages. Figure 5 shows a single IdP instance. the only require- ment is to support username/password and SAML2. the IdP may also be congured to issue a set of dierent claims. All the dierent conguration options are saved in the personal data store. a certicate store is tied to a system account. so in order to achieve having a personal certicate store for each IdP instance. Therefore the conguration must contain information on which certicates are to be used for this purpose. The only initial requirement is to support the SAML2.0 credentials. As Figure 2 shows. Service Providers: Which service providers is the IdP connected to. every instance must run under a separate user account. The certicates themselves must be stored in a personal certicate store on the Windows server. IdP instance.0 protocol. Certicates: Which certicates does the IdP use for signing and decryption. and which SP certicates are trusted. .13 Claims (attributes): Which claims does the IdP issue. To make SOAP requests over SSL to an IdP instance running in a virtual directory would require trusting the SSL certicate belonging to www. This would imply explicitly trusting every SSL certicate for each individual IdP instance. e.com. The IdP instance runs in its own IIS website.com. Figure 6 shows how an IdP instance should be deployed in IIS (Internet Information Services).14 2. and accessing it through www. by doing so. each IdP instance must run in its own IIS website and have a unique domain name and therefore also a unique SSL certicate. Hosting Figure 6.saasidp. all instances would be using the same SSL certicate. a trust relationship would be implicitly made to all IdP instances on the server. If each IdP instance runs under a dierent system account. in order to be able to con- gure dierent SSL certicates for each IdP instance it is important that the IdP instance has its own unique domain name. However. However.saasidp. Running in its own website means that it can have its own unique domain name.g. somecustomeridp.saasidp.com.com/somecustomeridp.6. namely that pertaining to www. IdP instance hosting. The alternative would be to have each IdP instance run in an IIS virtual directory.saasidp. Another benet of separate IIS websites is the ability to tie a website to an application pool. we can achieve having personal certicate stores . An SSL certicate is always tied directly to a domain name. So if the instances were running in a virtual directory. To avoid this. An application pool lets us dene the system account under which the website runs. 7. personal certicate stores provide a way of conguring dierent trusted certicates. the password should be entered and the remember password feature should be used. a clear separation of the IdP instances is achieved when it comes to certicates and certicate trust. Having personal certicate stores requires creating a new system account for each IdP instance. in which all administrators of . and the account will be hard to compromise. no one will ever be able to use that account for anything other than running the IdP instance. something that is important when importing certicates from federation partners. The administration website acts as a service provider to the system IdP instance. When conguring the IIS application pool. IdP instance administration. By doing this. Figure 7 shows how an administrator can administrate his IdP instance through the administration website.15 for each IdP instance. 2. By using the deployment structure shown in Figure 3. This account should be created as part of the installation of each IdP instance. This is deemed very important since most identity protocols rely on mutual certicate trust. By having personal certicate stores the operating system ensures that certicates in the personal store can only be accessed by the user to which the store belongs. Furthermore. IdP instance administration Figure 7. The password for the specic system account should be randomly generated and forgotten. UserEditable This property species whether or not the user will be able to edit the value of this claim.8. The IdP should specify a set of predened rules. . For example: dk:gov:saml:attribute:CvrNumberIdentier ClaimValue This property contains a specic value for the claim. The denition of a claim contains the following properties: Property ClaimType Description The type of the claim. the most important claim being an OrgId. Claim denition An administrator must be able to dene a set of claims for his IdP. and claim types are typically on URI form. There exists a set of predened claim types. the administration website knows which data to get from the data store. and when only one specic value for the claim makes sense. ValidationRule The administrator must be able to add some kind of validation rule for the value of a claim. but it would probably also be useful if the administrator was able to type in a regular expression do meet specic validation needs. It is used when users are not allowed to override the value. The customer's running IdP instance uses the data from the personal data store to fetch it's conguration. 2. DefaultValue Some claims may have a default value. When changes are made. This property lets the administrator specify a default value for the claim. This is especially useful when claim values are typed in by users.16 all customer IdP instances have an account. Based on the OrgId. and lets the administrator edit them. these are saved back to the shared data store. The system IdP issues a set of claims about the specic administrator. for example Email. and is presented with a list of his claims.17 Figure 8. and the value for the claims dened for the IdP instance. An organization may have several hundreds or thousands of users. 2. The gure mainly focuses on the username/password case. User and attribute administration Figure 8 shows an overview of user and attribute management. which leads to the need for an update page. must be possible for the administrator to dene which claims can be edited . but this is considered a special case. This fact calls for some import functionality. When the user has lled out the claim values. User and attribute management. The Email claim will be lled out on beforehand because it is already known.9. where users are created in the data store. shown in the gure as the user attribute update page. so many that it would be infeasible for a single administrator to create them all. It should of course also be possible for the administrator to create users manually. a password. each user needs to be approved by the administrator. To access the page. in this case FirstName and LastName. If we assume that a company has dened the claims FirstName. asking the recipient to go to a self registration page where he or she can enter a desired username. and the IdP should automatically send an email to each address. LastName and Email. The administrator must be able to import a list of email addresses. the It user presents his credentials. It must be possible for a user to change the value of his/her claims. Figure 9 shows the order in which this claims mapping is performed. This property must be dened when the claim is dened. Let us assume that the SalesPerson claim was a group claim with the value SalesPerson. However the IdP must make it easy to add and remove users from groups. As the gure shows. group claim with value Managers. 2. however the administration pages for an IdP instance must provide the ability to work with groups in a familiar manner. Figure 2 showed two service providers that both required a SalesPerson claim. This is important to note. The IdP should allow a simple mapping of claims such that when a user that has the SalesManager claim is connecting to the service provider which requires the SalesPerson claim. the service provider may specify that it requires some specically named claim. this claim is just another claim in the user's security token. 2. . groups are not hierarchical either.10. and claims have no hierarchical structure. In some specic customer organization they may have a group claim called SalesManager. Since group claims are merely expressed as claims. This involves listing all groups. listing all users in a group and an easy way to add and remove users from groups. Consider a Behind the scenes. which is conceptually identical to the SalesPerson claim. since in some systems groups have a hierarchical structure where a group can be a member of another group. Groups Group claims are no dierent from any other type of claims. both the self registration page and the attribute update page must be customizable with regards to look and feel such that it is possible for the customer to make it look like other of their corporate websites. This is not the case in this system.18 by the users and which cannot. the SalesManager claim is automatically mapped to the SalesPerson claim.11. Claims mapping When a connection to a service provider is created. Customer registration The IdP must contain a customer registration page which is the entry point for the workow shown in gure 7.13. the customer goes to the registration page and enters his data. Thus. Conclusion The support for multi-tenancy is an important commercial concern for this system. the multi-tenancy aspect can be solved by the platform alone. 2. Creating certicates for encryption and signing. Conguring website and application pool in IIS. A contract is then sent to the customer. address etc. However. and when the contact is returned the order is conrmed and the customer's IdP instance is created automatically. This automatic process includes: Creating the administrator in system IdP instance. such as a contact name. as we have seen in this chapter. 2. Claims mapping pipeline. Creating SSL certicate and conguring SSL in IIS. saved. Sending log in details to administrator. This data is The sales representative then contacts the user to conrm his identity and his intent to buy. the IdP must support automatic creation of new IdP instances. and a sales representative is alerted.12. When a customer wants to buy the services provided by the IdP. email. Having a per- tenant system account allows database isolation through either separate . Tenant isolation is achieved through a per-tenant system account running each tenants instance.19 Figure 9. which. . Claims mapping pipeline. the software design presented in this thesis will be totally multi-tenancy agnostic. The same goes for the certicate store. as mentioned. Therefore. to which only that system account has access. is also isolated on a per user basis. without compromising the commercial demand for multi-tenancy. databases or separate database schemas.20 Figure 10. no. The business people are talking to technical people in requirements gathering." The business person says. That means you can never have a conversation about how the system really works. Translation is never a perfect The way estimates work is that the technical person says. that's a totally dierent thing and will take an enormous amount of time. no. The material discussed herein is largely based on the book Domain-driven design: tackling complexity in the heart of software by Evans [2004]. Some will know the language the business people use. and then it's written up and handed o in this other sort of language for implementation. On most projects. "No. Before we dive into the details of domain-driven design. They will have words for the functional entities that are dierent from the words used by the business people. 2009. They will describe the actual functioning of the system in the same way. for the 21 . so it seems to me that feature C is a natural extension of A and B. What are the primary aspects of domain-driven design? I could boil it down into two or three basic things. let us start with an excerpt from an interview with Eric Evans. The software on the inside is actually nothing like what the business people are imagining it to be. The chapter is however not just a summary of the book. so they act as interpreters for the technical people who don't know that language.Domain-driven design 3 This chapter serves as an introduction to domain-driven design. It also hurts estimates. You have a process broken into parts. but will also contain my own perspectives and the explanatory examples provided here will be based on the problem at hand whenever possible. the author of the book DomainDriven Design (Evans [2004]). The interview was brought in the Software Development Times (Handy [2009]) on March 12. thing. "I have feature A and feature B." When in fact. Your technical people will discuss the system with a certain language. This leads to usability problems. The rst is the ubiquitous language. you'd have dierent people talking in dierent languages. You'd have to start with an intention to do it. and the results are much worse. it's not totally dierent from an implementation point of view. It's one of those cases where the cure is worse than the disease. Some people are good at it and some are not. we use that consistent language. When we build the system. that's ne. This we cannot eliminate. We share this. I don't mean this thing where people say we take the business language and the conception. There's no way for a non-technical person to anticipate what might be easy or what might be hard. Instead. we try to understand them and map them out. Also. Projects that succeed have usually accomplished this. it's naming things the way they would expect. Does that mean that something as simple as naming your libraries and classes after the business tables they represent? At the most basic level. it's a system of names and relationships among things. . It's very closely related and so much easier said than done.22 technical people. business people never propose ideas that might have been easy because it's not evident to them that they would be easy. If the system isn't built that way. There's no communication there. and when we talk about it. One of the keys to this is in recruitment. Here is one of the areas where I kind of had to invent a system because no one had really systematized this. Within any given project there are multiple models in play. What are the boundaries that dene where each one applies? If you say potay-to and I say pot-ah-to. You have to hire the kind of people who are good at this. People have tried. we stay true to this. What are the other two important aspects of domaindriven design? The second element is that you have to bring about a creative collaboration between the domain experts and the technical experts. and in [that] subsystem we talk about it that way. we're just talking about some ction. With the application model itself. as long as we know where pot-ay-to is used and where pot-ah-to is used. and just make software that reects it. In this subsystem we talk about it this way. I'm not talking about the kind of models I've already spoken of. who are good at getting in a fruitful conversation with a domain expert. The third aspect is what I call an awareness of context. There are more subtle aspects. 1. or drawing on a whiteboard. The developer's view is often not as complete as that of the domain expert and thus the model produced by the developer is not as rich and communicative as that of the domain expert. or as the book puts it. The domain model is an abstraction of all the knowledge about the domain. This does not mean that the such diagrams. the design models produced are cluttered with unimportant aspects that draw attention away from. the core domain logic. The main philosophy of domain-driven design is that the primary focus of any software project should be on the domain and the domain logic. software users. the model is also the backbone of the language spoken by all team members: developers. documents.23 Having a language to describe this and a terminology and a system serves to make it more reproducible. Domain-driven design is not a development method per se. The book refers to this discipline as knowledge crunching. It is however oriented toward agile development as we shall see further on. nor is it tied to any particular methodology. It is therefore of utmost importance that developers and domain experts collaborate intensely on the creation and maintenance of the model. The domain model is not a particular diagram. The domain model represents a large amount of knowledge contributed by everyone involved in the project and it reects deep insight into the domain at hand. carefully organized in a way that it conveys this knowledge in the most expressive way possible. put in practice. the business logic. or drawings could not exist. The domain model is the idea that the combination of these diagrams and documents intend to convey. or document. and therefore the business. but they are not the domain model as such. successful and protable. the book argues. address a specic problem. 3. The philosophy of domain-driven design Most software projects. if not all. and they probably should. ie. or completely hide. and it is denitely not just a data schema. many software projects are too focused on technology when designing their software. domain experts. Therefore. and testers. Therefore. and it also draws on well established software design patterns. . Developers and domain experts can have a dierent view of the problem domain. This may sound like common sense. Solving this problem well is important to making the software. One of the primary concepts of domain-driven design is the so-called domain model. the model is distilled knowledge. perhaps more technical jargon. This eectively means that any change to the language is also a change to the model and viceversa. not to mention the risk of misunderstanding makes the use of dierent jargons dangerous. the method lacks feedback.24 Knowledge crunching is achieved through brainstorming and experimenting. But when a deep and expressive model exists. and the model and language are very tightly coupled. In the waterfall method. Ubiquitous language A model should be based on a ubiquitous (universally present) language. The ubiquitous language must be used in both oral and written communication within the team. First of all. current users. Evans states that he has met many developers who initially do not like the idea of a common language because they think that domain experts will nd their terms too abstract or will not understand objects. and knowledge is potentially lost through the chain of information. This model based language should not only describe artifacts but also tasks and functionality. Using this method of modelling also has various benets over methods such as the waterfall method. a widespread practice when modeling. the principles of domain-driven design state that a conscious eort should be made to encourage a common language and that the model should be based on the ubiquitous language. and teams are often rearranged. Knowledge crunching is therefore an extensive exploration of the domain and a continuous learning experience. But the waterfall method has various shortcomings. Skilled and knowledgeable programmers get promoted. and when such things happen knowledge is essentially lost. Distilling as much knowledge as possible in the model is important because of the natural leaking of knowledge from most software projects. Having a common language resolves the confusion that often occurs when domain experts and developers use dierent terms to describe the same thing. continuous talking. and as such it goes far beyond nding the nouns. knowledge is usually passed on from domain experts to analysts. 3. as argued in [Evans 2004]. This is important because deep and expres- sive models seldom lie on the surface. etc. and from analysts to programmers. involving experiences from current systems or legacy systems and drawing on knowledge from domain experts.2. but he argues: . Thus. most of this knowledge is to a large extent preserved on the project. But the overhead cost of translation. Usually. In [Evans 2004]. domain experts use the business jargon and technical team members use another. diagramming and sketching. developers will improve their modelling skills and domain specic knowledge. then the model is by denition not right.25 If sophisticated domain experts don't understand the model. but they should represent skeletons of ideas and not be overly detailed. The code should be written to reect the domain model in a very literal way and it should use the same terminology as in the ubiquitous language.3. a layered architecture as that shown in Figure 11 should be used. Doing this will uncover any infeasible facets of the model early on and it is also an important part of getting the developers involved in the design process. Thus. The vital details of the model lie in the code. If they do not realize this. By taking part in the modelling process with domain experts. 3. database and network access and other things that are not related in any way to the domain. and to avoid cluttering the code. but one thing is for sure. but most importantly the will feel responsible for the model. then the code is denitely another representation of that same model. the code is always the most detailed representation of the model. So if a domain expert does not recognize his own business in what is supposed to be a model of that same business. Figure 11 shows the layers that make up the layered architecture. there is probably something wrong with the model. Domaindriven design focuses exclusively on the domain layer. It is important to notice that the components of each layer only interact with other components in the same . I really agree on this point. Diagrams and documents can be used to clarify design intent or decisions. Relationship between model and code If the ubiquitous language is one representation of the domain model. Especially because the focus of the model should be on the business domain and not on technical details. And that is very important. in order to achieve the tight coupling between model and code. because developers must realize that by changing the code they also change the model. and this is the layer which should contain all domain logic. any future refactoring of the code will weaken the model instead of strengthening it. something that the code does particularly well. Being detailed is Coupling the model and the implementation from an early start through prototyping is very important. It is obvious that every application has a lot of plumbing concerned with graphical user interfaces. It is arguable that the code is the most correct representation of the model. it becomes very hard to see the domain logic and reason about it. infrastructure layer. etc.4. such as managing long running transactions or workows. network access. it renders itself well to object-oriented programming languages. To change a business rule may require code changes in many dierent parts of the code and this can be very error prone. When domain related code is scattered throughout the UI. Since domain-driven design deals heavily with assigning responsibilities to the right places in code. etc. Building blocks of the model Domain-driven design species a set of conceptual objects that should be used in code to create the domain model. A layered architecture. and nally the infrastructure layer contains code related to data access. the application layer contains application specic code. layer or in layers below them. . logging. The user interface (UI) layer contains code for interaction with the user. Evans refers to the act of designing the code components that make up the model as model-driven design. So the value of using a layered architecture is that each layer can be specialized to manage dierent aspects of the computer program. 3.26 Figure 11. 1 2 3 4 5 6 7 8 public class User{ private private private private Guid _userId.4. which will be discussed in detail next. CreatedDate. . DateTime _createdDate. and UserAttributes as shown below. User object with properties UserId. The lifespan of entity objects is usually long. Consider a An example of an entity object could be a User object.27 Figure 12 shows an overview of these concepts. public User(Guid userId. The building blocks of domain-driven design.1 Entities Entities are objects that are dened by their identity rather than by the values of their properties. string _lastName. Figure 12. IEnumerable<Attribute> _userAttributes. 3. and the values of its properties can change often over time whereas the identity does not change. UserName. _userId = userId. } } Even though two identical if the User objects have the same LastName. change over time. The identity eld of an entity object is often automatically generated such as it would be the case in the example of the User object. this.28 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 string lastName.UserId. they are only LastName are not identical as long as the UserId is UserId is the same._lastName = lastName. since these can. } public Guid UserId{ get{return userId. if the user changed his or her last name.UserId == other. for example if the id was a social security number.} } public string LastName{ get{return _lastName._createdDate = createdDate. and most likely will.} set{_lastName = value. The Equals method above is an example of such an implementation. DateTime createdDate){ this. when implementing entity objects. this. The identity eld (UserId) is not always important to the user of the system. Thus. Two objects could even represent the same user regardless that the same. it is important to implement equality functions in such a way that the equality of two objects is based on comparing identity rather than comparing the individual properties of the object. or package tracking number.} } public DateTime CreatedDate{ get{return _createdDate. but it could be. for example.} } public IEnumerable<Attribute> UserAttributes{ get{return _userAttributes. .} } public overrides bool Equals(User other){ return this.} set{_userAttributes = value. } } public Currency Currency{ get{return currency. We usually do not care about the identity of the ten dollars (unless of course we are looking for stolen money) but only the value they represent. After all. When adding Money objects. Currency currency){ this. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 public class Money{ private int amount. 3.4. unique id.currency = currency. This maintains the immutability of the objects..29 3.} } public Money Add(Money other){ // Make sure currency is the same // Add amounts // return new money object . not who they are. this. private Currency currency. contrarily to entity objects. meaning that after creation the properties of the object cannot be changed. ten dollars are ten dollars no matter what.. are by denition immutable.2 Value objects Value objects. } public int Amount{ get{return amount. A service's interface should be dened in . are objects that do not have a Value objects are often used to describe entities. such as the Money class. new objects are created instead of changing any of the objects being added.amount = amount.3 Services Services are classes that implement domain logic that does not naturally belong to an entity or a value object. } } Note that the value objects. public Money(int amount. and we care about them only for what they are. A good example of a value object could be a Money object as the one below.4. a web-service could be created in the application and the only responsibility of that web-service would be to digest the XML input and delegate the call to the CertificateService in the domain layer. In terms of the CertificateService mentioned above. This does not mean that a system built using domain-driven design should not expose web-services. stateless classes are often comprised of static methods that do not require an instance of the class in order to be invoked. Since the concept of a service is used widely in computer science in general I feel it is important to clarify that a service in terms of domain-driven design should not be confused with a service such as in web-service. An aggregate. Figure 13.4. Classes that implement services should be stateless. . when using domain-driven design and a layered architecture.30 terms of other elements of the domain model and its operation names should be part of the ubiquitous language. In practice. however.4 Aggregates Aggregates are clusters of associated objects which are worked on as one unit when it comes to data changes. An example of a service in our problem domain could be a with operations such as CertificateService CreateCertificate and RevokeCertificate. 3. a web-service would be part of the application layer and not part of the domain layer. The following is the skeleton of a repository: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 public class UserRepository : IUserRepository { public User this[Guid userId] {get{ . }} public IEnumerable<User> FindByLastName(string lastName) .. public IEnumerable<User> FindRecentlyCreated() . But this SQL select firstname.. All these methods actually implement some kind of domain logic with regards to users. 3. public IEnumerable<User> FindAllUsers() . that SQL statement could potentially be duplicated many places in the code. making it harder to change the logic of nding recently created users. if for some reason the denition of recently created would change. public void AddUser(User newUser) .. } The above repository lets us fetch a User object by a unique id.. but the aggregate itself can be much more complex than the one shown here. meaning that this example.. nding all users with some last name. Attribute objects are only interesting to modify in the context of a User and do not make sense outside the context of a User.. The FindRecentlyCreated method could be using a SQL statement such as :createdDateThreshold would be a bound parameter).. The repository should hide all database code and give the illusion that all objects are in RAM..4. the it is the primary entry point when accessing the objects that make up the aggregate.5 Repositories The purpose of a repository is to fetch and save entities and value objects from a data store.. By implementing the IUserRepository and referencing the con- crete instance only by means of its interface. Aggregates can only have one aggregate root...31 Figure 13 shows an aggregate made up of a User object with various Attribute objects. Another benet of using repositories is that they can be made interchangeable. In User object is said to be the aggregate root.. and it also allows us to add new users. separate specic implementa- . . lastname from users where createdDate > :createdDateThreshold (where statement expresses domain logic so if we did not have a central place to put this function. it is important that developers understand the implementation of the repository being used. By specifying the concrete class that implements the repository in some external conguration le can make it easy to switch between the live environment and a developer or testing environment . thus facilitat- could be used for fetching dummy instances of the ing a means of testing various aspects of the code without having to access the production database. that User class. we could imagine the need to User class was created recently. A live system typically fetches its data from large databases. This is especially useful for testing. In our current example we could imagine an alternative implementation of the 1 IUserRepository. and replicating these databases to a development server can be both time consuming and sometimes even impossible. A specication is a class with only one method called |IsSatisedBy|. The method has a Boolean return type and the argument of the method depends on the type of object it is a specication for. 1995] . The term determine if a given instance of the recently is chosen on purpose because it is unspecic and denitely contains 1 This concept is actually a well established design pattern. 3. if someone was to write a method for counting all users in the system the function the FindAllUsers could be used and the Count method of IEnumerable<User> could be used to return the number of users. thus stressing why it is important that developers understand the implementation of repository functions.4. often referred to as strategy or policy. However. If we continue our User class example from above. That would most likely perform badly. In Chapter 5 I will discuss how OR-mappers can be leveraged to create repositories quite easily even for complex database schemas. when using a real repository implementation that accesses the production database the eect of performing the count this way would be that all users in the database would be loaded into memory just for the sake of counting them. called InMemoryUserRepository. However.6 Specications Specifications provide a means of encapsulating and more importantly naming those small Boolean expressions that tend to appear in most programs but whose purpose and or meaning with regards to business logic can be hard to gure out. This would probably perform quite well with a dummy repository implementation returning only a limited number of users. when using this approach. See [Gamma et al. For instance.32 tions of the repository can be seamlessly interchanged. } } The above code snippet is a class that implements the recently created business logic. However it suers from two serious drawbacks.. } The above code snippet shows one way of determining if a User was created recently. This observation is probably right for this very simple example..AddDays(−10)){ .. Now if we were to use this specication in code it could look something like the following: 1 2 3 4 User u = .IsSatisfiedBy(u)){ .. and furthermore we have created a single placeholder for the business logic. the code would have to be duplicated. leading to extra maintenance if the criterion ever changed. meaning that without code comments it does not convey to the reader which business rule it is checking. a class with a lot of predicate functions like large. one could easily imagine cases where the object itself does not contain all the necessary information to check the condition. 1 2 3 4 5 6 7 8 //Specification class that determines if a user has //been created recently public class RecentlyCreatedSpecification{ public static bool IsSatisfiedBy(User u){ return u. First of all it is not very expressive. It could be argued that what is achieved by the specication in this example could just as well have been placed in a function on the User class.CreatedDate > DateTime. Furthermore.AddDays(−10). making it easy to maintain.CreatedDate > DateTime. 1 2 3 4 User u = . IsRecentlyCreated easily becomes very Furthermore.... Let us assume that the denition for a recently created is that it is was created within the last 10 days. .33 business logic with regards to our current example. However. if(u. } The above code example shows how the resulting code is much more expressive in terms of what business rule is checked. if the same check was needed somewhere else..Now. if(RecentlyCreatedSpecification.Now. Most importantly factories should not automatically assign new ids to entity objects when they are reconstituted.7 Factories The factory design pattern is widely used in object-oriented design. module names should make sense in 2 Or . This means that everything needed to construct the object should be passed to the factory such that the construction can take place in one single interaction with the factory. . modules is used to group related concepts from the model. the continuity of that particular object would be broken.g. but it does mean that the factory should make sure to invoke this logic before returning the instance of the object in question.4. As with everything else in the domain model. A 2 module could be either a dll or merely a namespace within a dll. The most important property of factory methods is that they should be atomic. One important benet of using factories as opposed to con- structors is that the methods of the factories can have abstract return types or return some implementation of an interface.lib or whatever equivalent is oered by the technology used. because if they did. 3. in conceptual packages. Factories used for construction of entity objects should behave dierently when creating new instances of an object as opposed to creating a reconstituted instance of the object. This way. Furthermore.4. . I have seen examples of open source software packages (such as OpenSAML [OpenSAML 2009]) where even the simplest of objects had to be created through a factory. The analogy of the factory object is that sometimes the construction of an object is too complex for the object itself to perform. which makes changing or substituting these objects completely transparent to the caller. and it needs to be created in a factory. the user of the factory method does not need to reference the concrete type of the object that is returned. Simple objects that do not implement a common interface or use polymorphism should probably not use factories for their construction. The overuse of factories can actually obscure simple objects and make the user think that the object is more complex than it actually is. it is important that the factory ensures that all invariants for the object being created are met.34 3.jar. Factories should of course not be used for everything.8 Modules The concept of e. all classes containing User related logic. This does not mean that the logic that checks these invariants should be moved outside the object being created. In domain-driven design a design that renders itself well to refactoring and iterative renement is called a supple design.5. By being explicit about . operations.35 terms of the ubiquitous language. thus leading to duplicate code. In fact. the lack of a good design may completely stop refactoring and iterative renement since developers will dread to even look at the existing code and they will be afraid to make changes. be obvious that the naming of this method and parameter is more intention revealing than for example Create(User u). Likewise. The It should method name reveals that the method adds a user. Therefore. monolithic design elements also impede code reuse.5. An example of an intention revealing method could be the function from the AddUser(User newUser) UserRepository discussed in Section 3. Determining whether to use individual dll's or just namespaces within that dll should probably be based on the number of classes involved although no details are oered in [Evans 2004]. since a change to the existing mess may break some unforeseen dependency or just aggravate the mess. and argument names should be named in a way that reveals the eect and purpose.5 on page 31.1 Intention-revealing interfaces The word interface in with the keyword intention-revealing interfaces should not be confused interface as known from many programming language. Evans argues. classes. There are no exact formulas to achieving a supple design. just as design elements with confusing or misleading names may lead them to used inconsistently by dierent developers. The names should however not contain any information on how the eect is achieved. and that the argument should be a new user (as opposed to an already existing one). In this context. It is therefore important to make the design open to refactoring and change such that new knowledge can easily be incorporated into the design when it is discovered. the word interface refers to the naming of all public elements of a software artifact. Refactoring When we set out to write software we never know everything about the domain. 3. 3. These patterns are shown in Figure 14.4. but Evans oers a set of patterns that could lead to it. A system that lacks a good design does not encourage developers to leverage existing code when refactoring. and will be discussed in detail in this section. but the conceptual basis of the design will have been corrupted. and the two developers will be working at cross-purpose.5. 3. a side eect means a change in the state of a system. If someone other than the original developer must infer the purpose of an object or operation based on its implementation. but in the context of computer science in general and domain-driven design specically. the code may work for the moment. In [Evans 2004]. the risk of some developer unintentionally using the method for something else than its intent is minimized. Evans argues that generally there are two dierent types of functions in a system. that new developer may infer a purpose that the operation or class fullls only by chance. and what its parameters are.36 Figure 14. Patterns that contribute to a supple design.2 Side-eect-free functions A side eect normally means some unintended consequence. . Evans gives the following example of why intention-revealing interfaces are important: If a developer must consider the implementation of a component in order to use it. Functions that query system data and functions that alter system state. Evidently. what the method does. If that was not the intent. the value of encapsulation is lost. Evans states that the add() function should not be split add() and subtract() There is of course no simple up into two separate functions. 3. It does not always make sense to code the pre and post-conditions as part of the program because of performance overhead or missing support in the programming language.37 changes to system state neither can nor should be avoided per se. each function should perform something meaningful in its own right. according to the conceptual contours pattern. recipe for achieving conceptual contours.and post-conditions that should always be satised before and after invoking a function. nothing more and nothing less. side eects cannot be avoided.5. the assertions should be included in automated unit tests and in the documentation of the program. and therefore the decomposition of design elements into cohesive units is something that must be based on intuition and which must be expected to undergo many changes over time until a good granularity has been achieved. This is achieved through a set of pre. functions that alter system state should not return data. Evans argues that no single granularity will t everything in our domain model. Thus.4 Conceptual contours The conceptual contours pattern strives to achieve a meaningful granularity of functions. so instead of using a naive approach where every function or class is limited to a xed number of lines of code or the like. but it should not span several conceptual operations. so if that is the case. This means that functions that return data should not alter the system state in any observable way. classes and interfaces in the model.5. The assertions pattern in conjunction with intention revealing interfaces makes it even more explicit what the side eect of a function is. according to the side-effect-free functions pattern. the granularity should be based on what the function or object conceptually achieves. In the same way the functions should not be combined into one. 3. . In the same way. these two types of be- haviors should not be mixed.3 Assertions As we have just discussed in the preceding section. However. As an example of the latter. each object should be a whole concept. In the same way. 6 Closure of operations The name closure of operations comes from mathematics. especially those that are not essential to the concept. In general. 1 + 2 = 3. The addition of two integers yields another integer.38 3. the closure of operations pattern is Because of the mostly used for value types. the addition operation is closed under the set of integers. It is of paramount importance that these teams have a shared understanding of the model such that the meaning of each concept in the model is the same for all teams. thus leading to misuse of that class . However.5 Standalone classes The pattern called standalone classes revolves around reducing coupling between objects. The goal of this pattern is to achieve low coupling where ever possible because it makes the model easier to understand. Its function Money class presented in add is closed under instances of the Money class. and it is easier to understand. and not so often for entities. 3. life cycle of an entity it is not natural that a function on an entity would return another instance of that entity. Evans argues that every dependency that a class has makes the class more complex and the relationship between the class and its dependencies have to be understood in order to fully understand the class. If this is not the case. Therefore. classes that represent one concept to one team could potentially represent a dierent concept to another team.5. classes with low coupling are easier to understand. Model integrity Building large systems often involves several teams of developers who develop each their part of the system.5. In mathematics.6. since it does not introduce new concepts. e. value types can often have functions that oer closure of operations. However. it is important to reduce the number of interdependencies between classes.2 on page 29. It is obvious that taking the standalone classes pattern to the extreme leaves us with a (useless) model where everything is reduced to a primitive.g. 3.4. An example of a value type that has a function with closure of operations is the Section 3. This property can be used when designing a good model. but also because some of these patterns are useful for modelling in general. The teams may have an intention to work on the same model and share code.6. Figure 15. as outlined by domain-driven design are shown in Figure 15. The patterns used for maintaining model integrity. But sometimes systems are too big for a single unied model to exist. especially when communication between teams is not as good as it could be.1 Bounded context On large projects with several teams working on the same system it can become unclear whether or not the teams are working on the same model. nevertheless. the system being developed is developed only by this author. Model integrity patterns. not least because the future maintenance of the system being designed may have to be undertaken by more than one person or team. Having one large unied model requires good communication and processes to detect conicting interpretations of the model. Within the scope of this thesis. 3.39 and in the end maybe faulty behavior in the system. In order to overcome this . the patterns of domain-driven design that address model integrity are still interesting to describe. 3. the boundaries of these contexts can be explicitly dened in terms of usage within the application. continuous integration of the model is something that must be achieved by continually discussing the model and relentlessly exercising the ubiquitous language in order to strengthen the shared view of the model and to avoid that concepts evolve dierently in developers' heads.3 Context map When working with a number of bounded contexts. A good set of automated tests should make it more comfortable for developers to refactor existing code. 3. Domain-driven design takes the concept of continuous integration a step further.. As opposed to the continuous integration of code. This should happen through automated builds and automated tests.6. they will know instantly whether or not something is broken by the change they made. and hereby eliminates the risk that dierent teams will think that they are working on a unied model when in actual fact they are working on conceptually divergent models. The context map can be a diagram or a . but also on the continuous integration of model changes. Once a number of bounded contexts have been dened. thus being able to achieve a pure model within the bounded context. The relationship be- tween bounded contexts often involves some kind of mechanism to translate data between bounded contexts. Continuous integration of code is of course something that must take place in parallel with continuous integration of the model. Evans argues that it may sometimes be benecial to split the code and model into several bounded contexts. namely by not only focusing on the continuous integration of code. because by running the automated tests. the continuous integration of a model is not something for which a set of automated processes exist. code base etc.2 Continuous integration Continuous integration is a well-known practice that is aimed at speeding up delivery times and decreasing integration times in software development. A bounded context is an explicit denition of a context in which a given model applies.40 problem. a context map serves the purpose of creating a global view of the entire system and dening the relationship between the dierent bounded contexts.6. Therefore. Using bounded contexts makes it explicit to developers that they are working on dierent models. The rationale behind this is that if there is a real need for an external component. Evans argues that a pattern called the conformist pattern should be used. . The conformist patterns says that to conform to the model represented by the bounded context represented by the o-shelf component. 3.6 Conformist Sometimes one of the bounded contexts in a system is an o-shelf component with a large interface.5 Customer/supplier development teams The customer/supplier development teams pattern suggests that when there exist a relationship between two teams where one team delivers code that the other team is dependent on. but it can be any part of the model that is needed by all or some teams. and that the names of every bound context described in the context map enter the ubiquitous language.6.6. The shared kernel often represents the core domain of the system and/or a set of generic subdomains. The obvious benet of having a shared kernel is that code reuse between teams can be maximized. When this is the case. 3. then each team should work within their own bounded context. then that component probably represents valuable knowledge and probably has a well thought through model .41 text document or both. Making changes to the shared kernel requires consultation with all its users such that the model integrity of the shared kernel is not broken. and it is important that everyone involved in the development of the system know and understand the context map. 3. Doing so makes it easier to dene responsi- bilities and deliverables and the joint development of acceptance tests will validate whether these have been met.4 Shared kernel A shared kernel is essentially a part of the model that is used in more than one bounded context. Therefore.6. conforming to the model of the component is usually a good idea 3 3 Evans also argues that if this is not the case the use of that component should be seriously questioned. 3. The legacy system may have weak model or a model that does not t the current project. The facade is an interface that simplies access for the client. and therefore may not be worthwhile. and that can be used uniformly by all bounded contexts using the subsystem. the open host service pattern can be used. When this is case.6.6. thus making the subsystem easier to use. An anticorruption layer is often made up of a number services that have responsibilities in terms of the current model and internally use the facade and adapter patterns as described in [Gamma et al. The conformist pattern is most important when the interface with the component is big.8 Separate ways Sometimes the benet of integrating bounded contexts is small. 3. thus allowing developers to nd simple and specialized solutions within a small scope. the be used. and because the possibility of being dragged into a better design exists. . and making customized translators between all the subsystems and all the bounded contexts can be time consuming and hard to maintain over time.7 Anticorruption layer Interfacing with legacy systems is often a necessity for one reason or another. and an adapter is a wrapper that is used to transform messages from one protocol to another. The open host ser- vice pattern is about dening a protocol that gives access to the subsystem.6.9 Open host service Sometimes a subsystem has to be used by many bounded contexts. 1995]. The separate ways patterns can be used when that is the case. So instead of doing so. 3.42 because less translation between concepts is necessary. anticorruption layer pattern may An anticorruption layer is a technique where legacy systems are isolated through classes and functions that honor the current model and hides any conversion and translation logic from the current model to the model of the legacy system. 10 Published language Finally.6.43 3. . the published language takes the open host service pattern to the next level by dening a formal common language for the service. An example of a published language in this context could be SQL or XML. 6 and 7). 4. Therefore some sections may include a refactoring sub-section. In the following I will refer to the software as a whole as the IdP. the aim of the following sections is also to illustrate the iterative process involved in domain-driven design. administrating a single IdP instance. the aim of this chapter is to present the model from a point of view that is not overly implementation specic. but the implementation details will be left for the following chapters (5. however. In domain-driven design the model is comprised of both the code and all the documents that describe what the system does and which parts of the system are responsible for doing what.1. and the person who administrates the IdP the administrator (or an administrator since there may be several). Therefore. You may argue that instead of having a refactoring sub-section I could have just performed the refactoring right away. However. Figure notation Any model is tightly coupled to the data that it represents. Therefore. and therefore I have chosen the sub-section approach. Figure 16 shows an imaginary model concept called SomeConcept which 44 . interfaces and functions. domain-driven design also emphasizes that the model must be understood by (typically) non-technical domain experts. It is important to notice that in this context the administrator is a customer of the software. it seems natural to present the data schema together with the model.Dening the model 4 In this chapter I will dene a model of the software that was specied in Chapter (2). Apart from presenting the model. I will use terms such as classes. and for most applications and this one in particular. This does. that data is stored in a database. the entity objects) the same. not illustrate the iterative process that is one of the cornerstones of domain-driven designas well. Since I have great focus on the ubiquitous language I will name my database tables and the classes that encapsulate each row in a given table (ie. The schema view shows shows all the data properties of each of these two concepts. Apart from being naming convention that is easy to remember and easy to recognize. I use tity objects. It also shows the primary key. the SomeConceptId is a reference to the OID of some instance of SomeConcept. the arrow in the gure denotes that a For example. In fact. SomeConceptAttribute is coupled to a SomeConcept. OID OID (short for object id) as a primary key for all en- such that is should always be easy to infer which property references the of the other class.45 Figure 16. using this naming convention is useful in the imple- . Schema view also shows relationships between the classes. which is denoted by a small key symbol next to the property. Figure explanation has attributes called SomeConceptAttribute. I use a strict naming convention for these references. Figure 17 shows the convention for interfaces. As a matter of fact. these properties will denote some computed property that is not a part of the data schema. This nature of the relationship is explained with some text above or under the arrow. I will explain all methods and their parameters where I deem it necessary for understanding its purpose. are references to instances). which are: . You may also assume that the relationships denoted by arrows in schema view still exist in class view. and the class. An abstract class. I have now introduced the gure notation that is going to be used throughout this chapter. but they are not shown in order to keep the gure simpler. For example. There may sometimes be properties on the classes in class view. and it is time to take a look at the core concepts of the model.46 mentation as we shall see in Chapter 5. of the classes. and that each has an instance (reference to an instance) of a there may be blue arrows between the classes in class view denoting some kind of relationship between the classes. color). you may assume that all the properties that were there in schema view are still there. In class view. Also note that some classes are not part of the data schema at all. and abstract classes in class view. ISomeInterface represents an interface (note the green SomeClass is a class that implements that interface. when introducing new concepts that expand something that has already been shown. denoted by the circle and interface name above the class. I will not show relationships that have already been explained. when introducing a new concept I will not show relationships to things that have not yet been introduced. Class view also shows what type of class each class is in terms of domain-driven design. SomeOtherClass that inherits SomeAbstractClass has an arrow to class and a small arrow with the name of the base class next to it. Finally. and therefore they may have non-computed properties. if any. Likewise. For example. I will generally explain the type and meaning of all the properties unless they are self-explanatory are identical to previously explained properties. This is denoted in italics above each class. such as SomeAbstractClass has dotted line around it. Please note that I may chose not to show all relationships of a given table in schema view. as exemplied in the gure. but if there is. while. for Figure 16 you may assume that each a (possibly empty) list of SomeConcept class has SomeConceptAttribute instances (technically these SomeConceptAttribute instance SomeConcept class. The class view on the other hand shows additional functions. Events in this context is logging of errors and other activities in the system.47 Figure 17. I will refer to logging and tracing before actually introducing the part of the model that describes them. Figure explanation Plug-ins Users Conguration Certicates Credential providers Protocols Connections Claims The runtime Events Please note that I have chosen to introduce reporting of events as one of the last things.12. since as a concept it is not as central as the others. which you can nd in Section 4. . without the need of recompilation. Figure 18 shows two important interfaces used for creating plug-ins. This technique is commonly used. Figure 18. however. 1995] with the motto "Program to an interface. letting the system be extensible in this way would denitely allow for this in the future. An extensible system One of the most important properties of the system is that it must be extensible in two areas. and thus the IdP will not have to care about the how the plug-in is implemented.2.1 Plug-ins A plug-in is an extension to the IdP. but only that it satises the contract dened by the interface. not an implementation". Current development visions for the IdP do not include any third-party plug-ins. These properties are just descriptive strings. A plug-in is created by coding a class that implements the IPlugin interface.48 4. Class view: IPlugin and IEndpoint interfaces Both the IPlugin and IEndpoint interfaces specify that properties called Name and Description. This will be dealt with through the concept of plug-ins. The core IdP framework itself does not implement any plug-ins. it only denes how such a plug-in should behave. Another important benet of using a plug-in structure is that extensions to the system can easily be deployed to existing installations. namely in the way a user presents his credentials and in the protocols supported by the system.2. and endorsed by [Gamma et al. 4. The . The should return an instance of a class that can handle web requests. Figure 19 shows the properties of a User.49 ProvidedLoggingSources property of the IPlugin interface returns a list of logging source names that the given plug-in uses when (or if ) it creates log entries. The Password is a computed hash of the user's plain text password. Lastly. an endpoint denes a (web-site absolute) path and some class that knows how to handle any requests for that path.3.Net framework it would simply be an implementation of the IHttpHandler interface. 4. Handler property Every Conceptually. and comparing the result with the value . These a quite self-explanatory. However. a DateCreated and a LastLogin property. by denition. and that. it knows something about. The plug-in concept of our model is actually a general abstraction that will be specialized by the more specic credential plug-in and protocol plug-in as we shall see later on. using the original salt value. A user represents some person which is known by the IdP. all that needs to be done is to compute the hash of the password entered by the user. and in the . so that needs to be added. by the GetEndpoints function dened IPlugin interface must return a list of IEndpoint implementations. the hashing function will only return the same computed hash as the one stored in the User object if the same salt value is used. So to verify a user's password. the system must therefore create the logging sources by using the LoggingSourceRepository. and the PasswordSalt is some random value used by the hash- ing function as entropy. a PasswordSalt property. The bounded context comprised by each plug-in will have a conformist relationship to the IdP since the plug-in must adhere to the model dened in the IdP. modern web development framework has extensibility points for such classes. the Add The logging source repository denition that we saw earlier did not have an function. Given the user's plain text password. During installation of a plug-in. Each plug-in implementation will be a bounded context in domain-driven design terms. the user contains a UserName. Users One of the core concepts of the IdP is the user. Apart from a unique id (the User has a Password and a OID). 50 Figure 19. the userName and clearTextPassword. you may think that this is silly. Now.3.1 User creation The administrator must be able to manually create users. a The UserFactory class has CreateNew method that takes two arguments. Schema view: User stored in the User object. by passing it as a parameter to the Note that the UserRepository also has a ExistsByName function. Comparing usernames is just a matter string comparison right? The answer is no. Using this approach disallows anyone with access User class VerifyPassword function. The ExistsByName function in turn uses a specication (the ExactUserName specication) to compare the given username with that of existing users. the classes shown in Figure 20 are needed. Note that the shown in Figure 20 has a 4. and false other otherwise. sets the creation date and other relevant User instance. To achieve this. that does just that. The IdP has a rule that says that usernames are case insensitive. and the ExactUserName specication im- plements logic that compares usernames case insensitively. and it is therefore important to use the ExistsByName function to check that no user with that name already exists. . namely a When called. elds and returns a CreateNew method computes the hashed password and password salt. user to have the same username. to the database to see any of the users' passwords. The User instance can then be persisted Add function of the UserRepository class. This funcThe IdP does not allow for more than tion takes a username and returns true if a user with the given name already exists. will discuss how this can be achieved in a generic way.3. You can think of this as an easy way for the administrator to create users in the IdP. Until then.51 Figure 20. 4. It may also need to have a all users whose UserName starts with some specic letter.3. the UserRepository will need to have a FindAll funcFindByFirstLetter function to easily fetch In Chapter 6 I tion. you may assume that all repositories have at least a FindAll function. In fact. because instead of having to type in all the information about each . To cater to this need. Class view: User creation 4.2 Displaying existing users The administrator will need to be able to display a list of all existing users.3 Inviting users As you may remember from Chapter 2 an important feature of the system is that it must be possible to invite users to register at the the IdP. search functions will probably be needed in all the repositories. 52 user, the invitation mechanism delegates this work to each user. The act of inviting a user merely entails sending an email to some email address with a verication code in it. The verication code is simply an automatically generated code that the IdP links the email address to which it was sent. By presenting his email address and the correct verication code, the user can prove that he was indeed invited. The user then chooses a user name and types in any additional information (for example values for his claims as we shall see in section 4.10) and he is then enabled in the system without further involvement of the administrator. Figure 21. Schema view: User invitation Figure 21 shows the schema involved in class holds the each user invitations. The UserInvitation UserInvitation has a number of UserInvitationStatusHistory refer- ences, each of which reference a is used to dene statuses, such as email sent, code veried and user created. The history table merely links a status to an invitation using the StatusDateTime property to record the date and time of the given status. When the user is created as an actual User in the system, an InvitationToUser object is created in order to be able to track which invitations correspond to which users. Figure 22 shows the entities from Figure 21 in class view, together with all the other classes needed for user invitations. trates most things related to user invitations is the The class that orches- UserInvitationService 53 Figure 22. Class view: User invitation class. The IniviteUser method takes a single parameter, an email adCreateNew method UserInvitation dress, and performs the following steps; First it calls the of the UserInvitationFactory, passing the email address as the only paVerificationKey for the rameter to the method, to create a new instance of the class. The factory method internally generates the UserInvitation instance, and it also generates an instance of UserInvitationStatusHistory which it couples to the UserInvitation instance. Next, the UserInvitation Add method of the UserInvitationRepository. instance is persisted by calling the Finally, a mail message, including the verication key is composed, and the SendEmail method of the EmailService is called to send the email message. The InviteUsers method of the UserInvitationService takes a comma sepInviteUser method once for each CreateUserFromInvitation method takes an email ad- arated list of email addresses and calls the email address. The dress, a verication key, a user name and a password as parameters. It then nds the UserInvitation instance corresponding to the email address and VerifyKey method VerifyKey method takes the verication key provided veries that the verication key is correct by calling the on that instance. The by the would-be user and compares it to the one it holds internally. If everything is ok, the invitation history is updated by creating the proper objects and calling the proper repositories. Finally a new persisted, and nally an User object is created and InvitationToUser instance is created and persisted. 54 And that concludes the user invitation, and the user is now ready to login with the IdP. 4.3.4 Self registration Another way of user creation is through self registration. Self registration has many similarities with user invitations, but diers in that the would-be user and not the administrator initiates the creation process. Furthermore, any self registration must be approved by an administrator. Figure 23. Schema view: Self-registration Figure 23 shows the schema view of the classes needed for the support of self registration. You can probably recognize most of the concepts from user invitations. The main dierence here is that the SelfRegisteredUser User object, such class contains all the properties needed for creating a as Password, PasswordSalt, etc. The SelfRegisteredUser class contains a property called RequestedUserName. The reason for this is that at the time of self registration a given user name may be available, but at the time of actual user creation, the name may have been taken by someone else. Figure 24 shows the classes for self registration in class view. The classes are similar to those of user invitation, and therefore I will not explain them in detail. It is important though, to note that there are many more statuses involved in self registration than there are for user invitations. Registered conrming his registration. Lets see how the model could benet from having these concepts made more explicit. The administrator may also choose to deny registration. Finally the administrator may choose to ignore the registration. Class view: Self-registration is the status for newly registered users. instance of a 4. First of all. such as UserInvitationService. Subject. as described earlier. When the The address is veried an administrator must take some kind of action. the status is changed to Veried. which results in a status of Ignored. which may have changed from what the user wanted. The UserRegistrationService is the class that provides all these Also notice that the status changes.55 Figure 24. but no email is sent. and returns an User.5 Refactoring The model thus far contains some implicit concepts regarding sending emails to users. . This email will contain the actual user name. UserService and UserRegistrationService compose mail messages internally and sending them through the EmailService. UserFactory has been extended to include a method called of a CreateFromRegistration which takes an instance SelfRegisteredUser and an optional new user name. and an email being sent to the user. which results in the status changing to User created and an email being sent to the user. these dierent email messages should be made explicit concepts in the model. administrator can either approve the registration. After registration an email is sent to the provided email address in order to verify the address. Instead of having the services.3. When the user conrms the email address. which results in a status of Denied. Figure 25 shows the classes involved in sending these email messages. the interface IEmailMessage denes three properties. including the verication key (available through the property of the UserInvitation instance passed to the constructor) and a Email property is also read from link to the verication page. Message and Recipient BaseEmailMessage.11. For example.56 Figure 25. Furthermore. The Subject is simply a static string saying something like You have been invited to join an identity provider. Now. A last concern is if the 4 EmailService has been made obsolete. Class view: Emails Message and Recipient. passing itself as the parameter. Finally. the 4 BaseEmailMessage implements the IEmailMessage interface. but has all the properties from the interface as abstract members . You may wonder how the correct address to the verication page is obtained. This address. the UserInvitation instance. So the signature of this method is changed to take an instance of an IEmailMessage as a parameter. and takes an instance of an UserInvitation in its constructor. These are the properties that are needed by the SendEmail method of the EmailService class. This method calls the EmailService's SendEmail method. gure. implements a (non abstract) method called BaseEmailMessage This Send. allows us to dene explicit email messages such as the ones shown in the InvitationEmailMessage is a class that inherits the Subject. The Message property contains the message VerificationKey body. It implements the abstract properties from the base class (and which originate from IEmailMessage). Could the Meaning that they must be implemented by any inheriting classes . and a lot of other parameters about the the IdP will be made available to all components in a way explained in Section 4. each plug-in will rely on the ability to be congured in order to function correctly. but which are logically grouped in lists (or tables).4. text configuration element which hold string data. the namespace is just used to group conguration elements belonging to some part of the system. if at a later point I wished to implement a method called CheckServerConnection. This does not make the model implementation specic. certificate configuration elements which hold references to certicates. than on a BaseEmailMessage class. Conguration Conguration of the IdP is essential in many ways. First of all. Figure 26 shows the schema view of the conguration elements. Common for all the conguration elements is that they are identied by a and an NameSpace ElementName. or protocol. and indicate thereby indicate that only the values true and false would be valid. but the the type of data in the string. This is also the case for connections. EmailService is to know about which SMTP server to use and which credentials (if any) to present to that SMTP server. Therefore a conguration framework is called for. The value is always stored as a string. which are described later in this chapter.saml2.core conguration element. for all elements belonging to the IdP itself. The information contained in the is to some extent specic for the way certicates are stored on machines running Windows.57 logic implemented by the well have been put in the SendEmail method of the EmailService just as Send method of BaseEmailMessage? It denitely The main purpose of the could have. The Type eld is used to indicate Type could. conguration of the IdP itself must be possible. for example. Furthermore. Generally speaking. Furthermore. The CertificateConfigurationElement contains references to certiCertificateConfigurationElement cates. the The namespace used by any given part of the system must be unique. but I do not think it is good idea. TextConfigurationElement has a Value eld which contains the actual value. it EmailService would denitely be more logical to have such a method on an class. but .connection1 for some connection.saml2 for a some protocol implementation and nally protocol. and list configuration values which also hold string data. Therefore. The NameSpace is a string identier of the owner of the The namespace could be something like idp. The conguration framework denes three basic types of data. 4. be Boolean. The most important features of conguration elements are that they must be presentable to the administrator such that they can be read and edited. and where zero or more rows of data may be called for. there must be a mechanism for validating the values typed in by the administrator. So a as shown in the gure. ListValue is tied to both a row and a column. or part of the system. . acts as a conformist to the environment where it is ultimately going to run. the meaning of which is dened by a column. Furthermore. a single row of data contains several values. if such a list exists. Schema view: Conguration elements merely indicates that the bounded context that the model comprises. that relies on a given conguration element must be able to provide a list of valid values for a given element.58 Figure 26. The ListConfigurationElement is used to dene conguration elements ListConfigurationElement could be presented. The plug-in. Figure 27 shows how a Generally. only certain values make sense for a given conguration element. that have a tabular nature. Another important concern is that in some cases. denes that a single property called Namespace. Class view: Conguration elements Figure 28 shows the conguration elements and their repositories in class view. The important thing to notice here is the INamespaceElement interThe interface merely Since all three entities face. . that is implemented by all the three entities.59 Figure 27. Conguration lists Figure 28. in class view. all three repositories can use the same specication to nd the elements of the respective type that matches a given namespace. It can validate the values of all conguration elements as a whole. we will call a configurable com- ponent. of course). This way. they can implement the interface without further ado. This is useful. be it the core IdP itself or any plug-in or connection. To provide this functionality a number of classes and interfaces are called for. It can provide a structure that logically groups its conguration values such that they can be displayed to an administrator in a way that makes sense in the context of the meaning of the conguration elements. A congurable component is represented by the IConfigurableComponent interface. thus making it straightforward to nd all conguration elements belonging to a given component (identied by its namespace. and the value of a single conguration element individually. because we can now dene a specication called NamespaceMatchesSpecification. whose IsSatisfiedBy method takes an instance of a INamespaceElement. Any part of the system that depends on conguration values.60 have such a property already. A congurable component provides the following functionality: It can provide a set of default values for all its conguration elements. Figure 29. Class view: Congurable component Figure 29 shows some of the concepts needed to support congurable components. This is useful because the itself only contains information about where the certicate is stored. Also notice that the method called CertificateConfigurationElement denes a CertificateConfigurationElement The LoadCertificate that loads the actual certicate from the cer- ticate store. The interface denes which namespace . method returns an instance of the framework class called X509Certificate2. and validates its value (or values. GetDefaultValues is a Furthermore. Figure 30. or null if the validation succeeds. Finally. or heading if you like. is an overloaded method that takes one of the three conguration element types as a parameter. function that returns an instance of a ConfigurationSet. Visual conceptualization of conguration Figure 31 shows the classes involved in conguration element display. if it is a ListConfigurationElement). which is a string containing a caption. The main purpose of the DisplayConfigurationSet is to display a cong- uration set in some UI. containing an error message if the value is not valid. The DisplayConfigurationSet is the aggregate root of all the classes.61 the elements of the component has. This function is called during installation of a congurable component and its main purpose is to return all conguration elements such that they can be created in the database. the interface denes a set of functions. and as you may have noticed a few extra properties have been added since the class was introduced in Figure 29. a class that presents ements of the dierent types explained earlier. It returns a string. A ConfigurationSet is merely a container for the conguration elGetDisplayConfigurationSet DisplayConfigurationSet. The returns an instance of a the conguration elements in a way suitable for showing in a user interface. through the ElementNamespace property. such as that shown in gure 30. the method. The ConfigurationSet as a parameter and returns an instance of a DisplayConfigurationSet (which will contain information about potential errors). of the en- . The DisplayConfigurationSet class will be explained in greater deValidateConfigurationSet is a method that an instance of a ValidateElement tails later. These properties are the Caption property. DisplayTextConfigurationElement has the following properties. yet another grouping container.0 protocol conguration. and most Groups property. This is useful for the order of which the elements are displayed. A tab control is a common user interface control found in most applications. The DisplayConfigurationTab class also has a Caption property. The base class contains a single property. intended to be displayed as a single tab in a tab control. The last property of this class is the of Tabs property. it has a of DisplayConfigurationGroup instances. Class view: Display conguration elements tire display conguration set. used to logically and visually group conguration elements. ConfigurationElement is of type TextConfigurationElement and represent the conguration ele- . which returns a list of instances of the abstract The BaseDisplayConfigurationElement class serves the single purpose of being a common base class for the three display conguration elements. Apart from the erty. A caption value could for example be Identity provider conguration or SAML 2. A DisplayConfigurationGroup is Caption property it has an Elements propBaseDisplayConfigurationElement.62 Figure 31. DisplayConfigurationTab is a grouping container. ReadOnly property indicates whether a given display conguration element is editable. namely ReadOnly which is the only property The the three display conguration elements have in common. A This property returns a list DisplayConfigurationTab instances. The Groups property returns a list importantly. Those certicates that are not issued by another certicate are said to be self-issued.5. you know that most certicates are issued by other certicates. but a list of X509Certificate2 instances. The other two display conguration elements are similar. 4. and are often referred to as root certificates. I have chosen to extend the BaseDisplayConfigurationElement with a Description property. Now. amongst other things. it must. and they use a very well guarded root certicate to issue certicates that .4. correspond- ListValue inside the ListConfigurationElement returned by the PossibleValues property of the other two display elements. from where a value can be chosen. the PossibleColumnValues is similar to the ever it contains a list of lists. The PossibleValues may be null. ConfigurationElement property. if the ministrator can type in any value.63 ment that is being displayed.1 Refactoring There needs to be some way of communicating the meaning of a conguration element to the administrator. Finally. Such companies are called certicate authorities (CA's). null. ConfigurationElement. DisplayCertificateConfigurationElement diers in that its DisplayListConfigurationElement PossibleValues does not return a list of string values. where each list corresponds to the possible values for a column in 4. Therefore a Description property is called for. Finally. trust the root certicate that issued Quite a few companies world-wide make their living by issuing certicates. should the Description property be on the conguration element. in which case the adHowever. Sometimes the name of a conguration element may not be enough do explain what a conguration element is. Certicates If you know a little about X.509 certicates. howListConfigurationElement. or on its display counterpart? Since it is primarily a display related thing. from which the administrator can choose. They dier in the type returned by the Furthermore. the of possible values. that certicate. The diers in that its ing each Errors property is a matrix of error messages. the string values it contains will be rendered as a drop-down list. In order for an application to deem some certicate to be valid. The Error property is a string containing a PossibleValues is a list PossibleValues is not possible error message for the element. 64 other companies or people can buy. the certicate conguration elements were only references to certicates in the certicate store. you can create your own root certicate. Figure 32. or if you can persuade your business partners to trust your root certicate. because they ship with the operating system. The most important class is the CertificateService class. However. creating new certicates. The CreateCertificate . The administrator needs some means of inspecting (searching for) the certicates in the certicate store. and use this to issue other certicates. when you do not wish to pay for a certicate that you are going to use for test purposes. because you can choose to trust that root certicate. only parameter to the method is a so-called distinguished name. This will work ne on your own systems. deleting existing certicates and so on. However other systems will not trust your root certicate. a string value that is used to identify the certicate later. Class view: Certicates Figure 32 shows the classes involved in maintaining certicates. but that does not really matter as long as you are only using it for test purposes. We have already seen how certicates can be used in conguration of the system. The CreateRootCertificate The method can be used to create a root certicate as explained above. The root certicates of these CA's are per default trusted by all computers world wide. Sometimes. 5 A common format is the canonical encoding rules (CER) format which is supported on most platforms. and nally. only it searches in trusted root certicates store instead of in the standard location. and 12 is a version number. is an overloaded method. the cate to another. 6 A common format for exporting certicates with private key is the PKCS12 format. together with a It also returns a byte array. The DeleteCertificate is X509Certificate2 ExportCertificate method takes as its only X509Certificate2 class. All the certicate specications in the gure take an instance of the X509Certificate2 in their respective IsSatisfiedBy methods. This method takes an instance of an X509Certificate2 class. The FindCertificates method takes a list of certicate specica- tion instances. it would be useful to have a feature that could check if the certicate is being used by a CertificateConfigurationElement. The specications are.65 is used to create a certicate. ExportCertificateWithPK exports a certicate including its private key. password to use to protect the certicate. takes a byte array (in PKCS12 format) and a password. and not expired. The FindRootCertificates does exactly the same as the FindCertificates method. The other overload. eg. but furthermore. and imports the certicate private and public key into the certicate store. it takes a root certicate that is used to issue the certicate. and warning the administrator if this was the case. and returns a byte 5 The used to delete a certicate. The does the opposite. PKCS stands for Public-Key Cryptography Standards. and also in a common format . issued CertificateIsInvalid CertificateMatches compares one certi- to the distinguished name string parameter given in the specication's constructor. .1 Refactoring When deleting a certicate. CertificateByDistinguishedName which compares a certicate CertificateIsValid checks if a certicate is valid. The parameter an instance of the array representing the certicate's public key in a common format . The root certicate parameter is passed to the method as an instance of the X509Certificate2 framework class.5. As a parameter it also takes a distinguished name. 4. The by a trusted root certicate. 6 The ImportCertificate method The rst overload only takes a byte array (in CER format) and imports a certicate public key into the certicate store. and it takes an instance of an class its only parameter. and returns all the certicates that match those specications. this time corresponding to both the public and private key of the certicate. it is safe to delete the certicate. Class view: Refactoring the certicate conguration element repository Figure 33 shows that the CertificateConfigurationElementRepository FindByCertificate. If the list is not empty. 4. but it is the administrator who chooses which credential providers are enabled for his IdP. such that the administrator can enable it. A concrete credential provider plug-in implementation is contained within a module (dll) on the IdP server.6. Schema view: Credential providers . and possibly change those conguration elements to reference another certicate before deleting it. Figure 34.66 Figure 33. If the list is empty. The IdP will oer several dierent credential providers. It takes has been extended to include a method called an instance of an X509Certificate2 and returns a list of CertificateConfigurationElements that are currently the depending on that given certicate. the administrator can be told which certicate conguration elements depend on that certicate. Credential providers Credential providers are specialized plug-ins that facilitate user logins. and registered in the database. The ICredentialProvider interface extends the IPlugIn interface. Here. As mentioned in Chapter 2. explained in Section 4. while the SAML variant uses federation with another identity provider to collect the user's credentials. as there is only one local user database anyway. But what about the SAML federation scenario. and CredentialProviderType. This works well for the username/password scenario. that can be used to create an instance of the given credential provider using reection. . The model only allows for one instance of each credential provider to be congured. The credential mechanisms are actually quite dierent. in that the username/password variant validates users through the local database. SupportsConnections is a boolean that indicates whether or not the credential provider supports connections. must represent a type that implements a specic interface. it must be possible to setup several federations with several other IdPs.8. A credential provider may have more endpoints for various purposes. such that it can initiate the login sequence properly. The default endpoint is the endpoint that initiates the login for a given user. and indicates that the given CredentialProviderDefinition has been congured (is in use). however not by conguring more than one instance of the credential provider. and adds two new properties.67 Figure 34 shows the who main concepts involved in setting up credential providers. A CredentialProviderDefinition denes a credential provider It has a that can be used to log users in. pointer to a The ConfiguredCredentialProvider is merely a CredentialProviderDefinition. The instance of an DefaultEndpoint is a property that returns an IEndpoint (see Figure 18 on page 48). Figure 35 shows the interface that must be implemented by plug-ins that are credential providers. but by the concept of connections. so it is important that the IdP knows which is the default endpoint. This is useful for displaying a user interface to the administrator where he can congure the connections for a given credential provider that supports connections. which I will explain in detail below. And this is indeed going to be possible. which is a string that The type name contains a fully qualied type name. most importantly it has a Name and a Description.0 federation. the IdP must be able to accommodate credentials both in the form of username/password and via SAML 2. so I will not get into a lengthy explanation here. Such protocols include SAML. that there are repositories for both concepts (ProtocolDefinition and shown here.7. ConfiguredProtocol) even though they are not . Class view: Credential provider interface 4. Notice however.68 Figure 35. WS-Federation. OpenID and many more. Figure 36 shows the schema view of the classes for dening and conguring protocols in the IdP. There is no real dierence from the way credential providers where dened and congured. Protocols Protocol plug-ins are specialized plug-ins that provide the implementation of some protocol for exchanging security related information. 69 Figure 36. Schema view: Protocols Figure 37. Class view: Protocol plug-in interfaces 70 Many, but not necessarily all, protocols rely on the exchange of metadata in order to establish a trust relationship between an identity provider and a service provider . 7 Our IdP has no way of knowing the metadata format used in dierent protocols, therefore the responsibility of creating the metadata must be delegated to the protocol plug-in implementation itself. Therefore, the HasMetadata property of the IPotocolPlugIn can be used by the implementor of a specic protocol to indicate whether or not the implementation provides (or has) metadata. When that is the case, the implementation of GetMetadata is supposed to return a string representing If metadata for the given implementation. HasMetadata returns false, the GetMetadata method will not be called. It does however still need to implemented, because otherwise the interface will not be implemented. Throwing an exception in the method implementation will be acceptable. The SupportsConnections property is identical to that of credential providers. This indicates that the property could be moved to the common interface IPlugIn, something that I will address below. The IProtocolEndpoint interface is an extension of the property, IEndpoint interface, which adds one important RequiresAuthentication. This property indicates whether a given This is the case for all protocol endpoints that protocol endpoint requires the user to be authenticated before delegating control to the endpoint. send user's identity to a service provider. The IdPwill check any protocol endpoint for this property, and make sure to authenticate the user (through one of the congured credential providers) before delegating control to that endpoint. I will explain this in greater detail in Section 4.11. 4.7.1 Refactoring As mentioned before, the SupportsConnections property exists in both the ICredentialProviderPlugIn and IProtocolPlugIn interfaces. Therefore it can be moved to the IPlugIn interface. Furthermore, some credential providers HasMetadata property also need to export metadata. This means that the and the GetMetadata method could also be moved. This leaves the question of whether the two interfaces should be there at all. However I like the idea of having them both. They are good to have for the sake of of the possibility that the two may diverge in the future. Having them both also make them more explicit as concepts, which is important in terms of domain-driven design. 7 Service provider is also sometimes referred to as relying party. 71 4.8. Connections As already mentioned, it is normal for an identity provider to require a congured connection before it wants to communicate with a service provider. A connection is merely some information about the other party. This in- formation usually includes certicates, and a set of string values describing various aspects of the way communication between the two parties is carried out. In other words, a connection is a specialized conguration set. Given the fact that the data (the conguration set) that describes a connection is specic to a given protocol (the language the two parties speak), it must be responsibility of the protocol implementation to provide the default conguration set values, and the display conguration set. A credential provider may also need to congure connections, for example in the case of federated SAML2.0 login. In this case the credential provider acts as a service provider to some other IdP. Figure 38. Schema view: Connections Figure 38 shows the schema view of some of the classes needed to model connections. nection. A ConfiguredConnection represents some congured con- Besides having an object identier, a congured connection has a name, a description, and a namespace, represented by the properties a congured connection belongs to either a protocol or a credential provider. in class view in Figure 39. Finally. in order to support working with the data classes shown in Figure 38 a set of services and repositories are called for. The ConnectionNamespace. SaveConnection has two parameters. Class view: Connections Now. namely the parameters required to create a new instance of the and ConfiguredConnection class. Figure 39. updating and fetching of congured connections. OwnerId refers to the id of either a ConfiguredProtocol or a ConfiguredCredentialProvider. and the second is an instance of a ConfigurationSet. ConnectionDescription. Please note that the are not shown in the gure. These are ConnectionName. This mapping is achieved through the and ProtocolConnection CredentialProviderConnection classes. The CreateConnection method has three parameters.72 ConnectionName. the rst is an instance of IConnectionService interface. ConnectionDescription and ConnectionNamespace. The GetConnection method has a single paramOwnerId and it returns an instance of ConfiguredConnection. that map a congured connecConfiguredProtocol and ConfiguredCredentialProvider classes tion to either a congured connection or a congured protocol. eter. A These are shown connection service is a service that faThe cilitates creating. IConnectionService denes these functions. As the gure also shows. ConfiguredConnection. There are two classes that implement the namely ProtocolConnectionService and CredentialProviderConnectionService. . The ConnectionNamespace is used to map a congured connection to a ConfigurationSet. It has also been necessary to expand the called ConnectionConfigurator. not shown in the gure for brevity).9. is an abstract class that contains the implementation for since this method must do the same thing in the context of protocol connections and credential provider connections. This property is of type IConfigurableComponent. one for in- . it is the responsibility OwnerId parameter to fetch the cor- of the concrete implementation to use the rect instance of either ProtocolConnection or CredentialProviderConnection IPlugIn interface with a property (using the appropriate repository. The implementation of ent for the classes. Plug-in installation Since the system is plug-in based. and is used to validate the values in the conguration set for either a protocol connection or credential provider connection. and that there are no endpoints within the plug-in that handle the same path. again. for the GetConnection method. The dates (saves) the instance of SaveConnection method upConfigurationSet ConfiguredConnection through the ConfiguredConnectionRepository (not shown in the gure). 4. the installation of a new plug-in must not lead to multiple endpoints that handle the same path. In the case of the CreateConnection method. because only these implementations can know what the valid values are in their specic case. The concrete implementation of the IConfigurableComponent must of course be provided by the implementation of a protocol or credential provider. Figure 40 shows the classes needed for plug-in installation. Most importantly. and likewise it saves the instance of through its corresponding service (ConfigurationService shown in the gure). GetConnection and CreateConnection is dier- ProtocolConnectionService and CredentialProviderConnectionService This is due to the fact that they interact with instances of the ProtocolConnection and CredentialProviderConnection respectively (shown in Figure 38). The central class is the PluginInstallationService that has two methods. It is also important to test that the new plug-in implements the correct interface. The BaseConnectionService SaveConnection.73 Both classes inherit the BaseConnectionService. Likewise. it is important that installation of new plug-ins is supported by the system. it is the respon- sibility of the respective service implementation to create an instance of the correct connection class (either ProtocolConnection or CredentialProviderConnection) in order to associate the connection with either a protocol or a credential provider. Step 2 Test that the instance implements the correct interface (ICredentialProviderPlugin or 8 IProtocolPlugin). Step 4 Create an instance of the NoEndpointAlreadyExistsForPathSpecification class. namely the type string . Class view: Plug-in installation stalling protocol plug-ins and one for installing credential provider plug-ins. and the assembly containing the plug-in. and the paths of their handlers are appended to the list. In order to make this list. The constructor of this class takes a list of paths that are registered for those plug-ins that are already installed. which are then instantiated. The IsSatisfiedBy Step 3 Create an instance of the and make sure that its method has an argument of type returned by the GetEndpoints method and tests that no two endpoints have identical path properties. Finally. the two repositories shown are used get all installed plug-ins. and it iterates all endpoints IsSatisfiedBy method returns true.74 Figure 40. AllEndpointPathsAreUniqueSpecification IPlugIn. the 8 IsSatisfiedBy method is called for each endpoint Corresponding to the CredentialProviderType or ProtocolType properties of the CredentialProviderDefinition or ProtocolDefinition classes respectively. as a byte array. . Each method has two parameters. The two methods perform similar actions: Step 1 Load the assembly and instantiate the class given in the type string through reection. or in the case where all users must get a predened value for . The provider claim. mainly used in UI. that a name must comply to. This type of claim is called a CredentialClaim. The DefaultValue is used to dene a default value for the claim. and represents a claim that is issued by the IdP. Finally. the only claim that originates from the credential provider is a username claim. in the sense that it does not provide a concrete value for the claim. type is called an called IdentityProviderClaim. Since this instance implements the IPlugin interface (both Verb|ICredentialProviderPlugin| and IProtocolPlugin are extensions of instance. The IdentityProviderClaimDefinition class is used to dene an identity It does however dene every other aspect of the claim. In the simple case of username/password credentials. The name and description properties of the denition object are taken from the concrete instance instantiated in step 1. Claims The ability to dene and work with claims is one of the core features of the IdP. The is acceptable for the claim.75 in the plug-in that is going to be installed. However. The IdPdenes three conceptually dierent types of claims. for example. a number of claims may be returned by the federation partner. The This claim next type of claim is a claim that is dened in the IdP itself. Figure 41 shows some of the classes related to claims. in schema view. The ValueType denes the type of value that etc. The NameFormat is used to describe the format of the Name. in the case of federated login. An instance of either CredentialProviderDefinition or ProtocolDefinition is created and added to the corresponding repository. IPlugin) these properties are available on the concrete 4. A name format could. The rst type of claim is a claim that originates from some credential provider. such as xs:string or xs:int Description is merely a textual description. It is normal for dierent protocols to dene name formats. Step 5 If we have come this far. the last type of claim is IssuedClaim. The value type is normally expressed as an xml type. in the case where a user has no explicit value for the given claim. the plug-in is ready to be installed. DisplayName property is a reader friendly name used for display in UI. specify that the name must be a well-formed URI.10. The Name is the actual name of the claim used when sending the claim some security token. but having the DefaultValue and IsGroup properties allows the IdP to display a user interface where the administrator can work with groups. When the claim is issued by the IdP it looks like any other claim. used for dening groups.76 Figure 41. Adding a user to a group simply results in a claim that states that the user is part of some group. Name. in the case of groups. The DefaultValue is. whereas for other claims it could be set to true. Schema view: Claims the claim. the IsUserUpdateable boolean propIsUserUpdateable property The erty is used to specify whether or not the value of a claim can be changed by a user. and that its value can be used to nd a . Finally. By creating an IdentityProviderClaimDefinition with a and default value of for example Administrator. This property is used to dene that a given credential claim bears the identity of the user. The CredentialClaimDefinition class does not have a DefaultValue property. the will be set to false. for example. As specied in Chapter 2. ClaimName idp:Group. a conceptual group can be created. it must be possible to assign users to groups. It does however have an IsIdentityBearer property. Again. ValueType and Description properties. NameFormat. The CredentialClaimDefinition class is similar to the IdentityProviderClaimDefinition class in that it also has DisplayName. IdentityProviderClaim couples an IdentityProviderClaimDefinition to a User. and (optionally) assigns a ClaimValue for the given claim denition to the given user. since its value is not asIsGroup and signed by the IdP. For the same reason it does not have the IsUserUpdateable properties. it is probably not very practical. Figure 42.77 corresponding User instance. this mechanism is a clear candidate for future refactoring. The IdentityProviderClaim CredentialClaim class) has. When an administrator or a user assigns a value to the claim. as we shall see in Section 4. The IssuedClaim class is a simple data class that holds the values that are needed in the response of a given protocol implementation.11 . A given protocol implementation can query the IdP for a list of instances of on page 79. but The Verb|ClaimValidation| class is used If. This solution denitely works theoretically. the validator is instantiated and evaluated against the value. Class view: Claims Figure 42 shows the three dierent claim types. The class is the same as we have already seen in Figure 41. The a class whose main purpose is to tie a value. and therefore. IssuedClaim for the current user. for example. or have a specic length or format. and are CredentialProvider implementation. Instances of the created by a given CredentialClaim is CredentialClaimDefinition with a CredentialClaim class are never persisted. a the ClaimValidation can be coupled to IdentityProviderClaimDefinition. some claim dictates that its value must be an integer. to dene validation classes for the value of identity provider claims. the credential claim that is identity bearer must have a value that corresponds to a username of some given User instance. It is shown here again to emphasize the two methods it (and the ToIssuedClaim method takes no parameters and returns an instance of the IssuedClaim class. The mechanism is very similar to that of plug-ins in that a ClaimValidation refers to a type string that can be used to create instances of the validator through reection. For this to work. that de- claim mapping denition has a scribe the claim mapping. A claim mapping denition is made up of zero or more CredentialClaimMappings and/or IdentityProviderClaimMappings. and ValueType. The service which has three parameters.1 Claim mapping As mentioned in the specication. Name. Schema view: Claim mapping Figure 43 shows the entity classes used to model claim mapping. As with every other aspect of the model. a protocol id.78 4. and ProtocolClaimMapping and ConnectionClaimMapping. or These two classes dene a number of Boolean values. The actual mapping of claims is done by the has one method. and OverrideValueType. By definition. NameFormat. OverrideNameFormat. NewNameFormat. A Name and a Description property. a GetMappedClaimsForUser. Figure 43. The ClaimMappingDefinition is used to dene a claim mapping. In chapter 7 I will show the implementation of this method. there are services and repositories to handle reading and writing claim mapping data. but only the descriptive properties. and a connection id. user name. a claim mapping belongs to either a this fact is modelled in the classes which hold references to a and Protocol or a Connection. . This concept is called claim mapping. The semantics of these values are that if they have the value true. ClaimMappingDefinition and a ConfiguredProtocol ConfiguredConnection respectively.10. OverrideName. ClaimMappingService. the corresponding New value (NewName. NewValueType) will be used to override the original value of the claim. in schema view. It is important to note that it is not the value of a claim that is changed. shown in Figure 44. it must be possible to change various properties of a claim depending on who the receiver of that claim is. the IdP lets the handling protocol perform validation on. all the components of the model are orchestrated in a way that makes the system work.79 Figure 44. Figure 45 shows the sequence of events that happen when the IdP receives a request for a protocol endpoint. the wellformedness of the request. If the requested URI does not match any of the URIs exposed by the congured protocols an error is reported and a trace entry is created. BeginRequest In this phase the IdP initializes the IdPContext (explained in Section 4. All request parameters will be saved in the IdPContext. Figure 45. DetermineProtocol In this phase. This protocol will be referred to as the "handling protocol".11. The runtime system During runtime.11. for example (but not limited to). the IdP uses the requested URI to determine which specic protocol implementation to reroute the request to. ValidateRequest During this phase.1) and establishes a session. Class view: Claim mapping 4. Runtime request sequence Each of the steps is described below. If validation fails an error is reported and a trace . If necessary. the next phase will be performed. Authentication (optional) In this phase the handling credential provider will collect the user's credentials and create the set of credential claims. The singleton instance .11.80 entry is created. all credential claims that must be reissued are promoted to IdPClaims SendResponse During this phase the response it sent and the sequence ends. and that one is chosen. the IdP will ask the handling protocol if the requested endpoint is one that requires an authenticated user. 4. DetermineCredentialProvider (optional) During this phase the IdP will determine which credential providers have been congured. If the user is already authenticated. Figure 46 shows the IdPContext class in class view. but it also serves as a placeholder for state information between http requests. and instead it is implemented using the singleton pattern (as explained in [Gamma et al. Furthermore. MapUser During this phase. The credential provider that ultimately collects the user's credentials will be referred to as the "handling credential provider". The class does not have a public constructor. each claim is mapped as dened in the IdP conguration. If more than one provider has been congured. the identity bearing credential claim will be mapped to a userid in the user store ProcessRequest During this phase the handling protocol will process the request as specied by its implementation. the user will be presented with a list of possible providers to choose from. If this is the case.1 IdPContext The IdPContext is the IdP's main interface for plug-ins. 1995]). AuthenticationCheck During this phase. The credential claims will stored in the IdPContext. It exists to make it easier for any plug-in to access the IdP's core functionality without having to know too many details about services and repositories. Otherwise there will be only one. First a protocol mapping is performed (if it exists) and then a service provider mapping is performed. otherwise it will skip to the ProcessRequest phase. the current user will be extracted. Optionally the claims for ExtractIdPClaims (optional) During this phase the user's IdPClaims are extracted. the IdP will skip to the ProcessRequest phase. This is done though the EndpointService. because a plug-in can be only a single le. no physical les are requested. such as the last part of the URI. Class view: IdPContext of the class is accessed through the Current property of the class. given a protocol id and a connection id. if the person that makes the request is currently logged in (indicated by the tion). Therefore. when you request a URI. . This method returns the mapped claims for the current user. UserName property holds the username of the user. upon successful authentication of the user. index. namely the dll containing its implementation. The method has an argument whose type is a list of CredentialClaim instances. corresponds to a le on the web server. the IsLoggedIn utility func- ShouldTrace and Trace functions from the TraceService.11. On the IdP.html. and their log counterparts are being exposed here.2 EndpointService In general. The AuthenticationDone method must be called by a credential provider. Finally. however.example. the authenAuthenticationFailed method must be called by the creGetClaimsForUser method is the only method that a The tication fails. it must be able to nd out which plug-in should handle the request. in this case. This makes it easier to upload new plug-ins. Instead. for some reason.html.81 Figure 46. when the IdP receives a request. the dential provider. If. The given protocol implementation must call. 4. what is requested are virtual endpoints that map to some implementation that knows how to respond.com/index. Class view: EndpointService Figure 47 shows the EndpointService class and related classes. Reporting of events During runtime. Both log entries and . logging and tracing. The EndpointService denes a single method. conguration errors. Reporting of these events will be split into two dierent concepts. but also in terms of actions taken by the users and the administrator or administrators themselves. 4. a path. It then uses the to determine which endpoint can handle the path. GetEndpointForPath which has a single argument.12. and which returns an instance of knows how to respond to requests for that path. the IdP must report events in a way that administrators of the system can see what is going on in terms of unexpected errors.82 Figure 47. The method internally uses the IEndpoint that GetEndpointForPath ConfiguredProtocolRepository and ConfiguredCredentialProviderReposi EndpointHandlesPathSpecification to nd all endpoints. and it can also be used to turn logging on and o for dierent parts of the IdP. for example. The LogEntry and LoggingSource classes contains the following elds shown in Figure 48. Figure 48. This is.1 Logging As mentioned above. A single unit of logging data will be called a and its corresponding class will be called coupled to a log entry LogEntry.12. In such an application.83 trace entries must be saved in the database because of the nature of the IdP being a software-as-a-service application. and therefore writing these entries to a le on the disk is not as good an option as writing them to a table in a database. A log entry is always logging source. the administrator will not have access to les on the machine where the application runs. A logging source is used to dene parts of the system that write log entries. but logging sources can also be dened for plug-ins as we shall see below in Section 4. useful for corporations who need to comply to the Danish standard for information security (DS-484).1. logging has to do with events that happen in the system as part of normal use. the IdP will itself dene a set of logging sources.2. Logging such events will allow administrators of the system who has made changes to the system and when. Having this information in a database enables rich query capabilities that are not oered by simple text les. something that will come in handy when needing to display the data in a graphical web user-interface. This is useful for grouping log entries. Furthermore it allows him to see information about user activity. 4. Schema view: LogEntry and LoggingSource . A session is a well-known concept when working with web applications. It also contains the id of the logging source it belongs to. that is used to determine if logging for a given source is turned on or o. We have now dened what kind of data a log entry contains. We denitely need some way of facilitating logging. the . In fact. Note that the relationship between the two is denoted by an arrow. incurred by the UserId property.84 As the Figure 48 shows. a name and a description. but it is by no means obvious how it is to be used. and returns true if logging is enabled for that logging source. a log entry also contains a SessionId. That arrow represents the Finally. each log entry contains a timestamp of when the log entry was created (EntryDateTime) and some message (EntryText). LoggingService Figure 49. and since the IdP is a web application it makes sense to log the session id. Finally. the caller might decide not to build the message at all. User class. So what if logging has been turned o for the current logging source? In this case. But logging often entails building large text messages. which takes both memory and time. or even what the unique id of the logging source is. and ll in the missing parameters. namely the name of a LoggingService needs to interact logging source. A logging source also has a unique id. Figure 49 shows the class. In order to perform the correct logic. Therefore our logging service needs to have a function to help the caller determine if logging is turned on for a given logging source. such that all log entries for a given session can be found. it also has an Enabled ag. how would a user of the class know what unique id to assign to the log entry. Class view: Logging service class The ShouldLog function takes a single parameter. the name of a logging source and some text. The LoggingService class would then have the responsibility of creating a new instance of a LogEntry. what the id of the current user is. You may have noticed another arrow pointing to the relationship to the LogEntry class. What we probably want is some kind of service class with a function that takes two parameters. . do not have to call the ShouldLog function. namely a logging First of all. Figure 50. The source name and a message. IdPContext class will be explained later. which returns the LoggingSource class. and the current LoggingSourceRepository to get the correct instance of SessionId. the function calls corresponding instance of the check the it should return. if they wish. As mentioned this function takes a single parameter. but for now just accept that Add function on the LogEntryRepository which takes an instance of a Finally it reads the The it knows the current user's id and the session id.85 with other components of the system. Note that the bold italic text over or below each component denotes what type of object it is in terms of domain-driven design. not the case. the LogEntry class and lls in the LogEntryText. If the function determines that logging is enabled. It calls the LoggingSource class from where it can retrieve the LoggingSourceId. When called. Figure 50 shows which other components the LoggingService class interacts with. and UserId properties the IdPContext class. Finally. the name of the logging source. the function can return. namely the ShouldLog function. the function calls the LogEntry as a parameter. This also means that callers. The function can now Enabled property of the LoggingSource instance to determine what Log function takes two parameters. Class view: Logging service interactions Lets start by considering the most simple of the two functions. but that they may do so. ShouldLog to If this is determine if logging is turned on for the current logging source. starts by creating a new instance of the time. the the ShouldLog function calls FindByName function on the LoggingSourceRepository. and have a corresponding class called TraceEntry.12. especially for warning and information level trace entries. It must be possible to turn tracing on and o. critical. However. This is because a tracing prole does not have an identity. Since tracing is used to report errors. It will also be possible to turn o errors.2 Tracing Tracing has to with recording events that are related to errors and debugging the system. but it will however not be possible to turn o tracing of critical errors. the TraceEntryFileWriter . error. Data regarding whether or not tracing is turned on for dierent levels is encapsulated in a concept called a trace prole. there is a small dierence here. warning and information. but the le would of course only be accessible to maintainers of the entire software-as-aservice solution. As before we create a service to facilitate adding trace entries.86 4. We dene 4 levels of error tracing. A single unit of tracing data will be called a trace entry. and any two instances of the class can be considered to be the same if their values are identical. This is especially helpful when dealing with errors that have to do with missing database connectivity. any trace entries with trace level critical will also be written to a le on disk. Secondly. Class view: Tracing service interactions Figure 51 shows that adding trace entries is in many ways similar to adding log entries. The it interacts with are shown in Figure 51. The most important things to notice is that the TraceProfile is a value object and not an entity objects. Trace entries need to be written to the database for much the same reason that log entries needed to be written to the database. TracingService and the classes Figure 51. There credential provider plug-ins and protocol plugCredential claims. Every aspect of the IdP can be congured through configuration sets. A claim mapping sends information about a user to a service provider. Users. and invokes the Add function on the TraceEntryRepository everytime. and can be summarized as follows: that binds The IdP is an application Claims to are two kinds of plug-ins. and the Add TraceEntryFileWriter whenever the trace level is set to function on the critical. When a protocol issued claims. namely and protocol claim mappings connection claim mappings. When something in the IdP fails. which is a claim dened in the IdP. Users are created in the IdP through self registration. The two are conceptually identical but the protocol claim mapping is always performed rst. An issued claim originates from either a credential claim. The Trace function of the TracingService takes as parameters a message and a trace level. There are two dierent types of claim mappings. The IdP is extensible through plug-ins. import. and the result of an authentication is a set of information about a user and his claims to some other party. and when an important event occurs.13. The set of issued claims that are sent may have undergone a claim mapping. except their value.87 is used to write trace entries to a le on disk. or invitation. or an identity provider claim. it sends a set of is a mechanism that alters any aspect of a set claim. Both protocols and credential providers may rely on service connections to describe how they communicate with other systems. 4. A credential provider plug-in can authenticate a user. . which contain dierent configuration elements. a trace entry is created. A protocol plug-in sends ins. a log entry is created. called a provider. Ubiquitous language The model described in this chapter has left us with a broad ubiquitous language. Object relational mappers 5 Most modern IT systems rely on databases to store application data. string userName. when using a traditional data access layer.1) NOT NULL. } 88 .} public User(int oid. set.} public string UserName { get. } public string PasswordSalt { get. a UserRepository UserFactory factory class. a User domain class. the denition of a supple model is a model that can easily be changed. set. set.[User]( OID INT IDENTITY(1. changing the database schema would potentially require many changes in the data access classes and domain objects. } public DateTime DateCreated { get. [Password] NVARCHAR(128) NOT NULL. } public string Password { get. namespace IdP { public class User { public int OID { get. PasswordSalt = passwordSalt. PasswordSalt NVARCHAR(128) NOT NULL. The table denition might look something like this: 1 2 3 4 5 6 7 8 9 User table −− User table CREATE TABLE [idp]. Password = password. CONSTRAINT [PK_User] PRIMARY KEY CLUSTERED ([OID]) ) A 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 User domain class could look like this: User domain class using System. set. As we saw in Chapter 3. string passwordSalt) { OID = oid. DateCreated = dateCreated. DateTime dateCreated. However. string password. example. and the database schema used is indeed an important part of the domain model. UserName = userName. DateCreated DateTime NOT NULL DEFAULT GetDate(). where we have a repository class and a Let consider the following simple User table. UserName NVARCHAR(128) NOT NULL. set. SqlCommand cmd = new SqlCommand(sql.ExecuteReader().Data. DateCreated from User where UserName = @UserName". return UserFactory. return UserFactory. conn).Parameters.Data. SqlCommand cmd = new SqlCommand(sql. userName).Open(). either by name or by id. namespace IdP { public class UserFactory . using (SqlConnection conn = new SqlConnection("a valid connection string")) { conn. } } public User GetUserByName(string userName) { string sql = "select OID. } } } } And nally a factory which that is able to translate a record of a to an instance of our SqlDataReader User domain class. PasswordSalt.Parameters. conn).AddWithValue("UserName".FromReader(reader). UserName. could look like this: User factory class 1 2 3 4 5 6 using System. cmd. Password. PasswordSalt. Password.FromReader(reader).SqlClient.ExecuteReader(). could look like the following: User repository class 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 using System.89 21 22 } } A repository with methods to nd a given user. oid). cmd.AddWithValue("oid". DateCreated from User where OID = @oid". using (SqlConnection conn = new SqlConnection("a valid connection string")) { conn. namespace IdP { public class UserRepository { public User GetUserById(int oid) { string sql = "select OID. SqlDataReader reader = cmd.SqlClient. using System. UserName. SqlDataReader reader = cmd.Open(). complete with queries and update and delete methods. In this chapter we shall see how we can leverage the power of object relation mappers to create a more supple and maintainable design. For .Net. Since the specication requires me to use a SQL Server database. passwordSalt). adding another eld to the However. updating the two select statements and adding another line of code to the factory method.ToString(). userName. password. .ToString(). DateTime dateCreated = (DateTime) reader["DateCreated"]. using this approach would not yield an especially supple design. and I'm already using .Net 3.Read()) { int oid = (int) reader["OID"]. since even small logical model changes would require quite a few changes to the code. LINQ to SQL is part of the .5. return new User(oid.Net framework 3. the two most feature complete being NHibernate and LINQ to SQL. Needless to say. several alternatives exist. string userName = reader["UserName"].ToString(). dateCreated. string passwordSalt = reader["PasswordSalt"]. string password = reader["Password"]. } return null.90 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 { public static User FromReader(SqlDataReader reader) { if(reader. what if we were to add another eld to the require updating the table. } } } The preceding classes show how we could implement data access for a the User table using the building blocks of domain-driven design. Object relational mappers are software tools that automatically map the tables of a database schema to classes in a programming language. User table? Doing so would User domain class. using LINQ to SQL seems like a natural choice. if we had a large database schema.5 framework and has been developed in close cooperation with the SQL Server team for high performance when used with a SQL Server database. IEnumerable<string> query = from s in names where s. however.Net language.WriteLine(item). The most important thing to notice. First of all.91 5. Extension methods are static methods that are declared outside of a the class they work on. 9 The standard query operators apply to IEnumerable<T> which is the interface implemented by every enumerator (array and specialized collections) in .ToUpper(). "Frank". " David" }. "Everett". foreach (string item in query) Console. T. is that the SQL like syntax in the example is merely syntactic sugar on top of a concept called extension methods. LINQ LINQ stands for language-integrated query and provides a set of standard query operators that can be used directly in any . "Albert". 1 2 3 4 5 6 7 8 9 10 User extension namespace IdP { public static class UserExtensions { public static int UserNameLength(this User u) { return u. method for the The following example shows how an extension User class could be implemented.Net over a generic type shows the syntax for a standard LINQ query. having the from keyword in the beginning of the expression.1. and brought into scope by importing the namespace where they are declared. Query expressions in LINQ benet from compile time syntax checking. "George". static typing and intellisense . There are several interesting things to note about the above code.Length == 5 orderby s select s. The following code sample 1 2 3 4 5 6 7 8 9 10 LINQ example string[] names = { "Burke". the reverse syntax of the query. "Connor". This has been necessary to make intellisense work.Length. } } } 9 Intellisense is the auto-completion feature in the Visual Studio IDE .UserName. "Harris". The full applying extension methods as the example shows. This means that the above query could be rewritten to this: 1 2 3 4 5 6 7 8 LINQ example 2 string[] names = { "Burke".OrderBy(s => s). "Connor". Console. "Everett".Select(s => s.GetByName("Klaus"). LINQ to SQL is an extension of what we have seen so far. As a matter of fact. ToUpper()). "Harris". Note that the result of applying a LINQ extension method to an IEnumerable<T> is itself an IEnumerable<T>.UserNameLength(). the syntax of the above example is merely syntactic sugar. bool> predicate). LINQ to XML provides language integrated query for XML documents.Length == 5).Length == 5 in the example. Figure 52 shows the dierent avours of LINQ that are oered by the framework.1.WriteLine(item). such as list of query operators dened by LINQ can be seen in Table 5. as mentioned.Net.Where(s => s. So how does this apply to the LINQ query shown above? Well. however they have nothing to do with LINQ to SQL. LINQ to XML and the LINQ enabled ADO. int len = u. Figure 52 shows that there are three dierent LINQ enabled data sources. and actually translates to a set of extension methods on IEnumerable<T>. which is written as a lambda expression.ReadLine().92 Note the special this keyword. "Albert". foreach (string item in query) Console. and LINQ enables ADO. User u = UserRepository. By importing the namespace where the extension UserNameLength function directly on applies to the type method is dened. Func<T.Net provides language integrated queries for relational . IEnumerable<string> query = names. The example shows how extension methods on IEnumerable<T> are used to express the query. "Frank". The signature of the extension methods used is The Func type is an expression from type T to bool. " David" }. s => s. The LINQ examples we have seen thus far have been good for illustratory purposes. We have already seen how LINQ to objects works in the preceding examples. thus allowing us to keep this IEnumerable<T> source. that tells the compiler that the method User. LINQ to objects. "George". which is called LINQ to objects. I can now call the the 1 2 User class. LINQ. Except First. and LINQ enabled ADO.93 Restriction Projection Ordering Grouping Quantiers Partitioning Sets Elements Aggregation Where Select. ElementAt. LastOrDefault. ToDictionary. ADO. ElementAtOrDefault. TakeWhile. Average. Min. Max. namely . ToList.Net consists of three dierent technologies. Skip. data stores. Union. Cast Element First. Sum. ElementAt Count. SkipWhile Distinct. AsEnumerable. DefaultIfEmpty Figure 52. SelectMany OrderBy. ToLookup. Contains Take. ThenBy. LongCount. Aggregate Conversion ToArray. Intersect.Net is the framework for manipulating relational data. SingleOrDefault. OfType. Single. FirstOrDefault. Reverse GroupBy Any. All. Last. FirstOrDefault. LINQ to SQL data context (designer view). but it is said that it will become the predecessor of LINQ to SQL. A data context is a class that has several responsibilities. a framework for dening conceptual data schemas. LINQ to SQL and LINQ to entities. called entity data models. Furthermore. and nally it tracks in memory changes made to the instances of those classes. and drag the onto the design surface. mainly because it lacks tool support in the IDE. LINQ to datasets is an extension to the in memory dataset classes of the .2.94 Figure 53. LINQ to SQL also supports compile time type checking. User table . the data context contains information about which database to connect to. as shown in Figure 53. LINQ to datasets.Net Entity framework. Using the IDE we can create a new data context. LINQ to entities has still not proven to be mature. and LINQ to entities is the LINQ extension of the . This is achieved through the what is called a data context. First of all. LINQ to SQL As with LINQ to objects.Net framework. 5. it contains denitions for the classes that represent the tables in the database. instance). System.Lin LINQ data context #pragma warning disable 1591 // // <auto−generated> // This code was generated by a tool. instance). // Runtime Version:2.1433 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated.MappingSource mappingSource = new AttributeMappingSource().Data.Linq. System. } public IdPDataClassesDataContext(string connection) : base(connection. // </auto−generated> // namespace IdP { using using using using using using using using using System.Linq.Linq. partial void InsertUser(User partial void UpdateUser(User partial void DeleteUser(User #endregion Definitions instance). System. System.Settings.Data.Mapping.Data. mappingSource) { OnCreated().0.IDbConnection connection) : .DatabaseAttribute(Name="IdPDatabase")] public partial class IdPDataClassesDataContext : System. System. public IdPDataClassesDataContext() : base(global::IdPTestApp.Data.Mapping. System. System. System.Data.Expressions. [System.Linq.Mapping. #region Extensibility Method partial void OnCreated().Default. } public IdPDataClassesDataContext(System.Data.Reflection. SafewhereConnectionString.DataContext { private static System. Now lets see what kind of code is generated behind the scenes. mappingSource) { OnCreated().95 We have now seen that we can easily drag tables onto our data context to generate the corresponding class.Collections.50727.Data.Generic.Linq.Properties.ComponentModel.Linq. 96 base(connection, mappingSource) { OnCreated(); } public IdPDataClassesDataContext(string connection, System.Data.Linq.Mapping. MappingSource mappingSource) : base(connection, mappingSource) { OnCreated(); } public IdPDataClassesDataContext(System.Data.IDbConnection connection, System .Data.Linq.Mapping.MappingSource mappingSource) : base(connection, mappingSource) { OnCreated(); } public System.Data.Linq.Table<User> Users { get { return this.GetTable<User>(); } } } [Table(Name="idp.[User]")] public partial class User : INotifyPropertyChanging, INotifyPropertyChanged { private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty); private int _OID; private string _UserName; private System.DateTime _DateCreated; private System.Nullable<System.DateTime> _LastLogin; private string _Password; private string _PasswordSalt; #region partial partial partial partial partial partial partial partial partial Extensibility Method Definitions void OnLoaded(); void OnValidate(System.Data.Linq.ChangeAction action); void OnCreated(); void OnOIDChanging(int value); void OnOIDChanged(); void OnUserNameChanging(string value); void OnUserNameChanged(); void OnDateCreatedChanging(System.DateTime value); void OnDateCreatedChanged(); 97 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 partial void partial void partial void partial void partial void partial void #endregion OnLastLoginChanging(System.Nullable<System.DateTime> value); OnLastLoginChanged(); OnPasswordChanging(string value); OnPasswordChanged(); OnPasswordSaltChanging(string value); OnPasswordSaltChanged(); public User() { OnCreated(); } [Column(Storage="_OID", AutoSync=AutoSync.OnInsert, DbType="Int NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)] public int OID { get { return this._OID; } set { if ((this._OID != value)) { this.OnOIDChanging(value); this.SendPropertyChanging(); this._OID = value; this.SendPropertyChanged("OID"); this.OnOIDChanged(); } } } [Column(Storage="_UserName", DbType="NVarChar(128) NOT NULL", CanBeNull=false )] public string UserName { get { return this._UserName; } set { if ((this._UserName != value)) { this.OnUserNameChanging(value); this.SendPropertyChanging(); this._UserName = value; this.SendPropertyChanged("UserName"); this.OnUserNameChanged(); } } } [Column(Storage="_DateCreated", DbType="DateTime NOT NULL")] public System.DateTime DateCreated 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 98 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 { get { return this._DateCreated; } set { if ((this._DateCreated != value)) { this.OnDateCreatedChanging(value); this.SendPropertyChanging(); this._DateCreated = value; this.SendPropertyChanged("DateCreated"); this.OnDateCreatedChanged(); } } } [Column(Storage="_LastLogin", DbType="DateTime")] public System.Nullable<System.DateTime> LastLogin { get { return this._LastLogin; } set { if ((this._LastLogin != value)) { this.OnLastLoginChanging(value); this.SendPropertyChanging(); this._LastLogin = value; this.SendPropertyChanged("LastLogin"); this.OnLastLoginChanged(); } } } [Column(Storage="_Password", DbType="NVarChar(128) NOT NULL", CanBeNull=false )] public string Password { get { return this._Password; } set { if ((this._Password != value)) { this.OnPasswordChanging(value); this.SendPropertyChanging(); this._Password = value; this.SendPropertyChanged("Password"); this.OnPasswordChanged(); } 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 } set { if ((this.SendPropertyChanged("PasswordSalt"). It is decorated with a DatabaseAttribute which denes the name of the database it connects to.PropertyChanged(this. DbType="NVarChar(128) NOT NULL". Also note that it is declared as partial. public event PropertyChangedEventHandler PropertyChanged. Partial classes are especially useful when one part of the partial class is autogenerated by a tool._PasswordSalt != value)) { this.99 215 216 217 218 } } [Column(Storage="_PasswordSalt".OnPasswordSaltChanged()._PasswordSalt.PropertyChanging(this. new PropertyChangedEventArgs( propertyName)).OnPasswordSaltChanging(value)._PasswordSalt = value.SendPropertyChanging(). emptyChangingEventArgs). CanBeNull= false)] public string PasswordSalt { get { return this. this. protected virtual void SendPropertyChanging() { if ((this. because it means that we can dene other meth- .PropertyChanging != null)) { this. } } protected virtual void SendPropertyChanged(String propertyName) { if ((this.PropertyChanged != null)) { this. } } } } #pragma warning restore 1591 LINQ data context 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 The main data context class begins on line 25. this. } } } public event PropertyChangingEventHandler PropertyChanging. which means that it can extended. this. this. This is the eld that we can use to The return perform LINQ queries. If you remember the example from page 92.Linq. type of the property is Table<User>. which allow LINQ to SQL to track . In LINQ to SQL. On lines 32-35 we nd a set of extensibility methods. When we start iterating over the result. as we shall see in an example shortly. any query on a Table<T> is not executed until you iterate over the result. a SQL query is automatically generated behind the scenes and executed against the database. On line 68 we can see a public property called Users.DataContext class. again declared partial. even though they can be used. is that the Table<T> class implements another interface. The way it has been done. connecting to the database. the and User class implements the INotifyPropertyChanging INotifyPropertyChanged interfaces. but with another internal implementation. The generic Table<T> type is an integral Table class implements the part of the LINQ to SQL framework and implements the translation from LINQ expressions to actual SQL statements. The User class has a Table attribute that includes information about the actual table name in the database. This problem has of course been overcome by the designers of LINQ to SQL.Data. updated or deleted. for which all the same extension methods have been dened. which will not be overwritten if we run the code generation tool again. and in general it would be a complete misuse of the rdbms's capabilities. we applied three extension methods to an array. This would potentially cripple the application's performance. However. we would eectively fetch all rows into memory and do the selection logic on the in memory data representation. which contains most of the logic for tracking object changes. committing changes etc. The IEnumerable<T> interface and can thus make use of the extension methods mentioned earlier.100 ods and elds of that same class in an other le. Furthermore. The class inherits the System. of course. On line 77 we have the denition for the User class. These methods allow us to implement extra logic whenever User objects are inserted. This behaviour allows us to compose a complex query of several less complex parts without executing the actual query before we iterate over the result. If we were to do this with the data from a SQL database table. Lines 38-66 contain various overloaded constructors. called IQueryable<T>. This is because the LINQ to objects extension methods are executed immediately on the underlying IEnumerable<T> instance. the LINQ to objects extension methods should not be used with Table<T> classes. 101 in-memory data-changes. In Chapter 3 we saw an example of a specication called RecentlyCreated.OID). The example below shows how we can use the data context to query the user table.UserName).UserName. which contained the denition for a recently created user. database column type etc. such as Apart from that. A very nice feature of LINQ to SQL is its ability to translate generic expressions to SQL statements. namespace IdPTestApp . thus enabling us to have specications that can be used both in code and in the database.. //The query is not performed against the database before we iterate over the result foreach(var user in result3) { Console.Select(u => u). all three variables declared using the actually of type IQueryable<User>.Select(u => u).Where(u => u. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 LINQ to SQL example using(var ctx = new IdPDataClassesDataContext()) { //Find users with a UserName of length 5 var result = from u in ctx. } } Note that the var keyword is a shorthand that can be used where the var keyword are compiler can infer the type by looking at the type of what is assigned to variable.WriteLine(user. pected properties. We could extend the User class generated by the data context to contain a method that could tell us if a user was recently created: Extending the User class 1 2 3 using System. the User class has all the ex- OID.OrderBy(u => u.Users. and all these properties are decorated with attributes that tell something about the database column name.OID > 100).Length == 5 orderby u. We can leverage this ability to declare our domain-driven designspecications as LINQ expressions.Users where u.Where(u => u.UserName.OID select u.Length == 5). UserName etc. In this case. //or var result2 = ctx. //we can further refine the result by adding an extra where clause specifying that OID must be higher than 100 var result3 = result2. Invoke(this). } } } Note the static variable called to SQL query too.102 4 5 6 7 8 9 10 11 12 13 14 { public partial class User { public static Func<User. we can reuse that same LINQ expression. public bool IsRecentlyCreated() { return spec. This variable can be used in a LINQ So if we want to nd all recently created users in the database. .Where(User. AddDays(−10)). } } In this elegant way. bool> spec = (user => user. as shown in the following example. thus making maintenance a lot easier.WriteLine(user.Now. spec.Users. 1 2 3 4 5 6 7 8 9 10 11 Using specification to query the database using (var ctx = new IdPDataClassesDataContext()) { //Find recently created users var result = ctx.Select(u => u). foreach (var user in result) { Console.DateCreated > DateTime.UserName). we can ensure that our domain logic is only dened in one single place.spec). this identity usually translates to a primary key. In Chapter 3 we saw that repositories should be used to store and retrieve entity objects. when in reality they are being retrieved from some external storage. the result of any of the operations should not be stored.Core. The rameter. We can dene an interface for entity objects as shown below. OID (acronym for object id). IEntity. and in [McCarthy 2008] and [Harding 2008] in particular. and in this chapter we shall explore how we can leverage the capabilities of LINQ to SQL to create a domain-neutral component that can support development using the domaindriven design concepts. meaning that if one of the operations fail. interface species that every object that implements the interface must have a readonly property called We will sometimes need to manipulate several dierent entity objects from dierent repositories within a transaction. an entity object always has an identity. A transaction is an encapsulation of a series of operations that are co-dependent. such as a relational database. In the domain-driven design community.Domain { public interface IEntity<TPrimaryKey> { TPrimaryKey OID { get. It is. and in a relational database. I presume that this term has been chosen instead of transaction because of the fact that the word transaction is so tightly coupled to a specic storage technology.A domain-neutral component 6 In Chapter 5 we saw how LINQ to SQL works. of course. The word transaction is well-known from the world of relational databases. which species the type of the primary key. thus creating the illusion that these objects are all in memory. namely relational databases.cs 1 2 3 4 5 6 7 namespace Safewhere. not the responsibility of a repository to know when it is used within a unit 103 . } } } The IEntity interface shown above is a generic interface with a type paTPrimaryKey. this transaction concept is referred to as a unit of work. Given its long life-span. Core. an equivalent to committing when working with Create function is a generic function that will IUnitOfWorkElement a constructor that takes no create an instance of any class that implements the interface and has a default constructor (eg.Domain { public interface IUnitOfWorkElement { IUnitOfWork UnitOfWork { set. Create and Complete. IUnitOfWorkElement interface has the following denition: 1 2 3 4 5 6 7 IUnitOfWorkElement.Domain { public interface IUnitOfWork : IDisposable { T Create<T>() where T : IUnitOfWorkElement. new().Net. The interface for an abstract unit of work could be dened as follows: IUnitOfWork. but should we wish to use some other data storage in the future. LINQ to SQL resources are not unmanaged . void Complete().Core.cs 1 2 3 4 5 6 7 8 9 10 11 using System. relational databases. using System.Data. } } } The IUnitOfWorkElement interface denes a write-only property called IUnitOfWork in- UnitOfWork of type IUnitOfWork.cs 1 2 3 4 using System. } } The IUnitOfWork interface denes two functions. The The Complete method is used to complete the unit of work. arguments.104 of work. using System.Linq. and will be used by the IUnitOfWork's Create method. IDisposable interface. Any implementor of the IUnitOfWorkElement interface will thus expose a way of setting the associated stance.Transactions. as we shall see in the following concrete implementation of a unit of work: LinqToSqlUnitOfWork. including the IDisposable interface is a safe choice. But a repository must have the ability to be enrolled in a unit of work. This is denoted by the The new() restriction on the generic type T). . namespace Safewhere. an interface that can The interface also inherits the be implemented to release unmanaged resources during garbage collection when working in .cs namespace Safewhere. 105 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 namespace Safewhere. } public void Dispose() { _dataContext = null. A . } #endregion } } The LinqToSqlUnitOfWork class has two constructors.Domain { public class LinqToSqlUnitOfWork : IUnitOfWork. _txScope = transactionToUse == null ? new TransactionScope() : new TransactionScope(transactionToUse). _txScope.GetEntityContainer<T>() { return new LinqToSqlEntityContainer<T>(_dataContext). } public T Create<T>() where T : IUnitOfWorkElement. Transaction transactionToUse) { if (dataContext == null) throw new ArgumentNullException("dataContext"). TransactionScope _txScope. new() { return new T {UnitOfWork = this}. and calls the null value for the transactionToUse parameter. The rst one takes DataContext as discussed in the previous chapter.Core. public LinqToSqlUnitOfWork(DataContext dataContext) : this(dataContext. } public void Complete() { _dataContext.SubmitChanges(). null){} public LinqToSqlUnitOfWork(DataContext dataContext. _dataContext = dataContext. } #region IRepositoryImplementationHelper Members IEntityContainer<T> IRepositoryImplementationHelper. IRepositoryImplementationHelper { DataContext _dataContext.Dispose().Complete(). _txScope. DataContext in a private variable and sets a LINQ other constructor with a The second constructor saves the the private _txScope variable based on wether or not the transactionToUse TransactionScope is used to implicitly enlist a block of parameter is null. cs namespace Safewhere.106 code in a transactions in . and allows you to assign values to public properties during the call to the constructor. The persist the changes made in memory to the database.5.Linq. } } The IRepositoryImplementationHelper interface is going to be used by GetEntityContainer. The IRepositoryImplementationHelper interface is dened as follows: 1 2 3 4 5 6 7 IRepositoryImplementationHelper.cs 1 2 3 4 5 6 7 8 9 10 using System. which should be used to add an remove entity objects from a Note the use of the object initializer syntax that is new in . See [Microsoft 2009] for an in-depth explanation. as described in [Microsoft 2007]. . The Verb|IEntityContainer| our repository implementation.Domain { public interface IEntityContainer<T> : IQueryable<T> { void Add(T element). the class implements the terface by implementing the IRepositoryImplementationHelper in- GetEntityContainer method. void Remove(T element).Domain { public interface IRepositoryImplementationHelper { IEntityContainer<T> GetEntityContainer<T>() where T : class. namespace Safewhere.Core. } } The IEntityContainer interface is a generic interface that inherits from IQueryable interface.Net 3. The Create method is implemented by calling the default constructor10 of T and setting the unit of work instance to the current LinqToSqlUnitOfWork the type instance. The vate Complete method is implemented by calling SubmitChanges on the priSubmitChanges method on a DataContext will make the DataContext _dataContext object.Core. and denes a single method called which returns an has the following denition: IEntityContainer. Finally. It denes the two methods Add the generic built-in and 10 Remove.Net. IEntityContainer of a given class T. and then calling Complete on the TransactionScope instance. Linq. System. } #region IEntityContainer<T> Members public void Add(T element) { _table.IEnumerator System. System.Data. } public void Remove(T element) { _table. System. } #endregion #region IEnumerable<T> Members public IEnumerator<T> GetEnumerator() { return _table.Linq. internal LinqToSqlEntityContainer(DataContext dataContext) { _table = dataContext.Collections using using using System.GetEnumerator() { return _table.Core. namespace Safewhere.Generic.IEnumerable. } #endregion #region IEnumerable Members System.InsertOnSubmit(element).Collections.107 container. The following class is an implementation of the interface. } #endregion #region IQueryable Members public Type ElementType { get .GetEnumerator().Domain { internal class LinqToSqlEntityContainer<T> : IEntityContainer<T> where T : class { Table<T> _table.GetTable<T>().Collections. IEntityContainer LinqToSqlEntityContainer.GetEnumerator().DeleteOnSubmit(element). 108 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 { var q = _table as IQueryable<T>. using System. Since this entity container uses LINQ to SQL. but only as an internal datastorage-specic container for entities.Linq. return q.cs The rst thing to notice about the is marked LinqToSqlEntityContainer is that it internal is that internal.ElementType.Expressions. return q. The constructor takes a LINQ DataContext which is uses for retrieving the correct Table for the generic type T. TPrimaryKey> . It implements all the interface methods from IEntityContainer by Table instance _table.Expressions. using System.Expression. for the generic type Table. T.cs 1 2 3 4 5 6 7 using System. return q. This means that it can only be used from within the assembly where it resides. its internal implementation is based on an instance of a as described earlier. The reason that this class is made it is not supposed to be used directly by any client.Domain { public interface IRepository<TEntity.Linq.Provider. calling the suitable methods on the We have now seen the denition of most of the classes needed for our generic repository implementation.Core. } } #endregion } } LinqToSqlEntityContainer. namespace Safewhere. } } public System.Linq. and we can now dene the interface for the repository as follows: IRepository. } } public IQueryProvider Provider { get { var q = _table as IQueryable<T>.Expression Expression { get { var q = _table as IQueryable<T>. The rst function takes a parameter of type Expression<Func<TEntity. repository. } IQueryable<TEntity> Find(Expression<Func<TEntity. bool>> expr). Lines 16-22 dene dierent nder functions. TEntity this[TPrimaryKey primaryKey] { get. bool>> expr). IUnitOfWorkElement where TEntity : IEntity<TPrimaryKey> { void Add(TEntity element). bool>>. the The two Find functions. IQueryable in- TPrimaryKey. The IBooleanExpressionHolder interface is dened as follows: . IRepository interface denes Add and Remove functions for adding and removing entity objects. but only return the rst result instead of returning The a collection. an inFindFirst functions take the same parameters as terface that we will use to encapsulate our specications. bool>> expr). which is the lambda expression that was described earlier in this chapter. as we shall see further down. It also denes an indexer function (in line 14) that allows nding a single instance of an entity object based on its unique identier. void Remove(TEntity element). the interface imposes the restriction that the type must implement the as a primary key. and the IUnitOfWork interface. The second Find function takes an IBooleanExpressionHolder. the interface denes two mines if one or more entity objects exist for a given lambda expression or IBooleanExpressionHolder. and type. TEntity FindFirst(Expression<Func<TEntity. IQueryable<TEntity> FindAll().109 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 : IQueryable<TEntity>. TEntity FindFirst(IBooleanExpressionHolder<TEntity> spec). Furthermore. IQueryable<TEntity> Find(IBooleanExpressionHolder<TEntity> spec). where TEntity is the type of the entity object that the repository serves. The repository must also implement the terface for type The TEntity. TPrimaryKey. bool Exists(Expression<Func<TEntity. bool Exists(IBooleanExpressionHolder<TEntity> spec). } } The IRepository interface is a generic interface of type TEntity and TPrimaryKey is the type of the primary key of that entity TEntity IEntity interface with using the same type. The two Find func- tions return a collection of entity objects based on some search criterion. FindAll function returns all the entity objects in the Exists functions that deter- Finally. . using System. namespace Safewhere. } public Expression<Func<T.Core. bool>>. The purpose of this interface is to hold an expression that will evaluate to a Boolean value for an instance of a given generic type junction with a repository. IBooleanExpressionHolder<T> { Expression<Func<T. } } } The IBooleanExpressionHolder is a generic interface for a generic type T and contains a single property of type Expression<Func<T. namespace Safewhere. bool>> Expression { get.Domain { public interface IBooleanExpressionHolder<T> { Expression<Func<T. } return _compiledExpr. bool>> specExpression) { _expr = specExpression.110 IBooleanExpressionHolder.Domain { public abstract class LambdaSpecification<T> : ISpecification<T>.Expressions. bool>> _expr. that is dened as follows: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 using System.Linq. When used in conFind.cs 1 2 3 4 5 6 7 8 9 10 using System. The most important class in the component that implements the IBooleanExpressionHolder interface is the LambdaSpecification ISpecification.Expressions.Core.Linq. FindFirst or IBooleanExpressionHolder can be used to express a search criterion for use with the Exists functions. bool> CompiledExpression { get { if (_compiledExpr == null) { _compiledExpr = _expr. an implementation of T. protected LambdaSpecification(Expression<Func<T.cs class. bool>> Expression { get { return _expr.Compile(). } } public Func<T. bool> _compiledExpr. using System. Func<T. } } The ISpecification interface is a generic interface that has a single function.Linq.cs 1 2 3 4 5 6 7 8 9 using using using using System. just like the example presented earlier. This is because in domain-driven design a specication should be explicitly named and be a concept of it's own. TPrimaryKey> : IRepository<TEntity. we do not want anyone to use instances of the class directly.Collections. The LambdaSpecification class does however implement Expression and CompiledExpression.cs namespace Safewhere.111 30 31 32 33 34 35 36 37 38 } } public bool IsSatiesfiedBy(T element) { return CompiledExpression. and invokes the compiled expression on this instance. the implementation of the interface is implemented by the an argument of an instance of the class' generic type. only compiled internally ISpecification IsSatisfiedBy function. even though it could be possible.cs The both LambdaSpecification class is a generic abstract class that implements ISpecification (described further down) and the IBooleanExpressionHolder interface. CompiledExpression is that same expression. } } } ISpecification. The class contains two properties.Invoke(element). Lastly. The class is abstract since.Expressions. the The Expression property is the same that the class gets in the constructor.Core. The method takes ISpecification interface is dened for better performance.Domain { public class Repository<TEntity.Core.Generic. The as follows: 1 2 3 4 5 6 7 ISpecification.Domain { public interface ISpecification<T> { bool IsSatiesfiedBy(T element). System.Linq. System. namespace Safewhere. The value of most of the logic for creating those explicitly named specications. IsSatisfiedBy. System. TPrimaryKey> . We can now dene a repository in the following way: Repository. } } public IQueryable<TEntity> Find(Expression<Func<TEntity.SingleOrDefault(entity => primaryKey. TPrimaryKey> Members public void Add(TEntity element) { _entityContainer. } public void Remove(TEntity element) { _entityContainer. bool>> expr) { return _entityContainer.Equals(entity. } #region IRepository<TEntity.Where(spec.Where(expr). } . } public IQueryable<TEntity> Find(IBooleanExpressionHolder<TEntity> spec) { return _entityContainer. IEntity<TPrimaryKey> { protected IUnitOfWork _unitOfWork. OID)).GetEntityContainer<T>().Expression). return repHelper. } public bool Exists(Expression<Func<TEntity.FirstOrDefault(expr).Expression). } public TEntity FindFirst(IBooleanExpressionHolder<TEntity> spec) { return _entityContainer. protected IEntityContainer<TEntity> _entityContainer.112 where TEntity : class. bool>> expr) { return _entityContainer.Count(expr) > 0. bool>> expr) { return _entityContainer.Add(element).FirstOrDefault(spec. protected IEntityContainer<T> CreateEntityContainer<T>() where T: class { var repHelper = (IRepositoryImplementationHelper) UnitOfWork . } public TEntity FindFirst(Expression<Func<TEntity.Remove(element). } public TEntity this[TPrimaryKey primaryKey] { get { return _entityContainer. GetEnumerator() { return _entityContainer. } } #endregion #region IUnitOfWorkElement Members public IUnitOfWork UnitOfWork { set { _unitOfWork = value.GetEnumerator(). } #endregion #region IEnumerable Members System.AsQueryable(). } #endregion #region IEnumerable<TEntity> Members public IEnumerator<TEntity> GetEnumerator() { return _entityContainer. } public IQueryable<TEntity> FindAll() { return _entityContainer.IEnumerable. } } public IQueryProvider Provider { get { return _entityContainer.IEnumerator System.Count(spec.Collections.ElementType.Collections. } #endregion #region IQueryable Members public Type ElementType { get { return _entityContainer.Provider.GetEnumerator(). .113 public bool Exists(IBooleanExpressionHolder<TEntity> spec) { return _entityContainer. } } public Expression Expression { get { return _entityContainer.Expression.Expression) > 0. one important reason for using repositories is that a concrete repository implementation. . for example one that uses an SQL server backend. we can implement an in-memory data store.114 121 122 123 124 125 126 127 128 129 130 131 132 _entityContainer = CreateEntityContainer<TEntity>(). the private the CreateEntityContainer function. amongst other things.Domain { public class InMemoryUnitOfWork : IUnitOfWork. The most important thing to notice is in the implementation of the on the class. implements the IRepository interface preIUnitOfWorkElement interface.cs 1 2 3 4 5 6 7 8 9 10 11 12 using System. For this we need. When the UnitOfWork instance is set _entityContainer variable is instantiated by calling sented earlier. This is very useful for those unit tests whose purpose is not to test the data storage system. namespace Safewhere.Core. } get { return _unitOfWork.cs The Repository class.1. IRepositoryImplementationHelper { readonly IInMemoryDataContainer _container. Preparing for test As mentioned in Chapter 3. InMemoryUnitOfWork. but only to test the domain functionality. an an InMemoryUnitOfWork and InMemoryEntityContainer. The CreateEntityContainer function UnitOfWork to a IRepositoryImplementationHelper which GetEntityContainer function. in turn casts the contains the class use this 6. } } #endregion } } Repository. The rest of the functions in this EntityContainer internally to perform the work. By adding a few extra classes to the domain neutral component. can be interchanged with another one. of course. public InMemoryUnitOfWork(IInMemoryDataContainer container) { if (container == null) throw new ArgumentNullException("container"). Generic. new() { return new T { UnitOfWork = this }. namespace Safewhere. an instance of an IInMemoryDataContainer.115 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 _container = container. } public void Complete() { //this method does nothing } #endregion #region IDisposable Members public void Dispose() { //this method does nothing } #endregion #region IRepositoryImplementationHelper Members public IEntityContainer<T> GetEntityContainer<T>() where T : class { return new InMemoryEntityContainer<T>(_container). } #endregion } } InMemoryUnitOfWork. using System. This is solely by decision. } #region IUnitOfWork Members public T Create<T>() where T : IUnitOfWorkElement. using System. this class does not make use of the TransactionScope class.Domain { . Furthermore.cs The InMemoryUnitOfWork class is a lot like its LINQ counterpart.Linq. since it easily could. The most important dierence is the argument to its constructor.Collections.cs 1 2 3 4 5 6 using System.Core. the class can be kept simple. InMemoryEntityContainer. The GetEntityContainer function uses an in-memory version of an entity container. instead of the DataContext instance that is used by the LINQ version. but because its purpose is to facilitate testing without an underlying database. } #endregion #region IQueryable Members public Type ElementType { get { IQueryable<T> q = _data.116 internal class InMemoryEntityContainer<T> : IEntityContainer<T> where T : class { readonly List<T> _data.AsQueryable().IEnumerable.GetEnumerator().Collections. } #endregion #region IEnumerable Members System.GetEnumerator(). internal InMemoryEntityContainer(IInMemoryDataContainer container) { _data = container. } } public System. } #endregion #region IEnumerable<T> Members public IEnumerator<T> GetEnumerator() { return _data.Linq.Expression Expression { get { .ElementType.GetEnumerator() { return _data.Add(element). } public void Remove(T element) { _data.IEnumerator System.Collections.Remove(element).GetData<T>().Expressions. return q. } #region IEntityContainer<T> Members public void Add(T element) { _data. return q. return q.Collections.AsQueryable().117 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 IQueryable<T> q = _data.AsQueryable(). .Core. and is dened as follows: IInMemoryDataContainer.cs The implementation of the InMemoryEntityContainer class also resembles its LINQ counterpart.Linq the class can perform its logic on that list just in Table instance. } } The IInMemoryDataContainer denes a single generic function that provides data of the correct type.Expression.Provider.cs 1 2 3 4 5 6 7 8 9 using System. By using the LINQ to objects extensions System.Domain { public interface IInMemoryDataContainer { List<T> GetData<T>(). The dened in the same way as the LINQ counterpart does with its IInMemoryDataContainer provides the data. namespace Safewhere. } } #endregion } } InMemoryEntityContainer.Generic. } } public IQueryProvider Provider { get { IQueryable<T> q = _data. diering only in that its internal data-holder is a generic List (the _data variable). entity objects and specications. of course including some repository implementations using the framework described in the preceding chapter. Figure 54. please feel free to look at the implementation of that part on the enclosed disk media. 10 of which do not contain source code. and therefore not fully documented either. I will focus on the parts that I nd most interesting. As you can see the solution contains 155 les. but this is natural since the system is not fully implemented. Figure 55 shows the structure of the solution.Implementation 7 In the previous chapter we saw how we could use LinqToSql to create a domain-neutral framework to help us with the implementation of repositories. Figure 54 shows an analysis of the solution le. As you can see the solution 118 . entire implementation is too large to be presented here. The enclosed disk media contains a Visual Studio 2008 solution le and all the corresponding les. Source lines of code. In this chapter I will present the Since the implementation of some of the classes described in Chapter 4. As you may have noticed the percentage of lines of comments is quite low. If I have left out some part that is of your particular interest. .119 structure and le names adhere strictly to the ubiquitous language. .120 Figure 55. Solution le structure. 121 7.0.1 Runtime system The runtime system is particularly interesting, since it shows how the IdP delegates the actual work to the dierent plug-ins. The class that handles every request and determines what to do is the class, shown below. IdPEndpointHandlerFactory 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 using using using using using using IdPEndpointFactory.cs System.Web; System.Web.SessionState; Safewhere.IdP.Application.Endpoints; Safewhere.IdP.Application.Pages; Safewhere.IdP.Domain.Logging; Safewhere.IdP.Properties; namespace Safewhere.IdP.Application.HttpHandlers { public class IdPEndpointFactory : IHttpHandlerFactory, IRequiresSessionState { public IHttpHandler GetHandler(HttpContext context, string requestType, string virtualPath, string path) { var loginSpec = new IsGlobalLoginUriSpecification(); if(loginSpec.IsSatiesfiedBy(context.Request.Url)) return new IdPLoginPage(); IEndpoint endpoint = EndpointService.GetEndpointForPath(virtualPath); if (endpoint != null) { return endpoint.Handler; } string errorMessage = string.Format(IdPErrorMessages.EndpointNotFound, virtualPath); IdPContext.Current.Trace(TraceLevel.Warning, errorMessage); return new ErrorHandler(errorMessage); } public void ReleaseHandler(IHttpHandler handler) { return; } } } The IdPEndpointHandlerFactory class implements the IHttpHandlerFactory framework interface and is hooked into the web server conguration le. Hereafter, for every request that the web server receives, it will call the 122 GetHandler method of our class to get an appropriate handler. global login page, and if this is the case, an instance of the is returned. Otherwise, the In the GetHandler method, a check is performed to see if the request is for the IdPLoginPage11 EndpointService's GetEndpointForPath method is called, to nd the endpoint that serves the given path. If an endpoint is found, its handler is returned. Otherwise an error message is traced, and a generic error page is returned. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 using using using using using using EndpointService.cs Safewhere.Core.Domain; Safewhere.IdP.Domain; Safewhere.IdP.Domain.Credentials; Safewhere.IdP.Domain.Protocol; Safewhere.IdP.Infrastructure.Util; System.Linq; namespace Safewhere.IdP.Application.Endpoints { public class EndpointService { /// <summary> /// Gets an implementation of IEndpoint that can handle a given path. /// Looks for both protocol and credential endpoints. Protocol endpoints /// take precedence over credential endpoints if two should exist /// with the same path (eventhough this is considered an error). /// </summary> /// <param name="path">The path.</param> /// <returns>An instance of a class that implements IEndpoint, /// or null if no suitable implementation is found.</returns> public static IEndpoint GetEndpointForPath(string path) { var ep = GetProtocolEndpointForPath(path); if (ep != null) return ep; ep = GetCredentialEndpointForPath(path); return ep; } private static IEndpoint GetProtocolEndpointForPath(string path) { using (var uoo = new LinqToSqlUnitOfWork(new IdPDataClassesDataContext())) { var cpr = new ConfiguredProtocolRepository { UnitOfWork = uoo }; var all = cpr.FindAll(); foreach (var cp in all) { var plugin = ActivatorUtil.GetInstance<IPlugIn>(cp.ProtocolDefinition.ProtocolType ); 11 The IdPLoginPage is the page that shows the user the dierent congured credential providers and lets him choose which one to use for authentication. 123 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 if (plugin == null) continue; plugin.PluginId = cp.OID; var spec = new EndpointHandlesPathSpecification(path); var endpoint = plugin.GetEndpoints().FirstOrDefault(spec.IsSatiesfiedBy); if (endpoint != null) return endpoint; } } return null; } private static IEndpoint GetCredentialEndpointForPath(string path) { using (var uoo = new LinqToSqlUnitOfWork(new IdPDataClassesDataContext())) { var ccpr = new ConfiguredCredentialProviderRepository { UnitOfWork = uoo }; var all = ccpr.FindAll(); foreach (var ccp in all) { var plugin = ActivatorUtil.GetInstance<IPlugIn>(ccp.CredentialProviderDefinition. CredentialProviderType); if (plugin == null) continue; plugin.PluginId = ccp.OID; var spec = new EndpointHandlesPathSpecification(path); var endpoint = plugin.GetEndpoints().FirstOrDefault(spec.IsSatiesfiedBy); if (endpoint != null) return endpoint; } } return null; } } } The EndpointService looks for protocol endpoints and credential provider path given as an arguFirst it calls the endpoints, and tries to nd one that matches the ment. the GetProtocolEndpointForPath method, which uses ConfiguredProtocolRepository to nd congured protocols. It uses re- ection to instantiate every congured protocol implementation, and it then iterates over the endpoints provided by the implementation to see if any 2 The username/password credential provider Let us see how the username/password credential provider plug-in is implemented. which GetEndpoints is pretty straightforward.Generic.UserNamePassword.IdP.Core.CredentialProviders.CredentialProviders. namespace Safewhere. namespace Safewhere. ConfiguredProtocolRepository.Application. using Safewhere.cs using System. method is implemented by our generic 7.IdP.IdP. because the FindAll Repository class. .Application { public class UsernamePasswordPlugin : IPlugIn { public string Description { get { return "Provides login via username/password". using Safewhere.cs 1 2 3 4 5 6 7 8 9 using Safewhere.Endpoints.UserNamePassword.IdP.Domain.Domain. int> { } } The ConfiguredProtocolRepository is very simple.Application. set. Something very similar is done in the GetCredentialEndpointForPath method. } } } The UsernamePasswordPlugin class implements the IPlugin interface. } } public List<IEndpoint> GetEndpoints() { return new List<IEndpoint> {new UsernamePasswordEndpoint(PluginId)}. which in this case returns a list with only one element. } public int PluginId { get. The most important thing to notice is the method.0.124 endpoint matches the requested path.Protocol { public class ConfiguredProtocolRepository : Repository<ConfiguredProtocol.Endpoints.Collections. an instance of the UsernamePasswordEndpoint class. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 UsernamePasswordPlugin. IdP. .Application.CredentialProviders. Safewhere.Domain.Credentials. namespace Safewhere.IdP.IdP.IdP.125 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 UsernamePasswordEndpoint.IdP.Collections.Domain. using Safewhere. Safewhere. } } } } The UsernamePasswordEndpoint class implements the IEndpoint interface.cs using System.Properties.Web. Safewhere.SessionState.IdP.Core.Application.Generic. Safewhere.IdP.UI.CredentialProviders.Endpoints.IdP. Safewhere.WebControls.Web.Linq. using Safewhere. System. Safewhere.UserNamePassword.". } } public string Name { get { return "UsernamePasswordEndpoint".IdP. System. System.Claims. } } public IHttpHandler Handler { get { return new UsernamePasswordPage(_pluginId).Application.UserNamePassword. System.UI. public UsernamePasswordEndpoint(int pluginId) { _pluginId = pluginId.Application.Web. } public string Path { get { return "unpwdlogin.Pages. } } public string Description { get { return "Endpoint for username/password login.CredentialProviders.Endpoints { public class UsernamePasswordEndpoint : IEndpoint { private int _pluginId.UserNamePassword.Web. UsernamePasswordPage Handler property.idp". Safewhere.Domain.Application. and most importantly it returns an instance of the class in its 1 2 3 4 5 6 7 8 9 10 11 12 ï ˙ z £ using using using using using using using using using using using using System.Domain.Pages. _password = new TextBox { Width = 150.IdP. private TextBox _username._content. _username.Domain. _passwordLabel = new Label { Text = Resources. namespace Safewhere._content. private Label _passwordLabel.ID = "username". _content.ID. public UsernamePasswordPage(int credentialProviderDefinitionId) : base( credentialProviderDefinitionId) { } protected override void OnLoad(System.AddControl(_usernameLabel). private Panel _buttonPanel.UsernameLabelText }. base. _password.EventArgs e) { return.CredentialProviders.Logging. _usernameValidator = new RequiredFieldValidator { ControlToValidate = _username.AddControl(new LiteralControl("<br/>")). private RequiredFieldValidator _usernameValidator. /// </summary> public class UsernamePasswordPage : CredentialProviderBasePage. _passwordLabel. _content.Dynamic.UsernameValidationText }.ID = "password". base. IRequiresSessionState { private Label _usernameLabel.UserNamePassword.AddControl(_usernameValidator).AddControl(_username). base.Width = 100. private TextBox _password. Display = ValidatorDisplay. } protected override void CreateChildControls() { _usernameLabel = new Label { Text = Resources. _username = new TextBox{Width = 150}.Pages { /// <summary> /// A page that collects username and password through a form. private Panel _messagePanel.PasswordLabelText }.IdP.Password }. ErrorMessage = Resources.Width = 100. private Button _submit. _usernameLabel.Application. . private RequiredFieldValidator _passwordValidator.AddControl(_passwordLabel). TextMode = TextBoxMode.126 using Safewhere._content. CausesValidation = true} .EventArgs e) { if(IsValid) { var user = UserService. Display = ValidatorDisplay.127 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 _content.ID. _submit. base. foreach(var ccd in ccp. base. } RetrieveCredentialClaims(user).SubmitButtonText. _buttonPanel = new Panel { Width = 255 }.AddControl(_passwordValidator). _buttonPanel.Add(_submit).Add(CredentialClaim.GetUser(_username.AddControl(_password). System. } } private void RetrieveCredentialClaims(User user) { var claims = new List<CredentialClaim>().Add("text−align". _submit = new Button {Text = Resources.VerifyPassword(_password. return. _buttonPanel.Style._content._content. _messagePanel = new Panel {Visible = false}.AddControl(_messagePanel).FindByPrimaryKey(CredentialProviderDefintionId).Dynamic. ccd)).Controls.Click += _submit_Click. } void _submit_Click(object sender.AddControl(new LiteralControl("<br/>")).CredentialClaimDefinitions) { if (ccd.PasswordValidationText }.AddControl(_buttonPanel). base.UserName. ErrorMessage = Resources. if (user == null || !user. var ccp = ccpr._content._content.IsIdentityBearer) claims. using (var uoo = new LinqToSqlUnitOfWork(new IdPDataClassesDataContext())) { var ccpr = new ConfiguredCredentialProviderRepository {UnitOfWork = uoo}. "right"). } } 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 .Text)) { LoginError().Text). base. _passwordValidator = new RequiredFieldValidator { ControlToValidate = _password.FromDefinition(user. false otherwise</returns> . using Safewhere.128 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 if(claims. This method uses VerifyPassword method in turn implements the two text elds where the user can enter his username and his password.Visible = true. return Find(spec).Domain.Current.</param> <returns>True if the user exists.Error.Domain.Add(new LiteralControl("Wrong username and password combination")). It then calls the on the responds to the one stored in the error message is displayed. "No credential claims found for user " + user.Core.UserName).Repositories { public class UserRepository : Repository<User.cs using System. _messagePanel.Domain. The most interesting method is the the UserRepository class to get an instance of the User class corresponding User class instance. using Safewhere. This page basically displays _submit_Click method. to see if the password entered by the user corUser instance.Controls.IdP. the user's credential claims are extracted and the IdPContext class is called. }else { IdPContext. int> { public IQueryable<User> FindUsersBeginningWith(string beginsWith) { var spec = new UserNameBeginningWith(beginsWith).Users. and the user can try again. } } private void LoginError() { _messagePanel.Count > 0) { IdPContext.Current. an AuthenticationDone method of the to the name entered by the user.AsQueryable()).Trace(TraceLevel.Specifications.IdP. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 UserRepository. The check is performed case insensitively </summary> <param name="userName">the userName.Linq. } } } The UsernamePasswordPage inherits the Page class from the framework (which IHttpHandler interface). namespace Safewhere.AuthenticationDone(claims. If this is not the case. Otherwise. thus nalizing the authentication. } /// /// /// /// /// /// <summary> Checks if a user with the given userName already exists. set. } } } The UserRepository class also has a simple implementation. The dummy protocol displays all the claims for the user that logs in. even though a FindUserByName method used by _submit_click method described above simply uses an instance of the few extra methods have been added.Application { public class TestProtocolPlugin : IPlugIn { public string Description { get { return "Simple test protocol".IdP.cs using System. 7.Application.0. .Generic.Collections. I will show the following implementation of a dummy protocol.Endpoints.IdP. using Safewhere.3 A dummy protocol Although no production-ready protocol implementation has been developed as part of this thesis. The the ExactUserName specication class to nd the corresponding User instance. namespace Safewhere.Protocols.129 21 22 23 24 25 26 27 28 29 30 31 public User FindByName(string userName) { return FindFirst(new ExactUserName(userName)). using Safewhere.TestProtocol.Application. } public int PluginId{get.Endpoints. } public bool ExistsByName(string userName) { return Exists(new ExactUserName(userName)). } } public List<IEndpoint> GetEndpoints() { return new List<IEndpoint> {new TestEndpoint(PluginId)}.TestProtocol. The dummy protocol responds on a single URI.Protocols. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 TestProtocolPlugin.} } } The TestProtocolPlugin class is very similar to the UsernamePasswordPlugin presented above.IdP. 130 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 TestEndpoint.Application.Endpoints. } } public IHttpHandler Handler { get { return new TestHandler{Path = Path.Web. using Safewhere.Handlers { public class TestHandler : IdPHttpHandlerBase { private NameValueCollection _requestParams.Collections. namespace Safewhere. 1 2 3 4 5 6 7 8 9 10 11 12 using using using using TestHandler.Handlers. } } } } The most important feature of the instance of the TestEndpoint class is that it returns an TestHandler class.IdP.idp". namespace Safewhere. System.Endpoints { public class TestEndpoint : IEndpoint { private readonly int _protocolId.IdP.IdP.TestProtocol. using Safewhere.Application. } } public string Name { get { return "testendpoint". } } public string Description { get { return "Test endpoint". public TestEndpoint(int protocolId) { _protocolId = protocolId. public override bool ValidateRequest(NameValueCollection requestParams) .cs System.TestProtocol.Web.Protocols.IdP.Application.Application. Safewhere.Specialized.Protocols.IdP. } public string Path { get { return "test.Application.cs using System.TestProtocol. which is shown below. ProtocolId = _protocolId}.Application.Protocols. Safewhere.IdP.HttpHandlers. } public override bool RequiresAuthentication() { return true. claimStr += "</claim>\n". context.Response.Response. the core IdP makes sure that the user is authenticated before . ProtocolId . } public override bool ForceReauthentication() { return false. It does so by overriding the ValidateRequest. } } } You can see that this handler participates in the runtime request sequence described in Section 4.Write(username + "</title></head><body>"). var claims = ClaimMappingService.Current. claimStr += "<name>" + claim. ForceReauthentication and SendResponse methods from its parent class. context.ValueType + "</valuetype>\n". claimStr += "<valuetype>" + claim.GetMappedClaimsForUser(username.Response.DisplayName + "</b><br/>"). RequiresAuthentication.UserName.Value + "</value>\n".Response. string claimStr = "<claims>\n". context.Response.Write("Name: " + claim. claimStr += "<claim>\n".11 on page 79.Response.Write("</textarea></form>").Response.Write("ValueType: " + claim.Write("<b>" + claim.Response.Write("</body></html>"). } public override void SendResponse(HttpContext context) { string username = IdPContext. claimStr += "<nameformat>" + claim.NameFormat + "</nameformat>\n".Write("<html><head><title>Claims for user: ").Response. } claimStr += "</claims>\n".Value + "<br/>").131 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 { _requestParams = requestParams.Response. return true.ValueType + "<br/><br/>"). context. context.Write("<form><textarea cols=100 rows=50>" + claimStr).Response. claimStr += "<value>" + claim. turns Since the RequiresAuthentication method re- true. connectionName).NameFormat + "<br/>").Name + "<br/>").Write("Value: " + claim. foreach(var claim in claims) { context.Write("NameFormat: " + claim. string connectionName = _requestParams["connection"]. context.Write("<h1>" + username + "</h1>"). context. context. context. context.Name + "</name>\n". 132 calling the SendResponse method. this method can make use of IdPContext. Therefore.Current. .UserName and call the ClaimMappingService to get the user's claims. which it then just outputs as html. most of the developers of Safewhere attended a seminar where we were rst introduced to domain-driven design. and not least. as the code base grew. Safewhere started on a good idea and a prototype written by the founders. but it became increasingly dicult to add new features and refactor existing features. and version one of the product was delivered. there was something missing in our development method.Design validation 8 The purpose of this chapter is to evaluate the design that I have presented in this thesis. not much time was spent on software design and modelling. In the beginning. I presented my work to my 133 . each programmer had his own individual coding style and understanding of the business domain. In retrospect. In the beginning of the company's life money was scarce. and each programmer was given a few high level features to implement. and in order to attract venture capital having an actual product to show for was priority number one. After the initial release the need for new features grew rapidly. as it turned out however. Before looking into what has been learned. to assess the practicability of domain-driven design. extensive test suites and a generally open communication culture. Instead. Upon writing this thesis. Even though domain-driven design was not the main topic of the seminar we were all left with the impression that it was a smart thing to do. One of the ways this manifested itself was that we had several classes in dierent assemblies with similar names but with dissimilar meaning. this did not seem to cause that big of a problem. and these were added on an ad hoc basis. Since I was about to write my thesis it was decided that my current project could be a guinea-pig project for introducing domain-driven design in Safewhere. The product was working well. Despite of established coding guidelines. I wish to present the background for introducing a new design paradigm in Safewhere. As with many small companies. a few programmers were hired. It was also dicult to explain the architecture to new members of the team. myself included. since the code was not organized in a heterogeneous manner. Therefore we decided to try out domain-driven design in practice. the problem clearly was that we did not have either a model (and model framework) nor a ubiquitous language. given the circumstances. Safewhere is a relatively new company with only three years of existence. One day. fully functional. So. It will lead to a stable and mature end-product. there were also a number of less positive things. So overall my work has been very well received. Peter.134 colleagues. Another concept which could have been named better is the Logging concept. Only one interviewee though that these names could have been better. found them really good. which I will address below. IdentityProviderClaim IssuedClaim. These include the dierent types of claims and 13 . Another inter- viewee. and even though the names may not be the most elegant. It represents a concise and expressive ubiquitous language. and there is no doubt that domain-driven design will be the future design paradigm in Safewhere. 8.1. which makes it easy to talk about. and below is what I found. 12 13 The interviews can be found in full length in the Appendix B. they do convey important information. and secondly to make them give me their opinion of domain-driven design. A better name for this model concept could have been business activity monitoring which is both more expressive and unambiguous. CredentialClaim. It is exible and has a clear distribution of responsibilities between components. First of all it was noted that a few terms could have had better names. and whether they would support the introduction of domain-driven design as a design standard on future projects in Safewhere. It will reduce total cost of ownership of the software. The word logging is over- loaded and has potentially dierent meanings to dierent people. That said. I interviewed 12 three people. The rst purpose of this was to have them assess the usability and completeness of the model. Model evaluation Based on the interviews. The distinction between the three types of claims is more important than their names. making it easy to refactor. I can conclude that the following is true about the model: It solves the business problem well. it should probably not have been made part of the model since it is an infrastructure concept and has no real relevance to the business problem. . As for the tracing modelconcept. It is of course still possible to add more constructors with more parameters. mainly the interfaces. There is however a good reason why using a lot of factories is a good convention for this model. but it is not clear if it even belongs here. since breaking changes for self-developed plug-ins can be handled at compile time and during testing. . I do not think that this is a aw in the model per se. This is denitely a weakness in the model. The problem is that the LinqToSql framework generates parameter-less constructors. Refactoring of the plug-in system. Unfortunately this convention is implicit and I have not been able to come up with a way to express it explicitly in the model. and hopefully avoid unintended use of the parameter-less constructors. So by using factories to create entity objects throughout the code. I have created a convention that others can follow. which they are not. Another weakness of the model has to do with the plug-in nature of the system. There are no current plans to allow third party plug-ins in the IdP. The factories are mainly used to construct entity objects. and I hope that a better solution can be found in the future.135 If anything it could have been part of the domain-neutral component. but rather a natural consequence of any plug-in architecture. it could potentially lead other developers to the impression that the parameter-less constructors are also an option. could lead to breaking backwards compatibility with third party plug-ins. so this is no great concern at the moment. However if constructors instead of factories were used throughout the code. It has also been pointed out that there is an excessive use of factories in the model. Microsoft Azure. 9. Therefore. And that is exactly one of the points of domain- driven design.1. it is not. when it comes to the domain-neutral component.1 Future perspectives Hosting software in the cloud is most likely going to be become very common in the future. However. and solves the business problem dened in the specication well. the storage mechanisms used in these cloud services are very dissimilar to relational databases. Also. or model framework. 136 . Cloud hosting services are being oered by many of the big players on the market. with products such as Google App Engine.Conclusion 9 This chapter contains my conclusion on what has been learned throughout the process of writing this thesis. The model All in all I think the model that has been developed is good and thorough. Amazon Elastic Cloud and more. The benet of these services is that they provide instant on-demand scaling of the applications they host. 9. I have achieved this exibility to a high degree through distribution of responsibilities and by having made the model storage-agnostic.1. and that is why it is so important that the model is exible so that future refactorings can be carried out with ease. the model will face its greatest test if at some point it is decided to host the IdP in the cloud. a fact that is substantiated by the interviews that were carried out. Now. You will never get your model one hundred percent right the rst time. It would be natural to host a product such as the IdP in the cloud to make sure that even extremely high loads on the application could be handled. I am sure that this will contribute to shorter development times on future projects because it is highly reusable. is the model perfect? No. such as hotel reservation systems. And using the domain-driven design concepts such as entities.2. and at the beginning of this project I was anxious to nd out if the domain-driven design principles could be applied to this kind of system as well. Domain-driven design is based on some very sound principles that can be applied to any software design task that I can think of. Mastering domain-driven design is not something that can be learned from a single project. having a broad ubiquitous language that is reected in the model and code has made the system so much easier to talk about and discuss. and by practicing my domain-driven design skills I am sure that . Even though there was no sharp distinction between domain experts and developers on this project. has been very positive. Textbooks about domain-driven design often use examples from business domains that are commonly understood. and I look forward to becoming even more adept at applying its principles. recognizable to newcomers to the team. order systems or the like. services. In the modelling phase. repositories. The larger makes the code much more the teams. the more important it becomes to have a common ubiquitous language and a strong and exible model. when the model has already been lined out. it is still important that every developer on the team has knowledge about domaindriven design such that all the concepts used in the model can be understood by everyone. I have not had the opportunity to try domain-driven design on bigger scale since this project has not involved other developers than myself. a lot of time has been saved by the ability to refactor in the model instead of refactoring code that has taken a long time to write. Furthermore. specications. and a thorough understanding of domain-driven design. For me personally. I think that it is safe to conclude that it can. Using domain-driven design on a project requires commitment from all developers. Domain-driven design I can only say that this. etc. my rst encounter with domain-driven design. being able to design software very well is a goal that I strive to achieve because it will give me greater professional satisfaction. The business domain that has been modelled here is of a very technical nature. the developers involved must have strong modelling skills. In the development phase.137 9. I am however convinced that the benets of using domain-driven design will turn out to be even bigger when working on large teams. but my work with it so far has denitely encouraged me to keep using it. 138 I will be able to reach that goal. . properties. struct types. and assemblies.C# language elements Appendix A This aim of this appendix is to give a short introduction to the C# language. C# programs consist of one or more source les. Types C# programs use type declarations to create new types. generics. Assemblies typically have the le extension .1. depending on whether they implement applications or libraries. namespaces.dll. A type declaration species the name and the members of the new type. When C# programs are compiled. and delegate types. 139 . Please refer to [Microsoft 2009] for the complete reference. enum types. which contain members and can be organized into namespaces. A.exe or . and should by no means be regarded as complete. Programs declare types. they are physically packaged into assemblies. Variables of value types directly contain their data whereas variables of reference types store references to their data. A. and events are examples of members. interface and delegate types all support they can be parameterized with other types. There are two kinds of types in C#: value types and reference types. methods. Five of C#'s categories of types are user-denable: class types.2. Fields. Program structure The key organizational concepts in C# are programs. the latter being known as objects. types. Classes and interfaces are examples of types. The contents presented herein are based on the C# Language Specication [Microsoft 2009]. members. struct. enabling readers without prior knowledge of the C# language to understand the code samples in the thesis. whereby Class. interface types. With value types. Members The members of a class are either static members or instance members.2. mechanisms whereby derived classes can extend and specialize base classes.140 With reference types. Class types support single inheritance and polymorphism. and instance members belong to objects (instances of classes). properties. The following table provides an overview of the kinds of members a class can contain. Member Constants Fields Methods Description Constant values associated with the class Variables of the class Computations and actions that can be performed by the class Properties Actions associated with reading and writing named properties of the class Indexers Actions associated with indexing instances of the class like an array Events Operators Notications that can be generated by the class Conversions and expression operators supported by the class Constructors Actions required to initialize instances of the class or the class itself Destructors Actions to perform before instances of the class are permanently discarded Types Nested types declared by the class . and it is not possible for operations on one to aect the other A. it is possible for two variables to reference the same object and thus possible for operations on one variable to aect the object referenced by the other variable. Static members belong to classes. the variables each have their own copy of the data. and others).1 Classes A class type denes a data structure that contains data members (elds) and function members (methods. destructors and static constructors of the base class. which controls the regions of program text that are able to access the member. Base access A base-access consists of the reserved word base followed by either a ". an instance method. Accessibility public protected Meaning Access not limited Access limited to this class or classes derived from this class internal protected internal Access limited to this program Access limited to this program or classes derived from this class private Access limited to this class Inheritance A class inherits the members of its direct base class type. These are summarized in the following table. Inheritance means that a class implicitly contains all members of its direct base class type. There are ve possible forms of accessibility." token and an identier or an expression-list enclosed in square brackets: base-access: base . except for the instance constructors. or an instance accessor. A base-access is permitted only in the block of an instance constructor. . identifier base [ expression-list ] A base-access is used to access base class members that are hidden by similarly named members in the current class or struct.141 Accessibility Each member of a class has an associated accessibility. 1 2 3 4 5 6 7 8 class Text { public Text(): this(0. Likewise. a constructor can invoke the constructor of its base class by using base-access as illustrated below: 1 2 3 4 5 6 7 8 using System. while this can be used to refer to the instance for which the function member was invoked. int y. or an instance accessor. null) {} public Text(int x. which actually does the work of initializing the new instance. int y): this(x... Constructors The this(. this is classied as a value.) constructor initializer to invoke the third instance constructor. string s) { // Actual constructor implementation } } In the above example. Within an instance constructor or instance function member of a class. an instance method. null) {} public Text(int x. y. class A { protected string theString. this-access: this A this-access is permitted only in the block of an instance constructor.) form of constructor initializer is commonly used in conjunc- tion with overloading to implement optional instance constructor parameters.theString = aString.. Both use a this(. 0. } . the rst two instance constructors merely provide the default values for the missing arguments. Thus.142 This access A this-access consists of the reserved word this. it is not possible to assign to this in a function member of a class.. public A(string aString){ this. A. Delegates are similar to the concept of function pointers found in some other languages. and all struct types implicitly inherit from type object. An interfaces does not provide implementations of the members it denes. structs are value types and do not require heap allocation. and a class or struct may implement multiple interfaces.2 Structs A struct type is similar to a class type in that it represents a structure with data members and function members. An interface may inherit from multiple base interfaces. A.WriteLine("The string: " + theString + ". } } A. An interface can contain methods.143 9 10 11 12 13 14 15 16 17 18 19 20 21 22 } class B : A { private int theInt. properties. } public void PrintIt(){ Console. events. Delegates make it possible to treat methods as entities that can be assigned to variables and passed as parameters. Struct types do not support user-specied inheritance.2.2. The int: " + theInt).2. public B(string aString. unlike classes.theInt = anInt. it merely species the members that must be supplied by classes or structs that implement the interface. .3 Interfaces An interface denes a contract that can be implemented by classes and structs. A class or struct that implements an interface must provide implementations of the interface's function members. However.4 Delegates A delegate type represents references to methods with a particular parameter list and return type. and indexers. int anInt) : base(aString) { this. } } An instance of the Function delegate type can reference any method that The Apply method takes a double argument and returns a double value. A delegate can reference either a static method Square or Math. } static void Main() { double[] a = {0. Square). that object becomes this in the invocation. return result. m.144 but unlike function pointers. double[] sines = Apply(a. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 using System. Function f) { double[] result = new double[a. double[] doubles = Apply(a. the multiplier example above can be written more easily without using a Multiplier class: .Sin). Thus. delegates are object-oriented and type-safe.Multiply).5.Length]. class Multiplier { double factor. Multiplier m = new Multiplier(2. which are "inline methods" that are created on the y. 0. 1. and when the instance method is invoked through the delegate.Sin in the previous example) or an instance method m. returning a double[] Main method.Multiply in the previous example). for (int i = 0. Anonymous functions can see the local variables of the surrounding methods. i++) result[i] = f(a[i]). The following example declares and uses a delegate type named Function.0}.factor = factor. double[] squares = Apply(a. Delegates can also be created using anonymous functions. public Multiplier(double factor) { this. } public double Multiply(double x) { return x ∗ factor. Math. } } class Test { static double Square(double x) { return x ∗ x. A delegate that references an instance method also references a particular object. Apply is used to apply three dierent double[]. applies a given Function to the elements of a with the results. } static double[] Apply(double[] a.Length.0). delegate double Function(double x).0. i < a. In the functions to a (such as (such as double[]. and the parameter type cannot be a pointer type. non-nested static classes. } } An extension method is a regular static method. 0. The type declaration is constructed from its parts by following the rules in this section. index.2. In addition.2.5 Partial types A type declaration can be split across multiple partial type declarations. An interesting and useful property of a delegate is that it does not know or care about the class of the method it references. } public static T[] Slice<T>(this T[] source. result. The following is an example of a static class that declares two extension methods: 1 2 3 4 5 6 7 8 9 10 11 12 13 public static class Extensions { public static int ToInt32(this string s) { return Int32. The rst parameter of an extension method can have no modiers other than this. Array. that method is said to be an extension method.Length − index < count) throw new ArgumentException(). whereupon it is treated as a single declaration during the remainder of the compile-time and runtime processing of the program. T[] result = new T[count]. return result. where its enclosing static class is in scope.Parse(s). int index. Extension methods can only be declared in non-generic. A. count). an extension method can be invoked using instance method invocation syntax. all that matters is that the referenced method has the same parameters and return type as the delegate.0). using the receiver expression as the rst argument.6 Extension methods When the rst parameter of a method includes the this modier. (double x) => x ∗ 2. The following program uses the extension methods declared above: .145 1 double[] doubles = Apply(a.Copy(source. int count) { if (index < 0 || count < 0 || source. A. "333". using ordinary static method calls: 1 2 3 4 5 6 7 8 9 static class Program { static void Main() { string[] strings = { "1". foreach (string s in Extensions. foreach (string s in strings. } } } The Slice method is available on the string[]. 1. } } } . "22". "4444" }. "333". The meaning of the program is the same as the following. "22".ToInt32(s)). because they have been declared as extension methods. 2)) { Console.Slice(strings. and the ToInt32 method is available on string.WriteLine(s.Slice(1.WriteLine(Extensions. 2)) { Console. "4444" }.ToInt32()).146 1 2 3 4 5 6 7 8 9 static class Program { static void Main() { string[] strings = { "1". and I have already done that many times. and they don't have to spend time to come up with good class names. Mikkel Yes. both yes and no.1. before you begin programming. It makes developers understand the relevance of all classes. I do acknowledge that these issues will always be present in systems that use a plug-in structure. Me That is correct. I think you avoid the disconnect between the developers and the business. and that it is not a problem with the model per se. As a matter of fact the model probably does make it easier. I think that DDD allows the model to be described in a short and concise manner. Mikkel Well. Me After having read this thesis. Mikkel Christensen Developer at Safewhere Me What is your general impression of the model? Mikkel I think the model solves the business requirements presented in the specication well. with regards to refactoring. That makes them more productive. Or rather. I think the readability of this model is better than many other standard-OO models I have seen in the past. I think. Me What do you think about domain-driven design's focus on a ubiquitous language? Does it make sense? Mikkel It makes a lot of sense. I guess that having a well-dened model like this. However. Me Do you think that the model is exible or good enough to make refactoring easy. Me You seem like a fan of domain-driven design already.Appendix B Interviews B. is it obvious for you how you could implement a new plug-in. I think the plug-in structure introduces a few issues with backwards compatibility. I would have to implement the IPlugin interface. because of the way it clearly distributes responsibilities. either the ICredentialProviderPlugin or IProtocolPlugin. Don't you think that there is anything negative about it? 147 . Me Do you think that this specic system benets from being modelled using the domain-driven paradigm? Mikkel Yes. and in a exible way. you can actually perform a lot of refactoring on the model itself. Me Do you have a better suggestion for the names? Mark Hehe. In my understanding it is more of an infrastructure thing. I agree. since all the decisions have been made already. Mark Also. but they would not have to great business insight to implement the model. Me I wouldn't say that I disagree. anything else? Mark I don't like the naming for Logging. I can't conclude that it is the solution to everything. I think that business activity monitoring. The problem is that the LinqToSql framework generates parameter-less . Mark Seemann Senior Developer at Safewhere Me What is your impression of the model? Mark My overall impression is that it is good. I think it is an overloaded word and that it doesn't describe the exact meaning. Me Ok. but I am denitely ready to try it out in practice. I think the distinction between dierent claim types is good. it seems eective. Me Good point. Past the modelling phase. no. not really. But apart from that. developers would still have to be familiar with domain-driven design. however I think the names could have been better. I appreciate that the factory pattern is often useful. When the system has been modelled. Me A valid point. everything that these factories do could have just as well been done in constructor functions. but it seems that in this model. would you use domain-driven design on your next project? Mikkel Based on what I have read in your thesis. no. The only reason I have chosen to make it part of the domain model is that I want to expose services that allows inspection of trace messages in the administrator UI. The factories are mainly used to construct entity objects. B. or something like that would have been better.148 Mikkel Well. I think that it should be a piece of cake to implement. Mark I am also not sure that tracing actually belongs to the domain. to bridge the gap between the business and the code. Me In the modelling phase only? Mikkel Yes.2. if I have to say something it would be that it requires quite skilled developers in the modelling phase. you have quite a few factories. Me So. I think. Both technical maturity for the programmer. Me Yes.ploeh. Me So do you think there is anything negative to say about domain-driven design? Mark Well. And maturity. That is a good convention then. This in-memory equivalent was made . you think that a model such as this renders itself well to TDD? Do Mark Well. You see. And here. 1000 tests often. You want to write tests that cover the entire system. Me Well.149 constructors. that makes sense. to make sure that you are not making breaking changes. you just stop running them. I am concerned with the tight coupling to the relational database that you have presented in your model. require them to be execute really fast. which they are not. TDD is all about rapid feedback. for TDD to be eective you really have to run your tests often. it could potentially lead others to the impression that the parameter-less constructors are also an option. it would not be unnatural to have around 1000 unit-tests. not really. Therefore. because it really is an essential pattern that requires that kind of attention. Therefore. database access really becomes a bottle-neck. But I think that the extra productivity provided by the or-mapper is worth making this sacrice for. In his book. It is of course still possible to add more constructors with more parameters. And running all. For a system such as the one you have modelled. and eectively stop exercising TDD. Me So what is your overall impression of domain-driven design? Mark I think that domain-driven design is a very sound approach. and run these tests over and over again. and I admit that it is a minor aw in the model. If your tests do not run in a few seconds. The domain model pattern was actually introduced by Martin Fowler a long time ago. But using domain-driven design requires strong modelling skills. So by using factories to create entity objects throughout the code. You even have your own blog about TDD (. and hopefully avoid unintended use of the parameter-less constructors. you are big advocate for test-driven development (TDD).dk). Mark Ok. but also a mature organization. However I like how Eric Evans has written an entire book about it. say. I agree that unit tests should not rely on database access. I create a convention that others can follow. Me Mark. it was presented on 1015 pages. the domain neutral component has an in-memory equivalent to the database access component. However if I used constructors instead of factories throughout the code. I must have missed that. Do you agree with this.3. ok. Mark Oh. I am very happy with those names.150 exactly with unit testing in mind. Me That is correct. and that having such a good model will make it much easier and faster to develop future releases. seems to require more preparation time before coding can start. making the model easy to talk about on a very detailed level. It will be easier to maintain the code. such as domain-driven design. Peter Haastrup Technical director of Safewhere Me Do you think that the model covers the specication? Peter Haastrup I think that is generally covers specication well. I am sure that this extra cost is made up for by faster development times. and the entire multi tenancy issue is supposed to be handled at installation time. and think that they convey important information about what they represent. Me Do you think that model presented herein represents a good ubiquitous language? Peter Haastrup Yes. however I don't think that it covers the multi-tenancy aspect at all. The model is multi-tenancy unaware. Me Do you think that using the domain-driven design paradigm and the domain-neutral component presented here will make development faster? Peter Haastrup I think that it is positive that a lot of decisions about how to structure the code have been made already. . I think that the model has made a lot of concepts explicit. will result in a much more stable and mature product. Me One of the other interviewees found that the claim names could have been better. by using separate database schemas. In that case I think the model renders itself well to TDD. Me What impact do you think that using domain-driven design for this IdP project will have on the total cost of ownership for the product? Peter Haastrup Although introducing a new design paradigm. Peter Haastrup No. I think that using domain-driven design the way you have here. and programmers will be able to recognize the concepts used from project to project. separate web applications in IIS. Furthermore. B. and separate machine accounts for execution. As a matter of fact I think that it is made up for already by the time of the rst release. not at all. com/fwlink/ ?LinkId=64165. Harding. Erich. Johnson. OpenSAML. Evans.microsoft.aspx?ArticleID=33357.Bibliography References Daarbak. C# Language Specication 3. Torben. McCarthy. Domain-driven design through Eric Evans' eyes.. sdtimes. .com/en-us/library/ms172152.. Tim.com. Eric.opensaml. 1995. Kim.0. Implementing an Implicit Transaction using Transaction Scope. computerworld.microsoft.dk/art/49054.. John. Richard. Domain-Driven Design course. 2009. Alex. Wiley Publishing Inc. Gamma.. Addison-Wesley.com/content/article. Microsoft. 2008. http: //msdn. OpenSAML website.kimharding. 151 . Ralph. 2004. Hver tredje CRM-løsning er SaaS-baseret i 2012. 2007. 2008. 2009. Handy.NET Domain-Driven Design with C# . 2009. and Vlissides. Helm. Microsoft.aspx.org. 2008. terns: Elements of Reusable Object-Oriented Software Design Pat- . Addison-Wesley. Domain Driven Design .. This action might not be possible to undo. Are you sure you want to continue?
https://www.scribd.com/doc/147544153/Domain-Driven-Design-in-Action
CC-MAIN-2016-07
refinedweb
38,354
51.14
This is a playground to test code. It runs a full Node.js environment and already has all of npm’s 400,000 packages pre-installed, including async-lodash with all npm packages installed. Try it out: require()any package directly from npm awaitany promise instead of using callbacks (example) This service is provided by RunKit and is not affiliated with npm, Inc or the package authors. This is a very simple wrapper around lodash. It has all the same methods (except chain as of v1.0.0). Before invoking the lodash method it will resolve all promises that are in the arguments. npm i async-lodash -s Lodash is an amazing library, and it is a pretty standard one to include in most modern JavaScript applications. Promises are another amazing addition to the JS standard. When we put them together, life gets pretty easy. Up until now when you wanted to combine them, you might do something like this: import lodash from 'lodash'; // This async object could be a user event, an HTTP request, or any other async action. We'll mock it quickly here with Promise.resolve. const asyncObject = Promise.resolve({ user: { id: 4, name: 'Buddy', }, }); function getUser(asyncUser) { return new Promise(resolve => { return asyncUser.then(user => resolve(lodash.get(user, 'user.id'))); }); } getUser(asyncObject).then(console.log); // 4 Honestly, that's really not that bad... BUT I think it could be even easier. What if we could do something like? import lodash from 'lodash'; // This asynchronous object could be a user event, an HTTP request, or any other async action. We'll mock it quickly here with Promise.resolve. const asyncObject = Promise.resolve({ user: { id: 4, name: 'Buddy', }, }); function getUser(asyncUser) { return lodash.get(asyncUser, 'user.id'); } getUser(asyncObject).then(console.log); // 4 The difference is small, but the benefit is huge. The dirty work of opening and resolving the promise has been taken care of! All you see is the lodash method, just like you would when working with synchronous data. import _ from 'async-lodash'; const promise = Promise.resolve({ my: { test: 'value' }}); async function getMyTest(asyncValue) { const result = await _.get(asyncValue, 'my.test'); console.log(result); // 'value' } import _ from 'async-lodash'; import fetch from 'node-fetch'; function getObjectAsync() { return new Promise(resolve => fetch('') .then(res => res.json()) .then(resolve); }); } function getMyTest(asyncValue) { _.get(getObjectAsync(), 'body.value') .then(console.log); // 'whatever property value was returned from fetch'; } Often times when using Lodash, methods are nested within each other. This can still be done with Async-Lodash. Here's an example: import _ from 'async-lodash'; const data = [ { id: 5, name: 'Tom', isActive: false, }, { id: 23, name: 'Billy', isActive: true, }, ]; _.get( _.head( _.map( // Filter out all inactive users. _.filter(Promise.resolve(data), d => d.isActive), // Merge the object and return a new one with an info property d => Promise.resolve( asyncLodash.merge(Promise.resolve(d), { info: `${d.name} is an active user` }) ) ), ), 'info', ).then(console.log); // "Billy is an active user" The above example demonstrates that these async methods can be nested, and when promises are included they will still work.
https://npm.runkit.com/async-lodash
CC-MAIN-2020-24
refinedweb
521
50.73
Using the included Tablet Technology that is built into Windows Vista, you can enable your application to perform unique actions for different input devices such as pen, mouse and touch input - even on a desktop PC. With Windows Vista, the software features of the Tablet PC are built into the core OS. This means that you can add support for Ink, handwriting recognition and in your Vista application without having to think about re-distributing dll's or worrying about whether your user is on a Tablet PC or not. In Windows XP, you had to have a Tablet PC in order to even think about using a pen as input using the API's in the OS. Digitizers have been used for years by graphics artists and the Tablet PC hardware integrated that input right into the display. Now, touch (resistive and capacitive) digitizers and electro-magnetic digitizers can be found on Mobile PC's as well as desktop LCD displays. Some PC's even have dual-mode digitizers integrated into them so you can handwrite on the screen if the pen is in range and then will automatically switch to touch mode when you use your finger on the screen. I think this is a great opportunity to add new functionality to applications to make use of this. Think about this scenario: You have a photo application that allows you to handwrite keywords right on the picture using a Pen (that's cool in itself) and then when you reach out and touch the display with your finger, the application switches to panning mode so you can move the photo around. Pretty cool. I used the existing RealTimeStylus sample application that is in the Windows SDK as the base for this sample. The RealTimeStylus object (RTS) allows you to collect data from an attached digitizer or mouse device and gives you the ability to manipulate that data on the fly. You can add, remove & re-order different plug-ins with RealTimeStylus, which means you can change the way the input is rendered dynamically as well. RealTimeStylus Since the sample application has most of the functionality I need, I want to add the ability to determine the input device that the current user input is coming from. RealTimeStylus is part of the Tablet PC features, which implies the use of ink, but that doesn't have to be the case. I can just use the input from RTS for whatever I want. Let's start with making sure we receive the correct events by subscribing to the DataInterestMask. I want to make sure we get notification of when the input starts, when the application is receiving data, and of any errors that might occur: DataInterestMask public DataInterestMask DataInterest { get { return DataInterestMask.StylusDown | DataInterestMask.Packets | DataInterestMask.Error; } } Next, I need to make sure I add handlers for the above notifications. The steps for determining where the input originated, and what to do with it are straightforward: StylusDown The StylusDown event is very similar to a MouseDown event - it notifies you that some input is about to happen. This gives us opportunity to query which input device that with which the user is interacting. The Tablet object here describes all the capabilities of the current input device, including the type, or DeviceKind enumeration. Tablet DeviceKind public void StylusDown(RealTimeStylus sender, StylusDownData data) { Tablet currentTablet = sender.GetTabletFromTabletContextId(data.Stylus.TabletContextId); if(currentTablet != null) { // Keep track of the current input device type tabletKind = currentTablet.DeviceKind; } } Now that you know which input device is about to create data, a handler for the Packets event needs to be added. This simply switches on the current TabletDeviceKind that we kept track of in the previous handler and performs some different action for each device type. I'm just going to draw different colored ellipses for each input device: Packets TabletDeviceKind public void Packets(RealTimeStylus sender, PacketsData data) { // For each new packet received, extract the x,y data // and draw a small circle around the result. for (int i = 0; i < data.Count; i += data.PacketPropertyCount) { // Packet data always has x followed by y followed by the rest Point point = new Point(data[i], data[i+1]); // Since the packet data is in Ink Space coordinates, convert to // Pixels point.X = (int)Math.Round((float)point.X * (float)myGraphics.DpiX/2540.0F); point.Y = (int)Math.Round((float)point.Y * (float)myGraphics.DpiY/2540.0F); // Draw a circle corresponding to the packet switch (this.tabletKind) { case TabletDeviceKind.Pen: // Make the packets from the stylus smaller and green myGraphics.DrawEllipse(Pens.Green, point.X - 2, point.Y - 2, 4, 4); break; case TabletDeviceKind.Mouse: // Make the packets from the mouse/pointing device mid-sized // and red myGraphics.DrawEllipse(Pens.Red, point.X - 2, point.Y - 2, 10, 10); break; case TabletDeviceKind.Touch: // Make the packets from a finger/touch digitizer larger and // blue myGraphics.DrawEllipse(Pens.Blue, point.X - 2, point.Y - 2, 20, 20); break; } } } The sample above is inspecting only the X and Y coordinates of the packets coming in from the Mouse, Touch or Pen devices. However, digitizers can have additional packet data types as well. Electro-magnetic digitizers (either built into Tablet PC's or USB external digitizers) can also have pressure data available. Adding this support would allow this sample application to alter the size of each ellipse drawn based on how hard the user was pressing on the digitizer. Perhaps that will be an update to this article. These API's are new to Windows Vista and are available in .NET 2.0, C++ COM, and in .NET 3.0's Windows Presentation Foundation (WPF). 6th of March, 2007 - Created article This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) using System; using System.Runtime.InteropServices; using System.Windows.Forms; public class Form1 : Form { [DllImport("user32")] public static extern uint GetMessageExtraInfo(); public const int WM_MOUSEMOVE = 0x0200; public const uint MI_WP_SIGNATURE = 0xFF515700; protected override void WndProc(ref Message m) { base.WndProc(ref m); if (m.Msg == WM_MOUSEMOVE) { uint extraInfo = GetMessageExtraInfo(); if ((extraInfo & 0xFFFFFF00) == MI_WP_SIGNATURE) { Text = "Pen"; } else { Text = "Mouse"; } } } [STAThread] static void Main() { Application.Run(new Form1()); } } General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/17895/Handling-Touch-Pen-or-Mouse-Digitizer-input-in-you
CC-MAIN-2014-52
refinedweb
1,063
53
4 Control Flow. When writing a computer program, you need to be able to tell the computer what to do in different scenarios. For example, a calculator app would need to perform one action if the user taps the addition button, and another action if the user taps the subtraction button. In computer programming terms, this concept is known as control flow, as you can control the flow of decisions the code makes at multiple points. In this chapter, you’ll learn how to make decisions and repeat tasks in your programs. Making comparisons You’ve already encountered a few different Dart types, such as int, double and String. Each of those types is a data structure which is designed to hold a particular type of data. The int type is for whole numbers while the double type is for decimal numbers. String, by comparison, is useful for storing textual information. A new way of structuring information, though, requires a new data type. Consider the answers to the following questions: - Is the door open? - Do pigs fly? - Is that the same shirt you were wearing yesterday? - Is the traffic light red? - Are you older than your grandmother? - Does this make me look fat? These are all yes-no questions. If you want to store the answers in a variable, you could use strings like 'yes' and 'no'. You could even use integers where 0 means no and 1 means yes. The problem with that, though, is what happens when you get 42 or 'celery'? It would be better to avoid any ambiguity and have a type in which the only possible values are yes and no. Boolean values Dart has a data type just for this. It’s called bool, which is short for Boolean. A Boolean value can have one of two states. While in general you could refer to the states as yes and no, on and off, or 1 and 0, most programming languages, Dart included, call them true and false. const bool yes = true; const bool no = false; const yes = true; const no = false; Boolean operators Booleans are commonly used to compare values. For example, you may have two values and you want to know if they’re equal. Either they are equal, which would be true, or they aren’t equal, which would be false. Next you’ll see how to make that comparison in Dart. Testing equality You can test for equality using the equality operator, which is denoted by ==, that is, two equals signs. const doesOneEqualTwo = (1 == 2); print(doesOneEqualTwo); const doesOneEqualTwo = 1 == 2; Testing inequality You can also find out if two values are not equal using the != operator: const doesOneNotEqualTwo = (1 != 2); const alsoTrue = !(1 == 2); Testing greater and less than There are two other operators to help you compare two values and determine if a value is greater than ( >) or less than ( <) another value. You know these from mathematics: const isOneGreaterThanTwo = (1 > 2); const isOneLessThanTwo = (1 < 2); print(1 <= 2); // true print(2 <= 2); // true print(2 >= 1); // true print(2 >= 2); // true Boolean logic Each of the examples above tests just one condition. When George Boole invented the Boolean, he had much more planned for it than these humble beginnings. He invented Boolean logic, which lets you combine multiple conditions to form a result. AND operator As an introduction to the AND operator, read the following story: const isSunny = true; const isFinished = true; const willGoCycling = isSunny && isFinished; OR operator Vicki would like to draw a platypus, but she needs a model. She could either travel to Australia or she could find a photograph on the internet. If only one of two conditions need to be true for the result to be true, this is an example of a Boolean OR operation. The only instance where the result would be false is if both input Booleans were false. If Vicki doesn’t go to Australia and she also doesn’t find a photograph on the internet, then she won’t draw a platypus. const willTraveledToAustralia = true; const canFindPhoto = false; const canDrawPlatypus = willTraveledToAustralia || canFindPhoto; Operator precedence As was the case in the Ray and Vicki examples above, Boolean logic is usually applied to multiple conditions. When you want to determine if two conditions are true, you use AND, while if you only care whether one of the two conditions is true, you use OR. const andTrue = 1 < 2 && 4 > 3; const andFalse = 1 < 2 && 3 > 4; const orTrue = 1 < 2 || 3 > 4; const orFalse = 1 == 2 || 3 == 4; 3 > 4 && 1 < 2 || 1 < 4 false && true || true false && true || true Overriding precedence with parentheses If you want to override the default operator precedence, you can put parentheses around the parts Dart should evaluate first. 3 > 4 && (1 < 2 || 1 < 4) // false (3 > 4 && 1 < 2) || 1 < 4 // true String equality Sometimes you’ll want to determine if two strings are equal. For example, a children’s game of naming an animal in a photo would need to determine if the player answered correctly. const guess = 'dog'; const dogEqualsCat = guess == 'cat'; Mini-exercises - Create a constant called myAgeand set it to your age. Then, create a constant named isTeenagerthat uses Boolean logic to determine if the age denotes someone in the age range of 13 to 19. - Create another constant named marysAgeand set it to 30. Then, create a constant named bothTeenagersthat uses Boolean logic to determine if both you and Mary are teenagers. - Create a constant named readerand set it to your name as a string. Create a constant named rayand set it to Ray Wenderlich. Create a constant named rayIsReaderthat uses string equality to determine if readerand rayare equal. The if statement The first and most common way of controlling the flow of a program is through the use of an if statement, which allows the program to do something only if a certain condition is true. For example, consider the following: if (2 > 1) { print('Yes, 2 is greater than 1.'); } The else clause You can extend an if statement to provide code to run in the event that the condition turns out to be false. This is known as the else clause. Here’s an example: const animal = 'Fox'; if (animal == 'Cat' || animal == 'Dog') { print('Animal is a house pet.'); } else { print('Animal is not a house pet.'); } Animal is not a house pet. Else-if chains You can go even further with if statements. Sometimes you want to check one condition, and then check another condition if the first condition isn’t true. This is where else-if comes into play, nesting another if statement in the else clause of a previous if statement. const trafficLight = 'yellow'; var command = ''; if (trafficLight == 'red') { command = 'Stop'; } else if (trafficLight == 'yellow') { command = 'Slow down'; } else if (trafficLight == 'green') { command = 'Go'; } else { command = 'INVALID COLOR!'; } print(command); Slow down Variable scope if statements introduce a new concept called scope. Scope is the extent to which a variable can be seen throughout your code. Dart uses curly braces as the boundary markers in determining a variable’s scope. If you define a variable inside a pair of curly braces, then you’re not allowed to use it outside of those braces. const global = 'Hello, world'; void main() { const local = 'Hello, main'; if (2 > 1) { const insideIf = 'Hello, anybody?'; print(global); print(local); print(insideIf); } print(global); print(local); print(insideIf); // Not allowed! } Undefined name 'insideIf'. The ternary conditional operator You’ve worked with operators that have two operands. For example, in (myAge > 16), the two operands are myAge and 16. But there’s also an operator that takes three operands: the ternary conditional operator. It’s strangely related to if statements — you’ll see why this is in just a bit. const score = 83; String message; if (score >= 60) { message = 'You passed'; } else { message = 'You failed'; } (condition) ? valueIfTrue : valueIfFalse; const score = 83; const message = (score >= 60) ? 'You passed' : 'You failed'; Mini-exercises - Create a constant named myAgeand initialize it with your age. Write an ifstatement to print out "Teenager"if your age is between 13and 19, and “Not a teenager” if your age is not between 13and 19. - Create a constant named answerand use a ternary condition to assign to it the result you printed out for the same cases in the above exercise. Then print out answer. Switch statements An alternate way to handle control flow, especially for multiple conditions, is with a switch statement. The switch statement takes the following form: switch (variable) { case value1: // code break; case value2: // code break; ... default: // code } Replacing else-if chains Using if statements are convenient when you have one or two conditions, but the syntax can be a little verbose when you have a lot of conditions. Check out the following example: const number = 3; if (number == 0) { print('zero'); } else if (number == 1) { print('one'); } else if (number == 2) { print('two'); } else if (number == 3) { print('three'); } else if (number == 4) { print('four'); } else { print('something else'); } const number = 3; switch (number) { case 0: print('zero'); break; case 1: print('one'); break; case 2: print('two'); break; case 3: print('three'); break; case 4: print('four'); break; default: print('something else'); } Switching on strings A switch statement also works with strings. Try the following example: const weather = 'cloudy'; switch (weather) { case 'sunny': print('Put on sunscreen.'); break; case 'snowy': print('Get your skis.'); break; case 'cloudy': case 'rainy': print('Bring an umbrella.'); break; default: print("I'm not familiar with that weather."); } Bring an umbrella. Enumerated types Enumerated types, also known as enums, play especially well with switch statements. You can use them to define your own type with a finite number of options. const weather = 'I like turtles.'; enum Weather { sunny, snowy, cloudy, rainy, } Naming enums When creating an enum in Dart, it’s customary to write the enum name with an initial capital letter, as Weather was written in the example above. The values of an enum should use lowerCamelCase unless you have a special reason to do otherwise. Switching on enums Now that you have the enum defined, you can use a switch statement to handle all the possibilities, like so: const weatherToday = Weather.cloudy; switch (weatherToday) { case Weather.sunny: print('Put on sunscreen.'); break; case Weather.snowy: print('Get your skis.'); break; case Weather.cloudy: case Weather.rainy: print('Bring an umbrella.'); break; } Bring an umbrella. Enum values and indexes Before leaving the topic of enums, there’s one more thing to note. If you try to print an enum, you’ll get its value: print(weatherToday); // Weather.cloudy final index = weatherToday.index; Avoiding the overuse of switch statements Switch statements, or long else-if chains, can be a convenient way to handle a long list of conditions. If you are a beginning programmer, go ahead and use them; they’re easy to use and understand. Mini-exercises - Make an enumcalled AudioStateand give it values to represent playing, pausedand stoppedstates. - Create a constant called audioStateand give it an AudioStatevalue. Write a switchstatement that prints a message based on the value. Loops In the first three chapters of this book, your code ran from the top of the main function to the bottom and then it was finished. With the addition of if statements in this chapter, you gave your code the opportunity to make decisions. However, it’s still running from top to bottom, albeit following different branches. While loops A while loop repeats a block of code as long as a Boolean condition is true. You create a while loop this way: while (condition) { // loop code } while (true) { } var sum = 1; while (sum < 10) { sum += 4; print(sum); } Do-while loops A variant of the while loop is called the do-while loop. It differs from the while loop in that the condition is evaluated at the end of the loop rather than at the beginning. Thus, the body of a do-while loop is always executed at least once. do { // loop code } while (condition) sum = 1; do { sum += 4; print(sum); } while (sum < 10); Comparing while and do-while loops It isn’t always the case that while loops and do-while loops will give the same result. For example, here’s a while loop where sum starts at 11: sum = 11; while (sum < 10) { sum += 4; } print(sum); sum = 11; do { sum += 4; } while (sum < 10); print(sum); Breaking out of a loop Sometimes you’ll need to break out of a loop early. You can do this using the break statement, just as you did from inside switch statements before. This immediately stops the execution of the loop and continues on to the code that follows the loop. sum = 1; while (true) { sum += 4; if (sum > 10) { break; } } A random interlude A common need in programming is to be able to generate random numbers. And Dart provides this functionality in the dart:math library, which is pretty handy! import 'dart:math'; final random = Random(); while (random.nextInt(6) + 1 != 6) { print('Not a six!'); } print('Finally, you got a six!'); Not a six! Not a six! For loops In the previous section, you looked at while loops. Now it’s time to learn about another type of loop: the for loop. In this section you’ll learn about C-style for loops, and in the next section, about for-in loops. These are probably the most common loop you’ll see, and you’ll use them to run a block of code a certain number of times. for (var i = 0; i < 5; i++) { print(i); } 0 1 2 3 4 The continue keyword Sometimes you want to skip an iteration only for a certain condition. You can do that using the continue keyword. Have a look at the following example: for (var i = 0; i < 5; i++) { if (i == 2) { continue; } print(i); } 0 1 3 4 For-in loops There’s another type of for loop that has simpler syntax; it’s called a for-in loop. It doesn’t have any sort of index or counter variable associated with it, but it makes iterating over a collection very convenient. const myString = 'I ❤ Dart'; for (var codePoint in myString.runes) { print(String.fromCharCode(codePoint)); } I ❤ D a r t For-each loops You can sometimes simplify for-in loops even more with the forEach() method that is available to collections. const myNumbers = [1, 2, 3]; myNumbers.forEach((number) => print(number)); myNumbers.forEach((number) { print(number); }); 1 2 3 Mini-exercises - Create a variable named counterand set it equal to 0. Create a whileloop with the condition counter < 10which prints out counter is X(where Xis replaced with countervalue) and then increments counterby 1. - Write a forloop starting at 1 and ending with 10 inclusive. Print the square of each number. - Write a for-inloop to iterate over the following collection of numbers. Print the square root of each number. const numbers = [1, 2, 4, 7]; Challenges Before moving on, here are some challenges to test your knowledge of control flow. It is best if you try to solve them yourself, but solutions are available in the challenges folder if you get stuck. Challenge 1: Find the error What’s wrong with the following code? const firstName = 'Bob'; if (firstName == 'Bob') { const lastName = 'Smith'; } else if (firstName == 'Ray') { const lastName = 'Wenderlich'; } final fullName = firstName + ' ' + lastName; Challenge 2: Boolean challenge In each of the following statements, what is the value of the Boolean expression? true && true false || false (true && 1 != 2) || (4 > 3 && 100 < 1) ((10 / 2) > 3) && ((10 % 2) == 0) Challenge 3: Next power of two Given a number, determine the next power of two above or equal to that number. Powers of two are the numbers in the sequence of 2¹, 2², 2³, and so on. You may also recognize the series as 1, 2, 4, 8, 16, 32, 64… Challenge 4: Fibonacci Calculate the nth Fibonacci number. The Fibonacci sequence starts with 1, then 1 again, and then all subsequent numbers in the sequence are simply the previous two values in the sequence added together (1, 1, 2, 3, 5, 8…). You can get a refresher here: Challenge 5: How many times? In the following for loop, what will be the value of sum, and how many iterations will happen? var sum = 0; for (var i = 0; i <= 5; i++) { sum += i; } Challenge 6: The final countdown Print a countdown from 10 to 0. Challenge 7: Print a sequence Print the sequence 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0. Key points - You use the Boolean data type boolto represent trueand false. - The comparison operators, all of which return a Boolean, are:
https://www.raywenderlich.com/books/dart-apprentice/v1.0.ea1/chapters/4-control-flow
CC-MAIN-2021-49
refinedweb
2,805
71.85
In this article, we will know about a critical Django Template Language tag – the Django URL template Tag and know how to use them. What is URL Template Tag? The URL template tag is a typical type of Tag in the Django Template Language framework. This tag is specifically used to add View URLs in the Template Files. In the HTML Template file, URL tags are used with the Anchor, <a href> attribute of HTML, which handles all the URLs in HTML Why do we need the Django URL tag? When we can add the View URL directly, what’s the purpose of the Django URL template tag? Let’s look at a simple HTML a href tag. <a href = "/books/book1">Info about book1</a> We know that it takes a static URL and allows us to click through to a link. Views take data from the client via the URL. For example, in the view below: def View(request,book_id): #Code return render(request, 'Template.html', {'article_id' : article_id}) In this case, The URL path would be: path('book/<int:book_id>' , View, name = 'Books_View') Here the book_id can change from book to book. Hence, directly adding this URL, whose endpoint depends on book_id, is not practical. And that’s where the URL tag comes into the picture. Hands-on with the Template URL tag To use the Template Tag, we are going to need the Views right!! So let us first create a few simple views to work with. The URL Template Tag The URL Template tag syntax is pretty simple: {% url 'View_Name' variable1 variable2 ... %} Here the View Name is the name assigned to it in the urls.py file. Variable1, Variable 2, etc., are the input arguments for the particular view. 1. Create the Views Add the following code in Views.py def View1(request): return render(request, 'page1.html') Now, we will also create a simple view, taking data input from the user as well. Add the following “ ” to your file as well. def View2(request,id): return render(request, 'page2.html',{'id':id}) The URL paths for both the Views will be: path('page/', View1, name = 'webpage1'), path('page/<int:id>', View2, name = 'webpage2'), You can learn more about setting up views in the Django views article. 2. Create the Template file Now create a Template file “page1.html ” and add the code into the file. <h2> The Webpage 1 </h2> <a href = "{% url 'webpage2' id=2 %}"> Click Here for book 2 </a> Lets create the “page2.html” as well. <H2> The Webpage2 </H2> <a href = "{% url 'webpage1' %}"> Go back </a> Implementing Django URL Tag Enough with the coding, let us now run the program. Hence go to the terminal and fire up your server. python manage.py runserver Go to the URL “/ page”: Click the link and check That’s it, see how easy it is to use the URL template tag !! Conclusion That’s it, guys !! This was all about the URL Template Tag. Do check out the Django Templates article as well as the DTL article for more information about Templates. See you in the next article !! Till then, Keep Coding !!
https://www.askpython.com/django/django-url-template
CC-MAIN-2020-50
refinedweb
528
83.36
Folium provides a simple interface to leaflet.js for mapping geoJSON data. Work through example from tutorial that uses a geoJSON layer Folium. These have moved since this classwork was written, a similar version is at . Note that the rewrite of the examples assumes that you are using jupyter and displays in-line. To save an .html file to diplay, you need to save it (mapName.save(outfile='htmlFile.html')) Using your the geoJSON file that you created. What do the different tile options do? That is, how do the following options change your map: An useful way to display data is via choropleth maps, were each region is shaded to varying degrees. In the simplest form, we can use a geoJSON file to add information (such as political or administrative boundaries) as a "layer" on our maps. But, we can also, shade the regions by values such as density, employment, or school test scores: To create a map with shaded regions, we need: (scroll down to "School, Police, Health & Fire" and export as geoJSON, called schoolDistricts.json). (If you have troubles downloading, here's the file: schoolDistricts.json NYC Department of Education (download the District math scores, open in your favorite spreadsheet program, and export the sheet of scores for all students as a CSV file, called math20132016.csv). Let's start by adding the geoJSON layer for our map (we'll add in shading in the next section). Assuming you have the files named as above, and in the same directory, we can start with: import folium #Create a map: schoolMap = folium.Map(location=[40.75, -74.125]) #Create a layer, shaded by test scores: schoolMap.choropleth(geo_path="schoolDistricts.json", fill_opacity=0.5, line_opacity=0.5 ) #Output the map to an .html file: schoolMap.save(outfile='testScores.html') In your favorite IDE, run this program. The import statement, and the creating and saving are the same as before. The only new function is choropleth(). It has as input parameters: Once we have data, we will add a few more parameters to link in the data and change the color shading. For data, we'll use the school test scores that you downloaded above. The file has a lot of information: For our first map, we'll focus on the most recent year, 2016, and the results for 8th grade. import pandas as pd #Read in the test scores fullData = pd.read_csv('math20132016.csv', skiprows = 6) (we're skipping the extraneous information on the first 6 rows). scores2016 = fullData[fullData.Year == 2016] scores8th2016 = scores2016[fullData.Grade == "8"] print(scores8th2016) Our new dataFrame is significantly smaller-- only 33 lines of data, one corresponding to each of the 33 regional school districts in New York City. We would like to shade each district by its average test score. To do this, we modify the choropleth() command to be: schoolMap.choropleth(geo_path="schoolDistricts.json", fill_color='YlGn', fill_opacity=0.5, line_opacity=0.5, threshold_scale = [100,200,300,400], data = scores8th2016, key_on='feature.properties.SchoolDist', columns = ['district', 'Mean Scale Score'] )Let's focus only on the new parameters: Run your program and examine the map it produces. Here's the whole file: testScores.py. File "/Users/stjohn/anaconda/lib/python3.5/site-packages/folium/folium.py", line 601, in choropleth raise ValueError('Please pass a valid color brewer code to ' ValueError: Please pass a valid color brewer code to fill_local. See docstring for valid codes. It suggests looking at the docstring for valid codes. The docstring is simple documentation that (well-written) code should contain. The name of the function that's giving the error is choropleth. It's not a stand-alone function (like print) but one that we apply to objects, so, to find its docstring, we need to give the class of the objects to which it belongs (Map) and the package where Map lives (folium): print(folium.Map.choropleth.__doc__)It prints out a very long message, but we're interested in what colors we can use. Can you find the relevant section in the message: Apply a GeoJSON overlay to the map. Plot a GeoJSON overlay on the base map. There is no requirement to bind data (passing just a GeoJSON plots a single-color overlay), but there is a data binding option to map your columnar data to different feature objects with a color scale. If data is passed as a Pandas dataframe, the "columns" and "key-on" keywords must be included, the first to indicate which DataFrame columns to use, the second to indicate the layer in the GeoJSON on which to key the data. The 'columns' keyword does not need to be passed for a Pandas series. Colors are generated from color brewer ( sequential palettes on a D3 threshold scale. The scale defaults to the following quantiles: [0, 0.5, 0.75, 0.85, 0.9]. A custom scale can be passed to `threshold_scale` of length <=6, in order to match the color brewer range. TopoJSONs can be passed as "geo_path", but the "topojson" keyword must also be passed with the reference to the topojson objects to convert. See the topojson.feature method in the TopoJSON API reference: Parameters ---------- geo_path: string, default None URL or File path to your GeoJSON data geo_str: string, default None String of GeoJSON, alternative to geo_path data_out: string, default 'data.json' Path to write Pandas DataFrame/Series to JSON if binding data data: Pandas DataFrame or Series, default None Data to bind to the GeoJSON. columns: dict or tuple, default None If the data is a Pandas DataFrame, the columns of data to be bound. Must pass column 1 as the key, and column 2 the values. key_on: string, default None Variable in the GeoJSON file to bind the data to. Must always start with 'feature' and be in JavaScript objection notation. Ex: 'feature.id' or 'feature.properties.statename'. threshold_scale: list, default None Data range for D3 threshold scale. Defaults to the following range of quantiles: [0, 0.5, 0.75, 0.85, 0.9], rounded to the nearest order-of-magnitude integer. Ex: 270 rounds to 200, 5600 to 6000.'. fill_opacity: float, default 0.6 Area fill opacity, range 0-1. line_color: string, default 'black' GeoJSON geopath line color. line_weight: int, default 1 GeoJSON geopath line weight. line_opacity: float, default 1 GeoJSON geopath line opacity, range 0-1. legend_name: string, default empty string Title for data legend. topojson: string, default None If using a TopoJSON, passing "objects.yourfeature" to the topojson keyword argument will enable conversion to GeoJSON. reset: boolean, default False Remove all current geoJSON layers, start with new layer Returns ------- GeoJSON data layer in obj.template_vars Examples -------- >>> m.choropleth(geo_path='us-states.json', line_color='blue', ... line_weight=3) >>> m.choropleth(geo_path='geo.json', data=df, ... columns=['Data 1', 'Data 2'], ... key_on='feature.properties.myvalue', ... fill_color='PuBu', ... threshold_scale=[0, 20, 30, 40, 50, 60]) >>> m.choropleth(geo_path='countries.json', ... topojson='objects.countries') Did you find it? It's excerpted below:'. Note, "Blue" is not on the list, but it is the default if we are filling a solid color. Try some of the other color palettes to decipher the naming scheme. All students in Seminar 4 are required to give a presentation to the mock city council in May. Logistics will be sent out by MHC soon, but here's an overview: Given the size of our seminar, we can have 5 to 6 teams. The goal of today's classwork is to organize into teams and sketch out the theme(s) that you would like to address. From the project topics survey, some are (very!) passionate about a given theme, and for others, they have many different interests. To start the process:
https://stjohn.github.io/teaching/seminar4/s17/cw6.html
CC-MAIN-2022-21
refinedweb
1,283
66.13
One of the neat features of Crunchy, suggested to me by Andrew Dalke at Pycon 2007, is its ability to dynamically display images that are generated by some Python code. The example given in the Crunchy distribution uses matplotlib. However, the way this is done is slightly cumbersome. First, some pre-existing vlam (very little added markup) must already have been added to an html page, specifying the name of the image (e.g. foo.png) to be displayed. This special vlam will result in an editor appearing on the page together with two buttons (one to execute the code, the other to load the image). Second, the Python code must be such that an image will be generated with that name. By default, the image is saved in and retrieved from a special Crunchy temp directory. While the approach used works, it does also mean that images can't be generated and displayed using a simple Python interpreter, nor can they be displayed from an arbitrary location. At least that was the case until now. Prompted by a suggestion from Johannes, I wrote a very simple module whose core consists of only 7 Python statements, and which does away entirely with the cumbersome vlam image file option. Images can now be loaded and displayed from anywhere using two lines of code: import image_display image_display.show(path, [width=400, height=400]) And by anywhere, I mean from any interactive Python element (interpreter, editor), and using any image source (local images and images on the web), like image_display.show('') Furthermore, one can use this approach to create a "slide show" by alternating image_display.show() and time.sleep(). Since there is no more need to use the old image_file option, it will be removed in the next Crunchy release. I may have to give some more thoughts to the API for this new option (e.g. is there a better name than image_display? Should I add a function for a slide show, giving a list of images and a time delay? etc.); suggestions are welcome. 1 comment: Marketing? If Crunchy on Python 3 is robust at Python 3 launch time it would be great if you could get a number of Python 3 reviewers to review via crunchy applied to the tutorial. - Paddy.
https://aroberge.blogspot.com/2008/01/more-power-by-removing-option.html
CC-MAIN-2018-09
refinedweb
381
71.34
Hello. I've decided to teach myself C++. So for the past week or so I've been reading the tutorials on this site, doing the examples etc. I finished the tutorials and decided to try the challenges. And I got stuck on the first!!! After trying unsuccessfully to get my program to work, I looked at Cprogramming.com's solution, compiled it, ran it, and even the solution doesn't work! When running it fills in the element of the array (I presume) but it does not list them(I take list to mean print) and instead just exits. So, is my compiler broken? Or is there some incredibly noobish mistake that i can't see(most likely) Below is the solution, as posted by Cprogramming.com: When completed, the following program should first fill in (by asking the user for input) and then list all the elements in an array: Hope I've clarified enoughHope I've clarified enoughCode:#include <iostream> using namespace std; int main() { int array[8]; for(int x=0; x<8; x++) cin>> array[x]; for(int x=0; x<8; x++) cout<< array[x]; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/106439-noob-question-code-completion-array-challenge.html
CC-MAIN-2015-18
refinedweb
193
62.48
Django internationalization switching between languages I have a french and a english site up, very basic. I created my fr language file. I have a few translated strings to test this to make sure it works but I am confused as how to set up the actual link to swap between languages. I have followed this, but receive a 404 for /next/page, am i doing this correctly? Here is my code if this helps: <form action="/i18n/setlang/" method="post"> {% csrf_token %} <input name="next" type="hidden" value="/next/page/" /> <select name="language"> {% get_language_info_list for LANGUAGES as languages %} {% for lang in LANGUAGES %} {% if lang.0 != '' %} <option value="{{lang.0}}">{{lang.1}}</option> {% endif %} {% endfor %} </select> <input type="submit" value="Go" /> </form> I have also added my urls like so: (r'^i18n/', include('django.conf.urls.i18n')), And I have this in my settings for middleware and language: 'django.middleware.locale.LocaleMiddleware', LANGUAGES = ( ('en', 'English'), ('fr', 'French'), ) I am confused as to how to have this working so i can swap between the 2 languages, or, If I am doing it correctly, why am I getting that 404 error with the /next/page when trying to change languages? Thanks! Jeff Answers In this line <input name="next" type="hidden" value="/next/page/" /> you have to substitute "/next/page/" with the page that you want to load after the language change. In case that you want to load the same page, you should write value="". Need Your Help iPhone:Twitter integration “a string with a char: ” + 'c' better or worse than “a string with a char: ” + “c”? java string character concatenationA small question, but one I'm genuinely curious about.
http://unixresources.net/faq/8317530.shtml
CC-MAIN-2019-13
refinedweb
278
59.23
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. Collection Interfaces5:06 with Jeremy McLain We can make our code easier to use by using collection interfaces whenever possible. System.Collections.Generic System.Collections.Generic.List System.Collections.Generic Interfaces For a review of interfaces in C#, check out this Treehouse course: Intermediate C# - 0:00 If you take another look at the documentation for the generic - 0:03 list collection, you'll notice that it implements a number of interfaces here. - 0:08 Every collection type implements at least the IEnumerable interface and - 0:14 the ICollection interface. - 0:16 These two interfaces define what it means to be a collection. - 0:20 As you can see, the list collection also implements all of these other interfaces. - 0:26 Including the IList interface, which defines what it means to be a list. - 0:31 Let's take a deeper look at what it means for - 0:33 a collection to implement these interfaces. - 0:36 And how we can use these interfaces to our advantage when working with collections. - 0:40 If you need a refresher on interfaces, check the teacher's notes for - 0:43 a link to a Treehouse course they can get you up to speed. - 0:47 To see a list of other interfaces implemented by collections - 0:50 in the System.Collections.Generic name space - 0:53 we need to take a look at the documentation for that namespace. - 0:56 You'll find a link to this page in the teacher's notes. - 0:59 Or you can click-on this link right here. - 1:02 You wanna keep this documentation page open for - 1:05 the duration of the course because we'll be referring back to it often. - 1:08 Let's scroll down to the section on interfaces. - 1:13 Here's the IEnumerable interface. - 1:16 IEnumerable is one of the interfaces that all collections implement. - 1:20 In order to use a foreach loop to loop through items of a collection, - 1:24 it must implement the IEnumerable interface. - 1:27 It exposes a single method name to get an numerator. - 1:31 We'll learn more about IEnumerators and - 1:33 the IEnumerable interface in a later course. - 1:35 The ICollection interface is another interface that all collections implement. - 1:40 It exposes the Count property which tells us how many items are in the collection. - 1:45 It also defines Add, Clear, Contains and the Remove methods. - 1:51 The Contains method works just like the index of method. - 1:55 Only instead of returning the index of the item it only returns true or false. - 2:00 Not all collections can be indexed into an array or a list. - 2:04 But we can still use the Contains method to determine - 2:07 if an item is in the collection. - 2:10 Notice that the ICollection interface also inherits from the IEnumerable interface. - 2:16 If you implement a collection on your own you can just state that it - 2:20 implements ICollection. - 2:22 Listing IEnumerable is redundant. - 2:24 The IList interface adds one property that the ICollection interface doesn't have. - 2:31 This is the Item property here. - 2:33 This allows us to use the square brackets to index into the list. - 2:38 The IList interface also exposes the IndexOf, Insert, and RemoveAt methods. - 2:44 So these are the bare minimum methods and properties that a list provides. - 2:49 Of course as we've seen with lists classes that implement the IList interface - 2:54 can add additional methods. - 2:55 As you can see there are many other Collection interfaces that - 2:59 the list doesn't implement. - 3:01 We'll learn about these in the rest of this course. - 3:03 So why is it important that developers know about - 3:06 what each of these interfaces does? - 3:08 It's important because we always wanna use the interface whenever possible when we - 3:12 specify what type of collection a method takes as a parameter. - 3:16 This gives the color of the method the most flexibility - 3:20 in what they pass into it. - 3:21 I've created a class named SchoolRole. - 3:24 That helps to manage all of the students of a school. - 3:27 Start a new workspace by clicking on the button on this page to see these changes. - 3:32 There is a method named AddStudents that takes a list of students and - 3:36 then adds all of the new students to the list of students already in the school. - 3:40 The way this method is written right now, - 3:43 the color of this method would have to pass the list collection type. - 3:47 However, they might already have the students they need to add in an array or - 3:52 another type of collection. - 3:53 They'd first have to convert their collection to a list before it can be - 3:57 passed to the AddStudents method. - 3:59 However, the AddStudents method - 4:01 doesn't need to use the full functionality of a list. - 4:05 It only needs the ability to loop through the new students passed in and - 4:08 add them to the SchoolRoll. - 4:10 That's actually exactly what the AddRange method does. - 4:14 So we really only need to accept an IEnumerable here. - 4:19 So we can change List to IEnumerable. - 4:24 This allows the caller of the AddStudents method to pass in any type of collection. - 4:29 Because all collections implement the IEnumerable interface. - 4:33 If the AddStudents method needed to see how many new students were being added, - 4:38 we could change IEnumerable to ICollection. - 4:44 This would allow us to call the Count property. - 4:47 Or if we needed to be able to index into the student's collection - 4:50 we could change IENumerable to IList. - 4:54 However, by using a more complex interface type we're limiting - 4:58 what can be passed into the AddStudents method. - 5:00 So we'll keep this as IEnumerable.
https://teamtreehouse.com/library/collection-interfaces
CC-MAIN-2017-13
refinedweb
1,062
64.41
Rive ReactRive React A wrapper around Rive.js, providing full control over the js runtime while making it super simple to use in React applications. Detailed runtime documentation can be found in Rive's help center. Create and ship interactive animations to any platformCreate and ship interactive animations to any platform Rive is a real-time interactive design and animation tool. Use our collaborative editor to create motion graphics that respond to different states and user inputs. Then load your animations into apps, games, and websites with our lightweight open-source runtimes. InstallationInstallation npm i --save rive-react Note: This library is using React hooks so the minimum version required for both react and react-dom is 16.8.0. UsageUsage ComponentComponent Rive React provides a basic component as it's default import for displaying simple animations. import Rive from 'rive-react'; function Example() { return <Rive src="loader.riv" />; } export default Example; PropsProps src: File path or URL to the .riv file to display. artboard: (optional) Name to display. animations: (optional) Name or list of names of animtions to play. layout: (optional) Layout object to define how animations are displayed on the canvas. See Rive.js for more details. - All attributes and eventHandlers that can be passed to a divelement can also be passed to the Rivecomponent and used in the same manner. useRive HookuseRive Hook For more advanced usage, the useRive hook is provided. The hook will return a component and a Rive.js Rive object which gives you control of the current rive file. import { useRive } from 'rive-react'; function Example() { const params = { src: 'loader.riv', autoplay: false, }; const { RiveComponent, rive } = useRive(params); return ( <RiveComponent onMouseEnter={() => rive && rive.play()} onMouseLeave={() => rive && rive.pause()} /> ); } export default Example; ParametersParameters riveParams: Set of parameters that are passed to the Rive.js Riveclass constructor. nulland undefinedcan be passed to conditionally display the .riv file. opts: Rive React specific options. Return ValuesReturn Values RiveComponent: A Component that can be used to display your .riv file. This component accepts the same attributes and event handlers as a divelement. rive: A Rive.js Riveobject. This will return as null until the .riv file has fully loaded. canvas: HTMLCanvasElement object, on which the .riv file is rendering. setCanvasRef: A callback ref that can be passed to your own canvas element, if you wish to have control over the rendering of the Canvas element. setContainerRef: A callback ref that can be passed to a container element that wraps the canvas element, if you which to have control over the rendering of the container element. For the vast majority of use cases, you can just the returned RiveComponentand don't need to worry about setCanvasRefand setContainerRef. riveParamsriveParams src?: (optional) File path or URL to the .riv file to use. One of srcor buffermust be provided. buffer?: (optional) ArrayBuffer containing the raw bytes from a .riv file. One of srcor buffermust be provided. artboard?: (optional) Name of the artboard to use. animations?: (optional) Name or list of names of animations to play. stateMachines?: (optional) Name of list of names of state machines to load. layout?: (optional) Layout object to define how animations are displayed on the canvas. See Rive.js for more details. autoplay?: (optional) If true, the animation will automatically start playing when loaded. Defaults to false. onLoad?: (optional) Callback that get's fired when the .rive file loads . onLoadError?: (optional) Callback that get's fired when an error occurs loading the .riv file. onPlay?: (optional) Callback that get's fired when the animation starts playing. onPause?: (optional) Callback that get's fired when the animation pauses. onStop?: (optional) Callback that get's fired when the animation stops playing. onLoop?: (optional) Callback that get's fired when the animation completes a loop. onStateChange?: (optional) Callback that get's fired when a state change occurs. optsopts useDevicePixelRatio: (optional) If true, the hook will scale the resolution of the animation based the devicePixelRatio. Defaults to true. NOTE: Requires the setContainerRefref callback to be passed to a element wrapping a canvas element. If you use the RiveComponent, then this will happen automatically. fitCanvasToArtboardHeight: (optional) If true, then the canvas will resize based on the height of the artboard. Defaults to false. useStateMachineInput HookuseStateMachineInput Hook The useStateMachineInput hook is provided to make it easier to interact with state machine inputs on a rive file. import { useRive, useStateMachineInput } from 'rive-react'; function Example() { const STATE_MACHINE_NAME = 'button'; const INPUT_NAME = 'onClick'; const { RiveComponent, rive } = useRive({ src: 'button.riv', stateMachines: STATE_MACHINE_NAME, autoplay: true, }); const onClickInput = useStateMachineInput( rive, STATE_MACHINE_NAME, INPUT_NAME ); // This example is using a state machine with a trigger input. return <RiveComponent onClick={() => onClickInput.fire()} />; } export default Example; See our examples folder for working examples of Boolean and Number inputs. ParametersParameters rive: A Riveobject. This is returned by the useRivehook. stateMachineName: Name of the state machine. inputName: Name of the state machine input. Return ValueReturn Value A Rive.js stateMachineInput object. ExamplesExamples The examples shows a number of different ways to use Rive React. See the instructions for each example to run locally.
https://www.npmjs.com/package/rive-react
CC-MAIN-2022-05
refinedweb
837
52.15
Axis 1.1 used to be the SOAP framework that was integrated with RIFE. The integration was very hackish to say the least since there were no easily replaceable and reusable parts that could be used with another servlet-like gateway in front of it. Since Axis 1.1 doesn't work with Java 5.0, we really had to remove it and either support version 1.2 or use another solution. We decided to move over to XFire and made creating SOAP elements even easier thanks to it. RIFE version 1.5.1 requires the XFire 1.0 release, later versions support the XFire 1.2 API (as of 1.6 snapshots). All that's needed to setup a SOAP webservice in RIFE with XFire now are two properties, like this: <element id="ECHO" extends="rife/soap/xfire.xml" url="/soap" > <property name="home-class">com.uwyn.rife.engine.testwebservices.soap.xfire.Echo</property> <property name="home-api">com.uwyn.rife.engine.testwebservices.soap.xfire.EchoApi</property> </element> The Java source code is very simple: public class Echo implements EchoApi { public String echo(String value) { return "I got : '"+value+"'"; } } public interface EchoApi { public String echo(String value); } The webservice can be accessed at "" now. To see the WSDL, just fetch "". SOAP services now also support the ElementService interface.
http://rifers.org/wiki/display/RIFE/Support+for+SOAP+web+services
crawl-001
refinedweb
219
52.46
AlphaBeta55 + 2 comments Thank you abhaykuma + 1 comment long getWays(long n, vector<long> c) {int64_t A[c.size()][n+1]; for(int64_t i=0;i<c.size();i++){A[i][0]=1;} for(int64_t i=0;i<c.size();i++) {for(int64_t j=1;j<=n;j++) { int64_t x,y; x=(i>0) ? A[i-1][j]:0; y=(j-c[i]>=0)? A[i][j-c[i]]:0; A[i][j]=x+y; }} return A[c.size()-1][n];} what is wrong with this sxy1573 + 1 comment My method is same to you,l also get wrong . But my debug answer is same to the true answer. sauravjack + 1 comment same is happening with me. priyankajiandani + 1 comment same problem I am getting acodebreaker + 2 comments thanks Dream_machine + 1 comment thanks :) mohitduklan + 1 comment thanks amazinghacker + 4 comments Do we have to use recursion with dynamic programming to solve this? robbyoconnor + 1 comment dynamic programming is really just caching previous work piyusman + 5 comments public static long count(long sum,long[] coins , int coin_num){ if(sum==0){ return 1; } else if(sum<0){ return 0; } else if(coin_num<=0){ return 0; } else if(memo[(int)sum][coin_num]!=-1){ return memo[(int)sum][coin_num]; } else{ memo[(int)sum][coin_num]=count(sum,coins,coin_num-1)+count(sum-coins[coin_num-1],coins,coin_num); return memo[(int)sum][coin_num]; } } Here we are caching the previous result in memo multidimensional array. First you need to initialize it wih some number(you can use Arrays.fill for that) Also I tried to make it clear what memo is really doing. Means if the ways for particular sum is already there then no need to calulate again. Else go ahead and solve. sum = amount coins = array of all available coins. coin_num is length of coins array(or you can say that next int in first line of this question's input) So basically if you have sum = 100; coins[] = {25,10,5,1}; coin_num = 4; memo[][] = new long[sum+1][coin_num+1]; and then call countWays(sum,coins,coin_num); I have a question to others though, in java when I am trying to make it long I have to do a lot of casting, any other way to deal with the 'long' problem. sushant001 + 1 comment Your comment helped, so here goes mine ... Rather than using 2D array, create another class (Let call it Change having long sum, int coin . override equals and hashcode- hashcode is really important , use hashcode = sum, that will be good enough ). Now , in place of 2D array, use map (HashMap). and you are done. Thanks bnegrao + 4 comments I don't understand why if sum == 0 then return 1; If sum is 0 and the coin values are >= 1, how can I have one solution if the sum is 0? if sum is 0 I should have 0 solutions... jiwan_chung + 2 comments I didn't understand count(sum,coins,coin_num-1)+count(sum-coins[coin_num-1],coins,coin_num); part, can anyone explain me kindly? I really want to understand DP... :( sumeet_ram + 3 comments Its like, what you do for, finding the number of subsets of a set. Whether an element belongs to your set or it doesn't. Here, either you use the last coin (in your set of given coins) to make change or you dont. In case you dont use the last (say 'm'th) coin, now you are left with the subproblem of solving the toatal amount('sum' variable here) using onle the remaining 'm-1' coins ([0:m-2] in the array). We cant use the 'm'th coin because by our own assumption, its not part of the subproblem. In case you use the last coin, you have the subproblem of solving "sum-Value('m'th coin)" using all the 'm' coins ( because using the 'm' th coin once doesnt bar you from using it again). Finally you can add the count from both the cases as either of them are a possibility in your final solution. Lemme know if this helped!! Snigdha_Sanat + 1 comment @sumeet_ram I did not understand it still, can u please elaborate it a bit more? The choice of the 2 subproblems(count(sum,coins,coin_num-1) and count(sum-coins[coin_num-1],coins,coin_num)) do not look intuitive to me- what am I missing? midnjerry + 3 comments Imagine it like a tree.... let's say we have n = 25 and c = [1,2,3] we can break it down from 25 into 3 branches by: 25 = 1 + f(24) 25 = 2 + f(23) 25 = 3 + f(22) each of these scenarios also branch off: f(24) = 1 + f(23) f(24) = 2 + f(22) f(24) = 3 + f(21) f(23) = 1 + f(22) f(23) = 2 + f(21) f(23) = 3 + f(20) f(22) = 1 + f(21) f(22) = 2 + f(20) f(22) = 3 + f(19) As you keep on propagating... all of the permutations that end up negative are invalid combinations. all of the permutations that end up exactly at 0 are good combinations. this is a simplistic approach but it's not correct. Because it doesn't differentiate between... [2,2,5] and [5,2,2]. To get only unique combinations you need to remove 1 element from the C set everytime you iterate. so... it'll look like this (in pseudo code). static long getWays(long n, long[] c) { if (n < 0) { return 0; } if (n == 0) { return 1; } if (CacheValuePopulated(n)){ return CacheValue(n); } long sum = 0; for (int i = 0; i < c.length; i++) { long target = n - c[i]; long[] subc = subArrayOfCFrom(i to end); long result = getWays(target, subc); if (result > 0) { saveResultToYourCache(result); } sum += result; } return sum; } Finally... what made the cache problematic was that you can't just save values for any given n. You have to save values for n BASED on the C subArray you used to calculate it. Since the problem states that all values of c are unique, I used the first element of the c subarray as a key which pointed to its own n+1 array. So for example, the cache could differentiate between running (5, [1,2,3]) and (5, [2,3]) f(5,[1,2,3]) = 4 (1,1,1,1,1), (1,1,1,2), (1,1,3), (2, 3) f(5,[2,3]) = 1 (2,3) I hope this helps everyone. I didn't want to give the full answer away. Good luck! ErikWitkowski + 1 comment but if you remove the element you just use you cannot have repeated elements like [2,2,5], can you? munnusingh + 0 comments. munnusingh + 1 comment. ljason2020 + 0 comments As I understand it, the variable coin_num begins counting at 1. So coin number 1 would be the first coin in the array, which is actually index 0, hence you must subtract 1 from coin_num. It just depends on how you define the variable. You could use count(sum-coins[coin_num],coins,coin_num), as long as it fits with the rest of your program. dheerajkr2313 + 0 comments where you initialize the memoization array? in the function or as a global? InfinityRedux + 1 comment There is a way of solving this without using recursion at all, just using dynamic programming. Instead of starting with "we want the solution for n" you can instead start with "what can we do with these coins". ljason2020 + 0 comments Yes, that is correct, but I believe you can do dynamic programming both recursively and iteratively (as you stated). The key aspect of dynamic programming is memoization or tabulation. ABcdexter19 + 7 comments Did it in python 8-) agarwal_sumedha1 + 0 comments it is not passing the test case even if the expected value is same as the output anhthong_381996 + 1 comment it's "8-)" which is a smile face and isn't "python 8". frandelarraniaga + 0 comments Can you upload the code?? I get it, but some tests get out of time. My code: def reverse(lista): list2 = [] for i in range(1,len(lista) + 1): list2.append(lista[len(lista) - i]) return(list2) def listasinmax(lista, maxi): if (len(lista) == 0): return([]) if (lista[0] <= maxi): return(lista) if (lista[0] > maxi): return(listasinmax(lista[1:], maxi)) def getchange(value, lista): # Cantidad de cambios global cant_cambios if (len(lista) == 0): return(0) # Algorithm maxi = max(lista) if (maxi > value): getchange(value, lista[1:]) if (maxi == value): cant_cambios.append(1) getchange(value, lista[1:]) if (maxi < value): for val in lista: new_val = value - val getchange(new_val, listasinmax(lista, val)) def getWays(n, c): # Complete this function exchange = n c = sorted(c) values = reverse(c) getchange(exchange, values) return(sum(cant_cambios)) Dularish + 1 comment Can't seem to figure out the problem. It gives correct output on running in my IDE. #!/bin/python3 import math import os import random import re import sys # Complete the getWays function below. memoizing_dict = {} # Complete the getWays function below. def getWays(n, c): global memoizing_dict if n < 0: return 0 elif n == 0: return 1 elif (n,len(c)) in memoizing_dict.keys(): return memoizing_dict[(n,len(c))] else: cum_sum = 0 for c_index in range(len(c)): c_item = c[c_index] if c_item > n: continue else: result = getWays(n-c_item, c[c_index:]) if result > 0: cum_sum += result memoizing_dict[(n,len(c))] = cum_sum return cum_sum if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') nm = input().split() n = int(nm[0]) m = int(nm[1]) c = list(map(int, input().rstrip().split())) # Print the number of ways of making change for 'n' units using coins having the values given by 'c' ways = getWays(n, c) print(ways) fptr.close() CamJohnson26 + 3 comments The example program is broken, change print(ways) to fptr.write(str(ways)) sauravjack + 1 comment Thanks Bro!!! jamesjosephclar1 + 1 comment I'm getting the same problem with Python 3. Doesn't matter what I try to change the output, nothing works. dramforever + 0 comments Can't thank you more...Just got wrong answer for a few test cases and saw your comment ronak319 + 2 comments int64_t ; declare like this . This one is portable and can be used on every machine whether it is 64 or 32 because size of long int and long long int depend on machines also but int64_t guarantee on every machine. In C++; abhi15kk + 1 comment Thanks a lot.. I used long long int. It gave me WA for 3 cases.. Declaring it your way helped me pass those cases.. I don't know why it is not accepting long long int. Saras_Arya + 1 comment Was it test case 9, 10, 14. I am getting WA for only those. Soumya_cbr + 2 comments Hi, How to switch to 64 bit int in c??? Please help... i am also facing the overflow issue guilin + 3 comments the only thing I learn is that alwasy use Long the only thing I learn is that alwasy use Long the only thing I learn is that alwasy use Long --repeat 3 times if it's very oimportant,重要的事情说三遍 skybrown6543 + 3 comments 中国人出现了。。 (盟友),+1s shengpu1126 + 0 comments Thank you very much for the tip! This is definitely something that programmers need to take care of: even though the algorithmic logic is right, be mindful of these details in practicality. tomasz_polgrabia + 0 comments I can only add my note: spending one hour on solving the only problem "integer overflow" (my algorithm was fine) without a possibility to look at the test cases makes my face ( ok, I will not embed any gifs, but you can imagine this) MAAAADDDDDDD. Task's author should warn at the beginning that the range of test cases' outputs exceed the normal range of the integer type. AkankshaRajhans + 1 comment This is my code in Java7 static long getWays(long n, long[] c){ // Complete this function long[] combinations=new long[(int)n]; combinations[0]=1; for(int j=0;j=c[j]) combinations[i]+=combinations[i-(int)c[j]]; } } return combinations[(int)n]; } I am getting AArrayIndexOutOfBoundsException. Can anyone help ? rwan7727 + 3 comments Passed all cases: int main() { int n; int m; cin >> n >> m; vector <long> c(m); for (int i=0;i<m;i++) cin >> c[i]; vector <long> numways(n+1); // numways[x] means # ways to get sum x numways[0]=1; // init base case n=0 // go thru coins 1-by-1 to build up numways[] dynamically // just need to consider cases where sum j>=c[i] for (int i=0;i<m;i++) { for (int j=c[i];j<=n;j++) { // find numways to get sum j given value c[i] // it consists of those found earlier plus // new ones. // E.g. if c[]=1,2,3... and c[i]=3,j=5, // new ones will now include '3' with // numways[2] = 2, that is: // '3' with '2', '3' with '1'+'1' numways[j]+=numways[j-c[i]]; } } cout << numways[n]; return 0; } mayank7577 + 0 comments why can not I use long long int instead of 64 bit int ?? using long long it is giving debugging output same as desired output but still it is showing wrong answer. lancerchao + 3 comments A VERY helpful link for anyone who cannot understand the subproblems involved: mcervera + 3 comments The solution to this problem is a good example of an efficient and tight Dynamic Programming algorithm. For those of you who are struggling with it, here's a tip. The number of ways you can make change for n using only the first m coins can be calculated using: (1) the number of ways you can make change for n using only the first m-1 coins. (2) the number of ways you can make change for n-C[m] using the first m coins (where C is the array of coins). This data allows you to build the recurrence relation. suraj23patel + 1 comment Can anyone please explain how is this recurresnce derived? mcervera + 1 comment Posting the recurrence would be the same as posting the solution to the problem since it is the only thing that you need to do to solve it. Hint: define a bidimensional table. Each (m,n) position stores the number of ways you can obtain the amount "n" using only the first "m" coins. Define the boundary conditions and iteratively fill the table. The solution to the problem will be stored in position (M,N). Tsukuyomi1 + 8 comments Keeping it simple :D n, m = map(int,input().split()) coins = list(map(int,input().split())) dp = [1]+[0]*n for i in range(m): for j in range(coins[i], n+1): dp[j]+=dp[j-coins[i]] print(dp[-1]) sothisislife101 + 1 comment I've seen a lot of solutions similar to this one in other languages, but this is the first one I've seen in python. My solution is much more verbose, and I'm having difficulty understanding what exactly is going on here. Can someone walk me through it? Tsukuyomi1 + 0 comments dp[j] stores the number of solutions for j. For base case j=0, number of solutions is 1(not using any coin). Now in the for loop, i represents the number of coins you are using. Initially i=0, so you cannot create change for any j>0, hence dp[j]=0 for j>0. Now in each iteration you add a new coin and update the number of solutions for those j which have value not less than the value of ith coin. This update is just adding the number of solutions when we use the ith coin which is equal to the number of solutions of creating change for j-coins[i]. We are finally done when we use all the coins and so we print dp[n]. diego_amicabile + 2 comments Once you get around to understand it, this is a much better and straightforward solution IMHO. It is also more in line with the explanations given on Topcoder. I would sum up the algorithm this way. # Initialize an array of combination numbers for all money amounts from 0 until the target amount # For all coin values: # for all money values starting from the coin value up to the target amount: # get the number of combinations for the money value minus the coin value # add this to the number of combinations of the money value, because you can construct them just adding a coin to them This is a more verbose version with descriptive variables, that allows you to see the algorithm in action: import sys target_money, coin_count = input().strip().split(' ') target_money, coin_count = [int(target_money), int(coin_count)] coins = list(map(int, input().strip().split(' '))) combinations = [1] + [0] * target_money for coin_index in range(coin_count): print("Combinations before processing coin {}: {}".format(coins[coin_index], combinations), file=sys.stderr) for money in range(coins[coin_index], target_money + 1): if (combinations[money - coins[coin_index]]) > 0: print(" Combinations for {} added to combinations for {} : {} += {}".format(money - coins[coin_index], money, combinations[money], combinations[money - coins[coin_index]]), file=sys.stderr) combinations[money] += combinations[money - coins[coin_index]] print("Combinations after processing coin {}: {}".format(coins[coin_index], combinations), file=sys.stderr) print("Combinations for {} : {}".format(target_money, combinations[target_money]), file=sys.stderr) print(combinations[target_money]) Now for instance, for test case 10 4 2 5 3 6 it will show the process on stderr: Combinations before processing coin 2: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Combinations for 0 added to combinations for 2 : 0 += 1 Combinations for 2 added to combinations for 4 : 0 += 1 Combinations for 4 added to combinations for 6 : 0 += 1 Combinations for 6 added to combinations for 8 : 0 += 1 Combinations for 8 added to combinations for 10 : 0 += 1 Combinations after processing coin 2: [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1] Combinations before processing coin 5: [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1] Combinations for 0 added to combinations for 5 : 0 += 1 Combinations for 2 added to combinations for 7 : 0 += 1 Combinations for 4 added to combinations for 9 : 0 += 1 Combinations for 5 added to combinations for 10 : 1 += 1 Combinations after processing coin 5: [1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 2] Combinations before processing coin 3: [1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 2] Combinations for 0 added to combinations for 3 : 0 += 1 Combinations for 2 added to combinations for 5 : 1 += 1 Combinations for 3 added to combinations for 6 : 1 += 1 Combinations for 4 added to combinations for 7 : 1 += 1 Combinations for 5 added to combinations for 8 : 1 += 2 Combinations for 6 added to combinations for 9 : 1 += 2 Combinations for 7 added to combinations for 10 : 2 += 2 Combinations after processing coin 3: [1, 0, 1, 1, 1, 2, 2, 2, 3, 3, 4] Combinations before processing coin 6: [1, 0, 1, 1, 1, 2, 2, 2, 3, 3, 4] Combinations for 0 added to combinations for 6 : 2 += 1 Combinations for 2 added to combinations for 8 : 3 += 1 Combinations for 3 added to combinations for 9 : 3 += 1 Combinations for 4 added to combinations for 10 : 4 += 1 Combinations after processing coin 6: [1, 0, 1, 1, 1, 2, 3, 2, 4, 4, 5] Combinations for 10 : 5 shankarpenore + 0 comments can you please explain the code of processing coin and after prossing : Combinations for 0 added to combinations for 5 : 0 += 1 Combinations for 2 added to combinations for 7 : 0 += 1 Combinations for 4 added to combinations for 9 : 0 += 1 Combinations for 5 added to combinations for 10 : 1 += 1 can you plz explain these combitions metamemelord + 0 comments It's all practice dude. THis question in specific, is pretty straight forward. nitishks007 + 0 comments Nice! if we change the order of loop ( i inside j) we will get all possible ways of getting the change. Basically 2,2,5 will counted differently from 2,5,2 fqzhou2012 + 0 comments your code is right. However, the website generated codes make you have to add a standard output codes ,like fptr.write(str(ways)) ,before fptr.close(). Besides, your function has to return the output value (ways[l-1][n] rather than printing it (This way the system will regard it as debug) kittenintofu + 3 comments For someone new to coding, this is a very interesting question and I def learned quite a lot from it. I'm writing this post just to help myself remember it. The core concept is: calculator(coins, numToUse, sum) = calculator(coins, numToUse-1, sum) + calculator(coins, numToUse, sum-coins[numToUse-1]); basically saying for the current sum, it can and only can be achieved by two sub-cases: (1). you use one less kind of coins, and still achevie this sum. (2). you still use the same set of coins, but achieve (sum-oneCoinValue). This "oneCoinValue" is the value of the coin that you eliminates in the sub-case (1) And if you total these two cases up, you get what you want. And the next step is to speed up your program. If you think about it, there are some potential redundent calculations-- for a lot of sub-cases, they ended up trying to calculate the same sum. And here, I made my first improvement, and it FAILED hardcore... i used an array of size sum, as a look up table. Have I calculated the current sum already, if so, don't do the recursive call, but return the value that was already saved in the table; if not do the recursive call, and save the final return value to the lookup table. Apparently, that approach did NOT end up well. I couldn't even get the first two mini test cases passed. And then I realized, apparently, there are two important arguments that are passed in: sum AND the numToUse, which basically tells you how many types of coins that you are allowed to use to get the sum. So if you only use a one dimentioanl array to save the result, that's not enough. Using coins {1,2,3} to achieve the number 10 of course is not going to be the same as if you only allowed to use coins {1,2}. The lookup table needs to be 2D. And the major calculator function look like this: private static long calculator(int[] coins, int numToUse, int sum){ if( sum ==0 ){ return 1; } else if(sum < 0 ){ return 0; } else if (numToUse <= 0){ return 0; } else { if (done[sum][numToUse] == -7){ done[sum][numToUse] = calculator(coins, numToUse-1, sum) + calculator(coins, numToUse, sum-coins[numToUse-1]) ; } return done[sum][numToUse]; } } // -7 is just the value that I intialize the array done, I picked it becuase I like number 7, haha stanfordkansas + 1 comment With yr impelemetation: -Time Complexity is O(2^m) because of all possible subsets of the given coins -Space Complexity is O(nm) Is it right? aitrop + 1 comment Seems like an overly complicated recursive solution to me. Surprised if it passed runtime tests. pendell + 0 comments Others have downvoted the comment, and the reason why is that it is essentally the same solution as in the editorial. There are some additional test cases that the editorial doesn't have, but the logic is otherwise sound. The reason it passes the runtime tests is because of the if (done[sum][numToUse] == -7) check. This ensures that we only compute solutions one time -- if we need the same solution again, we just use the previously computed solution. ammar_lokh1234 + 0 comments Hey I have a javascript program done the same way as OP, however I'm getting a runtime time error with my "done" array. I'm new to JS so can somebody help me spot the issue? function getWays(n, c, m, done){ // Complete this functionc var res=0; if(n==0) { return 1; } if(n<0) return 0; if(m<=0) return 0; else { // Here is where I get the error: TypeError: Cannot read property '2' of undefined if(done[n][m]==-1) done[n][m] = getWays(n-c[m-1],c,m, done) + getWays(n,c,m-1, done); return done[n][m]; } } function main() { var n_temp = readLine().split(' '); var n = parseInt(n_temp[0]); var m = parseInt(n_temp[1]); var done = []; c = readLine().split(' '); c = c.map(Number); for(var i=0;i<n;i++){ var col = []; for(var j=0;j<m;j++) { col[j] =-1; } done[i] = col; } // Print the number of ways of making change for 'n' units using coins having the values given by 'c' var ways = getWays(n, c, m-1, done); console.log(ways); } nelson_a_antunes + 2 comments DP in python2 with 1D list/array: value,_ = map(int,raw_input().strip().split(' ')) coins = map(int,raw_input().strip().split(' ')) ways = [0]*(value+1) for coin in coins: if coin > value: #if coin > value, there's no reason to use it continue ways[coin]+=1 #coin by itself as a set for i in range(coin+1,value+1): #fill the remaining sets with a new possibility with this coin ways[i] += ways[i-coin] print ways[value] ishtiaque_h + 1 comment There's a very good explanation from Gayle (author of Cracking the Coding Interview book) for this problem in YouTube. Watch: gravitation + 0 comments Thanks Nothing in this thread really helped me at all, but after watching one minute of this video I realized the error in my approach to the problem pujakuntal123 + 3 comments Ridiculous .. all answer come correct but compiler throwing message ..wrong answer akshaykabra + 3 comments the input format of the question is not clear/really_bad. Also no constraints are given. PRASHANTB1984 + 2 comments That section got wiped out along with some other changes. We'll be restoring it in a while. roshan_santhosh + 1 comment Its been 9 days already. When will t be changed? The format is very unclear. PRASHANTB1984 + 2 comments just updated it. Can you refresh? akshaykabra + 0 comments Sir if number of coins to be taken would have been in input format it would be easier to take input.. can you please update the format?? JaskaranSingh + 1 comment What is the bound on M-the number of coins? The input format isn't good.Right now the problem looks more of a challenge of input parsing rather than DP. amazinghacker + 1 comment[deleted] Rohit_9083 + 3 comments import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; public class Solution { static long getWays(int amount, int[] coins){ long[] combinations = new long[amount + 1]; combinations[0] = 1; for(int coin : coins){ for(int i = 1; i < combinations.length; i++){ if(i >= coin){ combinations[i] += combinations[i - coin]; } } } return combinations[amount]; } public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); int m = in.nextInt(); int[] c = new int[m]; for(int c_i=0; c_i < m; c_i++){ c[c_i] = in.nextInt(); } // Print the number of ways of making change for 'n' units using coins having the values given by 'c' long ways = getWays(n, c); System.out.println(ways); } } perfect solution in java8...check out..must comment if u find better in java 8 Jabbar_Memon + 1 comment Good Solution..Easy to Understand :) rickygiu89 + 0 comments Thanks for the good solution. I suggest a little improvement in the for cycle: for(int i = coin; i < combinations.length; i++){ combinations[i] += combinations[i - coin]; } Instead of iterating from 1 to amount value we start directly from the coin value and we don't need the if statement anymore. Sort 371 Discussions, By: Please Login in order to post a comment
https://www.hackerrank.com/challenges/coin-change/forum
CC-MAIN-2018-47
refinedweb
4,609
60.35
Testing JavaFX UIs: Part 2 of ? Testing JavaFX UIs: Part 2 of ? Join the DZone community and get the full member experience.Join For Free How do you break a Monolith into Microservices at Scale? This ebook shows strategies and techniques for building scalable and resilient microservices. JFramehosting it. Once we overcome this challenge, we can start simulating user input on the UI under test. Once again, easier said than done. We already had the class ScriptLauncher that executes a JavaFX script using the Java Scripting API. The following is a simplified version of the original class: @RunsInEDT public static void launch(String scriptName) { final InputStream script = ScriptLauncher.class.getResourceAsStream(scriptName); execute(new GuiTask() { protected void executeInEDT() throws Throwable { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByExtension("fx"); InputStreamReader reader = new InputStreamReader(script); engine.eval(reader); } }); } As Michael and I discovered, this approach is very fragile. If the script to launch depends on another one, this launcher will throw an exception. In another words, ScriptLauncher can only execute a JavaFX script that has no dependencies on other scripts. Lame. My second approach involves launching a compiled JavaFX UI. While debugging ScriptEngine when it launches a JavaFX UI, I discovered that all JavaFX UIs have a static method javafx$run$, which takes a com.sun.javafx.runtime.sequence.Sequence as its only parameter. This method is like the "main" method in regular Java applications, and the ScriptEngine calls it passing TypeInfo.String.emptySequence. I gave it a try and it actually launches the JavaFX UI! Sweet! Then I discovered that the method javafx$run$ returns a javafx.stage.Stage. I suspected that the JFrame hosting the JavaFX UI has to be there. After a few prinltns, I found a way to get a com.sun.javafx.stage.FrameStageDelegate, which, as you guessed, contains the JFrame I've been looking for. To make the story short, here is the code: public static JFrame launch(Class javaFxClass) { Stage stage = (Stage) staticMethod("javafx$run$").withReturnType(Object.class) .withParameterTypes(Sequence.class) .in(javaFxClass) .invoke(TypeInfo.String.emptySequence); FrameStageDelegate frameDelegate = (FrameStageDelegate) stage.get$impl_stageDelegate().get(); return (JFrame) frameDelegate.get$window().get(); } BTW, I'm using FEST-Reflect to call the static method javafx$run$ via Java Reflection. The code for the new launcher can be found here. Feedback is always appreciated }}
https://dzone.com/articles/testing-javafx-uis-part-2
CC-MAIN-2018-30
refinedweb
385
51.24
#include <sql_error.h> Stores status of the currently executed statement. Cleared at the beginning of the statement, and then can hold either OK, ERROR, or EOF status. Can not be assigned twice per statement. Const iterator used to iterate through the warning list. Copy Sql_conditions that are not WARN_LEVEL_ERROR from the source Warning_info to the current Warning_info. Mark the diagnostics area as 'DISABLED'. This is used in rare cases when the COM_ command at hand sends a response in a custom format. One example is the query cache, another is COM_STMT_PREPARE. Clear this diagnostics area. Normally called at the end of a statement. Tiny reset in debug mode to see garbage right away. Compare given current warning info and current warning info and see if they are different. They will be different if warnings have been generated or statements that use tables have been executed. This is checked by comparing m_warn_id.
http://mingxinglai.com/mysql56-annotation/classDiagnostics__area.html
CC-MAIN-2021-04
refinedweb
151
69.07
example for adding mta-hash Hi, for compiling mta-configfiles it is sometimes needed to add the configuration hash to the configfile. I've added a Mac OS-X (shellscript) demonstrating the expected results. As I'm not that familiar with C yet I'm not able to implement this myself. regards, Benedikt example for adding mta-hash I've finally implemented a basic way to add that config-hash to mta-bootfiles. #include "openssl/sha.h" is needed to generate the config hash, as there have been some compile issues I've needed to add "-lcrypto" to CFLAGS. use docsis -hash -p <txt> <bin> docsis -hash -m -p <txt> <txt2> <ext> regards, Benedikt adds the config-hash to the MTA-Bootfile adds the config-hash to the MTA-Bootfile without using a tmp-file Log in to post a comment.
https://sourceforge.net/p/docsis/feature-requests/5/
CC-MAIN-2017-30
refinedweb
141
59.13
DocTest 1.0 For Ruby Released - | - - - - - - - Read later My Reading List DocTest allows the easy generation of tests based on output from the standard interpreter shell (IRB for Ruby), cut and pasted into docstrings. Copy/paste your IRB testing output as comment for your function: # doctest: Add 5 and 5 to get 10Later you'll be able to run the test: # >> five_and_five # => 10 def five_and_five 5 + 5 end $ rubydoctest five.rbThis will run with latest sources from the git repository. An updated gem will soon be available === Testing 'five.rb'... OK | Add 5 and 5 to get 10 1 comparisons, 1 doctests, 0 failures, 0 errors Such docstring-driven testing has its pros: - Those who are familiar with IRB live testing will find it useful and natural to use. - One place to look at for the test. - Simple. - Large docstring should be avoided and put into a separate test file. - Some complex testing assertion are not available. - It can be tedious to test large outputs. A screencast demonstrating DocTest usage is available. InfoQ caught up with Duane Johnson to discuss DocTest and docstring-driven tests. InfoQ: Firstly, what led you to write DocTest? I read you looked at 2 previous implementations. The latest code can be found on github? It's a merge with a previous version of DocTest? What did you add? Aren't you satisfied with all the Ruby test frameworks? DocTest is outside the BDD cycle? Is DocTest supposed to be a replacement for the other unit test framework or just an additional tool? Isn't the way DocTest annotates code polluting the reading? Rate this Article - Editor Review - Chief Editor Action Hello stranger!You need to Register an InfoQ account or Login or login to post comments. But there's so much more behind being registered. Get the most out of the InfoQ experience. Tell us what you think DocTests in Zope and Python by Kevin Teague Doc testing is fun! :) Doctesting has been a common form of testing in Zope for a long time (around 2001 or so). The merits and good uses of doc testing were recently discussed in this thread. The summary is it's important to seperate the use cases for doc testing: - * Executable documentation doctesting: Intended as "testable" documentation. Documentation comes first with this form of doc test. Outdated or wrong documentation can be rather annoying, so this is good. The risk is that it's easy to go of on a tangent and start testing corner cases - this spoils the readability of the documentation. Narrative doc tests are also harder to maintain, since you may need to comprehend the whole narrative to expand upon one. - * Edge-case and bug fix doctesting: In this case the narrative between the test cases is there primarily to explain what is being tested and why. Some developers like to bang out doc tests as they work, writing down thoughts and snippets of code as they go. This second form can produce files that are also semi-usable as documentation, and are often preferable when approaching software over reading source code. Sometimes the software you are developing doesn't justify the cost of proper documentation, so this type of doc testing is "better than nothing". The flipside is that this may inhibit initiatives at writing proper documentation, "Look at all these files with words in them that this package already has, I guess it doesn't need anymore documentation." And in an open source project, people can find a packages test-centric doc tests and end up complain about the poor quality of documentation ... Packages such as z3c.testsetup can be directed to more sophisticated automated detetion of doctests. Aside from picking doc tests out of source code, it also can run plain .txt files as doc tests - these files and the testing layer they are associated with (unit, functional, integration) are denoted with a line such as: :Test-Layer: functional or :Test-Layer: unit Re: DocTests in Zope and Python by Sebastien Auvray Thank you for this informative comment! Sébastien. 1.0 Gem Available by Duane Johnson sudo gem install rubydoctest Enjoy!
http://www.infoq.com/news/2008/06/ruby-doctest
CC-MAIN-2016-18
refinedweb
688
64.2
treasury_direct A Dart library package to access the debt of the United States from the US Treasury site. This package can be used in a command line app or in a Flutter app. Example import 'package:treasury_direct/treasury_direct.dart'; import 'package:treasury_direct/src/debt_entry.dart'; void main() async { final td = TreasuryDirect(); final list = await td.downloadDebtFeedAsync(pagesize: 15); print('Total possible rows: ${list.totalRows}'); print('Total row received: ${list.mostRecentList.length}'); for (DebtEntry entry in list.mostRecentList) { print('${DebtEntry.dateFormatted(entry.effectiveDate)}: ${DebtEntry.currencyShortened(entry.totalDebt, false)}'); } } Expected output: Total possible rows: 6486 Total row received: 15 Nov 8, 2018: $21.73T Nov 7, 2018: $21.69T Nov 6, 2018: $21.69T Nov 5, 2018: $21.68T Nov 2, 2018: $21.68T Nov 1, 2018: $21.68T Oct 31, 2018: $21.70T Oct 30, 2018: $21.70T Oct 29, 2018: $21.70T Oct 26, 2018: $21.69T Oct 25, 2018: $21.70T Oct 24, 2018: $21.67T Oct 23, 2018: $21.68T Oct 22, 2018: $21.67T Oct 19, 2018: $21.67T Debt to the Penny API Docs:
https://pub.dev/documentation/treasury_direct/latest/
CC-MAIN-2020-34
refinedweb
178
65.49
Making a loop to save multiple GIFs Hi everyone, I'm trying to generate multiple gifs with a typeface I've created, here's the code: #Variable Font myVarFont = '/Users/user/Documents/folder/variablefont.ttf' variations = listFontVariations(myVarFont) #print(variations) steps = 24 minH, maxH = 60, 90 axis1 = list(variations.keys())[0] minH = variations[axis1]['minValue'] maxH = variations[axis1]['maxValue'] axis2 = list(variations.keys())[1] minV = variations[axis2]['minValue'] maxV = variations[axis2]['maxValue'] #stepH = (maxH - minH) / (steps - 1) # Horizontal variation stepV = (maxV - minV) / (steps - 1) # Vertical variation #Color converter def color_c(r = 255, g = 255, b = 255, a = 100): rc = r / 255 gc = g / 255 bc = b / 255 ac = a / 100 return rc, gc, bc, ac #Creating canvas and background class Canvas: def __init__(self, w, h, o = 0): self.w = w self.h = h self.o = o newPage(self.w, self.h) def bg_fill(self, r, g, b, a): self.r, self.g, self.b, self.a = color_c(r, g, b, a) fill(self.r, self.g, self.b, self.a) def bg_stroke(self, r, g, b, a, sw): self.r, self.g, self.b, self.a = color_c(r, g, b, a) self.sw = sw stroke(self.r, self.g, self.b, self.a) strokeWidth(self.sw) fill(None) def wrks(self): rect(self.o, self.o, self.w, self.h) def stroke_rep(self, step, first, jump=0): self.step = step self.first = first plus = self.first + jump translate(0, 0) for i in range(self.step): rect((width() / 2) - (plus / 2), (height() / 2) - (plus / 2), plus + jump, plus + jump) plus += self.sw * 4 def bg_image(self, imgPath = None): self.imgPath = imgPath self.srcWidth, self.srcHeight = imageSize(self.imgPath) self.dstWidth, self.dstHeight = self.w, self.h self.factorWidth = self.dstWidth / self.srcWidth self.factorHeight = self.dstHeight / self.srcHeight with savedState(): scale(factorWidth, factorHeight) image(imgPath, (self.o, self.o)) #Start design concursos = ["LH", "AT", "INM", "SB", "Mat", "Dec", "T-A", "LT", "TS", "BC", "CEI", "CV", "CL", "Deb", "ICM", "MUN", "TLA", "Rob"] def art(txt): for i in range((steps * 2) - 2): page = Canvas(500, 500, 0) page.bg_fill(255, 255, 255, 100) page.wrks() if i < steps: #H = minH + i * stepH V = minV + i * stepV else: #H = maxH - ((i % steps) + 1) * stepH V = maxV - ((i % steps) + 1) * stepV #print(i, V) w = (width() - 40) / len(txt) save() translate(20, 50) #stepH_ = (maxH - minH) / (len(txt)) stepV_ = (maxV - minV) / (len(txt)) T = FormattedString() T.fontSize(300) T.font(myVarFont) for j, char in enumerate(txt): #H_ = H + j * stepH_ V_ = V + j * stepV_ #if H_ > maxH: H_ -= (maxH - minH) if V_ > maxV: V_ -= (maxV - minV) T.append(char, fill=(0), fontVariations={axis1:100, axis2:V_}) translate(w * 1, 0) #H_ += stepH_ V_ += stepV_ restore() text(T, (width() / 2, (height() / 4)), align='center') page.bg_stroke(255, 255, 255, 100, 110) page.wrks() for i in range(8): page.bg_stroke(0, 0, 0, 100, 8.9) page.stroke_rep(i + 5, 384) saveImage('~/Desktop/PIBA_2020_CATEGORIAS_{name}.gif'.format(name = txt)) f = 0 while f < len(concursos): art(concursos[f]) newDrawing() f += 1 Is there a way to make a smoother animation with the same number of frames? Thanks!! you can set a frameDuration(seconds)for every frame. This will speed up if you keep the same amount of frames. If you want to keep the same length of the animation you you have to use a combo of a higher frameDurationand a higher frame rate. see @frederik thank you very much. I think my explanation was wrong, what I'm looking for is for a way to control the Easy Ease speed. small example to get you started!! there are multiple formulas to ease in ease out.. def linear(factor): return factor def easeIn(factor): return factor*factor def easeOut(factor): return factor * (2-factor) steps = 100 blockWidth = width() / steps for func in [linear, easeIn, easeOut]: newPage() for i in range(steps): factor = i / (steps-1) factor = func(factor) rect(0, 0, blockWidth, height() * factor) translate(blockWidth, 0) and here’s a gist with the classic easing functions by Robert Penner converted to Python. have fun! Great!!! Thank you very much, I'll play with it and I'll share the result. Cheers! - charlielemaignan last edited by @frederik thank you for this examples, it help a lot. I am looking for an "Cubic" Ease In & Out function to make my animation smoother. But I can't figure how to set it as your example shows. Here is the motion, I have applied EaseIn in both ways. Thank you again for your help with the three Easing example now you are going in both direction with in increasing speed, while when you go back you to start fast and end slow: def linear(factor): return factor def easeIn(factor): return factor*factor def easeOut(factor): return factor * (2-factor) steps = 100 blockWidth = width() / steps for func in [linear, easeIn, easeOut]: newPage(2000, 1000) for i in range(steps): factor = i / (steps-1) factor = func(factor) rect(0, 0, blockWidth, height() * factor) translate(blockWidth, 0) for i in range(steps): factor = i / (steps-1) # go the other way around factor = func(1 - factor) rect(0, 0, blockWidth, height() * factor) translate(blockWidth, 0) - charlielemaignan last edited by gferreira Hello @frederik ! Thank you! it works for me May I add those few lines found on this other tutorial about "EaseInOut" function. def easeInOut(factor): assert 0 <= factor <= 1 return (1 - cos(factor * pi)) / 2 It will add a new graph with easeInOut. It is working well with your previous lines. Thank you for your help ! Be safe.
https://forum.drawbot.com/topic/233/making-a-loop-to-save-multiple-gifs/9?lang=en-US
CC-MAIN-2021-10
refinedweb
930
64.3
Flask TodoMVC 2: Backbone Sync This is the second article in the Flask TodoMVC tutorial, a series that creates a Backbone.js backend with Flask for the TodoMVC app. In the first article, we created a Flask app using the Backbone.js example as a starting point. In this article, we will replace the localStorage persistence with server side synchronization. A future article will modify this basic support to use a database backend. We will begin with where we left off in the previous article. If you would like to follow along, but do not have the code, it is available on GitHub. # Optional, use your own code if you followed part 1 $ git clone $ cd flask-todomvc $ git checkout -b backbone-sync part1 Now that we have the code, we are ready to begin. Remove local storage persistence First, a little cleanup. Let's remove the Backbone localStorage initialization in static/js/collections/todos.js. --- a/static/js/collections/todos.js +++ b/static/js/collections/todos.js // Reference to this collection's model. model: app.Todo, - // Save all of the todo items under the `"todos"` namespace. - localStorage: new Backbone.LocalStorage('todos-backbone'), - // Filter down the list of all todo items that are finished. completed: function () { return this.filter(function (todo) { To avoid one network round trip when the page is loaded we are going to remove the automatic fetch of todos during the initialization of the app view. Later, we will revisit this by resetting the todo items within the template. --- a/static/js/views/app-view.js +++ b/static/js/views/app-view.js this.listenTo(app.todos, 'change:completed', this.filterOne); this.listenTo(app.todos, 'filter', this.filterAll); this.listenTo(app.todos, 'all', this.render); - - // Suppresses 'add' events with {reset: true} and prevents the app view - // from being re-rendered for every model. Only renders when the 'reset' - // event is triggered at the end of the fetch. - app.todos.fetch({reset: true}); }, Also, remove the backbone.localStorage.js and todomvc-common/base.js script tags in templates/index.html. --- a/templates/index.html +++ b/templates/index.html - <script src="bower_components/todomvc-common/base.js"></script> <script src="bower_components/jquery/jquery.js"></script> <script src="bower_components/underscore/underscore.js"></script> <script src="bower_components/backbone/backbone.js"></script> - <script src="bower_components/backbone.localStorage/backbone.localStorage.js"></script> <script src="js/models/todo.js"></script> Finally, remove the scripts that we are no longer using. $ rm -r static/bower_components/backbone.localStorage $ rm static/bower_components/todomvc-common/base.js Now that we've done some housekeeping we are ready to support synchronization within our Flask app. Add backend synchronization We need to add the CRUD to REST routes to our Flask app to support synchronization. For now, we are simply going to store the todos in a list. Modify server.py to add these routes, and necessary imports. """ server.py """ from flask import ( Flask, abort, jsonify, render_template, request) TODOS = [] app = Flask(__name__, static_url_path='') app.debug = True @app.route('/') def index(): return render_template('index.html') @app.route('/todos/', methods=['POST']) def todo_create(): todo = request.get_json() todo['id'] = len(TODOS) TODOS.append(todo) return _todo_response(todo) @app.route('/todos/<int:id>') def todo_read(id): todo = _todo_get_or_404(id) return _todo_response(todo) @app.route('/todos/<int:id>', methods=['PUT', 'PATCH']) def todo_update(id): todo = _todo_get_or_404(id) updates = request.get_json() todo.update(updates) return _todo_response(todo) @app.route('/todos/<int:id>', methods=['DELETE']) def todo_delete(id): todo = _todo_get_or_404(id) TODOS[id] = None return _todo_response(todo) def _todo_get_or_404(id): if not (0 <= id < len(TODOS)): abort(404) todo = TODOS[id] if todo is None: abort(404) return todo def _todo_response(todo): return jsonify(**todo) if __name__ == '__main__': app.run(port=8000) We added a route for each CRUD action modifying the TODOS list appropriately. In todo_create we add a new item to the list from the JSON body of the request. We set the id to the new list index to identify in later requests. We then return the todo as a JSON response using _todo_response. This convenience method is reused in all other CRUD routes and returns a response using jsonify, unpacking the dict to kwargs. All other routes lookup the todo item based on the the id identified by the route. The _todo_get_or_404 method returns the identified todo, while aborting with a 404 if not found. The check for None is necessary as it indicates deletion. We also enabled the debug flag to assist during development as it will attempt to reload our app on changes and print stack traces to the browser. Note that you do not want to leave this flag enabled in production. Run the app and view it in a browser. You should be able to add, update and delete todos as before. If you view the network requests using a developer tool, such as Firebug, you will see the appropriate requests to your app. Initialize on load If you reload your browser, you will notice that the todo items disappear. This is because we have not initialized the list of todos on page load. We could add a route to retrieve the list of todos using a Backbone fetch, but we removed this call earlier during cleanup to save a network request. Instead, we are going to reset the list when rendering the template. """ server.py """ @app.route('/') def index(): todos = filter(None, TODOS) return render_template('index.html', todos=todos) We also need to configure the url for the todo items and add the initialization in a new inline script towards the end of the template. We could do this by modifying the collection initialization in JavaScript, but we will instead use url_for in the template so we do not hardcode the URL. <!-- index.html --> <script> app.todos.url = '{{ url_for("todo_create") }}'; app.todos.reset({{ todos | tojson }}); </script> </body> </html> Add a few items to your list again and reload the browser. The todo items should now be persisted across a reload. Conclusion In this article, we replaced the localStorage persistence with backend synchronization with our Flask app. Although this is a good start, our todos are only stored in memory and will not survive a server restart. In the next article, we will replace the crude list persistence with a database backend. The code is available on GitHub with tag part2 or compared to previous article.
http://simplectic.com/blog/2014/flask-todomvc-backbone-sync/
CC-MAIN-2018-47
refinedweb
1,056
51.44
#include <deal.II/lac/block_indices.h> BlockIndices represents a range of indices (such as the range \([0,N)\) of valid indices for elements of a vector) and how this one range is broken down into smaller but contiguous "blocks" (such as the velocity and pressure parts of a solution vector). In particular, it provides the ability to translate between global indices and the indices within a block. This class is used, for example, in the BlockVector, BlockSparsityPattern, and BlockMatrixBase classes. The information that can be obtained from this class falls into two groups. First, it is possible to query the global size of the index space (through the total_size() member function), and the number of blocks and their sizes (via size() and the block_size() functions). Secondly, this class manages the conversion of global indices to the local indices within this block, and the other way around. This is required, for example, when you address a global element in a block vector and want to know within which block this is, and which index within this block it corresponds to. It is also useful if a matrix is composed of several blocks, where you have to translate global row and column indices to local ones. Definition at line 57 of file block_indices.h. Declare the type for container size. Definition at line 63 of file block_indices.h. Default constructor. Initialize for zero blocks. Definition at line 281 of file block_indices.h. Constructor. Initialize the number of entries in each block i as block_sizes[i]. The number of blocks will be the size of block_sizes. Definition at line 303 of file block_indices.h. Move constructor. Initialize a new object by stealing the internal data of another BlockIndices object. Definition at line 316 of file block_indices.h. Copy constructor. Specialized constructor for a structure with blocks of equal size. Definition at line 290 of file block_indices.h. Reinitialize the number of blocks and assign each block the same number of elements. Definition at line 254 of file block_indices.h. Reinitialize the number of indices within each block from the given argument. The number of blocks will be adjusted to the size of block_sizes and the size of block i is set to block_sizes[i]. Definition at line 267 of file block_indices.h. Add another block of given size to the end of the block structure. Definition at line 331 of file block_indices.h. Number of blocks in index field. Definition at line 370 of file block_indices.h. Return the total number of indices accumulated over all blocks, that is, the dimension of the vector space of the block vector. Definition at line 379 of file block_indices.h. The size of the ith block. Definition at line 389 of file block_indices.h. String representation of the block sizes. The output is of the form [nb->b1,b2,b3|s], where nb is n_blocks(), s is total_size() and b1 etc. are the values returned by block_size() for each of the blocks. Definition at line 399 of file block_indices.h. Return the block and the index within that block for the global index i. The first element of the pair is the block, the second the index within it. Definition at line 341 of file block_indices.h. Return the global index of index in block block. Definition at line 357 of file block_indices.h. The start index of the ith block. Definition at line 416 of file block_indices.h. Copy operator. Definition at line 426 of file block_indices.h. Move assignment operator. Move another BlockIndices object onto the current one by transferring its contents. Definition at line 438 of file block_indices.h. Compare whether two objects are the same, i.e. whether the number of blocks and the sizes of all blocks are equal. Definition at line 454 of file block_indices.h. Swap the contents of these two objects. Definition at line 470 of file block_indices.h. Determine an estimate for the memory consumption (in bytes) of this object. Definition at line 480 of file block_indices.h. Global function swap which overloads the default implementation of the C++ standard library which uses a temporary object. The function simply exchanges the data of the two objects. Definition at line 500 of file block_indices.h. Number of blocks. While this value could be obtained through start_indices.size()-1, we cache this value for faster access. Definition at line 214 of file block_indices.h. Global starting index of each vector. The last and redundant value is the total number of entries. Definition at line 220 of file block_indices.h.
https://www.dealii.org/developer/doxygen/deal.II/classBlockIndices.html
CC-MAIN-2017-17
refinedweb
760
68.67
So you are looking for a component/technology/solution to help you generate Office files (documents, workbooks, and presentations) using a server-side application, and of course, using managed code. I can tell you that this need has been brought to my attention at some conferences, questions in DLs, customer feedback, and as a common customer requirement when I was working as a consultant. You know that using automation and interop is definitely not an option. However, the Office XML File Formats are a great option. You can build a server-side application using Visual Studio to generate data-rich documents using the Office XML File Formats and the .NET Framework 3.0 (aka Microsoft WinFX ). Here's how... In my previous blog entry I demonstrated how to build and Word 2007 template and connect an item in the data store. You built a customer letter document template with content controls that map to an XML file. In this blog entry, I will show you how: - Company Name - Contact Name - Contact Title - Phone To create a server-side application that pulls data from a SQL Server database and generates a new Word 2007 document You can create a Web-based application that enables users to select a company name and generate a custom letter. The Web-based application retrieves customer data from a Microsoft SQL Server database, opens the customer letter document template, and creates a new document that displays customer data based on a user selection. This application does not require the use of Word 2007 or VBA. You can use your favorite managed code (Microsoft Visual Basic .NET or C#) language to build this application. To build this application, do the following: - Open Microsoft Visual Studio 2005 or Microsoft Visual Web Developer 2005. - Create an ASP.NET Web site and name it SqlServerSample. - Connect the ASP.NET Web site to a Microsoft SQL Server database. - Add a connection string to the Web.config file as follows: <connectionStrings> <add name="NorthwindConnectionString" connectionString="data source=(local);database=Northwind; integrated security=SSPI;persist security info=false;" providerName="System.Data.SqlClient" /> </connectionStrings> - Add the CustomerLetterTemplate.docx to the App_Data folder. - Download and install the Microsoft .NET Framework 3.0 (formerly Microsoft WinFX). - Configure the assembly in the Web.config file as follows: <compilation debug="false"> <assemblies> <add assembly="WindowsBase, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </assemblies> </compilation> - Create a Web Form and replace the default.aspx code with the following sample code. The following sample shows how to bind to a Microsoft SQL Server database to retrieve data based on a customer selection and create a new document based on the CustomerLetterTemplate.docx. [VB.NET] <%@ Private Const strRelRoot As String = "" Private Sub CreateDocument() ' Get the template file and create a stream from it Const TemplateFile As String = "~/App_Data/CustomerTemplate.docx" ' Read the file into memory Dim buffer() As Byte = File.ReadAllBytes(Server.MapPath(TemplateFile)) Dim memoryStream As MemoryStream = New MemoryStream(buffer, True) buffer = Nothing ' Open the document in the stream and replace the custom XML part Dim pkgFile As Package = Package.Open(memoryStream, FileMode.Open, FileAccess.ReadWrite) Dim pkgrcOfficeDocument As PackageRelationshipCollection = pkgFile.GetRelationshipsByType(strRelRoot) For Each pkgr As PackageRelationship In pkgrcOfficeDocument If (pkgr.SourceUri.OriginalString = "/") Then ' Add a custom XML part to the package Dim uriData As Uri = New Uri("/customXML/item1.xml", UriKind.Relative) If pkgFile.PartExists(uriData) Then ' Delete template "/customXML/item1.xml" part pkgFile.DeletePart(uriData) End If ' Load the custom XML data Dim pkgprtData As PackagePart = pkgFile.CreatePart(uriData, "application/xml") GetDataFromSQLServer(pkgprtData.GetStream, ddlCustomer.SelectedValue) End If ' Close the file pkgFile.Close() ' Return the result Response.ClearContent() Response.ClearHeaders() Response.AddHeader("content-disposition", "attachment; filename=document.docx") Response.ContentEncoding = System.Text.Encoding.UTF8 memoryStream.WriteTo(Response.OutputStream) memoryStream.Close() Response.End() End Sub Private Sub GetDataFromSQLServer(ByVal stream As Stream, ByVal customerID As String) 'Connect to a Microsoft SQL Server database and get data Dim source As String = ConfigurationManager.ConnectionStrings("NorthwindConnectionString").ConnectionString Const SqlStatement As String = "SELECT CompanyName, ContactName, ContactTitle, Phone FROM Customers WHERE CustomerID=@customerID" Dim conn As SqlConnection = New SqlConnection(source) conn.Open() Dim cmd As SqlCommand = New SqlCommand(SqlStatement, conn) cmd.Parameters.AddWithValue("@customerID", customerID) Dim dr As SqlDataReader = cmd.ExecuteReader If dr.Read Then Dim writer As XmlWriter = XmlWriter.Create(stream) writer.WriteStartElement("Customer") writer.WriteElementString("CompanyName", CType(dr("CompanyName"), String)) writer.WriteElementString("ContactName", CType(dr("ContactName"), String)) writer.WriteElementString("ContactTitle", CType(dr("ContactTitle"), String)) writer.WriteElementString("Phone", CType(dr("Phone"), String)) writer.WriteEndElement() writer.Close() End If dr.Close() conn.Close() End Sub Protected Sub SubmitBtn_Click(ByVal sender As Object, ByVal e As EventArgs) CreateDocument() End Sub <> [C#] <%@ private const string strRelRoot = ""; private void CreateDocument() { // Get the template file and create a stream from it const string TemplateFile = @"~/App_Data/CustomerTemplate.docx"; // Read the file into memory byte[] buffer = File.ReadAllBytes(Server.MapPath(TemplateFile)); MemoryStream memoryStream = new MemoryStream(buffer, true); buffer = null; // Open the document in the stream and replace the custom XML part Package pkgFile = Package.Open(memoryStream, FileMode.Open, FileAccess.ReadWrite); PackageRelationshipCollection pkgrcOfficeDocument = pkgFile.GetRelationshipsByType(strRelRoot); foreach (PackageRelationship pkgr in pkgrcOfficeDocument) { if (pkgr.SourceUri.OriginalString == "/") { // Add a custom XML part to the package Uri uriData = new Uri("/customXML/item1.xml", UriKind.Relative); if (pkgFile.PartExists(uriData)) { // Delete template "/customXML/item1.xml" part pkgFile.DeletePart(uriData); } // Load the custom XML data PackagePart pkgprtData = pkgFile.CreatePart(uriData, "application/xml"); GetDataFromSQLServer(pkgprtData.GetStream(), ddlCustomer.SelectedValue); } } // Close the file pkgFile.Close(); // Return the result Response.ClearContent(); Response.ClearHeaders(); Response.AddHeader("content-disposition", "attachment; filename=document.docx"); Response.ContentEncoding = System.Text.Encoding.UTF8; memoryStream.WriteTo(Response.OutputStream); memoryStream.Close(); Response.End(); } private void GetDataFromSQLServer(Stream stream, string customerID) { //Connect to a Microsoft SQL Server database and get data String source = ConfigurationManager.ConnectionStrings["NorthwindConnectionString"].ConnectionString; const string SqlStatement = "SELECT CompanyName, ContactName, ContactTitle, Phone FROM Customers WHERE CustomerID=@customerID"; using (SqlConnection conn = new SqlConnection(source)) { conn.Open(); SqlCommand cmd = new SqlCommand(SqlStatement, conn); cmd.Parameters.AddWithValue("@customerID", customerID); SqlDataReader dr = cmd.ExecuteReader(); if (dr.Read()) { XmlWriter writer = XmlWriter.Create(stream); writer.WriteStartElement("Customer"); writer.WriteElementString("CompanyName", (string)dr["CompanyName"]); writer.WriteElementString("ContactName", (string)dr["ContactName"]); writer.WriteElementString("ContactTitle", (string)dr["ContactTitle"]); writer.WriteElementString("Phone", (string)dr["Phone"]); writer.WriteEndElement(); writer.Close(); } dr.Close(); conn.Close(); } } protected void SubmitBtn_Click(object sender, EventArgs e) { CreateDocument(); } <> If you build and run this application, you will see something like this: Learn how to manipulate Microsoft Office system documents using the Microsoft Office Open XML Formats without the 2007 release. Work through scenarios involving programmatically manipulating documents using the Microsoft Office Open XML Formats. Enjoy! ~Erika Thanks Eri, Both posts are very useful. If anybody is trying to build windows application instead of web-based application, then add reference DLL “Windowsbase.dll” from "C:Program FilesReference AssembliesMicrosoftFrameworkv3.0" folder for “System.IO.Packaging” namespace. (If you have installed .NET Framework 3.0) -Imtiyaz Hi Imi! It’s always nice to see your comments :). I’ve been planning to start pulling together some real world examples around the ways in which people… Hello, I tried to execute your sample code but it fires an exception when it closes the XmlWriter at rutine ‘GetDataFromSQLServer’. The exception says something like ‘Sequence couldn’t be expanded’. Could you orient to me? Thanks a lot. —- The stack at this point is: —- NotSupportedException: No se puede expandir la secuencia de memoria.] System.IO.__Error.MemoryStreamNotExpandable() +54 System.IO.MemoryStream.set_Capacity(Int32 value) +33 System.IO.MemoryStream.EnsureCapacity(Int32 value) +1986396 System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) +1986476 System.IO.BinaryWriter.Write(UInt16 value) +54 MS.Internal.IO.Zip.ZipIOCentralDirectoryFileHeader.Save(BinaryWriter writer) +47 MS.Internal.IO.Zip.ZipIOCentralDirectoryBlock.Save() +748 MS.Internal.IO.Zip.ZipIOBlockManager.SaveContainer(Boolean closingFlag) +659 MS.Internal.IO.Zip.ZipIOBlockManager.SaveStream(ZipIOLocalFileBlock blockRequestingFlush, Boolean closingFlag) +100 MS.Internal.IO.Zip.ZipIOFileItemStream.Flush() +45 MS.Internal.IO.Zip.ProgressiveCrcCalculatingStream.Flush() +31 MS.Internal.IO.Zip.ZipIOModeEnforcingStream.Flush() +31 System.Xml.XmlUtf8RawTextWriter.Flush() +48 System.Xml.XmlWellFormedWriter.Close() +57 ASP.default_aspx.GetDataFromSQLServer(Stream stream, String customerID) in D:Mis DocumentosVisual Studio 2005ProjectsTestsTestOOXmlDefault.aspx:81 ASP.default_aspx.CreateDocument() in D:Mis DocumentosVisual Studio 2005ProjectsTestsTestOOXmlDefault.aspx:46 ASP.default_aspx.SubmitBtn_Click(Object sender, EventArgs e) in D:Mis DocumentosVisual Studio 2005ProjectsTestsTestOOXmlDefault.aspx:88 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +96 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +116 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +31 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +32 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +72 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3840 Jesus: I compiled my solution with B2TR and it works. I attached the code to the blog entry (SQLServerSample.zip). The only thing you need to change is the connection string. I hope this helps, -Erika I accidentally deleted a comment while cleaning spam. But I rememeber that someone hinted me that I didn’t need this line of code: // Get the root part PackagePart pkgpRoot = pkgFile.GetPart(new Uri("/" + pkgr.TargetUri.ToString(), UriKind.Relative)); You are absolutely right, I was following a different approach before I decided to replace the CustomXMLPart and forgot to delete that line. Great catch! Thanks a lot :). I updated the samples and attached the source files. Hi Erika, I have to tell you that I have the same issue as Jesus getting an exception at the same line ("’Sequence couldn’t be expanded’"). I am using the Beta2TR and have WinFx 3.0 Please let me know. Thanks. Hi Erika. How do I open a Word 2007 document (template) add some data (to a contentplaceholder) and print the document, from serverside code, without automating word, like in the bad old days? Karl, You can download the attached project, this code uses System.IO.Packaging and the File Formats to open aWord 2007 document template, replace data in content controls with server-side code. No automation! You should read this article: Server-Side Generation of Word 2007 Docs by Ted Pattison It’s quite helpful. ———————————– Jesus and Alberto. I run the sample using C#, but didn’t test the VB.NET sample with B2TR, are you using VB.NET? I can’t reproduce, but seems to me by reading the error trace that the problem has something to do with the memory stream and the size of the document. The problem happens at the moment of creating the document. Hi Erika. Thank you for the answer. I am quite aware of the posibilites for manipulating a word document (great), and I have also read the material you refer to. But, I am still puzzled regarding the actual printing process. It is not a problem to print a fixed document XPS file. The question is, how do I do exactly the same with a words Office Open XML document, without involving Word itself. Whenever I try to print (using XpsDocumentWriter) I get an error because the document is not a fixed document? Mqyb I am missing the obvious? Could you show a code sample? Karl! Seems I forgot the "printing" part of your question. Printing server-side is an interesting problem so I will have split my answer since there’s two things to consider. Software – hardware? Proposing a server-side printing software solution goes hand-in-hand with the hardware infrastructure that the company has. Printing is probably the slowest thing to do in computer land. Say you have a top end printer, you will be able to print at most 50 pages/minute. If you were to print 1,000 documents of 10 pages each, the operation would take you 200 minutes. The least thing that should worry you (considering the times and resources involved to print) is using COM automation. By the time the printer prints the documents, Word automation will be done. Some banks have printers connected in a network to do load balancing, problem is print spoolers have a limited memory, so you would also need some program to enqueue (some kind of hard disk spooler program) documents. I know some banks use special hardware and software to handle massive printing. Some of this programs either print text files (use a driver to print in certifiers) or translate documents and generate a graphic (in memory) and send to the printer (most solutions involving pdf files use this approach). This programs also take care of the document queueing process. Printing Word documents… To print Word documents you need to run Word. Only Word understands it’s formatting tags. Every single format (bold, underline, colors, spaces, fonts) that you see on the screen are rendered by Word and the same happens with a printer. So the only program that will print you a WYSIWYG Word document is Word. I know it’s not recommended to use server-side automation, but at this point, it’s your best shot. Now, there’s an option. It IS possible to go the XPS way with the only problem that you would need to write some kind of parser that reads the WordML formatting elements and transform documents to XPS format (it’s also XML). I don’t know of any code sample to do this, but if you were to build this solution, probably the Windows SDK documentation can help: I know there are some code samples (including printing XPS documents) that you can download here: You will find information of every single element for WordML here: There are some translator applications that convert WordML to something else, for example: Finally, there are more great samples for manipulating WordML here: I hope this helps, -Erika that is a great example, I’m currently working on a live system that used code like yours. but have you found a way to bind a part of the word document to a repeating element to construct a table? EG you have a list of items in an order and the output document requires a table to show them in (a item per row). i’ve not seen anyway to due this so I have started looking at directly manipulation the word XML for the table. is there a simple way using custom XML? You know, you could save yourself a lot of trouble and use the FOR XML clause in SQL Server to build your XML document for you quite painlessly. -Dan Is it possible to insert a HTMl table as the vale of Content Control Will this kind of setup work with the Business Contact Manager database on a server…. basically, the Office development team kind of forgot to put support in word mailmerge for the userdefined fields that you can create in BCM…. and all i wont to do is create a mail merge letter template to include some of the user-defined fields… im not a happy bunny about this, as we have just spent 4k on 23 office 2007’s with BCM to replace Act… its like taking several steps back in development… Very useful post and comments Erika. Thanks a lot. Continuing from Karl’s question, it was the case with Word 2003 that Microsoft did not recommend nor support the running of Word on a server. Which is not to say that it couldn’t be done of course. Do you know if this has changed with Office 2007 ?? In answer to my own question – it appears not. Office 2007 has been added to the list in this page about server side Office use: Basically it says that MS do not recommend or support Server usage of Office when using ASP or a windows service. The most notable point is that Office could attempt to display a modal dialog which would obviously be bad when running as a service. Hi Erika! Thank you for your post. I’m quite new but I hope my question is not too bad – it belongs to the formatting of the text (which from the database): You created a document using a template and then filled in the data e.g. Company Name: "The Big Cheese". What if we need to write e.g. "Big" italic or bold? So just a part of the data which comes from the database should be formatted anyhow? So overall I want to be able to read from a docx some formatted text – save it in a database with the formatting data – and write it also formatted back to e.g. another document (so not just plain text). i really hope you know what i mean and i would really be happy if you could give me some hints how to do this best?? thank you! markus Hi Erika, I have created Letter Templates using Word 2007. I have added the mapped the content controls to the tags using Word 2007 Content Control Toolkit. My question is that when I use your code, the item1.xml file is replaced with the values from my Oracle Table. The document.docx appears to be generated correctly as I can see the new values in item1.xml. But when I try to open the .docx file I continually get an error saying the Office Open XML file document.docx cannot be opened because there are problems with the contents. Any ideas on what is causing this? Thanks Steve Erika, I changed my file to .docx and I was able to fix the problem. Is there any way to save the file to a server folder without prompting the user to save the file? I am thinking there should be a way to save the pkgFile as we can close it. Thanks again. Steve nice code! I need to code something like this, but I also need to save the doc in some older versions of MS Word. Is there a way to do that? Hi Erika. Erika will u please help me out some solution for this. Thanks I only had to make two modifications: 1. I had to manually add the App_Data folder after publishing the solution. 2. I had to rename the docx file to CustomerTemplate.docx. Otherwise, this worked like a charm. Great code. Thanks. Now how do we print it on the server using managed code? I think that there are more than a few developers scouring the net looking for how to print a docx (or XPS) from managed code. It would be very helpful if some one could post such code… Thanks, Jordan Erika, Thanks for the great example. I have images stored in a sqlserver database that I would would like to display in a .docx. Do you have any idea how to do that? I am able to write code to directly generate the tags I need to display text. This works fine. But so far I am unabe to generate the markup for images. Any help would be much appreciated. Thanks, John Dear erika, like some other users here around I have the well known problem that Schaffer said: ." It would be nice to have an answer about this problem. Salvatore Sorrentino I’m arriving fairly late to this discussion, but after a few hours of puzzling, I think I must be missing something really obvious, and I hope someone could provide a pointer. I’ve downloaded Erika’s sample, changed the connection string (which I know is working, because the dropdown displays the correct data), and then click ‘Generate document’, but the only Word document I ever see is one containing the test data in the template provided by Erika (CompanyName Alfreds Futterkiste etc.). I never see any of my ‘real’ Northwind data. I’ve not altered anything in the sample – only the connection string. What am I missing? Hi Erika, This example is great but I can’t get it work just as I can’t get rid of the error ‘Sequence couldn’t be expanded’ Would you be so kind to check the code again. Thanks in advance. Erika, Can you save the docx file as a Word 2003 file so the users can open the .doc file using Word 2003? Reason I ask is that we have 2007 on server, we generate the files in .docx. When I do a copyto and save as .doc it can’t be opened by 2003. Thanks Steve When I print a word doc using automation it takes longer to send the print data to the print queue ( 2 minutes for 1 page) than it does to print it. This doesnt happen using word 2003. The same doc prints almost immediately when loaded manually in word and selecting print. Public Sub objFilePrint(obj As Object, iNumCopies As Integer, Optional vPrinter As Variant) Dim sOldPrinter As String On Error GoTo CleanupAndExit ‘remember the current printer and change to the one we are asked to If Not IsMissing(vPrinter) Then sOldPrinter = objGetPrinter(obj) objSetPrinter obj, vPrinter End If obj.PrintOut filename:="", Range:=wdPrintAllDocument, Item:= _ wdPrintDocumentContent, Copies:=iNumCopies, Pages:="", PageType:=wdPrintAllPages, _ Collate:=True, Background:=False, PrintToFile:=False CleanupAndExit: ‘Put the old printer back when we have finished If sOldPrinter <> "" Then objSetPrinter obj, sOldPrinter End Sub Erika: I am developing a form for an application. It works fine, except when i try to send some 20 – 25 + strings to as many plain text content controls. Then I get the error: Memory stream is not expandable. I have been reading stuff for several days but cannot find a solution to my problem. I’m using your code (except tha data layer) but somehow it appears to set the size of the memorystream and that size is very limited. I really hope you could help me or tell me wich other technology I need to use. Thanks for any help. Rafael Getting the same error as the others and I’m wondering if it has to do with this bug: The error occurs when one of the database fields SELECTed has a large amount of text. The error: [NotSupportedException: Memory stream is not expandable.] System.IO.__Error.MemoryStreamNotExpandable() +54 System.IO.MemoryStream.set_Capacity(Int32 value) +2845374 System.IO.MemoryStream.EnsureCapacity(Int32 value) +48 System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) +124 System.IO.BinaryWriter.Write(UInt16 value) +55 MS.Internal.IO.Zip.ZipIOCentralDirectoryFileHeader.Save(BinaryWriter writer) +168 MS.Internal.IO.Zip.ZipIOCentralDirectoryBlock.Save() +721 MS.Internal.IO.Zip.ZipIOBlockManager.SaveContainer(Boolean closingFlag) +473 MS.Internal.IO.Zip.ZipIOBlockManager.SaveStream(ZipIOLocalFileBlock blockRequestingFlush, Boolean closingFlag) +64 MS.Internal.IO.Zip.ZipIOFileItemStream.Flush() +24 MS.Internal.IO.Zip.ProgressiveCrcCalculatingStream.Flush() +17 MS.Internal.IO.Zip.ZipIOModeEnforcingStream.Flush() +17 System.Xml.XmlUtf8RawTextWriter.Flush() +26 System.Xml.XmlWellFormedWriter.Close() +41 generate_IAR.GetDataFromSQLServer(Stream stream, Int32 customerID) in c:WebsitesSOCWorkTrackersitegenerate_IAR.aspx.cs:106 generate_IAR.CreateDocument() in c:WebsitesSOCWorkTrackersitegenerate_IAR.aspx.cs:58 generate_IAR.Page_Load(Object sender, EventArgs e) in c:WebsitesSOCWorkTrackersitegenerate_IAR.aspx.cs:27 Figured out the Memory stream is not expandable error. The reason is "Memory streams created with an unsigned byte array provide a non-resizable stream view of the data…" – I added another content control and try to change from database. It does not work. Can you please tell me how to do that. ugh! what if my template has a footer in it, and I want to vary the text value for a field in the footer? I don’t want to create a new footer, I just want to change a content control that is within the footer. you example is great, but I’m not able to take it to level that I need…can you please advise me on how I would create a template with a content control in a footer, and how to programmatically change the field via an asp.net app? thanks alot! Barry Cavanaugh cavanaugh_barry@emc.com >Adam George said: >"Figured out the Memory stream is not expandable error" Adam (or anyone else), so you figured out the problem, but did you find a solution? Can you share the code you used? I am also getting the "Memory stream is not expandable" error, but I don’t see the easy fix from the MSDN documentation you linked to. You can build a server-side application using Visual Studio to generate data-rich documents using the Office XML File Formats and the .NET Framework 3.0 (aka Microsoft WinFX ). Here’s how… I too am getting the "Memory stream is not expandable" error. Where are you Erika? Might have found a solution for the ‘Memory stream is not expandable’ problem: Been messing about with it for hours. After having set the "CompressionOption" to "CompressionOption.Maximum" at pkgFile.CreatePart it worked for me!!! my code: PackagePart pkgprtData = pkgFile.CreatePart(uriData, "application/xml", CompressionOption.Maximum); Hope it helps in your cases as well…. Fix for "Memory stream is not expandable"! // Read the file into memory byte[] buffer = File.ReadAllBytes(templateDoc); // Create in-memory stream as buffer MemoryStream stream = new MemoryStream(); stream.Write(buffer, 0, buffer.Length); buffer = null; Excelent work Wiman, i had Memory stream is not expandable error too. Now it works fine. Thanks Hi Erika, <?xml version="1.0"?> <Customer> ? Thanks, Mohan I want to create a report. How do I add another control? I want to add data that is related to this person? Erika I wonder if you could give some pointers on a problem that I have Im porting a VB6 application to ASP.Net. The exisinting app has approx 200 MSWord mailmerge templates and uses automation to merge data, initially exported from a sql database into CSV files, for bulk letter printing. I am aware that having word installed on the web/app server is not an option recommended or supported by MS, so I am looking for alternative solutions that will enable me to utilise the existing word (2003) templates. I have downloaded the OpenXMLSDK 2.0 and am trying to work my way through some ideas. I have found the MailMerge class in the DocumentFormat.OpenXml.Wordprocessing Namespace but Im finding it impossible to find any examples of how I might use this class on an existing word mailmerge template document. In pseudo code Im trying to do something like this Using doc(MyWord2003MailMergeTemplate.doc) Dim oMM As New MailMerge (doc) oMM.DataSource = MyExcelWorksheetContainingMyData oMM.Destination = NewWordDocument oMM.MergeData oMM.Print End Using Wiman, that solution is just sublime! You are a genius! Is there a way by which I can generate word documents as reports like get rid of Crystal reports to be honest. We use an XML export from SQL with 20+ tables and often many rows within 1/2 of these tables for each project. Is there an effective way to link Word 2007 directly to that XML rather than create another solution? Excellent info, works great for me! Hi, Your example is very work well for me. also very nice explanation. But I don't know how can I implement for Header/ Footer. If you have any idea please explain me. Thanks.
https://blogs.msdn.microsoft.com/erikaehrli/2006/08/16/data-driven-document-generation-with-word-2007-and-the-office-xml-file-formats-part-2/?replytocom=11313
CC-MAIN-2018-34
refinedweb
4,477
50.43
A specialization of GA_AIFDictTuple to access "shared strings". More... #include <GA_AIFSharedDictTuple.h> A specialization of GA_AIFDictTuple to access "shared strings". This class provides the interface to access string table data. Each attribute type may provide this interface if it makes sense. Definition at line 59 of file GA_AIFSharedDictTuple.h. Add (or increment) reference to a string. Definition at line 223 of file GA_AIFSharedDictTuple.h. Compact the string storage. Decrement reference to a handle. Definition at line 225 of file GA_AIFSharedDictTuple.h. Extract data from the string table. This will extract all the unique strings which are referenced by the attribute. The string handles are guaranteed to be in ascending order, but may or may not be contiguous. Extract all of the unique string handles of the attribute. The string handles are guaranteed to be in ascending order, but may or may not be contiguous. Get a single string from the array for a single tuple of an element. Get the handle from the array for a single tuple of an element. Get the full tuple of indices for a single element. Query information about the string storage. Get a string version to save writing marshalling code This lets you use this AIF in string-aware code and get sensible outputs. JSON normally has returns and tabs for formatting, onlyspaces will collapse these for a more compact output. Get a string from the string table (without going through an attribute) Return the number of entries in the shared string table. Definition at line 92 of file GA_AIFSharedDictTuple.h. Get the handle (index) corresponding to the given string, returning -1 if none. GA_DictIndexType indices may not be contiguous, this method allows you to get a string given an ordered index. Strings will be defined for all contiguous strings. This may be an expensive operation, it's better to access strings by their index if possible. This method will return a NULL pointer past the end of the string table Query the tuple size. Replace a string in the shared string table with a new value. Warning, it's possible that the table will "collapse" after the replacement. For example: In the above example, all elements which originally referenced "bar" will now reference "foo". This means that trying to swap two values will not work as expected. Set a single component for a single element. Set a single component for a range of elements. Set a single component for a single element. Set a single component for a range of elements. Set multiple components for a single element. Set multiple components for a range of elements. Set the tuple size. It's possible that there are invalid handle indexes mixed with the valid handles. When iterating over all the handles, you can call validateTableHandle() which will ensure that there's a string associated with the given handle.
https://www.sidefx.com/docs/hdk/class_g_a___a_i_f_shared_dict_tuple.html
CC-MAIN-2022-21
refinedweb
474
68.47
> I'm not 100% sure, but I see some possible dangerous race conditions.Thanks for the review.> > 1. Userspace thread A has a page fault at CS:EIP == 0x000f:0xbffff000.> > Simultaneously, userspace thread B calls modify_ldt() to change> the LDT descriptor base to 0x40000000.> > The LDT descriptor is changed after A enters the page fault> handler, but before it takes mmap_sem. This can happen on SMP or> on a pre-empt kernel.> > __is_prefetch() calls __get_user(0xfffff000), or whatever> address userspace wants to trick it into reading.> > Result: unpriveleged userspace causes an MMIO read and crashes> the system, or worse.Ok, I added access_ok() checks when the fault came from ring 3 code.That should catch everything.I guess they should be added to the AMD64 version too. It ignoresall bases, but I'm not sure if the CPU catches the case where the linearaddress computation wraps.> > 2. segment_base sets "desc = (u32 *)&cpu_gdt_table[smp_processor_id()]".> > Pre-empt switches CPU.True. Fixed.> > desc now points to the _old_ CPU, where the GDT for this task is> no longer valid, and that is read in the next few lines.> > I think for completeness you should check the segment type and limit> too (because they can be changed, see 1.). These are easier to check> than they look (w.r.t. 16 bit segments etc.): you don't have to decode> the descriptor; just use the "lar" and "lsl" instructions.I don't want to do that, the code is already too complicated and I don'tplan to reimplement an x86 here (just dealing with this segmentationhorror is bad enough) If it gets any more complicated I would be inclined to justhandle the in kernel prefetches using __ex_table entries and give upon user space.The x86-64 version just ignores all bases, that should be finetoo. Anybody who uses non zero code segments likely doesn't care aboutperformance and won't use prefetch ;-)-AndiNew patch with the fixes attached:Linus, can you consider merging it?------------------Here is a new version of the Athlon/Opteron prefetch issue workaroundfor 2.6.0test6. The issue was hit regularly by the 2.6 kernelwhile doing prefetches on NULL terminated hlists.These CPUs sometimes generate illegal exception for prefetchinstructions. The operating system can work around this by checkingif the faulting instruction is a prefetch and ignoring it whenit's the case.The code is structured carefully to ensure that the page faultwill never recurse more than once. Also unmapped EIPs are specialcased to give more friendly oopses when the kernel jumps to unmapped addresses.It removes the previous dumb in kernel workaround for this and shrinks the kernel by >10k.Small behaviour change is that a SIGBUS fault for a *_user access willcause an EFAULT now, no SIGBUS.diff -u linux-2.6.0test6-work/include/asm-i386/processor.h-PRE linux-2.6.0test6-work/include/asm-i386/processor.h--- linux-2.6.0test6-work/include/asm-i386/processor.h-PRE 2003-09-11 04:12:39.000000000 +0200+++ linux-2.6.0test6-work/include/asm-i386/processor.h 2003-09-28 10:52:55.000000000 +0200@@ -578,8 +589,6 @@ #define ARCH_HAS_PREFETCH extern inline void prefetch(const void *x) {- if (cpu_data[0].x86_vendor == X86_VENDOR_AMD)- return; /* Some athlons fault if the address is bad */ alternative_input(ASM_NOP4, "prefetchnta (%1)", X86_FEATURE_XMM,diff -u linux-2.6.0test6-work/arch/i386/mm/fault.c-PRE linux-2.6.0test6-work/arch/i386/mm/fault.c--- linux-2.6.0test6-work/arch/i386/mm/fault.c-PRE 2003-05-27 03:00:20.000000000 +0200+++ linux-2.6.0test6-work/arch/i386/mm/fault.c 2003-09-29 19:42:44.000000000 +0200@@ -55,6 +55,108 @@ console_loglevel = loglevel_save; } +/* + * Find an segment base in the LDT/GDT.+ * Don't need to do any boundary checking because the CPU did that already + * when the instruction was executed + */+static unsigned long segment_base(unsigned seg) +{ + u32 *desc;+ /* + * No need to use get/put_cpu here because when we switch CPUs the+ * segment base is always switched too.+ */+ if (seg & (1<<2))+ desc = current->mm->context.ldt;+ else+ desc = (u32 *)&cpu_gdt_table[smp_processor_id()];+ desc = (void *)desc + (seg & ~7); + return (desc[0] >> 16) | + ((desc[1] & 0xFF) << 16) | + (desc[1] & 0xFF000000);+} ++/* + * Sometimes AMD Athlon/Opteron CPUs report invalid exceptions on prefetch.+ * Check that here and ignore it.+ */+static int __is_prefetch(struct pt_regs *regs, unsigned long addr)+{ + unsigned char *instr = (unsigned char *)(regs->eip);+ int scan_more = 1;+ int prefetch = 0; + int i;++ /* + * Avoid recursive faults. This catches the kernel jumping to nirvana.+ * More complicated races with unmapped EIP are handled elsewhere for + * user space.+ */+ if (regs->eip == addr)+ return 0; ++ if (unlikely(regs->eflags & VM_MASK))+ addr += regs->xcs << 4; + else if (unlikely(regs->xcs != __USER_CS &®s->xcs != __KERNEL_CS))+ addr += segment_base(regs->xcs);++ for (i = 0; scan_more && i < 15; i++) { + unsigned char opcode;+ unsigned char instr_hi;+ unsigned char instr_lo;++ if ((regs->xcs & 3) && !access_ok(VERIFY_READ, instr, 1))+ break;+ if (__get_user(opcode, instr))+ break; ++ instr_hi = opcode & 0xf0; + instr_lo = opcode & 0x0f; + instr++;++ switch (instr_hi) { + case 0x20:+ case 0x30:+ /* Values 0x26,0x2E,0x36,0x3E are valid x86 prefixes. */+ scan_more = ((instr_lo & 7) == 0x6);+ break;+ + case 0x60:+ /* 0x64 thru 0x67 are valid prefixes in all modes. */+ scan_more = (instr_lo & 0xC) == 0x4;+ break; + case 0xF0:+ /* 0xF0, 0xF2, and 0xF3 are valid prefixes in all modes. */+ scan_more = !instr_lo || (instr_lo>>1) == 1;+ break; + case 0x00:+ /* Prefetch instruction is 0x0F0D or 0x0F18 */+ scan_more = 0;+ if ((regs->xcs & 3) && !access_ok(VERIFY_READ, instr, 1))+ break;+ if (__get_user(opcode, instr)) + break;+ prefetch = (instr_lo == 0xF) &&+ (opcode == 0x0D || opcode == 0x18);+ break; + default:+ scan_more = 0;+ break;+ } + }++ return prefetch;+}++static inline int is_prefetch(struct pt_regs *regs, unsigned long addr)+{ + if (likely(boot_cpu_data.x86_vendor != X86_VENDOR_AMD ||+ boot_cpu_data.x86 < 6))+ return 0;+ return __is_prefetch(regs, addr); +} + asmlinkage void do_invalid_op(struct pt_regs *, unsigned long); /*@@ -110,7 +212,7 @@ * atomic region then we must not take the fault.. */ if (in_atomic() || !mm)- goto no_context;+ goto bad_area_nosemaphore; down_read(&mm->mmap_sem); @@ -198,8 +300,16 @@ bad_area: up_read(&mm->mmap_sem); +bad_area_nosemaphore: /* User mode accesses just cause a SIGSEGV */ if (error_code & 4) {+ /* + * Valid to do another page fault here because this one came + * from user space.+ */+ if (is_prefetch(regs, address))+ return;+ tsk->thread.cr2 = address; tsk->thread.error_code = error_code; tsk->thread.trap_no = 14;@@ -232,6 +342,14 @@ if (fixup_exception(regs)) return; + /* + * Valid to do another page fault here, because if this fault+ * had been triggered by is_prefetch fixup_exception would have + * handled it.+ */+ if (is_prefetch(regs, address))+ return;+ /* * Oops. The kernel tried to access some bad page. We'll have to * terminate things with extreme prejudice.@@ -286,10 +404,14 @@ do_sigbus: up_read(&mm->mmap_sem); - /*- * Send a sigbus, regardless of whether we were in kernel- * or user mode.- */+ /* Kernel mode? Handle exceptions or die */+ if (!(error_code & 4))+ goto no_context;++ /* User space => ok to do another page fault */+ if (is_prefetch(regs, address))+ return;+ tsk->thread.cr2 = address; tsk->thread.error_code = error_code; tsk->thread.trap_no = 14;@@ -298,10 +420,6 @@ info.si_code = BUS_ADRERR; info.si_addr = (void *)address; force_sig_info(SIGBUS, &info, tsk);-- /* Kernel mode? Handle exceptions or die */- if (!(error_code & 4))- goto no_context; return; vmalloc_fault:-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2003/9/29/192
CC-MAIN-2015-14
refinedweb
1,188
58.38
> (a) the (much reduced) performance limitation: I'm not sure how your hypothetical distributed repository is going to determine that transactions are non-overlapping more cheaply than it can settle revision numbers. But you've admitted this is a small issue. They can decide in advance by tentatively partioning regions of the repository among themselves, coordinating synchronously only as a fallback for txns that span the tentative boundaries. The performance issue is small for source code managment. It isn't a small issue for other quite plausible and valuable applications of FSDB-style technology. > So, I think that both the intra-repository and global > revision names for merging purposes should not be based on > revnum, but on an independent, higher-level namespace. Well, here's how I think we'd implement this if we were going to: Already, I think you're off on the wrong foot. The namespace is useful to tools adjacent to revision control, not just revision control itself. It is something that can have and plausibly deserves a "stand alone" design -- independent of revision control technology. The first question isn't "how do we implement it?", but "what is the form and function of this namespace? -- what is it exactly?" You can't really figure out how to implement it until you understand in a deeper way what it is. I don't really like this idea because: * Ignoring the merge history aspects, it feels like window dressing. "Feels", huh? Hmmm. * I don't really buy that smart merging between different pieces of revision control software is a realistic or desirable goal. arch is an existence proof that it's realistic. Read the recent project-administrative messages on gcc list (and think about them) to begin to get a sense of why it's desirable. Linux kernel development also provides some relevant development patterns. And even if it does come about, using numbers doesn't mean we can't interoperate; it just means that our revision names are less informative. That statement makes presumptions about the namespace and how it is best used that are, if not false, at least completely unsupported. * You can no longer compress merge history using revision ranges (or if you do, you lose the benefit of making the merge history readable). No, you are mistaken. arch can and does compress merge history while maintaining a readable record. You can ask, of a combined merge, "what individuals changes are combined here?". Smart-merging, not just human readers, make use of that information. I'm already concerned about the bulk of merge history information given that we may get stuck storing it for each file. Well then, that's something to figure out for sure then, isn't it? At any rate, it's most likely pointless to try to design a merge history system right now, given that no one is planning to implement it in the immediate future (as far as I know). So this conversation probably shouldn't go on too much longer. In other words: "It isn't worth considering whether or not this is worth planning for because nobody is currently planning for it." Interesting. -t --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org For additional commands, e-mail: dev-help@subversion.tigris.org Received on Tue Dec 17 00:55:25 2002 This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2002-12/1135.shtml
CC-MAIN-2018-17
refinedweb
568
55.13
Essential Studio for AngularJS Release Notes v16.4.0.42 December 17, 2018 Starting with version 16.2 (2018 Vol 2), you need to include a valid license key (either paid or trial key) within your applications. Please refer to this help topic for more information PivotClient Features - #212894 – Provided support to render measure elements inside cube dimension browser either in default order or sorted order. - #215353 – Provided support to update pivot button details in slicer axis through filtered report bound in code behind. ejReportDesigner Preview Features - #205987,#215254,#218981 – Provided tablix report item support to display data in table or matrix format. It includes the following unique features: - Row and column grouping. - Total support for row and column groups. - Filter and sort expressions support for groups. - Cell customization support. - Cell merging support. - Support to insert report items such as text boxes, images, lines, and subreports in tablix cells. - Visual cues to indicate row and column groups. Bug Fixes - #216844 - Fixed the content overflow issue in summary row property of grid report items. - #218559 - Resolved the schema version issue to save the report with the respective schema version instead of 2010 XML namespace by default. - #216246 - Fixed the report preview failure when the text from the web designer contains hexadecimal or Unicode values.
https://help.syncfusion.com/angularjs/release-notes/v16.4.0.42
CC-MAIN-2021-43
refinedweb
213
58.38
It's relatively simple to use an Arduino to measure voltages. The Arduino has several analog input pins that connect to an analog-to-digital converter (ADC) inside the Arduino.. To display the measured voltage, we will use a liquid crystal display (LCD) that has two lines of 16 characters. LCDs are widely used to display data by devices such as calculators, microwave ovens, and many other electrical appliances. This project will also show you how to measure voltages above the reference voltage by using a voltage divider. Experiment 1 In this experiment, we will make digital voltmeter capable of measuring up to 5V using an Arduino board and a 16x2 LCD. Hardware Required - 1 x Arduino Mega2560 - 1x LCD (Liquid Crystal Display) - 1x 5 kohm potentiometer - 1x breadboard - female connectors - jumper wires Wiring Diagram The 16x2 LCD used in this experiment has a total of 16 pins. As shown in the table below, eight of the pins are data lines (pins 7-14), two are for power and ground (pins 1 and 16), three are used to control the operation of LCD (pins 4-6), and one is used to adjust the LCD screen brightness (pin 3). The remaining two pins (15 and 16) power the backlight. Refer to the diagram below to see how to connect the LCD to the Arduino. Note that the potentiometer is connected to the 5V source and GND and the middle terminal is connected to pin 3 of LCD. Rotating this pot changes the brightness of the LCD. The four data pins DB4-DB7 are connected to the Arduino pins 4-7. Enable is connected to pin 9 of the Arduino and RS is connected to pin 8 of the Arduino. RW is connected to ground. The backlight LED is connected to 5V and ground. The following table shows the pin connections: DB4 ----->pin4 DB5 ----->pin5 DB6 ----->pin6 DB7 ----->pin7 RS ----->pin8 EN ----->pin9 Code The program below uses the LiquidCrystal library. This library contains all of the functions needed to write to the LCD. The loop reads the analog value from the the analog input, and because the reference voltage is 5 V, it multiples that value by 5, then divides by 1024 to calculate the actual voltage value. Once the voltage has been calculated, the value is written to the LCD. The photo below shows a typical display. #include "LiquidCrystal.h" LiquidCrystal lcd(8, 9, 4, 5, 6, 7); float input_voltage = 0.0; float temp=0.0; void setup() { Serial.begin(9600); // opens serial port, sets data rate to 9600 bps lcd.begin(16, 2); //// set up the LCD's number of columns and rows: lcd.print("DIGITAL VOLTMETER"); } void loop() { //Conversion formula for voltage int analog_value = analogRead(A0); input_voltage = (analog_value * 5.0) / 1024.0; if (input_voltage < 0.1) { input_voltage=0.0; } Serial.print("v= "); Serial.println(input_voltage); lcd.setCursor(0, 1); lcd.print("Voltage= "); lcd.print(input_voltage); delay(300); } Experiment 2 In order to measure voltages greater than the 5 V reference voltage, you need to divide the input voltage so that the voltage actually input to the Arduino is 5 V or less. in this experiment, we will use a 90.9 kohm resistor and a 10 kohm resistor to create a 10:1 divider. This will allow us to measure voltages up to 50 V. Hardware Required - 1x Arduino Mega2560 - 1x 90.9 kohm resistor - 1x 10 kohm resistor - 1x LCD (Liquid Crystal Display) - 1x 5k potentiometer - 1x breadboard - female connector - jumper wires Wiring Diagram The circuit for this experiment is exactly the same as Experiment #1, except that we now have a voltage divider, made up of a 90.9 kohm resistor and a 10 kohm resistor connected to the input. See the diagram below. Program The program for this experiment is nearly the same as for Experiment #1. The only difference is that now we have to divide the calculated voltage by the ratio R2/(R1 + R2), which in this case is 10,000/(90,900 + 10,000) ≈ 0.1. Code #include "LiquidCrystal.h" LiquidCrystal lcd(8, 9, 4, 5, 6, 7); float input_voltage = 0.0; float temp=0.0; float r1=90900.0; float r2=10000.0; void setup() { Serial.begin(9600); // opens serial port, sets data rate to 9600 bps lcd.begin(16, 2); //// set up the LCD's number of columns and rows: lcd.print("DIGITAL VOLTMETER"); } void loop() { //Conversion formula int analog_value = analogRead(A0); temp = (analog_value * 5.0) / 1024.0; input_voltage = temp / (r2/(r1+r2)); if (input_voltage < 0.1) { input_voltage=0.0; } Serial.print("v= "); Serial.println(input_voltage); lcd.setCursor(0, 1); lcd.print("Voltage= "); lcd.print(input_voltage); delay(300); } 6 CommentsLogin I am currently doing this project and am currently confused. What the video shows doesn’t match up to the given diagram. The LCD is connected to the breadboard, how was this done? “in this experiment, we will use a 100 kohm and a 10 kohm resistor to create a 10:1 divider.” Sorry: Your voltage divider formula isn’t correct. The correct formula is R1+R2/R2 You should use a 90,9KΩ and a 10,0KΩ resistor pair. I just burn 3 POTS. Is it right the second diagram?. The ATMEGA internal ADC has 3 valuable possibilies for this project: first it has in internal reference of 2.56 V and this makes your meter not depending on the 5 V supply. second it allows for differntial ADC conversion which allows for measuring also negativ voltages third it has a build in amplifier x10 which could be used for low voltages. For voltages higher than the reference voltage you can use any voltage divider, with R2 being 10 kohm (because this is best for the internal ADC). The reading is the multiplication of ADC result by a cal constant, which have to fixed of a one time calibration with a second voltage meter. I would recomend a capacitor on the input of ADC to filter AC voltages to achieve more stable readings. Shouldn’t you divide by 1023 instead of 1024? That would also explain why you only get the reading 4.99 V and not 5.00 V. And if you use I2C instead of 4 Data-Lines for the Display-Data, you will save 2 Output-Pins (i am using a 20x4 display with I2C with full satisfaction). There seems to be some disagreement about the proper equation for converting an ADC code to voltage, but the one used in the article is the more correct version. You use 2^N instead of (2^N - 1) because you want to multiply the ADC code by the LSbit step size, and you obtain the LSbit step size by dividing the reference voltage by the number of possible ADC codes. With a 10-bit ADC, there are 2^10 = 1024 possible codes (i.e., from 0 to 1023). You can read more here:
https://www.allaboutcircuits.com/projects/make-a-digital-voltmeter-using-the-arduino/
CC-MAIN-2017-09
refinedweb
1,155
65.22
DebuggerDisplay attribute class in C# lets the developer control on how an object or property needs to be displayed in the debugger variable window. Assume that you have a class that does not have the DebuggerDisplay attribute to it. For example , Employee class. public class Employee { public string Name { get; set; } public int Age { get; set; } } When you create an instance of Employee class and mouse hover on the instance name during debugging , you should see the instance name contains the type name. In this case , the debugger has used the default type name. Alternatively , you can override the ToString() method so that the Debugger can use it. The other option is to use the DebuggerDisplay property and when this is used , it takes the priority. DebuggerDisplay property is useful to view the customized output of the class and display more meaningful text during the debugging. DebuggerDisplay takes single argument which is a string that needs to be evaluated and this can contain {} or parenthesis which is evaluated as field , property or method. [DebuggerDisplay("The employee {Name} is just {Age} years old")] public class Employee { public string Name { get; set; } public int Age { get; set; } } When you mouse hover on the object when debugging , you will notice a meaningful text as displayed in the screenshot. Note that the DebuggerDisplay attribute can be applied to types, delegates, properties, fields, and assemblies.
http://developerpublish.com/using-debuggerdisplay-attribute-in-c/
CC-MAIN-2018-09
refinedweb
230
58.82
- while choice.upper() != "A": choice = raw_input("\nEnter a letter that corresponds to what you would like to order: ") if choice.upper() == "H": print "Hamburger\t$1.29" subtotal = subtotal + 1.29 elif choice.upper() == "O": print "Onion Rings\t$1.09" subtotal = subtotal + 1.09 elif choice.upper() == "C": print "Cheeseburger\t$1.49" subtotal = subtotal + 1.49 elif choice.upper() == "S": print "Small Drink\t$0.79" subtotal = subtotal + .79 elif choice.upper() == "F": print "Fries\t$0.99" subtotal = subtotal + .99 elif choice.upper() == "L": print "Large Drink\t$1.19" subtotal = subtotal + 1.19 elif choice.upper() == "A": subtotal = subtotal else: print "Please enter a correct choice" return choice, subtotal #calc function def calc(choice, subtotal): """calculate""" tax = subtotal * .05 total = subtotal + tax print "Subtotal: ", subtotal print "Tax: ", tax print "Total: $", total amount = float(raw_input("\nEnter the amount collected: ")) change = amount - total print "Change: $", change choice, subtotal = input() calc(choice, subtotal)
http://www.python-forum.org/viewtopic.php?p=4086
CC-MAIN-2015-48
refinedweb
154
65.69
Art is a way of seeing, and they say seeing is believing, but the opposite is also true, believing is seeing and it’s really hard to imagine living in this world without the gift of vision – our eyes, how wonderful it is being an infant when our eyes just open we see the world and started to recognize and see the world around us but as the time passes by, the same wonderful experience becomes a mundane one. But as we are progressing with the technology we are at a verge where the machines are also able to see and understand it. Currently, it doesn’t seem to be a sci-fi if you just unlock your phone with your face, but the story of the development of the machine vision is dated long back to more than 20 years. The initial formal step in this field was taken back in 1999 in an Intel initiative, when all the research going on was collaborated under the OPEN CV (Open Source computer vision), originally written in C++, with its first major release 1.0 in 2006 second in 2009, third in 2015 and fourth just now in 2018. Now OpenCV has C++, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. So it can be easily installed in Raspberry Pi with Python and Linux environment. And Raspberry Pi with OpenCV and attached camera can be used to create many real time image processing applications like Face detection, face lock, object tracking, car number plate detection, Home security system etc. Before going into to learn image processing using openCV it’s important to know what images are and how humans and machines perceive those images. What are Images? Images are a two-dimensional representation of the visible light spectrum. And the visible light spectrum is just a part of the electromagnetic spectrum lying there between infrared and ultraviolet spectrum. How are images formed: - when a light reflects off an object onto a film, a sensor or on retina. This is how our eyes work, using a barrier to block the most point of lights leaving a small opening through which light can pass it is called as aperture, and it forms a much focused image and is a working model for a pin hole camera, but there is a problem in a pin hole camera, that same amount of light will be entering the aperture, which could not be suitable for the film or image formed also we can’t get a focused image, so as to focus the image we need to move film back and forth, but this is problematic in many situations. Or we can fix this problem by using lenses, it allows us to control the aperture size, and in photography its known as fStop, generally lower the value of fStop is better in photography. Aperture size also let us get into nice depth of field called as Bokeh in photography, it allows us to have a blurred background while we focus on image. How computer stores images You may have heard of various image formats like .PNG, .JPEG and etc. all of this are digital representation of our analog world, computers do it by translating the image into digital code for storage and then interpret the file back into an image for display. But at the basics they use a common platform for storing the images, and same is true for the openCV. OpenCV uses RGB (red, green and blue) color space by default for its images, where each pixel coordinate (x, y) contains 3 values ranging for intensities in 8-bit form i.e. (0-255, 28). Mixing different intensities of each color gives us the full spectrum, that’s why in painting or art these three colors are regarded as primary colors and all others as secondary, because most of the secondary colors can be formed by primary colors. Like for yellow, we have the following values: Red – 255; Green – 255; Blue – 0. Now the images are stored in multi-dimensional arrays. In programming, array is a series of collection of objects. And here we deal with three type of arrays 1D, 2D and 3D where ‘D’ stands for dimensional. Colored images are stored in three dimensional arrays, where the third dimensions represents the RGB colors (which we will see later), and in together they form different intensities of pixels for an image, while the black & white images are stored in two dimensional arrays and also there are two types of black & white images Greyscale and binary images. Greyscale images are form from the shades of grey of a two dimensional array [(0,255), (0,255)], while the binary images are of pixels either of black or white. Why it is difficult for a machine to identify images Computer vision is a challenging task in itself, you can yourself imagine how hard it is to give a machine a sense of vision, recognition and identification. The following factors are there that makes computer vision so hard. - Camera sensor and lens limitations - View point variations - Changing lighting - Scaling - Occlusions - Object class variations - Ambiguous Images/Optical Illusions Application and uses of OpenCV Despite the difficulty, Computer Vision has many success stories - Robotic Navigation – Self Driving Cars - Face Detection & Recognition - Search Engine Image Search - License Plate Reading - Handwriting Recognition - Snapchat & Face Filters - Object Recognition - Ball & Player Tracking in Sports - And many more! Installing OpenCV with Python and Anaconda operations quite easily. Basic programming is useful with Exposure to High School Level Math, a webcam, Python 2.7 or 3.6 (Anaconda Package is preferred). Step 1. Download & Install Anaconda Python Package Go to: and choose according to your machine weather its windows, Linux or mac and you can choose for python 2.7 or python 3.7 version for either 64 Bit systems or 32 Bit systems, but now a days most of the system are 64 bit. Anaconda distribution of python comes along with Spyder studio, jupyter notebooks and anaconda prompt, which makes python super friendly to use. We would be using spyder studio for doing the examples. The choice between python 2.7 or 3.7 is completely neutral, but however for the examples we would be using python 3.7 since it’s the future of the python and will take over python 2.7 form 2020, also most of the libraries are being developed in python 3.7 keeping the future aspect of python in mind. Also it also gives the expected results on basic mathematical operations such as (2/5=2.5), while the python 2.7 would evaluate it to 2. Also print is treated as a function in python 3.7 (print(“hello”)), so it gives hands-on to the programmers. Step 2. Creating a virtual platform with OpenCV We are going to install OpenCV by creating a virtual platform for spyder by using Anaconda prompt and the YML file uploaded here. With the YML files we will install all the packages and libraries that would be needed, but however if you want to install any additional packages you can easily install through anaconda prompt, by running the command of that package. Go to your windows search icon and find anaconda prompt terminal, you can find it inside your anaconda folder that you have just installed. Then you have to find your downloaded YML file, and from here you have two choices either changing the directory of your terminal to the location where the your YML file is downloaded or either copy your YML file to the directory where your anaconda is installed in most cases it would be inside C:\ drive, after copying your YML file to the specified location RUN the following command on your prompt conda env create –f virtual_platform_windows.yml Since my system is running on windows the YML file and the command corresponds to the windows, however you can modify according to your system by replacing windows by linux or mac as respective. Note: - If the package extraction gives error do install pytorch and numpy first and then run the above command. Now find the anaconda navigator and there would be a drop down menu of “Applications on ___” and from there select virtual environment and then from there you have to launch Spyder studio. And that’s it, you’re ready to get started! Opening and Saving images in OpenCV Here we are explaining some basic commands and terminology to use Python in OpenCV. We will learn about three basic functions in OpenCV imread, imshow and imwrite. Import opencv in python by command import cv2 Load an image using ‘imread’ specifying the path to the image image =cv2.imread('input.jpg') Now that image is loaded and stored in python as a variable we named as image Now to display our image variable, we use ‘imshow’ and the first parameter for imshow function is the title shown on the image window, and it has to be entered in (‘ ’) to represent the name as a string cv2.imshow('hello world',image) waitkey allows us to input information when image window is open, by leaving it blank it just waits for anykey to be pressed before continuing, by placing numbers (except 0), we can specify a delay for how long you keep the window open (time in milliseconds here). cv2.waitKey() ‘destroyAllWindows’ closes all the open windows, failure to place this will cause your programme to hang. cv2.destroyAllWindows() Now let’s take a look how images are stored in open cv, for this we will use numpy, numpy is a library for python programming for adding support to large multidimensional arrays and matrices. import cv2 #importing numpy import numpy as np image=cv2.imread('input.jpg') cv2.imshow('hello_world', image) #shape function is very much useful when we are looking at a dimensions of an array, it returns a tuple which gives a dimension of an image print(image.shape) cv2.waitKey() cv2.destroyAllWindows() console output - (183, 275, 3), The two dimensions of the image are 183 pixels in height and 275 pixels in width and 3 means that there are three other components (R, G, B) that makes this image (it shows that the colored images are stored in three dimensional arrays). Now let’s print each dimension of image by adding the following lines of code print('Height of image:',(image.shape[0],'pixels')) print('Width of image:',(image.shape[1],'pixels')) console output - Height of image: (183, 'pixels') Width of image: (275, 'pixels') Saving the edited image in OpenCV We use ‘imwrite’ for specifying the filename and the image to be saved. cv2.imwrite('output.jpg',image) cv2.imwrite('output.png',image) First argument is name of the file we want to save, {to read or to save the file we use (‘ ’) to indicate it as a string} and second argument is the file name. OpenCV allows you to save the image in different formats. Grey Scaling Image in OpenCV Greyscaling is the process by which an image is converted from a full color to shades of grey (black and white) In opencv, many functions greyscales the images before processing. This is done because it simplifies the image, acting almost as a noise reduction and increasing the processing time as there is less information in image (as greyscale images are stored in two dimensional arrays). import cv2 # load our input image image=cv2.imread('input.jpg') cv2.imshow('original', image) cv2.waitKey() #we use cvtcolor, to convert to greyscale gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) cv2.imshow('grayscale', gray_image) cv2.waitKey() cv2.destroyALLWindows() Simpler way to convert image into grayscale is just add the argument 0 in imread function aside to the image name import cv2 grey_image=cv2.imread('input.jpg',0) cv2.imshow('grayscale',grey_image) cv2.waitKey() cv2.destroyAllWindows() Now let’s see the dimension of each image by the shape function import cv2 import numpy as np image=cv2.imread('input.jpg') print(image.shape) cv2.imshow('original', image) cv2.waitKey() gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) cv2.imshow('grayscale', gray_image) print(gray_image.shape) cv2.waitKey() cv2.destroyALLWindows() Console output: - (183, 275, 3) – for colored image (183, 275) – for grayscale image Hence it clearly shows that the colored images are represented by three dimensional arrays, while the gray scale images by two dimensional arrays. Color Spaces Color spaces are the way the images are stored. RGB, HSV, CMYK are the different color spaces, these are just simple ways to represent color. RGB – Red, Green and Blue. HSV – Hue, Saturation and Value. And CMYK is commonly used in inkjet printers. RGB or BGR color space OpenCV’s default color space is RGB. RGB is an additive color model that generates colors by combining blue, green and red colors of different intensities/ brightness. In OpenCV we use 8 bit color depths. - Red (0-255) - Blue (0-255) - Green (0-255) However OpenCV actually stores color in BGR format. Fun Fact: - We use BGR order in computers due to how unsigned 32-bit integers are stored in memory, it still ends up being stored as RGB. The integer representing a color eg:- 0X00BBGGRR will be stored as 0XRRGGBB. HSV (Hue, Saturation & value/ Brightness) is a color space that attempts to represent colors the humans perceive it. It stores color information in a cylindrical representation of RGB color points. Hue – color value (0-179) Saturation – Vibrancy of color (0-255) Value – Brightness or intensity (0-255) HSV color space format is useful in color segmentation. In RGB, filtering specific color isn’t easy, however HSV makes it much easier to set color ranges to filter specific color as we perceive them. Hue represents the color in HSV, the hue value ranges from 0 – 180 and not 360 so it is not completing the full circle and so it is mapped differently than the standard. Color range filters - Red – (165-15) - Green – (45-75) - Blue – (90-120) As we know the images being stored in RGB (Red, Green and Blue) color space and so OpenCV shows us the same, but the first thing to remember about opencv’s RGB format is that it’s actually BGR and we can know it by looking at the image shape. import cv2 import numpy as np image = cv2.imread('input.jpg') #B,G,R value for the first 0,0 pixel B,G,R=image[0,0] print(B,G,R) print(image.shape) #now if we apply this on grayscale image gray_img=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) print(gray_img.shape) #gray_image pixel value for 10,50 pixel print(gray_img[10,50]) Console Output: print(B,G,R) - 6 11 10 print(image.shape) - (183, 275, 3) print(gray_img.shape) - (183, 275) print(gray_img[10,50]) - 69 Now there are only two dimensions in a gray scale image, since we remember the color image is stored in three dimensions, the third dimension being the (R,G,B) while in grayscale only two dimensions are present, since (R,G,B) is absent and for a particular pixel position we only get a single value while in colored image we got three values. Another useful color space is HSV import cv2 image=cv2.imread('input.jpg') hsv_image=cv2.cvtColor(image,cv2.COLOR_BGR2HSV) cv2.imshow('HSV image',hsv_image) cv2.imshow('Hue channel',hsv_image[:,:,0]) cv2.imshow('saturation channel',hsv_image[:,:,1]) cv2.imshow('value channel',hsv_image[:,:,2]) cv2.waitKey() cv2.destroyAllWindows() After running the code you can see the four images of which three are of the individual channels and one is combined HSV image. Hue channel image is quite dark because its value only varies from 0 to 180. Also, note that imshow function tries to show you the RGB or BGR image, but HSV conversion overlaps it. Also, the value channel will be similar to the grayscale of image due to its brightness. Exploring individual components of RGB image import cv2 image=cv2.imread('input.jpg') #opencv's split function splits the imageinti each color index B,G,R=cv2.split(image) cv2.imshow("Red",R) cv2.imshow("Green",G) cv2.imshow("Blue",B) #making the original image by merging the individual color components merged=cv2.merge([B,G,R]) cv2.imshow("merged",merged) #amplifying the blue color merged=cv2.merge([B+100,G,R]) cv2.imshow("merged with blue amplify",merged) #representing the shape of individual color components. # the output wuld be only two dimension whih wouldbe height and width, since third element of RGB component is individually represented print(B.shape) print(R.shape) print(G.shape) cv2.waitKey(0) cv2.destroyAllWindows() Console output: #dimensions of image from shape function (183, 275) (183, 275) (183, 275) Converting image into individual RGB component In below code we have created a matrix of zeros with the dimensions of image HxW, zero return an array filled with zeros but with same dimensions. Shape function is very much useful when we are looking at the dimension of an image, and here we have done slicing of that shape function. So shape[:2] would grab everything up to designated points i.e. upto second designated points which would be height and width of the image as third represents RGB component of image and we don't need it here. import cv2 import numpy as np image = cv2.imread('input.jpg') B,G,R = cv2.split(image) zeros=np.zeros(image.shape[:2],dtype="uint8")(0) cv2.destroyAllWindows() Histogram Representation of Image Histogram representation of image is the method of visualizing the components of images. The following code lets you analyze the image through the color histogram of its combined and individual color components. import cv2 import numpy as np #we need to import matplotlib to create histogram plots import matplotlib.pyplot as plt image=cv2.imread('input.jpg') histogram=cv2.calcHist([image],[0],None,[256],[0,256]) #we plot a histogram, ravel() flatens our image array plt.hist(image.ravel(),256,[0,256]) plt.show() #viewing seperate color channels color=('b','g','r') #we know seperate the color and plot each in histogram for i, col in enumerate (color): histogram2=cv2.calcHist([image],[i],None,[256],[0,256]) plt.plot(histogram2,color=col) plt.xlim([0,256]) plt.show() Let’s understand the calcHist function with each of its individual parameters cv2.calcHist(images, channels, mask, histsize, ranges) Images: its the source image of type uint 8 or float 32. It should be given in square brackets, i.e. “[img]”, which also indicate its second level array since an image for opencv is data in an array form. Channels: it is also given in square brackets. It is the index of channel for which we calulate histogram, for example if input is grayscale image its value is [0], for color images you can pass [0], [1] or [2] to calculate histogram of blue, green and red channel respectively. Mask: mask image. to find the histogram of full image, it is given as “none”. but if you want to find the histogram of particular region of image, you have to create a mask image for that and give it as a mask. Histsize: This represents our BIN count. Needed to be given in square brackets for full scale we pass [256]. Ranges: This is our range, normally is [0,256] Drawing Images and Shapes using OpenCV Below are few Examples for drawing lines, rectangle, polygon, circle etc in OpenCV. import cv2 import numpy as np #creating a black square image=np.zeros((512,512,3),np.uint8) #we can also create this in black and white, however there would not be any changes image_bw=np.zeros((512,512),np.uint8) cv2.imshow("black rectangle(color)",image) cv2.imshow("black rectangle(B&W)",image_bw) Line #create a line over black square #cv2.line(image, starting coordinates, ending coordinates, color, thickness) #drawing a diagonal line of thickness 5 pixels image=np.zeros((512,512,3),np.uint8) cv2.line(image,(0,0),(511,511),(255,127,0),5) cv2.imshow("blue line",image) Rectangle #create a rectangle over a black square #cv2.rectangle(image,starting coordinates, ending coordinates, color, thickness) #drawing a rectangle of thickness 5 pixels image=np.zeros((512,512,3),np.uint8) cv2.rectangle(image,(30,50),(100,150),(255,127,0),5) cv2.imshow("rectangle",image) #creating a circle over a black square #cv2.circle(image,center,radius,color,fill) image=np.zeros((512,512,3),np.uint8) cv2.circle(image,(100,100),(50),(255,127,0),-1) cv2.imshow("circle",image) #creating a polygon image=np.zeros((512,512,3),np.uint8) #lets define four points pts=np.array([[10,50], [400,60], [30,89], [90,68]], np.int32) #lets now reshape our points in form required by polylines pts=pts.reshape((-1,1,2)) cv2.polylines(image, [pts], True, (0,255,255), 3) cv2.imshow("polygon",image) #putting text using opencv #cv2.putText(image,'text to display',bootom left starting point, font,font size, color, thickness) image=np.zeros((512,512,3),np.uint8) cv2.putText(image,"hello world", (75,290), cv2.FONT_HERSHEY_COMPLEX,2,(100,170,0),3) cv2.imshow("hello world",image) cv2.waitKey(0) cv2.destroyAllWindows() Computer Vision and OpenCV are very vast topics to cover but this guide would be good starting point to learn OpenCV and image processing.
https://circuitdigest.com/tutorial/getting-started-with-opencv-image-processing
CC-MAIN-2019-04
refinedweb
3,566
52.39
In software development, logging is an essential part that helps you to monitor the system. So what is logging? There are many definitions of logging but from a developer's perspective, here is one of the best descriptions that I found. It's from Colin Eberhardt’s article. This is exactly what we expected from logging; it can be a message on a console screen, log file, email, third-party service, etc. Logging is an indispensable thing to save time when investigating a bug that developers have missed. It’s also useful to detect common mistakes users make when using the application, as well as security purposes. Introduction to .NET core built-in logging In the .NET core application, logging is a built-in feature for some types of applications such as ASP.NET Core and .NET Core Work Services. For other types that don’t support logging as a built-in feature, you need to install Microsoft.Extensions.Logging package to use the .NET logging API. The Logging API doesn’t work independently. It works with one or more logging suppliers that store (or just display) log messages to a destination place. For example, the default ASP.NET application includes some built-in providers such as: - Console: Display log message to the console screen. - Debug: Log output by using the System.Diagnostics.Debug class. Calls the WriteLine method to write the Debug provider. Log location is dependent on the operator system (ex: /var/log/message on ubuntu). You can see it in the debug windows on many IDEs. - EventLog: Log output to the Windows Event Log (So it only works with Windows) One other thing is the log level. Log levels indicate the importance or severity of log messages. Log providers include extension methods to indicate log levels. Below is the list of log levels in .NET Core: How to use it with examples? If you take a look at the ASP.NET core default template, you will see that we can inject a logger dependency into our Controller class constructor. This is because the default ASP.NET configuration registers a logging provider factory with the built-in dependency injection system. As you can see, the parameter into the constructor is a generic type referencing the class itself. This is a requirement when using the .NET logging as it allows the log factory to create a logger, which instance specific to that class. This approach adds a little extra context into log messages and allows class and namespace specific log level configuration (talk about it later). Now, we have a logger in our class; let’s take it! Add the following code to the Get method: public IEnumerable<WeatherForecast> Get() { _logger.LogInformation("Hello from logger..."); var rng = new Random(); return Enumerable.Range(1, 5).Select(index => new WeatherForecast { Date = DateTime.Now.AddDays(index), TemperatureC = rng.Next(-20, 55), Summary = Summaries[rng.Next(Summaries.Length)] }) .ToArray(); } Start the application and send a GET request to the /weatherforecast endpoint. If you look at the IDE’s debug window or the console screen, in the meantime, you are running the application with .NET CLI; you should see our information message. It goes like this: That’s for an ASP.NET core application with the built-in logging feature. How about other .NET application types that don’t include built-in logging? Let create a console application by running the .NET command dotnet new console or wit UI if you are using VS. To write the log. First, you need to add Microsoft.Extensions.Logging package. Run the command: dotnet add package Microsoft.Extensions.Logging Then you need to add a logging provider; let use a console provider. Run the command: dotnet add package Microsoft.Extensions.Logging.Console Now, everything is ready. Update the primary method as below: static void Main(string[] args) { using (var serviceProvider = new ServiceCollection() .AddLogging(config => config .ClearProviders() .AddConsole() .SetMinimumLevel(LogLevel.Trace)) .BuildServiceProvider()) { // Get logger factory service var loggerFactory = serviceProvider.GetService<ILoggerFactory>(); // Create logger var logger = loggerFactory.CreateLogger<Program>(); // run app logic logger.LogInformation("Hello! This is log message from .NET console app..."); } Console.Write("Yup"); } Start the application, and you will see the log message like this. Top third-party logging providers for .NET core As you can see, .NET provides many built-in logging providers that log messages to console, Debug, EvenLog, etc. But we have no available provider that helps us write messages to a log file, which everybody does in staging and production environments. To do that, we will need to implement a custom logging provider, and this may take time as we need to take care of many things like reading/writing performance, storage space, configurations, etc. Fortunately, we have some third-party packages that did well - Nlog, SeriLog, and Log4Net. These packages allow to store messages to a log file and quickly set up. Just install them from the NuGet package store and then add some configurations. Log4Net is one of the leading logging frameworks for .NET. It started in 2001 as a port of the Java framework “log4j”. Log4Net is flexible in where the logging information is stored. It has been the first choice for the .NET logging framework. But unfortunately, it hasn’t seen a significant release since 2017. This explains why many fans of Log4Net are moving to another framework. SeriLog was released in 2013; the main difference between Serilog and the other frameworks is that it supports structured logging. So what is structured logging? The problem with log files is that they are unstructured text data. It's hard to filter, sort, and query useful information. It would be great for a developer to filter all logs by a certain such as user ID or transaction ID. Structured logging allows us to do it and additional analytics as well. NLog is a popular logging framework for .NET, and the first version was released in 2006. It supports writing to various destinations, including file, console, email, and database. Moreover, Nlog also supports structured logging. Advantageously, it is easy to configure both through the configuration file and programmatic configuration. And the benchmarks suggest that using it has some performance improvements. That is why I always choose it for my .NET core project if possible. Here are the steps to use Nlog in your ASP.NET core project with a configuration file: 1. Install NLog.Web.AspNETCore NuGet package. You can install it from Visual Studio or run this command: dotnet add package NLog.Web.AspNetCore 2. Add “nlog.config” file: In this file, we will define a target that will write logs to a file on our local system, the format of log messages, maximum log file size, archived old log file, etc. You can read more about the nlog.config file here. Here is the simple config for “nlog.config” file to write a log file to the bin/logs folder: <?xml version="1.0" ?> <nlog xmlns="" xmlns: <targets> <target name="file" xsi: </targets> <rules> <logger name="*" minlevel="Trace" writeTo="file" /> </rules> </nlog> 3. Update your program.cs. Replace the build-in provider with Nlog. public class Program { public static void Main(string[] args) { CreateHostBuilder(args).Build().Run(); } public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); }) .ConfigureLogging(logging => { logging.ClearProviders(); logging.SetMinimumLevel(LogLevel.Trace); }) .UseNLog(); // NLog: Setup NLog for Dependency injection; } That's all! Start the project, and you will get the log file under the “.\bin\Debug\netcoreapp3.1\logs'' folder. The message is the same as the example for the built-in provider above. How to log with the monitoring platform “Sentry”? With third-party logging providers, you can save error messages, stack trace to a log file for investigation, and fix the issue. But one disadvantage of this way is you have to connect to the server to get the log files, and the log file usually includes the log message in a whole day, so it will be tricky for us to get the necessary information. That is why we need error tracking systems like Sentry. Sentry.io is an open-source error tracking system that supports real-time error tracking with clear UI. Sentry also supports a wide range of server, browser, desktop, and native mobile languages and frameworks, including PHP, Node.js, Python, Ruby, C#, Java, Go, React, Angular, Vue, JavaScript, and more. Sentry developer accounts are free with commercial options for larger teams generating thousands of events across multiple projects. Real-time logging for ASP.NET project with Sentry 1. First, you need to create a new account; or log in to your existing Google, GitHub, or Azure DevOps account. 2. Next, you need to create a Sentry project to map with your project once you have logged in. Select the “Projects” menu on the left and then click on the “Create Project” button. 3. On the “Create a new Project” screen, select the language or framework you are using. Enter the project name. You can set the alert configuration if needed. And then click on “Create Project” to create your Sentry project. 4. Your project will be added to the “Projects” screen. 5. On your ASP.NET core project, Install Sentry.AspNetCore NuGet package. Run the command: dotnet add package Sentry.AspNetCore 6. Add this configuration to the “appsettings.json” file. "Sentry": { "Dsn": "", "IncludeRequestPayload": true, "SendDefaultPii": true, "MinimumBreadcrumbLevel": "Debug", "MinimumEventLevel": "Warning", "AttachStackTrace": true, "Debug": true, "DiagnosticsLevel": "Error" } With “Dsn” is under project settings/Client Keys (DSN). You can check for all Sentry configuration for ASP.NET core here. 7. Update your “program.cs”, add use Sentry config public class Program { public static void Main(string[] args) { CreateHostBuilder(args).Build().Run(); } public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseSentry(); webBuilder.UseStartup<Startup>(); }); } 8. That's all! Now you can start the project and try to write a log. You can see the log message on the issue screen. It will include very detailed information like IP, browser, OS, device, message, stack trace, etc. Our best configuration tips When we log more information, it will be easier to diagnose a fault. But logging code is like any other. If we log too many things, the performance will be affected. So do not log everything, just log the critical information that will help you diagnose a fault like unhandled exceptions. And one of the best ways to do it in ASP.NET is to use middleware. Below is a simple example: public class ExceptionMiddleware { private readonly RequestDelegate _next; private readonly IWebHostEnvironment _env; private readonly ILogger<ExceptionMiddleware> _logger; public ExceptionMiddleware(RequestDelegate next, IWebHostEnvironment env, ILogger<ExceptionMiddleware> logger) { _env = env; _logger = logger; _next = next; } public async Task InvokeAsync(HttpContext httpContext) { try { await _next(httpContext); } catch (Exception ex) { _logger.LogError(ex, ex.Message); await HandleExceptionAsync(httpContext, ex); } } private async Task HandleExceptionAsync(HttpContext context, Exception ex) { context.Response.ContentType = "application/json"; if (context.Response.StatusCode == (int)HttpStatusCode.Unauthorized) { return; } context.Response.StatusCode = (int)HttpStatusCode.InternalServerError; var baseEx = ex.GetBaseException(); // // Convert to model var message = !_env.IsProduction() || ex is DomainException ? baseEx.GetErrorMessages() : "InternalServerError"; var error = new ErrorViewModel(message, !_env.IsProduction() ? baseEx.StackTrace : null); ExceptionLogger.LogToFile(message + "\n" + baseEx.StackTrace); _logger.LogError(baseEx, baseEx.Message); // // Return as json context.Response.ContentType = "application/json"; await context.Response.WriteAsync(System.Text.Json.JsonSerializer.Serialize(error)); } } And to easily access logging information on the production environment, you can use a tracking system like Sentry. In case your project is too small, you can write the log info to your database or send the log information to email, so you don't need to access the server. One important thing that you need to know is the size of the log file. If the file is too large, it will affect log performance, so you need to limit the log file size, and if the file is reached the limit, we will create a new one. You also can archive a log file. For example, in NLog, you can add some configuration as below: <?xml version="1.0" ?> <nlog xmlns="" xmlns: <targets> <target name="file" xsi: </targets> <rules> <logger name="*" minlevel="Trace" writeTo="file" /> </rules> </nlog> Conclusion Effective logging is crucial to debugging in the production environment of an application. A good logging message is important for system administrators and developers to detect errors, threats, as well as the system’s state. It also makes a major difference in the supportability of an application. So when you write a log on your code, don't forget to log several following indications such as: why the application failed, when the failure occurred, what line in the code the application failed, and what the system was doing when the loss occurred. This information will help you to investigate better and resolve an event. I hope this blog post can help you get an overview of login and start it today if you are not already doing it. References 1. David Pine and Nick Schonning, Logging in .NET, docs.microsoft.com, 2020. 2. Chris Hoffman, What Is the Windows Event Viewer, and How Can I Use It? 3. Colin Eberhardt, The Art of Logging, 4. Igor Manushin, .NET Logging Performance Comparison, About the author
https://enlabsoftware.com/development/top-logging-frameworks-for-net-applications-and-the-best-configuration-tips.html
CC-MAIN-2021-10
refinedweb
2,197
51.44
View all headers Newsgroups: comp.lang.python,comp.answers,news.answers Path: senator-bedfellow.mit.edu!bloom-beacon.mit.edu!newsfeed.internetmci.com!in3.uu.net!EU.net!sun4nl!cwi.nl!guido From: guido@cnri.reston.va.us (Guido van Rossum) Subject: FAQ: Python -- an object-oriented language Message-ID: <DxJ3t1.CJv@cwi.nl> Followup-To: comp.lang.python Originator: guido@voorn.cwi.nl Sender: news@cwi.nl (The Daily Dross) Supersedes: <DFMAv8.3Hp@cwi.nl> Nntp-Posting-Host: voorn.cwi.nl Reply-To: guido@cnri.reston.va.us (Guido van Rossum) Organization: CWI, Amsterdam Date: Tue, 10 Sep 1996 18:10:13 GMT Approved: news-answers-request@MIT.Edu Expires: Fri, 1 Nov 1996 00:00:00 GMT Lines: 2239 Xref: senator-bedfellow.mit.edu comp.lang.python:13071 comp.answers:21066 news.answers:81436 View main headers.N.R.I. 1895 Preston White Drive Reston, VA 20191 U.S.A. The latest version of this FAQ is available by anonymous ftp from <URL:>. It will also be posted regularly to the newsgroups comp.answers <URL:news:comp.answers> and comp.lang.python <URL:news:comp.lang.python>. Many FAQs, including this one, are available by anonymous ftp <URL:>. The name under which a FAQ is archived appears in the Archive-name line at the top of the article. This FAQ is archived as python-faq/part1 <URL:>. and Other Known Bugs. Are there other ftp sites that mirror the Python distribution? 1.6. Q. Is there a newsgroup or mailing list devoted to Python? 1.7. Q. Is there a WWW page devoted to Python? 1.8. Q. Is the Python documentation available on the WWW? 1.9. Q. Is there a book on Python, or will there be one out soon? 1.10. Q. Are there any published articles about Python that I can quote? 1.11. Q. Are there short introductory papers or talks on Python? 1.12. Q. How does the Python version numbering scheme work? 1.13. Q. How do I get a beta test version of Python? 1.14. Q. Are there copyright restrictions on the use of Python? 1.15. Q. Why was Python created in the first place?? 2.6. Q. Is it reasonable to propose incompatible changes to Python? 2.7. Q. What is the future of Python? 2.8. Q. What is the PSA, anyway? 2.9. Q. How do I join the PSA? 2.10. Q. What are the benefits of joining the PSA? 3. Building Python and Other Known Bugs 3.1. Q. Is there a test set? 3.2. Q. When running the test set, I get complaints about floating point operations, but when playing with floating point operations I cannot find anything wrong with them. 3.3. Q. Link errors after rerunning the configure script. 3.4. Q. The python interpreter complains about options passed to a script (after the script name). 3.5. Q. When building on the SGI, make tries to run python to create glmodule.c, but python hasn't been built or installed yet. 3.6. Q. I use VPATH but some targets are built in the source directory. 3.7. Q. Trouble building or linking with the GNU readline library. 3.8. Q. Trouble with socket I/O on older Linux 1.x versions. 3.9. Q. Trouble with prototypes on Ultrix. 3.10. Q. Trouble with posix.listdir on NeXTSTEP 3.2. 3.11. Q. Other trouble building Python on platform X. 3.12. Q. How to configure dynamic loading on Linux. 3.13. Q. Errors when linking with a shared library containing C++ code. 3.14. Q. I built with tkintermodule.c enabled but get "Tkinter not found". 3.15. Q. I built with Tk 4.0 but Tkinter complains about the Tk version. 3.16. Q. Link errors for Tcl/Tk symbols when linking with Tcl/Tk. 3.17. Q. I configured and built Python for Tcl/Tk but "import Tkinter" fails. 3.18. Q. Tk doesn't work right on DEC Alpha. 3.19. Q. Several common system calls are missing from the posix module. 3.20. Q. ImportError: No module named string, on MS Windows. 3.21. Q. Core dump on SGI when using the gl module.. How do I find the current module name? 4.10. Q. I have a module in which I want to execute some extra code when it is run as a script. How do I find out whether I am running as a script? 4.11. Q. I try to run a program from the Demo directory but it fails with ImportError: No module named ...; what gives? 4.12. Q. I have successfully built Python with STDWIN but it can't find some modules (e.g. stdwinevents). 4.13. Q. What GUI toolkits exist for Python? 4.14. Q. Are there any interfaces to database packages in Python? 4.15. Q. Is it possible to write obfuscated one-liners in Python? 4.16. Q. Is there an equivalent of C's "?:" ternary operator? 4.17. Q. My class defines __del__ but it is not called when I delete the object. 4.18. Q. How do I change the shell environment for programs called using os.popen() or os.system()? Changing os.environ doesn't work. 4.19. Q. What is a class? 4.20. Q. What is a method? 4.21. Q. What is self? 4.22. Q. What is a unbound method? 4.23. Q. How do I call a method defined in a base class from a derived class that overrides it? 4.24. Q. How do I call a method from a base class without using the name of the base class? 4.25. Q. How can I organize my code to make it easier to change the base class? 4.26. Q. How can I find the methods or attributes of an object? 4.27. Q. I can't seem to use os.read() on a pipe created with os.popen(). 4.28. Q. How can I create a stand-alone binary from a Python script? 4.29. Q. What WWW tools are there for Python? 4.30. Q. How do I run a subprocess with pipes connected to both input and output? 4.31. Q. How do I call a function if I have the arguments in a tuple? 4.32. Q. How do I enable font-lock-mode for Python in Emacs? 4.33. Q. Is there an inverse to the format operator (a la C's scanf())? 4.34. Q. Can I have Tk events handled while waiting for I/O? 4.35. Q. How do I write a function with output parameters (call by reference)? 4.36. Q. Please explain the rules for local and global variables in Python. 4.37. Q. How can I have modules that mutually import each other? 4.38. Q. How do I copy an object in Python? 4.39. Q. How to implement persistent objects in Python? (Persistent == automatically saved to and restored from disk.) 5. Extending Python 5.1. Q. Can I create my own functions in C? 5.2. Q. Can I create my own functions in C++? 5.3. Q. How can I execute arbitrary Python statements from C? 5.4. Q. How can I evaluate an arbitrary Python expression from C? 5.5. Q. How do I extract C values from a Python object? 5.6. Q. How do I use mkvalue() to create a tuple of arbitrary length? 5.7. Q. How do I call an object's method from C? 5.8. Q. How do I catch the output from print_error()? 5.9. Q. How do I access a module written in Python from C? 5.10. Q. How do I interface to C++ objects from Python? 6. Python's design 6.1. Q. Why isn't there a switch or case statement in Python? 6.2. Q. Why does Python use indentation for grouping of statements? 6.3. Q. Why are Python strings immutable? 6.4. Q. Why don't strings have methods like index() or sort(), like lists? 6.5. Q. Why does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))? 6.6. Q. Why can't I derive a class from built-in types (e.g. lists or files)? 6.7. Q. Why must 'self' be declared and used explicitly in method definitions and calls? 6.8. Q. Can't you emulate threads in the interpreter instead of relying on an OS-specific thread implementation? 6.9. Q. Why can't lambda forms contain statements? 6.10. Q. Why don't lambdas have access to variables defined in the containing scope? 6.11. Q. Why can't recursive functions be defined inside other functions? 6.12. Q. Why is there no more efficient way of iterating over a dictionary than first constructing the list of keys()? 6.13. Q. Can Python be compiled to machine code, C or some other language? 6.14. Q. Why doesn't Python use proper garbage collection? 7. Using Python on non-UNIX platforms 7.1. Q. Is there a Mac version of Python? 7.2. Q. Is there a DOS version of Python? 7.3. Q. Is there a Windows 3.1(1) version of Python? 7.4. Q. Is there a Windows NT version of Python? 7.5. Q. Is there a Windows 95 version of Python? 7.6. Q. Is there an OS/2 version of Python? 7.7. Q. Is there a VMS version of Python? 7.8. Q. What about IBM mainframes, or other non-UNIX platforms? 7.9. Q. Where are the source or Makefiles for the non-UNIX versions? 7.10. Q. What is the status and support for the non-UNIX versions? 7.11. Q. I have a PC version but it appears to be only a binary. Where's the library? 7.12... To find out more, the best thing to do is to start reading the tutorial from the documentation set (see a few questions further down). 1.2. Q. Why is it called Python? A. Apart from being a computer scientist, I'm also a fan of "Monty Python's Flying Circus" (a BBC comedy series from the seventies, in the -- unlikely --! (If you need an icon, use an image of the 16-ton weight from the TV series or of a can of SPAM :-) 1.3. Q. How do I obtain a copy of the Python source? A. The latest complete Python source distribution is always available by anonymous ftp, e.g. <URL:>. It is a gzipped tar file containing the complete C source, LaTeX documentation, Python library modules, example programs, and several useful pieces of freely distributable software. This will compile and run out of the box on most UNIX platforms. (See section 7 for non-UNIX information.) Sometimes beta versions of a newer release are available; check the subdirectory "beta" of the above-mentioned URL (i.e. <URL:>). (At the time of writing, beta3 for Python 1.4 is available there, and should be checked before reporting problems with version 1.3.) Occasionally a set of patches is issued which has to be applied using the patch program. These patches are placed in the same directory, e.g. <URL:>. (At the time of writing, no patches exist.) An index of said ftp directory can be found in the file INDEX. An HTML version of the index can be found in the file index.html, <URL:>. 1.4. Q. How do I get documentation on Python? A. The LaTeX source for the documentation is part of the source distribution. If you don't have LaTeX, the latest Python documentation set is always available by anonymous ftp, e.g. <URL:>. It is a gzipped tar file containing PostScript files of the reference manual, the library manual, and the tutorial. (a separate file on the ftp site). 1.5. Q. Are there other ftp sites that mirror the Python distribution? A. The following anonymous ftp sites keep mirrors of the Python distribution: USA: <URL:> <URL:> <URL:> <URL:> <URL:> <URL:> <URL:> Europe: <URL:> <URL:> <URL:> <URL:> <URL:> <URL:> <URL:> Australia: <URL:> Or try archie on the string "python". 1.6. Q. Is there a newsgroup or mailing list devoted to Python? A. There is a newsgroup, comp.lang.python <URL:news:comp.lang.python>, and a mailing list. The newsgroup and mailing list are gatewayed into each other -- if you can read news it's unnecessary to subscribe to the mailing list. Send e-mail to <python-list-request@cwi.nl> to (un)subscribe to the mailing list. Hypermail archives of (nearly) everything posted to the mailing list (and thus the newsgroup) are available on our WWW server, <URL:>. The raw archives are also available by ftp, e.g. <URL:>. The uncompressed versions of these files can be read with the standard UNIX Mail program ("Mail -f file") or with nn ("nn file"). To read them using MH, you could use "inc -file file". (The archival service has stopped archiving new articles around the end of April 1995. I hope to revive it on the PSA server sometime in the future.) 1.7. Q. Is there a WWW page devoted to Python? A. Yes, <URL:> is the official Python home page. At the time of writing, this page is not yet completely operational; you may have a look at the old Python home page: <URL:> or at the U.S. copy: <URL:>. 1.8. Q. Is the Python documentation available on the WWW? A. Yes, see <URL:> (Python's home page). It contains pointers to hypertext versions of the whole documentation set (as hypertext, not just PostScript). If you wish to browse this collection of HTML files on your own machine, it is available bundled up by anonymous ftp, e.g. <URL:>. An Emacs-INFO set containing the library manual is also available by ftp, e.g. <URL:>. 1.9. Q. Is there a book on Python, or will there be one out soon? A. Mark Lutz is writing a Python book for O'Reilly and Associates, to be published early 1996. See the outline (in PostScript): <URL:>. 1.10.. See also the next section (supposedly Aaron Watters' paper has been refereed). 1.11. Q. Are there short introductory papers or talks on Python? A. A recent, very entertaining introduction to Python is the tutorial by Aaron Watters in UnixWorld Online: Aaron R. Watters: "The What, Why, Who, and Where of Python", <URL:> An olded paper is available by ftp as <URL:> and <URL:>, respectively. Slides for a talk on Python that I gave at the Usenix Symposium on Very High Level Languages in Santa Fe, NM, USA in October 1994 are available as <URL:>. 1.12. Q. How does the Python version numbering scheme work? A. Python versions are numbered A.B.C or A.B. A is the major version number -- it is only incremented for major changes in functionality or source structure. B is the minor version number, incremented for less earth-shattering changes to a release. C is the patchlevel -- it is incremented for each new patch release. Not all releases have patch releases. Note that in the past, patches have added significant changes; in fact the changeover from 0.9.9 to 1.0.0 was the first time that either A or B changed! Beta versions have an additional suffix of "betaN" for some small number N. Note that (for instance) all versions labeled 1.4betaN *precede* the actual release of 1.4. 1.4b3 is short for 1.4beta3. 1.13. Q. How do I get a beta test version of Python? A. If there are any beta releases, they are published in the normal source directory (e.g. <URL:>). 1.14. Q. Are there copyright restrictions on the use of Python? A. Hardly. You can do anything you want with the source, as long as you leave the copyrights in, and display those copyrights in any documentation about Python that you produce. Also, don't use the author's institute's name in publicity without prior written permission, and don't hold them responsible for anything (read the actual copyright for a precise legal wording). In particular, if you honor the copyright rules, it's OK to use Python for commercial use, to sell copies of Python in source or binary form, or to sell products that enhance Python or incorporate Python (or part of it) in some form. I would still like to know about all commercial use of Python! 1.15. Q. Why was Python created in the first place? A. Here's a *very* brief summary of what got me started: - M3 report). M3. 2. Python in the real world =========================== 2.1. Q. How many people are using Python? A. I don't know, but the maximum number of simultaneous subscriptions to the Python mailing list before it was gatewayed into the newsgroup was about 180 (several of which were local redistribution lists). I believe that many active Python users don't bother to subscribe to the list, and now that there's a newsgroup the mailing list subscription is even less meaningful. I see new names on the newsgroup all the time and my best guess is that there are currently at least several thousands of users. Another statistic is the number of accesses to the Python WWW server. Have a look at <URL:>. 2.2. Q. Have any significant projects been done in Python? A. Here at CWI (the home of Python), we have written a 20,000 line authoring environment for transportable hypermedia presentations, a 5,000 line multimedia teleconferencing tool, as well as many many smaller programs. The University of Virginia uses Python to control a virtual reality engine. Contact: Matt Conway <conway@virginia.edu>. The ILU project at Xerox PARC can generate Python glue for ILU interfaces. See <URL:>. The University of California, Irvine uses a student administration system called TELE-Vision written entirely in Python. Contact: Ray Price <rlprice@uci.edu>. See also the next question. If you have done a significant project in Python that you'd like to be included in the list above, send me email! 2.3. Q. Are there any commercial projects going on using Python? A. Several companies have revealed to me that they are planning or considering use of Python in a future product. Sunrise Software has a product out using Python -- they use Python for a GUI management application and an SNMP network management application. Contact: <info@sunrise.com>. Infoseek uses Python to implement their commercial WWW information retrieval service <URL:>. Contact: <info@infoseek.com>. Paul Everitt of Connecting Minds is planning a Lotus Notes gateway. Contact: <Paul.Everitt@cminds.com>. Or see their WWW server <URL:>. KaPRE in Boulder, CO is using Python for on-site customization of C++ applications, rapid-prototyping/development, language-based-components, and possibly more. This is pretty solid: Python's being shipped with their tool-set now, to beta sites. Contact: <lutz@KaPRE.COM> (Mark Lutz). Individuals at many other companies are using Python for internal development or for as yet unannounced products (witness their contributions to the Python mailing list or newsgroup). SGI has advertised in the Python list looking for Python programmers for a project involving interactive television. See also the workshop minutes at <URL:> -- in general the WWW server is more up to date than the FAQ for these issues. Python has also been elected as an extension language by MADE, a consortium supported by the European Committee's ESPRIT program and consisting of Bull, CWI and some other European companies. Contact: Ivan Herman <ivan@cwi.nl>. If you'd like to be included in the list above, send me email! 2.4. Q. How stable is Python? A. Very stable. While the current version number would suggest it is in the early stages of development, in fact new, stable releases (numbered 0.9.x through 1.3) have been coming out roughly every 3 to 6 months for the past four years. 2.5. Q. What new developments are expected for Python in the future? A. See my Work-In-Progress web page, currently at <URL:>, and the pages for the Second Python Workshop (best reached via the Python home page, <URL:>). Also follow the newsgroup discussions! 2.6. Q. Is it reasonable to propose incompatible changes to Python? A. In general, no. There are already millions of lines of Python code around the world, so any changes in the language that invalidates more than a very small fraction of existing programs has to be frowned upon. Even if you can provide a conversion program, there still is the problem of updating all documentation. Providing a gradual upgrade path is the only way if a feature has to be changed. 2.7. Q. What is the future of Python? A. If I knew, I'd be rich :-) Seriously, the formation of the PSA (Pyton Software Activity, see <URL:>) ensures some kind of support even in the (unlikely! event that I'd be hit by a bus (actually, here in the US, a car accident would be more likely :-), were to join a nunnery, or would be head-hunted. A large number of Python users have become experts at Python programming as well as maintenance of the implementation, and would easily fill the vacuum created by my disappearance. In the mean time, I have no plans to disappear -- rather, I am committed to improving Python, and my current benefactor, CNRI (see <URL:>) is just as committed to continue its support of Python and the PSA. In fact, we have great plans for Python -- we just can't tell yet! 2.8. Q. What is the PSA, anyway? A. The Python Software Activity <URL:> was created by a number of Python aficionados who want Python to be more than the product and responsibility of a single individual. It has found a home at CNRI <URL:>. Anybody who wishes Python well should join the PSA. 2.9. Q. How do I join the PSA? A. The full scoop is available on the web, see <URL:>. Summary: send a check of at least $50 to CNRI/PSA, 1895 Preston White Drive, Suite 100, in Reston, VA 20191. Full-time students pay $25. Companies can join for a mere $500. 2.10. Q. What are the benefits of joining the PSA? A. Like National Public Radio, if not enough people join, Python will wither. Your name will be mentioned on the PSA's web server. Workshops organized by the PSA <URL:> are only accessible to PSA members (you can join at the door). The PSA is working on additional benefits, such as reduced prices for books and software, and early access to beta versions of Python. 3. Building Python and Other Known Bugs =======================================. NOTE: if "make test" fails, run the tests manually ("import testall") to see what goes wrong before reporting the error. after rerunning the configure script. A. It is generally necessary to run "make clean" after a configuration change. 3.4. Q. The python interpreter complains about options passed to a script (after the script name). A. You are probably linking with GNU getopt, e.g. through -liberty. Don't. The reason for the complaint is that GNU getopt, unlike System V getopt and other getopt implementations, doesn't consider a non-option to be the end of the option list. A quick (and compatible) fix for scripts is to add "--" to the interpreter, like this: #! /usr/local/bin/python -- You can also use this interactively: python -- script.py [options] Note that a working getopt implementation is provided in the Python distribution (in Python/getopt.c) but not automatically used. 3.6. Q. I use VPATH but some targets are built in the source directory. A. On some systems (e.g. Sun), if the target already exists in the source directory, it is created there instead of in the build directory. This is usually because you have previously built without VPATH. Try running "make clobber" in the source directory. 3.7. Q. Trouble building or linking with the GNU readline library. A. Consider using readline 2.0. Some hints: - You can use the GNU readline library to improve the interactive user interface: this gives you line editing and command history when calling python interactively. You need to configure and build the GNU readline library before running the configure script. Its sources are no longer distributed with Python; you can ftp them from any GNU mirror site, or from its home site <URL:> (or a higher version number -- using version 1.x is not recommended)..bug <URL:news:gnu.bash.bug> for specific problems with the readline library (I don't read this group but I've been told that it is the place for readline bugs). 3.8. Q. Trouble with socket I/O on older Linux 1.x versions. A. Once you've built Python, use it to run the regen.py script in the Lib/linux1 directory. Apparently the files as distributed don't match the system headers on some Linux versions. 3.9. Q. Trouble with prototypes on Ultrix. A. Ultrix cc seems broken -- use gcc, or edit config.h to #undef HAVE_PROTOTYPES. 3.10. Q. Trouble with posix.listdir on NeXTSTEP 3.2. A. (This often manifests itself as a weird error from the compileall.py script run by "make libinstall".) Don't use gcc, use the Next C compiler (cc). Even though it is derived from (an old version of) gcc, its interpretation of the "-posix" switch is different; in this particular case, cc is right and gcc is wrong. 3.11. Q. Other trouble building Python on platform X. A. Please email the details to <guido@cnri.reston.va.us> and I'll look into it. Please provide as many details as possible. In particular, if you don't tell me what type of computer and what operating system (and version) you are using it will be difficult for me to figure out what is the matter. If you get a specific error message, please email it to me too. 3.12. Q. How to configure dynamic loading on Linux. A. This is now automatic as long as your Linux version uses the ELF object format (all recent Linuxes do). 3.13. Q. Errors when linking with a shared library containing C++ code. A. Link the main Python binary with C++. Change the definition of LINKCC in Modules/Makefile to be your C++ compiler. You may have to edit config.c slightly to make it compilable with C++. 3.14. Q. I built with tkintermodule.c enabled but get "Tkinter not found". A. Tkinter.py (note: upper case T) lives in a subdirectory of Lib, Lib/tkinter. If you are using the default module search path, you probably didn't enable the line in the Modules/Setup file defining TKPATH; if you use the environment variable PYTHONPATH, you'll have to add the proper tkinter subdirectory. 3.15. Q. I built with Tk 4.0 but Tkinter complains about the Tk version. A. Several things could cause this. You most likely have a Tk 3.6 installation that wasn't completely eradicated by the Tk 4.0 installation (which tends to add "4.0" to its installed files). You may have the Tk 3.6 support library installed in the place where the Tk 4.0 support files should be (default /usr/local/lib/tk/); you may have compiled Python with the old tk.h header file (yes, this actually compiles!); you may actually have linked with Tk 3.6 even though Tk 4.0 is also around. Similar for Tcl 7.4 vs. Tcl 7.3. 3.16. Q. Link errors for Tcl/Tk symbols when linking with Tcl/Tk. Quite possibly, there's a version mismatch between the Tcl/Tk header files (tcl.h and tk.h) and the tck/tk libraries you are using (the "-ltk4.0" and "-ltcl7.4" arguments for _tkinter in the Setup file). If you have installed both versions 7.4/4.0 and 7.5/4.1 of Tcl/Tk, most likely your header files are for The newer versions, but the Setup line for _tkinter in some Python distributions references 7.4/4.0 by default. Changing this to 7.5/4.1 should take care of this. 3.17. Q. I configured and built Python for Tcl/Tk but "import Tkinter" fails. A. Most likely, you forgot to enable the line in Setup that says "TKPATH=:$(DESTLIB)/tkinter". 3.18. Q. Tk doesn't work right on DEC Alpha. A. You probably compiled either Tcl, Tk or Python with gcc. Don't. For this platform, which has 64-bit integers, gcc is known to generate broken code. The standard cc (which comes bundled with the OS!) works. If you still prefer gcc, at least try recompiling with cc before reporting problems to the newsgroup or the author; if this fixes the problem, report the bug to the gcc developers instead. (As far as we know, there are no problem with gcc on other platforms -- the instabilities seem to be restricted to the DEC Alpha.) See also question 3.6. 3.19. Q. Several common system calls are missing from the posix module. A. Most likely, *all* test compilations run by the configure script are failing for some reason or another. Have a look in config.log to see what could be the reason. A common reason is specifying a directory to the --with-readline option that doesn't contain the libreadline.a file. 3.20. Q. ImportError: No module named string, on MS Windows. A. Most likely, your PYTHONPATH environment variable should be set to something like: set PYTHONPATH=c:\python;c:\python\lib;c:\python\scripts (assuming Python was installed in c:\python) 3.21. Q. Core dump on SGI when using the gl module. There are conflicts between entry points in the termcap and curses libraries and an entry point in the GL library. There's a hack of a fix for the termcap library if it's needed for the GNU readline library, but it doesn't work when you're using curses. Concluding, you can't build a Python binary containing both the curses and gl modules.. Yes -- Lance Ellinghaus has written a module that interfaces to System V's "ncurses". If you know a little curses and some Python, it's straightforward to use. It is part of the standard Python distribution, but not configured by default -- you must enable it by editing Modules/Setup. It requires a System V curses implementation. You could also consider using the "alfa" (== character cell) version of STDWIN. (Standard Window System Interface, a portable windowing system interface by myself An alternative solution uses default arguments, e.g.: def generate_multiplier(factor): def multiplier(arg, fact = factor): return arg*fact return reasons of efficiency as well as consistency, Python only reads the module file on the first time a module is imported. (Otherwise a program consisting of many modules, each of which imports the same basic module, would read the basic module over and over again.) To force rereading of a changed module, do this: import modname reload(modname) Warning: this technique is not 100% fool-proof. In particular, modules containing statements like from modname import some_objects will continue to work with the old version of the imported objects. 4.9. Q. How do I find the current module name? A. A module can find out its own module name by looking at the (predefined) global variable __name__. If this has the value '__main__' you are running as a script. 4.10. Q. I have a module in which I want to execute some extra code when it is run as a script. How do I find out whether I am running as a script? A. See the previous question. E.g. if you put the following on the last line of your module, main() is called only when your module is running as a script: if __name__ == '__main__': main() 4.11. Q. I try to run a program from the Demo directory but it fails with ImportError: No module named ...; what gives? A. This is probably an optional module (written in C!) which hasn't been configured on your system. This especially happens with modules like "Tkinter", "stdwin", "gl", "Xt" or "Xm". For Tkinter, STDWIN and many other modules, see Modules/Setup.in for info on how to add these modules to your Python, if it is possible at all. Sometimes you will have to ftp and build another package first (e.g. STDWIN). Sometimes the module only works on specific platforms (e.g. gl only works on SGI machines). NOTE: if the complaint is about "Tkinter" (upper case T) and you have already configured module "tkinter" (lower case t), the solution is *not* to rename tkinter to Tkinter or vice versa. There is probably something wrong with your module search path. Check out the value of sys.path. For X-related modules (Xt and Xm) you will have to do more work: they are currently not part of the standard Python distribution. You will have to ftp the Extensions tar file, e.g. <URL:> and follow the instructions there. See also the next question. 4.12. Q. I have successfully built Python with STDWIN but it can't find some modules (e.g. stdwinevents). A. There's a subdirectory of the library directory named 'stdwin' which should be in the default module search path. There's a line in Modules/Setup(.in) that you have to enable for this purpose -- unfortunately in the latest release it's not near the other STDWIN-related lines so it's easy to miss it. 4.13. Q. What GUI toolkits exist for Python? A. Depending on what platform(s) you are aiming at, there are several. - There's a neat object-oriented interface to the Tcl/Tk widget set, called Tkinter. As of python 1.1, it is part of the standard Python distribution -- all you need to do is enable it in Modules/Setup (provided you have already installed Tk and Tcl). <URL:>. - The standard Python distribution comes with an interface to STDWIN, a platform-independent low-level windowing interface. You have to ftp the source for STDWIN separately, e.g. <URL:> or gatekeeper.dec.com in pub/misc/stdwin <URL:>. STDWIN runs under X11 or the Mac; a Windows port has been attempted but I can't seem to get it working. Note that STDWIN is really not powerful enough to implement a modern GUI (no widgets, etc.) and that I don't have the time to maintain or extend it, so you may be better off using Tkinter or the Motif interface, unless you require portability to the Mac (which is also offered by SUIT, by the way -- see below). - For SGI IRIX only, there's an interface to the complete GL (Graphics Library -- low level but very good 3D capabilities) as well as to FORMS (a buttons-and-sliders-etc package built on top of GL by Mark Overmars -- ftp'able from <URL:>). - There's an interface to X11, including the Athena and Motif widget sets (and a few individual widgets, like Mosaic's HTML widget and SGI's GL widget) in the Extensions set, which is separately ftp'able <URL:>. - There's an interface to SUIT, the U of Virginia's Simple User Interface Toolkit; it can be ftp'ed from <URL:>. A PC binary of Python 1.0.2 compiled with DJGPP and with SUIT support built-in has been made available by Antonio Costa <URL:> (a self-extracting archive). Note that the UVa people themselves have expressed doubts about SUIT, and are planning to build a Python GUI API based upon Tk (though not necessarily on Tkinter); see <URL:>. - There's an interface to WAFE, a Tcl interface to the X11 Motif and Athena widget sets. Last I heard about it it was included in the WAFE 1.0 prerelease <URL:>. - The NT port by Mark Hammond (see question 7.4) includes an interface to the Microsoft Foundation Classes and a Python programming environment using it that's written mostly in Python. See <URL:>. - There's an interface to wxWindows. wxWindows is a portable GUI class library written in C++. It supports XView, Motif, MS-Windows as targets. There is some support for Macs and CURSES as well. wxWindows preserves the look and feel of the underlying graphics toolkit. See the wxPython WWW page at <URL:>. - There's an object-oriented GUI based on the Microsoft Foundation Classes model called WPY. Programs written in WPY run unchanged and with native look and feel on NT, Windows 3.1 (using win32s) and on Unix (using Tk). Source and binaries for NT and Linux are available in <URL:>. - (The Fresco port that was mentioned in earlier versions of this FAQ no longer seems to exist. Inquire with Mark Linton.) 4.14. Q. Are there any interfaces to database packages in Python? A. There's a whole collection of them in the contrib area of the ftp server, see <URL:>. 4.15. Q. Is it possible to write obfuscated one-liners in Python? A. Yes. map(lambda x,f=lambda x,f:(x<=1) or (f(x-1,f)+f(x-2! 4.16. Q. Is there an equivalent of C's "?:" ternary operator? A. Not directly. In many cases you can. Steve Majewski (or was it Tim Peters?) 'if'. 4.17. Q. My class defines __del__ but it is not called when I delete the object. A. There are several possible reasons for this. - The del statement does not necessarily call __del__ -- it simply decrements the object's reference count, and if this reaches zero __del__ is called. - If your data structures contain circular links (e.g. a tree where each child has a parent pointer and each parent has a list of children) the reference counts will never go back to zero. You'll have to define an explicit close() method which removes those pointers. Please don't ever call __del__ directly -- __del__ should call close() and close() should make sure that it can be called more than once for the same object. - If the object has ever been a local variable (or argument, which is really the same thing) to a function that caught an expression in an except clause, chances are that a reference to the object still exists in that function's stack frame as contained in the stack trace. Normally, deleting (better: assigning None to) sys.exc_traceback will take care of this. If you a stack was printed for an unhandled exception in an interactive interpreter, delete sys.last_traceback instead. - There is code that deletes all objects when the interpreter exits, but if your Python has been configured to support threads, it is not called (because other threads may still be active). You can define your own cleanup function using sys.exitfunc (see question 4.4). - Finally, if your __del__ method raises an exception, this will be ignored. Starting with Python 1.4beta3, a warning message is printed to sys.stderr when this happens. 4.18. Q. How do I change the shell environment for programs called using os.popen() or os.system()? Changing os.environ doesn't work. A. Modifying the environment passed to subshells was left out of the interpreter because there seemed to be no well-established portable way to do it (in particular, some systems, have putenv(), others have setenv(), and some have none at all). However if all you want is to pass environment variables to the commands run by os.system() or os.popen(), there's a simple solution: prefix the command string with a couple of variable assignments and export statements. The following would be universal for popen: import os from commands import mkarg # nifty routine to add shell quoting def epopen(cmd, mode, env = {}): # env is a dictionary of environment variables prefix = '' for key, value in env.items(): prefix = prefix + '%s=%s\n' % (key, mkarg(value)[1:]) prefix = prefix + 'export %s\n' % key return os.popen(prefix + cmd, mode) 4.19. Q. What is a class? A. A class is the particular object type that is created by executing a class statement. Class objects are used as templates, to create class instance objects, which embody both the data structure and program routines specific to a datatype. 4.20. Q. What is a method? A. A method is a function that you normally call as x.name(arguments...) for some object x. The term is used for methods of classes and class instances as well as for methods of built-in objects. (The latter have a completely different implementation and only share the way their calls look in Python code.) Methods of classes (and class instances) are defined as functions inside the class definition. 4.21. Q. What is self? A. Self is merely a conventional name for the first argument of a method -- i.e. a function defined inside a class definition. A method defined as meth(self, a, b, c) should be called as x.meth(a, b, c) for some instance x of the class in which the definition occurs; the called method will think it is called as meth(x, a, b, c). 4.22. Q. What is a unbound method? A. An unbound method is a method defined in a class that is not yet bound to an instance. You get an unbound method if you ask for a class attribute that happens to be a function. You get a bound method if you ask for an instance attribute. A bound method knows which instance it belongs to and calling it supplies the instance automatically; an unbound method only knows which class it wants for its first argument (a derived class is also OK). Calling an unbound method doesn't "magically" derive the first argument from the context -- you have to provide it explicitly. 4.23. Q. How do I call a method defined in a base class from a derived class that overrides it? A. If your class definition starts with "class Derived(Base): ..." then you can call method meth defined in Base (or one of Base's base classes) as Base.meth(self, arguments...). Here, Base.meth is an unbound method (see previous question). 4.24. Q. How do I call a method from a base class without using the name of the base class? A. DON'T DO THIS. REALLY. I MEAN IT. It appears that you could call self.__class__.__bases__[0].meth(self, arguments...) but this fails when a doubly-derived method is derived from your class: for its instances, self.__class__.__bases__[0] is your class, not its base class -- so (assuming you are doing this from within Derived.meth) you would start a recursive call. 4.25. Q. How can I organize my code to make it easier to change the base class? A.) ... 4.26. Q. How can I find the methods or attributes of an object? A. This depends on the object type. For an instance x of a user-defined class, instance attributes are found in the dictionary x.__dict__, and methods and attributes defined by its class are found in x.__class__.__bases__[i].__dict__ (for i in range(len(x.__class__.__bases__))). You'll have to walk the tree of base classes to find *all* class methods and attributes. Many, but not all built-in types define a list of their method names in x.__methods__, and if they have data attributes, their names may be found in x.__members__. However this is only a convention. For more information, read the source of the standard (but undocumented) module newdir. 4.27. Q. I can't seem to use os.read() on a pipe created with os.popen(). A. os.read() is a low-level function which takes a file descriptor (a small integer). os.popen() creates a high-level file object -- the same type used for sys.std{in,out,err} and returned by the builtin open() function. Thus, to read n bytes from a pipe p created with os.popen(), you need to use p.read(n). 4.28. Q. How can I create a stand-alone binary from a Python script? The demo script "Demo/scripts/freeze.py" does what you want. (It's actually not a demo but a support tool -- there is some extra code in the interpreter to accommodate it.) It requires that you have the Python build tree handy, complete with all the lib*.a files. This works by scanning your source recursively for import statements (both forms) and looking for the modules on the standard Python path as well as in the source directory (for built-in modules). It then "compiles" the modules written in Python to. Hint: the freeze program only works if your script's filename ends in ".py". 4.29. Q. What WWW tools are there for Python? A. See the chapter titled "Internet and WWW" in the Library Reference Manual. There's also a web browser written in Python, called Grail -- see <URL:>. Steve Miale <smiale@cs.indiana.edu> has written a modular WWW browser called Dancer. An alpha version can be FTP'ed from <URL:>. (There are a few articles about Dancer in the (hyper)mail archive <URL:>.) 4.30. Q. How do I run a subprocess with pipes connected to both input and output? A. This is really a UNIX question. Also, in general, it is unwise to do so, because you can easily cause a deadlock where the parent process is blocked waiting for output from the child, while the child is blocked waiting for input from the child. can easily have been written to never explicitly flush its output, even if it is interactive, since flushing is normally automatic. In many cases, all you really need is to run some data through a command and get the result back. Unless the data is infinite in size, the easiest (and often the most efficient!) way to do this is to write it to a temporary file and run the command with that temporary file as input. The standard module tempfile exports a function mktemp() which generates unique temporary file names. If after reading all of the above you still want to connect two pipes to a subprocess's standard input and output, here's a simple solution, due to Jack Jansen: import os import sys import string MAXFD = 100 # Max number of file descriptors in this system def popen2(cmd): cmd = string.split(cmd) p2cread, p2cwrite = os.pipe() c2pread, c2pwrite = os.pipe() pid = os.fork() if pid == 0: # Child os.close(0) os.close(1) if os.dup(p2cread) != 0: sys.stderr.write('popen2: bad read dup\n') if os.dup(c2pwrite) != 1: sys.stderr.write('popen2: bad write dup\n') for i in range(3, MAXFD): try: os.close(i) except: pass try: os.execv(cmd[0], cmd) finally: os._exit(1) os.close(p2cread) tochild = os.fdopen(p2cwrite, 'w') os.close(c2pwrite) fromchild = os.fdopen(c2pread, 'r') return fromchild, tochild Note that many interactive programs (e.g. vi) don't work well with pipes substituted for standard input and output. You will have to use pseudo ttys ("ptys") instead of pipes. There is some undocumented code to use these in the library module pty.py -- I'm afraid you're on your own here. A different answer is a Python interface to Don Libes' "expect" library. A prerelease of this is available on the Python ftp mirror sites in the contrib subdirectory as expy-0.3.tar.gz, e.g. <URL:>. 4.31. Q. How do I call a function if I have the arguments in a tuple? A. Use the built-in function apply(). For instance, func(1, 2, 3) is equivalent to args = (1, 2, 3) apply(func, args) Note that func(args) is not the same -- it calls func() with exactly one argument, the tuple args, instead of three arguments, the integers 1, 2 and 3. 4.32. Q. How do I enable font-lock-mode for Python in Emacs? A. Assuming you're already using python-mode and font-lock-mode separately, all you need to do is put this in your .emacs file: (defun my-python-mode-hook () (setq font-lock-keywords python-font-lock-keywords) (font-lock-mode 1)) (add-hook 'python-mode-hook 'my-python-mode-hook) 4.33. Q. Is there an inverse to the format operator (a la C's scanf())? A. Not as such. For simple input parsing, the easiest approach is usually to split the line into whitespace-delimited words using string.split(), and to convert decimal strings to numeric values using string.atoi(), string.atol() or string.atof(). (Python's atoi() is 32-bit and its atol() is arbitrary precision.) If you want to use another delimiter than whitespace, use string.splitfield() (possibly combining it with string.strip() which removes surrounding whitespace from a string). For more complicated input parsing, regular expressions (see module regex) are better suited and more powerful than C's scanf(). 4.34. Q. Can I have Tk events handled while waiting for I/O? A.). 4.35. Q. How do I write a function with output parameters (call by reference)? A. [Mark Lutz] The thing to remember is that arguments are passed by assignment in Python. Since assignment just creates references to objects, there's no alias between an argument name in the caller and callee, and so no call-by-reference per se. But you can simulate it in a number of ways: 1) By using global variables; but you probably shouldn't :-) 2) By passing a mutable (changeable in-place) object: def func1(a): a[0] = 'new-value' # 'a' references a mutable list a[1] = a[1] + 1 # changes a shared object args = ['old-value', 99] func1(args) print args[0], args[1] # output: new-value 100 3) By return a tuple, holding the final values of arguments: def func2(a, b): a = 'new-value' # a and b are local names b = b + 1 # assigned to new objects return a, b # return new values x, y = 'old-value', 99 x, y = func2(x, y) print x, y # output: new-value 100 4) And other ideas that fall-out from Python's object model. For instance, it might be clearer to pass in a mutable dictionary: def func3(args): args['a'] = 'new-value' # args is a mutable dictionary args['b'] = args['b'] + 1 # change it in-place args = {'a':' old-value', 'b': 99} func3(args) print args['a'], args['b'] 5) But there's probably no good reason to get this complicated :-). [Python' author favors solution 3 in most cases.] 4.36. Q. Please explain the rules for local and global variables in Python. A. [Ken Manheimer] In Python, procedure variables are implicitly global, unless they assigned anywhere within the block. In that case they are implicitly local, and you need to explicitly declare them as 'global'. Though a bit surprising at first, a moments consideration explains this. On one hand, requirement of 'global' for assigned vars provides a bar against unintended side-effects. On the other hand, if global were required for all global references, you'd be using global all the time. Eg, you'd have to declare as global every reference to a builtin function, or to a component of an imported module. This clutter would defeat the usefulness of the 'global' declaration for identifying side-effects. 4.37. Q. How can I have modules that mutually import each other? A. Jim Roskind recommends the following order in each module: First: all exports (like globals, functions, and classes that don't need imported bases classes). Then: all import statements. Finally: all active code (including globals that are initialized from imported values). Python's author doesn't like this approach much because the imports appear in a strange place, but has to admit that it works. His recommended strategy is to avoid all uses of "from <module> import *" (so everything from an imported module is referenced as <module>.<name>) and to place all code inside functions. Initializations of global variables and class variables should use constants or built-in functions only. 4.38. Q. How do I copy an object in Python? A. There is no generic copying operation built into Python, however most object types have some way to create a clone. Here's how for the most common objects: - For immutable objects (numbers, strings, tuples), cloning is unnecessary since their value can't change. - For lists (and generally for mutable sequence types), a clone is created by the expression l[:]. - For dictionaries, the following function returns a clone: def dictclone(o): n = {} for k in o.keys(): n[k] = o[k] return n - Finally, for generic objects, the "copy" module defines two functions for copying objects. copy.copy(x) returns a copy as shown by the above rules. copy.deepcopy(x) also copies the elements of composite objects. See the section on this module in the Library Reference Manual. 4.39. Q. How to implement persistent objects in Python? (Persistent == automatically saved to and restored from disk.) A. The library module "pickle" now solves this in a very general way (though you still can't store things like open files, sockests or windows), and the library module "shelve" uses pickle and (g)dbm to create presistent mappings containing arbitrary Python objects.. 5.3. Q. How can I execute arbitrary Python statements from C? A. The highest-level function to do this is run_command() which takes a single string argument which is executed in the context of module __main__ and returns 0 for success and -1 when an exception occurred (including SyntaxError). If you want more control, use run_string(); see the source for run_command() in Python/pythonrun.c. 5.4. Q. How can I evaluate an arbitrary Python expression from C? A. Call the function run_string() from the previous question with the start symbol eval_input; it then parses an expression, evaluates it and returns its value. See exec_eval() in Python/bltinmodule.c. 5.5. Q. How do I extract C values from a Python object? A. That depends on the object's type. If it's a tuple, gettuplesize(o) returns its length and gettupleitem(o, i) returns its i'th item; similar for lists with getlistsize(o) and getlistitem(o, i). For strings, getstringsize(o) returns its length and getstringvalue(o) a pointer to its value (note that Python strings may contain null bytes so strlen() is not safe). To test which type an object is, first make sure it isn't NULL, and then use is_stringobject(o), is_tupleobject(o), is_listobject(o) etc. 5.6. Q. How do I use mkvalue() to create a tuple of arbitrary length? A. You can't. Use t = newtupleobject(n) instead, and fill it with objects using settupleitem(t, i, o) -- note that this "eats" a reference count of o. Similar for lists with newlistobject(n) and setlistitem(l, i, o). Note that you *must* set all the tuple items to some value before you pass the tuple to Python code -- newtupleobject(n) initializes them to NULL, which isn't a valid Python value. 5.7. Q. How do I call an object's method from C? A. Here's a function (untested) that might become part of the next release in some form. It uses <stdarg.h> to allow passing the argument list on to vmkvalue(): object *call_method(object *inst, char *methodname, char *format, ...) { object *method; object *args; object *result; va_list va; method = getattr(inst, methodname); if (method == NULL) return NULL; va_start(va, format); args = vmkvalue(format, va); va_end(va); if (args == NULL) { DECREF(method); return NULL; } result = call_object(method, args); DECREF(method); DECREF(args); return result; } This works for any instance that has methods -- whether built-in or user-defined. You are responsible for eventually DECREF'ing the return value. To call, e.g., a file object's "seek" method with arguments 10, 0 (assuming the file object pointer is "f"): res = call_method(f, "seek", "(OO)", 10, 0); if (res == NULL) { ... an exception occurred ... } else { DECREF(res); } Note that since call_object() *always* wants a tuple for the argument list, to call a function without arguments, pass "()" for the format, and to call a function with one argument, surround the argument in parentheses, e.g. "(i)". 5.8. Q. How do I catch the output from print_error()? A. (Due to Mark Hammond): * in Python code, define an object that supports the "write()" method. * redirect sys.stdout and sys.stderr to this object. * call print_error, or just allow the standard traceback mechanism to work. Then, the output will go wherever your write() method sends it. 5.9. Q. How do I access a module written in Python from C? A. You can get a pointer to the module object as follows: module = import_module(" = getattr(module, "<attrname>"); Calling setattr(), to assign to variables in the module, also works. 5.10. Q. How do I interface to C++ objects from Python? A. Depending on your requirements, there are many approaches. Begin by reading the "Extending and Embedding" document (Doc/ext.tex, see also <URL:>). Realize that for the Python run-time system, there isn't a whole lot of difference between C and C++ -- so the strategy to build a new Python type around a C structure (pointer) type will also work for C++ objects. Automatic generation of interfaces between Python and C++ is still at the horizon -- parsing C++ header files requires an almost complete C++ parser, and many features aren't easily translated from C++ to Python: certain forms of operator oveloading, function overloading (best approached by a varargs function which explicitly type-checks its arguments), and reference arguments are just a number of features that are hard to translate correctly if at all. The hardest problem is to transparently translate the C++ class hierarchy to Python, so that Python programs derive classes from C++ classes. Given suitable constraints, this may be possible, but it would require more space than I have in this FAQ to explain how. In any case, you can get quite a bit done without this, using just the existing classes from Python. If this all seems rather daunting, that may be because it is -- C++ isn't exactly a baby to handle without gloves! However, people have accomplished amazing feats of interfacing between Python and C++, and a detailed question posted to the Python list is likely to elicit some interesting and useful responses. 6. Python's design ================== 6.1. Q. Why isn't there a switch or case statement in Python? A. You can do this easily enough with a sequence of if... elif... elif... else. There have been some proposals for switch statement syntax, but there is no consensus (yet) on whether and how to do range tests. 6.2. Q. Why does Python use indentation for grouping of statements? A. Basically I believe that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. Most people learn to love this feature after a while. Some arguments for it: - Since there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. I remember long ago seeing a C fragment like this: if (x <= y) x++; y--; z++; and staring a long time at it wondering why y was being decremented even for x > y... (And I wasn't a C newbie then either.) - Since there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are loads of different ways to place the braces (including the choice whether to place braces around single statements in certain cases, for consistency). over a program. Ideally, a function should fit on one basic tty screen (say, 20 lines). 20 lines of Python are worth a LOT more than 20 lines of C. This is not solely due to the lack of begin/end brackets (the lack of declarations also helps, and the powerful operations of course), but it certainly helps! 6.3. Q. Why are Python strings immutable? A. There are two. (Adapted from Jim Roskind) 6.4. Q. Why don't strings have methods like index() or sort(), like lists? A. Good question. Strings currently don't have methods at all (likewise tuples and numbers). Long ago, it seemed unnecessary to implement any of these functions in C, so a standard library module "string" written in Python was created that performs string related operations. Since then, the cry for performance has moved most of them into the built-in module strop (this is imported by module string, which is still the preferred interface, without loss of performance except during initialization). Some of these functions (e.g. index()) could easily be implemented as string methods instead, but others (e.g. sort()) can't, since their interface prescribes that they modify the object, while strings are immutable (see the previous question). 6.5. Q. Why does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))? A. Functions are used for those operations that are generic for a group of types and which should work even for objects that don't have methods at all (e.g. numbers, strings, tuples). Also, implementing len(), max(), min() as a built-in function is actually less code than implementing them as methods for each type. One can quibble about individual cases but it's really too late to change such things fundamentally now. 6.6. Q. Why can't I derive a class from built-in types (e.g. lists or files)? A. This is caused by the relatively late addition of (user-defined) classes to the language -- the implementation framework doesn't easily allow it. See the answer to question 4.2 for a work-around. This *may* be fixed in the (distant) future. 6.7. Q. Why must 'self' be declared and used explicitly in method definitions and calls? A. By asking this question you reveal your C++ background. :-) When I added classes, this was (again) the simplest way of implementing methods without too many changes to the interpreter. I borrowed the idea from Modula-3. It turns out to be very useful, for a variety of reasons. First, it makes. Second, it means that no special syntax is necessary if you want to explicitly reference or call the method from a particular class. In C++, if you want to use a method from base class that. Lastly,. 6.8. Q. Can't you emulate threads in the interpreter instead of relying on an OS-specific thread implementation? A. Unfortunately, the interpreter pushes at least one C stack frame for each Python stack frame. Also, extensions can call back into Python at almost random moments. Therefore a complete threads implementation requires thread support for C. 6.9. Q. Why can't lambda forms contain statements? A.'ll have to invent a name for the function -- but that's just a local variable to which the function object (which is exactly the same type of object that a lambda form yields) is assigned! 6.10. Q. Why don't lambdas have access to variables defined in the containing scope? A. Because they are implemented as ordinary functions. See question 4.5 above. 6.11. Q. Why can't recursive functions be defined inside other functions? A. See question 4.5 above. 6.12. Q. Why is there no more efficient way of iterating over a dictionary than first constructing the list of keys()? A. Have you tried it? I bet it's fast enough for your purposes! In most cases such a list takes only a few percent of the space occupied by the dictionary -- it needs only 4 bytes (the size of a pointer) per key -- a dictionary costs 8 bytes per key plus between 30 and 70 percent hash table overhead, plus the space for the keys and values -- by necessity all keys are unique objects and a string object (the most common key type) costs at least 18 bytes plus the length of the string. Add to that the values contained in the dictionary, and you see that 4 bytes more per item really isn't that much more memory... A call to dict.keys() makes one fast scan over the dictionary (internally, the iteration function does exist) copying the pointers to the key objects into a pre-allocated list object of the right size. The iteration time isn't lost (since you'll have to iterate anyway -- unless in the majority of cases your loop terminates very prematurely (which I doubt since you're getting the keys in random order). I don't expose the dictionary iteration operation to Python programmers because the dictionary shouldn't be modified during the entire iteration -- if it is, there's a very small chance that the dictionary is reorganized because the hash table becomes too full, and then the iteration may miss some items and see others twice. Exactly because this only occurs rarely, it would lead to hidden bugs in programs: it's easy never to have it happen during test runs if you only insert or delete a few items per iteration -- but your users will surely hit upon it sooner or later. 6.13. Q. Can Python be compiled to machine code, C or some other language? A.". Thus, the performance gain would probably be minimal. Internally, Python source code is always translated into a "virtual machine code" or "byte code" representation before it is interpreted (by the "Python virtual machine" or "bytecode interpreter"). In order to avoid the overhead of parsing and translating modules that rarely change over and over again, this byte code is written on a file whose name ends in ".pyc" whenever a module is parsed (from a file whose name ends in ".py"). When the corresponding .py file is changed, it is parsed and translated again and the .pyc file is rewritten. There is no performance difference once the .pyc file has been loaded will generally improve start-up time of Python scripts. If desired, the Lib/compileall.py module/script can be used to force creation of valid .pyc files for a given set of modules. If you are looking for a way to translate Python programs in order to distribute them in binary form, without the need to distribute the interpreter and library as well, have a look at the freeze.py script in the Tools/freeze directory. This creates a single binary file incorporating your program, the Python interpreter, and those parts of the Python library that are needed by your program. Of course, the resulting binary will only run on the same type of platform as that used to create it. Hints for proper usage of freeze.py: - the script must be in a file whose name ends in .py - you must have installed Python fully: make install make libinstall make inclinstall make libainstall 6.14. Q. Why doesn't Python use proper garbage collection? A. It's looking less and less likely that Python will ever get "automatic" garbage collection (GC). For one thing, unless this were added to C as a standard feature, it's a portability pain in the ass. And yes, I know about the Xerox library. It has bits of assembler code for *most* *common* platforms. Not for all. And although it is mostly transparent, it isn't completely transparent (when I once linked Python with it, it dumped core). "Proper" GC also becomes a problem when Python gets embedded into other applications. While in a stand-alone Python it may. Besides, the predictability of destructor calls in Python is kind of attractive. With GC, the following code (which is fine in current Python) will run out of file descriptors long before it runs out of memory: for file in <very long list of files>: f = open(file) c = file.read(1) Using the current reference counting and destructor scheme, each new assignment to f closes the previous file. Using GC, this is not guaranteed. Sure, you can think of ways to fix this. But it's not off-the-shelf technology. 7. Using Python on non-UNIX platforms ===================================== 7.1. Q. Is there a Mac version of Python? A. Yes, see the "mac" subdirectory of the distribution sites, e.g. <URL:>. 7.2. Q. Is there a DOS version of Python? A. Yes, see the "pc" subdirectory of the distribution sites, e.g. <URL:>. 7.3. Q. Is there a Windows 3.1(1) version of Python? A. Yes, also see the "pc" subdirectory of the distribution sites, e.g. <URL:>. You may also be able to run either of the Windows NT versions (see next question) if you have Microsoft's "win32s". 7.4. Q. Is there a Windows NT version of Python? A. There are two, both sporting DLL support for dynamic loading of Python modules, and extensions to access the Win32 GUI API. Mark Hammond <MHammond@cmutual.com.au> maintains an NT port which includes an interface to the Microsoft Foundation Classes and a Python programming environment using it that's written mostly in Python. See <URL:>. Jim Ahlstrom's WPY portable GUI runs on Windows NT and is modeled after the Microsoft Foundation Classes. Source and binaries are available in <URL:>. Sam Rushing <rushing@squirl.oau.org> once announced he knows how to build Python for the Windows NT on the DEC Alpha AXP. Note that currently there is no unified compilation environment for all NT platforms -- hopefully Microsoft will fix this with the release of Visual C++ 2.0. 7.5. Q. Is there a Windows 95 version of Python? A. The Windows NT versions might work, otherwise the Windows 3.1(1) version should work (isn't Windows 95 supposed to be backwards compatible?). 7.6. Q. Is there an OS/2 version of Python? A. Yes, also see the "pc" subdirectory of the distribution sites, e.g. <URL:>. 7.7. Q. Is there a VMS version of Python? A. Donn Cave <donn@cac.washington.edu> did a partial port. The results of his efforts are on public display in <<URL:>. Someone else is working on a more complete port, for details watch the list. 7.8. Q. What about IBM mainframes, or other non-UNIX platforms? A. I haven't heard about these, except I remember hearing about an OS/9 port and a port to Vxworks (both operating systems for embedded systems). If you're interested in any of this, go directly to the newsgroup and ask there, you may find exactly what you need. For example, a port to MPE/iX 5.0 on HP3000 computers was just announced, see <URL:>. 7.9. Q. Where are the source or Makefiles for the non-UNIX versions? A. The standard sources can (almost) be used. Additional sources can be found in the platform-specific subdirectories of the distribution. 7.10. Q. What is the status and support for the non-UNIX versions? A. I don't have access to most of these platforms, so in general I am dependent on material submitted by volunteers(*). However I strive to integrate all changes needed to get it to compile on a particular platform back into the standard sources, so porting of the next version to the various non-UNIX platforms should be easy. (*) For the Macintosh, that volunteer is me, with help from Jack Jansen <jack@cwi.nl>. 7.11. Q. I have a PC version but it appears to be only a binary. Where's the library? A. You still need to copy the files from the distribution directory "python/Lib" to your system. If you don't have the full distribution, you can get the file lib<version>.tar.gz from most ftp sites carrying Python; this is a subset of the distribution containing just those files, e.g. <URL:>. Once you have installed the library, you need to point sys.path to it. Assuming the library is in C:\misc\python\lib, the following commands will point your Python interpreter to it (note the doubled backslashes -- you can also use single forward slashes instead): >>> import sys >>> sys.path.insert(0, 'C:\\misc\\python\\lib') >>> For a more permanent effect, set the environment variable PYTHONPATH, as follows (talking to a DOS prompt): C> SET PYTHONPATH=C:\misc\python\lib 7.12. Q. Where's the documentation for the Mac or PC version? A. The documentation for the Unix version also applies to the Mac and PC versions. Where applicable, differences are indicated in the text.)? A. Use an external editor. On the Mac, BBEdit seems to be a popular no-frills text editor. I work like this: start the interpreter; edit a module file using BBedit; import and test it in the interpreter; edit again in BBedit; then use the built-in function reload() to re-read the imported module; etc. Regarding the same question for the PC, Kurt Wm. Hemr writes: "While anyone with a pulse could certainly figure out how to do the same on MS-Windows, I would recommend the NotGNU Emacs clone for MS-Windows. Not only can you easily resave and "reload()" from Python after making changes, but since WinNot auto-copies to the clipboard any text you select, you can simply select the entire procedure (function) which you changed in WinNot, switch to QWPython, and shift-ins to reenter the changed program unit."
http://www.faqs.org/faqs/python-faq/part1/
CC-MAIN-2014-52
refinedweb
12,363
66.13
In the very first post, I wrote simple Teamcenter login program. Following is the step-wise description of Teamcenterlogin program. All ITK programs must include tc/tc.h. It has the prototype for ITK_init_module and the definition of many standard datatypes, constants, and functions that almost every ITK program requires, such as tag_t and ITK_ok. Normal Execution of c\c++ program requires main() in same way ITK has ITK_User_main(int argc,char* argv[]) function from which main program gets executed. ITK_ok gives the return value of ITK functions .it is an integer value .If value return 0 means code runs successfully .and if value return other than 0 means code get failed. For login into Teamcenter we have two functions - ITK_auto_login() - Description: If the value of ITK_auto_login is set to true, the user does not need to enter in a user name, password, or group in the command line. Auto logon uses operating system user names and passwords to log on to Teamcenter. However, if the user wants to use a login name other than their operating system name, they may use the command line format. - Values : TRUE Enables automatic logon. FALSE Suppresses automatic logon. - ITK_init_module( userName, password, group); - If your program is called without the ITK_auto_login function, supply the arguments for user, password and group. If program fails to login then it returns non -zero value of ITK_ok Memory management in Teamcenter Data types in ITK programming Teamcenter ITK Programming uses same data types that are there in C\C++ languages. Example:- int ,char*, enum ,struct etc The new APIs use the char* type instead of fixed char array i.e char filesize[50] size in the input and output parameters. The new APIs use dynamic buffering to replace the fixed-size buffering method.fixed-size array got deprecated from TC 10.1 onwards. Special data types tag_t tag_t is special data type where All objects in the ITK are identified by tags of C type tag_t. It is a unique identifier of a reference (typed, untyped, or external ) An unset tag_t variable is assigned the NULLTAG for null value Tags may be compared by use of the C language == and != operators. An unset tag_t variable is assigned the NULLTAG null value Example :- tag_t itemTag if(itemTag==NULLTAG) or if(itemTag!=NULLTAG) Creation of DLL in ITK Teamcenter also comes with a set of include files that are similar with C include files. For example, instead of using: #include “string.h” You could use: #include <fclasses/tc_string.h> fclasses/tc_errno.h fclasses/tc_limits.h fclasses/tc_math.h fclasses/tc_stat.h fclasses/tc_stdarg.h fclasses/tc_stddef.h etc These include files are used in Teamcenter to insulate it from the operating system. We will more post on PLM TUTORIAL–>Teamcenter Customization in upcoming days. Kindly provide your valuable comment on below Comment section and also have you any question kindly ask to ASK QUESTION in FORUM. Our Team will try to provide the best workaround. 2 thoughts on “Explanation of ITK Login Program” this is really helpful to start customization ..can you please provide some more help for basic itk programing For ITK learning this is really really Helpful…Plz provide more itk programs ASAP. Thank you..
http://globalplm.com/explanation-of-itk-login-program/
CC-MAIN-2020-05
refinedweb
537
55.54
Created on 2010-06-22.12:59:57 by babelmania, last changed 2010-07-28.07:18:49 by babelmania. Given a java class that can work with PyArrays within the following methods: class MyJava { MyJava(int[]); int[] getArray(); PyObject __add__(PyObject); PyObject __radd__(PyObject); } Then the commutative property is lost: a=MyJava([1,1]) b=jarray.array([1,1],'i') a+b b+a # ERROR In Jython-2.1 this worked, but in Jython-2.5 an error is raised because only PyArray types can be considered in the hard-wired implementation of PyArray? I tried to add a new PyBuiltinMethod for PyArray, but that road seems also to be blocked becuse it is considered a builtin type. Possible workaround (other than hacking the Jython code) would also be appreciated. What is the exact exception/stacktrace? Better still, please provide the implementation of your class MyJava. /** * On request: BOGUS class to reflect the issue * @author jbakker */ public class MyJava21 { public MyJava21() { } public PyObject __add__(PyObject rhs) { System.out.println("this + "+rhs); return Py.java2py(this); } public PyObject __radd__(PyObject lhs) { System.out.println(lhs+" + this"); return Py.java2py(this); } } Jython 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54) [Java HotSpot(TM) 64-Bit Server VM (Apple Inc.)] on java1.6.0_20 Type "help", "copyright", "credits" or "license" for more information. >>> from jarray import array >>> from issue1622 import MyJava21 >>> x=array([1,1],'i') >>> y=MyJava21() >>> x+y Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can only append another array to an array >>> y+x this + array('i', [1, 1]) issue1622.MyJava21@6d56d7c8 Jython 2.1 on java1.6.0_20 (JIT: null) Type "copyright", "credits" or "license" for more information. >>> from jarray import array from issue1622 import MyJava21 x=array([1,1],'i') y=MyJava21() x+y y+x >>> >>> >>> >>> org.python.core.PyArray@b24044e + this issue1622.MyJava21@208cdf50 >>> this + org.python.core.PyArray@b24044e issue1622.MyJava21@208cdf50 Above a bogus java class and its behavior under Jython 2.1 and Jython 2.5.1 As you can see, commutative property is lost I am not sure if "commutative property is lost" is a right description here. The exception is thrown because org.jython.core.PyArray has performed some check in __add__(). One of them is final PyObject array___add__(PyObject other) { PyArray otherArr = null; if(!(other instanceof PyArray)) { throw Py.TypeError("can only append another array to an array"); } So from PyArray point of view, it cannot operate on your class MyJava21 I think it is a right behavior. In C python, you cannot just add anything to a list. For example, "1 + [2, 3]" will throw an example. The appearance of __add__ is commutative between your class and list probably arises because 1) no such check is enforced in jython 2.1, and 2) your code did not check if the operand can be added to MyJava21 Allow me to point out that overriding of __radd__ within my class has the desired behavior on other classes such as PyList In the example below 'MyJava' is actually an integer array of rank 1 (called Int1d). It allows, like in any scripting language to do operations like: x+1 == 1+x == x+y == y+x == x+z == z+x where e.g. x=Int1d([1,2]), y=array([1,1],'i'), z=[1,1] Suddenly blocking this mechanism on only a fraction (PyArray) of your classes alone does not make sense, and I consider that a bug. I agree that it's a bug. The whole point of __r*__ methods is to add support for operations in reversed situation where first class doesn't know about the second. If works perfectly fine in CPython: class Foo (object): def __add__(self, that): print self, that def __radd__(self, that): print self, that [] + Foo () Foo () + [] This never throws TypeError. The correct behavior for PyArray is to _return_ NotImplemented, not raise a TypeError. E.g. consider the following example: class Foo (object): def __add__(self, that): return NotImplemented def __radd__(self, that): return NotImplemented class Bar (object): def __add__(self, that): print self, that def __radd__(self, that): print self, that Foo () + Bar () Bar () + Foo () Foo () + 1 Class Foo is declared to never allow any addition whatsoever. However, Bar can override this with __radd__ method and it works fine. But the last statement fails, because neither Foo.__add__(int) nor int.__radd__(Foo) produces a result: Traceback (most recent call last): File "/home/developer/test/test.py", line 15, in <module> Foo () + 1 TypeError: unsupported operand type(s) for +: 'Foo' and 'int' Thanks babelmania and doublep for the explanation of __r*__. Yes, PyArray array___add__ and array__iadd__ were throwing TypeErrors preventing the __radd__ from being tried. Instead they need to return NotImplemented (actually null from Java methods exposed as MethodType.BINARY) fixed in 7082 Thank you!
http://bugs.jython.org/issue1622
CC-MAIN-2015-18
refinedweb
810
58.18
03 January 2012 08:06 [Source: ICIS news] By Loh Bowei ?xml:namespace> The prices of industrial-grade propylene glycol (PGI) on a CFR (cost & freight) northeast (NE)/southeast (SE) Asia basis were assessed at $1,650-1,680/tonne (€1,270-1,294/tonne) CFR on 23 December 2011, according to data from ICIS. The prices of pharmaceutical-grade propylene glycol (PG USP) were at $1,880-1,930/tonne CFR NE Asia and $1,910-1,940/tonne CFR SE Asia on the same day, the data showed. “It is really hard to predict PG prices now because the poor macroeconomic environment has affected people psychologically,” a downstream unsaturated polyester resins (UPR) maker from southeast Asia said. UPR is used in housing such as sinks and water tanks. “[The prices of] PG follow a country’s GDP [gross domestic product] because downstream UPR is used in the construction sector,” a Malaysian buyer said. A major PG USP is used in the production of personal care products, such as cosmetics, and in the food additives sector. Feedstock propylene oxide (PO) supply is expected to grow because of Dow Chemicals’ 390,000 tonne/year PO plant in “If This may weaken the prices of PG, but the demand forecast for the product in the first quarter is unclear and the current macroeconomic climate will result in an uncertain outlook, the trader said. However, regional end-users’ inventory levels of PGI are low because most players maintained minimal purchasing over the past few weeks in view of the eurozone debt crisis, which may increase prices, market sources said. “If demand [were to] pick up [next quarter], the prices [of PGI] will increase,” a southeast Asian buyer said. On the other hand, the PG USP market is generally more stable than that of PGI and the demand for the former is expected to be stable in the first quarter, a northeast Asian PG USP maker said. ($1 = €0.77) For more on propylene glycol,
http://www.icis.com/Articles/2012/01/03/9519597/outlook-12-asian-mpg-market-uncertain-in-q1-on-macroeconomic-woes.html
CC-MAIN-2014-52
refinedweb
332
54.05
Devel::GDB::Reflect - Reflection API for GDB/C++ use Devel::GDB; use Devel::GDB::Reflect; my $gdb = new Devel::GDB( -file => $foo ); my $reflector = new Devel::GDB::Reflect( $gdb ); print $gdb->get( "b foo.c:123" ); $gdb->print( "myVariable" ); implements the functionality used by the gdb++ script, which serves as a wrapper around GDB. You should probably familiarize yourself with the basic functionality of this script first, before diving into the gory details presented here. The following global variables control the behavior of the "print" method. The number of spaces to indent at each level of recursion. Defaults to 4. The maximum recursion depth. Defaults to 5. The maximum number of elements to show from a given container. Defaults to 10. Create a new Devel::GDB::Reflect instance. Takes a single parameter, an instance of Devel::GDB. When the constructor is invoked, it searches @INC for modules named Devel::GDB::Reflect::DelegateProvider::*, and recruits them as delegates. See "Delegates". $reflector->print( "myVar" ); Given a variable or expression, recursively print the contents of the referenced container. Specifically, this checks the type of the variable, iterates over the delegates to determine the best one, then uses that delegate to print out the contents of the container. The recursion is limited by $MAX_DEPTH, and for each container, the number of elements is limited by $MAX_WIDTH. Although this module is designed primarily for printing the contents of STL containers, it is fully extensible to support custom data types. The "print" method works by iterating over a set of delegates to determine how to print out a given variable. A delegate is a hash consisting of: A numeric value used to disambiguate which delegate to use when there is more than one to choose from. For example, the fallback delegate ( Devel::GDB::Reflect::DelegateProvider::Fallback) can print any data type, but has very low priority (-1000) to prevent it from being invoked unless no other delegate is available. A boolean value, 1 if the delegate is used to print a container that should be iterated (such as a vector), or 0 if it is used to print a single value (such as a string). If can_iterate is true, then the delegate's factory must provide has_next and print_next; otherwise, it must provide The string to print before and after the contents of the variable; defaults to "[" and "]" respectively. The string to print between elements within the variable; defaults to ",". Only makes sense with can_iterate is true. A boolean indicating whether or not to print a newline after printing the contents of the container. Typically this should be 1 (true) except for simple types. A sub taking a single parameter, $var (a C++ expression) and returning an object. This object is expected to contain either can_iterate is false) or has_next and print_next: Takes two parameters: $callback and $fh. Either prints the contents of $var directly to the file handle $fh, or invokes $callback to print $var recursively. Like Java's Iterator.hasNext(), this function is called to determine whether or not there are any items remaining to print out. Prints out the current element and advances the iterator (similarly again to Java's Iterator.next()). Like print(), this function takes two parameters, $callback and $fh, and either prints directly to $fh or invokes $callback recursively. A delegate provider is an object containing a method called get_delegates. This module searches for delegate providers by looking in @INC for modules by the name of Devel::GDB::Reflect::DelegateProvider::*. The get_delegates method takes three parameters ($type, $var, $reflector): a type, a C++ expression, and an instance of Devel::GDB::Reflect. The $type is a hash, containing: fullname: the full name of the type, including its namespace and template specialization, e.g. class std::vector<int,std::allocator<int> > *. This type should never be passed to GDB; use quotenameinstead. shortname: the type name without the template or namespace, e.g. vector. quotename: the full name, properly quoted to pass to GDB, e.g. class 'std::vector<int,std::allocator<int> >' *. template: a ref to an array of types, denoting the template parameters (if any). In the above example, $type-{template}->[1]> would contain { fullname => "std::allocator<int>", shortname => "allocator", quotename => "'std::allocator<int>'", template => ... } Antal Novak afn@cpan.org
http://search.cpan.org/~afn/Devel-GDB-Reflect-0.2/lib/Devel/GDB/Reflect.pm
CC-MAIN-2015-35
refinedweb
709
56.05
BlobStashBlobStash BlobStash is both a content-addressed blob store and a key value store accessible via an HTTP API. Key value pairs are stored as "meta" blobs, this mean you can build application on top of BlobStash without the need for another database. Still in early development. ManifestoManifesto You can store all your life's data in BlobStash, from raw blobs to full file systems. To store your data and build app, you can use a combination of: - The Blob store - The Key-Value store - The JSON document store - Build Lua app that runs inside BlobStash, will be accessible over HTTP and can access all the previous APIs. Everything is private by default, but can support public and semi-private sharing (via Lua scripting and Hawk bewit). Projects built on top of BlobStashProjects built on top of BlobStash Make a pull request if your project uses BlobStash as data store or if you built an open-source Lua app for BlobStash. FeaturesFeatures - All data you put in it is deduplicated (thanks to content-addressing). - Create app with a powerful Lua API (like OpenResty) - Optional encryption (using go.crypto/nacl secretbox) - BLAKE2b as hashing algorithm for the content-addressed blob store - Backend routing, you can define rules to specify where blobs should be stored ("if-meta"...) - Fine-grained permissions - TLS and HTTP2 support - A full featured Go client using the HTTP API - Can be embedded in your Go program (embedded client) Getting startedGetting started $ go get github.com/tsileo/blobstash/cmd/blobstash $ $GOPATH/bin/blobstash INFO[02-25|00:05:40] Starting blobstash version 0.0.0; go1.6beta1 (darwin/amd64) INFO[02-25|00:05:40] opening blobsfile for writing backend=blobsfile-/Users/thomas/var/blobstash/blobs INFO[02-25|00:05:40] server: HTTP API listening on 0.0.0.0:8050 Blob storeBlob store You can deal directly with blobs when needed using the HTTP API, full docs here. $ curl -F "c0f1480a26c2fd4deb8e738a52b7530ed111b9bcd17bbb09259ce03f129988c5=ok" Key value storeKey value store Updates on keys are store in blobs, and automatically handled by BlobStash. Perfect to keep a mutable pointer. $ curl -XPUT -d value=v1 {"key":"k1","value":"v1","version":1421705651367957723} $ curl {"key":"k1","value":"v1","version":1421705651367957723} ExtensionsExtensions BlobStash comes with few bundled extensions making it easier to build your own app on top of it. Extensions only uses the blob store and the key value store, nothing else. Document StoreDocument Store A JSON document store running on top of an HTTP API. Support a subset of the MongoDB Query language. JSON documents are stored as blobs and the key-value store handle the indexing. Perfect for building app designed to only store your own data. See here for more details. ClientClient Lua App/ScriptingLua App/Scripting You can create app, custom API endpoint running Lua script (like OpenResty). See the Lua API here. ExamplesExamples - BlobsBin, a pastebin like service. - A Markdown-powered blog app - Sharing script - Lua iCal feed script - IoT data store (temp/humid with avg) - Pebble app backend example BackendBackend Blobs are stored in a backend. The backend handle operations: - Put - Exists - Get - Enumerate Delete is not implemented for all backends. Available backendsAvailable backends - BlobsFile (local disk, the preferred backend) - AWS S3 (useful for secondary backups) - Mirror (mirror writes to multiple backend) - A remote BlobStash instance (working, bot full-featured) - Fallback backend (store failed upload locally and try to re-upload them periodically) AWS Glacier (only as a backup, development paused) Submit a pull request! You can combine backend as you wish, e.g. Mirror( Encrypt( S3() ), BlobsFile() ). RoutingRouting You can define rules to specify where blobs should be stored, depending on whether it's a meta blob or not, or depending on the namespace it come from. Blobs are routed to the first matching rule backend, rules order is important. [ [["if-ns-myhost", "if-meta"], "customHandler2"], ["if-ns-myhost", "customHandler"], ["if-meta", "metaHandler"], ["default", "blobHandler"] ] The minimal router config is: [["default", "blobHandler"]] Embedded modeEmbedded mode package main import ( "github.com/tsileo/blobstash/server" ) func main() { blobstash := server.New(nil) blobstash.SetUp() // wait till all meta blobs get scanned blobstash.TillReady() bs := blobstash.BlobStore() kvs := blobstash.KvStore() blobstash.TillShutdown() } Roadmap / IdeasRoadmap / Ideas - Bind a Lua app to root ( /) - Enable vendoring of deps - A blobstash-syncsubcommand - Fine grained permission for the document store - A File extension with tree suport (files as first-class citizen) - Display mutation history for the docstore document ( /{doc _id}/history) - A lua module for nacl box? - A Lua module for the document store - Find a way to handle/store? app logs - A better template module for Lua app -> load a full directory as an app - Integrate with Let's Encrypt (via lego) - Snappy encoding support for the HTTP blobstore API - A slave blobstash mode (e.g. for blog/remote apps) - A Lua LRU module - A better documentation - A web interface? - An S3-like HTTP API to store archive? - Support OTP authentication (session cookies) for the docstore API (yubikey)? - Fill an issue! ContributionContribution Pull requests are welcome but open an issue to start a discussion before starting something consequent. Feel free to open an issue if you have any ideas/suggestions! DonationDonation BTC 12XKk3jEG9KZdZu2Jpr4DHgKVRqwctitvj LicenseLicense Copyright (c) 2014-2015 Thomas Sileo and contributors. Released under the MIT license.
https://libraries.io/go/github.com%2Ftsileo%2Fblobstash%2Fpkg%2Fhashutil
CC-MAIN-2020-40
refinedweb
872
55.34
Web services enable applications to interact with one another over the Web in a platform-neutral, language independent environment. In a typical Web services scenario, a business application sends a request to a service at a given URL by using the HTTP protocol. The service receives the request, processes it, and returns a response. You can incorporate calls to external Web services in applications developed in Oracle Application Express. Web services in Oracle Application Express are based on SOAP (Simple Object Access Protocol). SOAP is a World Wide Web Consortium (W3C) standard protocol for sending and receiving requests and responses across the Internet. SOAP messages can be sent back and forth between a service provider and a service user in SOAP envelopes. Web services are called from within an Oracle Application Express application by: Using the Universal Description, Discovery, and Integration (UDDI) registry Manually providing the WSDL URL This tutorial illustrates the later method. Topics in this section include: About Creating Web Service References Creating a New Application Specifying an Application Proxy Server Address Creating a Web Service Reference from a WSDL Creating a Web Service Reference Manually Note:The SOAP 1.1 specification is a W3C note. (The W3C XML Protocol Working Group has been formed to create a standard that will supersede SOAP.) For information about Simple Object Access Protocol (SOAP) 1.1 see: See Also:"Implementing Web Services" in Oracle Database Application Express User's Guide To utilize Web services in Oracle Application Express, you create a Web service reference using a wizard. When you create the Web reference, you can follow one of these methods: You supply the URL to a WSDL document. The wizard then analyzes the WSDL and collects all the necessary information to create a valid SOAP message. The wizard provides a step where you can locate a WSDL using the Universal Description, Discovery, and Integration (UDDI) registry. A UDDI registry is a directory where businesses register their Web services. You search for the WSDL by entering either a service or business name. You supply the relevant information on how to interact with the Web reference, including the SOAP request envelope, and create the Web reference manually. This tutorial describes the second method, creating a Web service reference manually. First, create a new application. To create an application: On the Workspace home page, click the Application Builder icon. On the Application Builder home page, click Create. For Method, select Create Application, and click Next. For Name: For Name - Enter Web Services. Accept the remaining defaults and click Next. Add a blank page: Under Select Page Type, accept the default, Blank. In Page Name, enter Web Services and then click Add Page. The new page appears in the list at the top of the page. Click Next. For Tabs, accept the default, One Level of Tabs, and click Next. For Shared Components, accept the default, No, and click Next. For Attributes, accept the default for Authentication Scheme, Language, and User Language Preference Derived From and click Next. For User Interface, select Theme 2 and click Next. Review your selections and click Create. The Application home page appears. If your environment requires a proxy server to access the Internet, you must specify a proxy server address on the Application Attributes page before you can create a Web service reference. To specify a proxy address: On the Application home page, click Shared Components. Under Application, click Definition. Under Name, enter the proxy server in the Proxy Server field. Click Apply Changes. The Application home page appears. In this exercise, you create a Web service by supplying the location of a WSDL document to a Web service. You then create a form and report for displaying movie theaters and locations. Note:The following exercise is dependent upon the availability of the specified Web service ultimately invoked. If the Web service is unavailable, you may experience difficulties completing this exercise. To create a new Web reference by supplying the WSDL location: On the Application home page, click Shared Components. The Shared Components page appears. Under Logic, select Web Service References. Click Create When prompted whether to search a UDDI registry to find a WSDL, select No and click Next. In the WSDL Location field enter: Click Next. A summary page appears describing the selected Web service. Click Create Reference. The Create Web Service Reference page appears. The Web service reference for MovieInformation is added to the Web Service References Repository. Next, you need to create a page that contains a form and report to use with your Web Service Reference. To create a form and report after creating a Web Service Reference: On the Create Web Service Reference success page, select Create Form and Report on Web Service. For Choose Service and Operation: Web Service Reference - Select MovieInformation. Operation - Select GetTheatersAndMovies. Click Next. For Page and Region Attributes: Form Region Title - Change to Theater Information. Accept the other defaults and click Next. For Input Items: For P2_ZIPCODE and P2_RADIUS, accept the default, Yes, in the Create column. For P2_ZIPCODE, change the Item Label default to ZIP Code. Click Next. For Web Service Results: Temporary Result Set Name (Collection) - Accept the default. Result Tree to Report On - Select Theater (tns:Theater). Click Next. For Result Parameters, select all the parameters and click Finish. Click Run Page. If prompted to log in, enter the user name and password for your workspace and click Login. A form and report resembling Figure 7-1 appear. Notice that the Theater Information Form at the top of the page contains a data entry field and a submit button, but the Results Report does not contain any data. Figure 7-1 Theater Information Form and Report without Data To test the form, enter 43221 in the ZIP Code field and 5 in the Radius field. Then click Submit. The report at the bottom of the page should resemble Figure 7-2. The report lists the names and addresses of movie theaters matching the entered ZIP code and radius. Figure 7-2 Theater Information Report Showing Resulting Data In this exercise, you create a Web reference by supplying information about the Web service and using the manual facility. Manual Web references are created by visually inspecting the WSDL document as well as using a tool to determine the SOAP envelope for the Web service request. Topics in this section include: Create a Web Service Reference Manually Create a Page to Call the Manual Web Service Create Items for ZIP Code and Radius Create a Process to Call the Manually Created Web Reference Create a Report on the Web Service Result To create a Web reference manually, you will copy code from the WSDL for a service called MovieInformation. Please note the example settings provided in the following steps are based on the MovieInformation service at the time this document was released.. In the Name field, enter Movie Info. Locate the endpoint of the MovieInformation service: Open the WSDL by going to: In the WSDL, find the location attribute of the soap:address element, which is a child of the port element. You can search for the following term within the code: soap:address location. At the time of this release, it was this attribute: In the URL field on the Create/Edit Web Service page, enter the endpoint of the MovieInformation service you located. For example: Locate the SOAP action for the GetTheatersAndMovies operation: If necessary, open the WSDL again. See Step 7a. In the WSDL, find the soapAction attribute of the soap:operation element, which is a child of the operation element that has a name attribute of GetTheatersAndMovies. You can search for the following term within the code: soap:operation soapAction. At the time of this release, it was this attribute: In the Action field on the Create/Edit Web Service page, enter the SOAP action you located. For example: In the SOAP Envelope field on the Create/Edit Web Reference page, enter the xml code representing the SOAP Request message. For example: <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns: <soap:Body> <tns:GetTheatersAndMovies> <tns:zipCode>#ZIP#</tns:zipCode> <tns:radius>#RADIUS#</tns:radius> </tns:GetTheatersAndMovies> </soap:Body> </soap:Envelope> You can use a SOAP message generating tool, such as MindReef, to construct a valid SOAP Request for a given Web service. In the Store Response in Collection field, enter MOVIE_RESULTS. This is where the response from the Web service will be stored. The Create/Edit Web Service page should resemble Figure 7-3. Figure 7-3 Create/Edit Web Service Page Click Create. The Web Service References page appears, showing Movie Info in the list. To test the Web service: On the Web Service References page, click the Test icon next to the Movie Info Web reference. The Web Services Testing page appears. Note View must be set to Details, otherwise the Test icon is not displayed. In the SOAP Envelope field, replace #ZIP# with 43221 and #RADIUS# with 5. Click Test. Review the Result field and note the following about the response: The base node in the return envelope is called: GetTheatersAndMoviesResponse The namespace for the message is: The XPath to the interesting parts of the response under the result element is: /GetTheatersAndMoviesResponse/GetTheatersAndMoviesResult/Theater/Movies/Movie The interesting elements in the results are called: Name Rating RunningTime ShowTimes Next, you want to create a page to call the manual Web Service. To create a page to call the manual Web service: Click the Application breadcrumb link. On the Application home page, click Create Page. For Page, select Blank Page and click Next. Accept the default for the Page Number and click Next. In Name, enter Find Movies and click Next. For Tabs, accept the default, No, and click Next. Click Finish. On the Success page, click Edit Page. On the Page Definition, locate the Regions section. Click the Create icon. For Region, select HTML and click Next. Select HTML as the HTML region container and click Next. In the Title field, enter Movie Information and click Next. Click Create Region. Next, you want to add a Submit button to the region to initiate a search from the page. To create a Submit button: On the Page Definition, click the Create icon in the Buttons section. For Button Region, accept the default, Movie Information, and click Next. For Button Position, accept the default, Create a button in a region position, and click Next. For Button Attributes, enter SUBMIT in the Button Name and click Next. For Button Template, accept the default, Button, and click Next. For Display Properties, select Region Template Position #CREATE# from the Position list and click Next. In the Branch to Page field, select Find Movies from the list. The page number appears in the field. Click Create Button. Next, you want to create two items where users can enter a search term. To create the ZIP Code item: On the Find Movies Page Definition, click the Create icon in the Items section. For Item Type, select Text and click Next. For Text Control Display Type, accept the default, Text Field, and click Next. For Display Position and Name, specify the following: Item Name - Enter ZIP. The Movie Info Web Service Reference defines the zip code sent in the SOAP Request as #ZIP#. Therefore, this Item Name must be ZIP in order for it's value to be substituted into the SOAP Request sent to the Web Service. Region - Accept the default, Movie Information. Click Next. In the Label field, replace the existing text with ZIP Code and click Next. Click Create Item. To create the Radius item: On the Find Movies Page Definition, click the Create icon in the Items section. For Item Type, select Text and click Next. For Text Control Display Type, accept the default, Text Field, and click Next. For Display Position and Name, specify the following: Item Name - Enter RADIUS. The Movie Info Web Service Reference defines the radius sent in the SOAP Request as #RADIUS#. Therefore, this Item Name must be RADIUS in order for it's value to be substituted into the SOAP Request sent to the Web Service. Region - Accept the default, Movie Information. Click Next. In the Label field, enter Radius and click Next. Click Create Item. Next, you want to create a process that calls the manually created Web reference. To create a process to call the manually created Web reference: On the Find Movies Page Definition, click the Create icon in the Processes section. For Process Type, select Web Services and click Next. In the Name field, enter Call Movie Info and click Next. From the Web Service Reference list, select Movie Info and click Next. In the Success Message area, enter Called Movie Info. In the Failure Message area, enter Error calling Movie Info and click Next. From the When Button Pressed list, select SUBMIT and click Create Process. Next, you want to add a report that displays the results of the called Web service. To create a report on the Web service result: On the Find Movies Page Definition, click the Create icon in the Regions section. Select Report and click Next. For Region, select Report on collection containing Web service result and click Next. In the Title field, enter Search Results and click Next. For Web Reference Type, select Manually Created and click Next. For Web Reference Information, specify the following: Web Service Reference - Select Movie Info from the list. SOAP Style - Select Document. Message Format - Select Literal. Note that these two attributes can be determined by manually inspecting the WSDL document for the service. Result Node Path - Enter: /GetTheatersAndMoviesResponse/GetTheatersAndMoviesResult/Theater/Movies/Movie Message Namespace - Enter: Note that you reviewed both the Result Node Path and Message Namespace when testing the service. Click Next. In the first four Parameter Names, enter Name, Rating, RunningTime, and ShowTimes, and click Create SQL Report. To test the page: Click Run Page. If prompted to log in, enter your workspace user name and password. The Movie Information form appears, as shown in Figure 7-4. Figure 7-4 Movie Information Form with No Data In the ZIP Code and Radius fields, enter information and click Submit. The results appear in the Search Results area.
http://docs.oracle.com/cd/E11882_01/appdev.112/e11945/web_serv.htm
CC-MAIN-2013-20
refinedweb
2,379
57.16
bsearch_s Performs a binary search of a sorted array. This is version of bsearch with security enhancements as described in Security Enhancements in the CRT. Parameters - key Object to search for. - base Pointer to base of search data. - num Number of elements. - width Width of elements. - compare Callback function that compares two elements. The first argument is the context pointer. The second argument is a pointer to the key for the search. The third argument is a pointer to the array element to be compared with key. - context A pointer to an object that can be accessed in the comparison function. bsearch_s returns a pointer to an occurrence of key in the array pointed to by base. If key is not found, the function returns NULL. If the array is not in ascending sort order or contains duplicate records with identical keys, the result is unpredictable. If invalid parameters are passed to the function, the invalid parameter handler is invoked as described in Parameter Validation. If execution is allowed to continue, errno is set to EINVAL and the function returns NULL. For more information, see errno, _doserrno, _sys_errlist, and _sys_nerr. The bsearch_s_s calls the compare routine one or more times during the search, passing the context pointer and pointers to two array elements on each call. The compare routine compares the elements, and then returns one of the following values: The context pointer may be useful if the searched data structure is part of an object, and the compare function needs to access members of the object. The compare function may cast the void pointer into the appropriate object type and access members of that object. The addition of the context parameter makes bsearch_s more secure since additional context may be used to avoid reentrancy bugs associated with using static variables to make data available to the compare function. For additional compatibility information, see Compatibility in the Introduction. This program sorts a string array with qsort_s, and then uses bsearch_s to find the word "cat". // crt_bsearch_s.cpp // This program uses bsearch_s to search a string array, // passing a locale as the context. // compile with: /EHsc #include <stdlib.h> #include <stdio.h> #include <search.h> #include <process.h> #include <locale.h> #include <locale> #include <windows.h> using namespace std; // The sort order is dependent on the code page. Use 'chcp' at the // command line to change the codepage. When executing this application, // the command prompt codepage must match the codepage used here: #define CODEPAGE_850 #ifdef CODEPAGE_850 #define ENGLISH_LOCALE "English_US.850" #endif #ifdef CODEPAGE_1252 #define ENGLISH_LOCALE "English_US.1252" #endif // The context parameter lets you create a more generic compare. // Without this parameter, you would have stored the locale in a // static variable, thus making it vulnerable to thread conflicts // (if this were a multithreaded program). int compare( void *pvlocale, char **str1, char **str2) { char *s1 = *str1; char *s2 = *str2; locale& loc = *( reinterpret_cast< locale * > ( pvlocale)); return use_facet< collate<char> >(loc).compare( s1, s1+strlen(s1), s2, s2+strlen(s2) ); } int main( void ) { char *arr[] = {"dog", "pig", "horse", "cat", "human", "rat", "cow", "goat"}; char *key = "cat"; char **result; int i; /* Sort using Quicksort algorithm: */ qsort_s( arr, sizeof(arr)/sizeof(arr[0]), sizeof( char * ), (int (*)(void*, const void*, const void*))compare, &locale(ENGLISH_LOCALE) ); for( i = 0; i < sizeof(arr)/sizeof(arr[0]); ++i ) /* Output sorted list */ printf( "%s ", arr[i] ); /* Find the word "cat" using a binary search algorithm: */ result = (char **)bsearch_s( &key, arr, sizeof(arr)/sizeof(arr[0]), sizeof( char * ), (int (*)(void*, const void*, const void*))compare, &locale(ENGLISH_LOCALE) ); if( result ) printf( "\n%s found at %Fp\n", *result, result ); else printf( "\nCat not found!\n" ); } Sample Output cat cow dog goat horse human pig rat cat found at 002F0F04
http://msdn.microsoft.com/en-us/library/2w9185b8(v=vs.80).aspx
CC-MAIN-2014-35
refinedweb
616
56.35
MP(3X) MP(3X) NAME mp, madd, msub, mult, mdiv, mcmp, min, mout, pow, gcd, rpow, itom, xtom, mtox, mfree - multiple precision integer arithmetic SYNOPSIS #include <<mp.h>> madd(a, b, c) MINT *a, *b, *c; msub(a, b, c) MINT *a, *b, *c; mult(a, b, c) MINT *a, *b, *c; mdiv(a, b, q, r) MINT *a, *b, *q, *r; mcmp(a,b) MINT *a, *b; min(a) MINT *a; mout(a) MINT *a; pow(a, b, c, d) MINT *a, *b, *c, *d; gcd(a, b, c) MINT *a, *b, *c; rpow(a, n, b) MINT *a, *b; short n; msqrt(a, b, r) MINT *a, *b, *r; sdiv(a, n, q, r) MINT *a, *q; short n, *r; MINT *itom(n) short n; MINT *xtom(s) char *s; char *mtox(a) MINT *a; void mfree(a) MINT *a; DESCRIPTION These routines perform arithmetic on integers of arbitrary length. The integers are stored using the defined type MINT. Pointers to a MINT should be initialized using the function itom(), which sets the initial value to n. Alternatively, xtom() may be used to initialize a MINT from a string of hexadecimal digits. mfree() may be used to release the storage allocated by the itom() and xtom() routines. madd(), msub() and mult() assign to their third arguments the sum, dif- ference, and product, respectively, of their first two arguments. mdiv() assigns the quotient and remainder, respectively, to its third and fourth arguments. sdiv() is like mdiv() except that the divisor is an ordinary integer. msqrt produces the square root and remainder of its first argument. mcmp() compares the values of its arguments and returns 0 if the two values are equal, a value greater than 0 if the first argument is greater than the second, and a value less than 0 if the second argument is greater than the first. rpow raises a to the nth power and assigns this value to b. pow() raises a to the bth power, reduces the result modulo c and assigns this value to d. min() and mout() do decimal input and output. gcd() finds the greatest com- mon divisor of the first two arguments, returning it in the third argu- ment. mtox() provides the inverse of xtom(). To release the storage allocated by mtox(), use free() (see malloc(3V)). Use the -lmp loader option to obtain access to these functions. DIAGNOSTICS Illegal operations and running out of memory produce messages and core images. FILES /usr/lib/libmp.a SEE ALSO malloc(3V) 7 September 1989 MP(3X)
http://modman.unixdev.net/?sektion=3&page=mout&manpath=SunOS-4.1.3
CC-MAIN-2017-30
refinedweb
425
68.1
User Name: Published: 23 Jul 2008 By: Brian Mains Brian Mains overviews the client portion of the ASP.NET AJAX framework.. One important note to the reader: the ASP.NET AJAX framework exists in both a server and client form. In this article, I’m mentioning the client form mostly, which I’ll refer to as the ASP.NET AJAX client library, client library, or JavaScript library. JavaScript is an object-oriented language, but the language makes it a challenge because it’s very flexible, while still being case-sensitive (which can cause object referencing issues due to typos). To create an object’s definition (or class) in JavaScript, use the following: The following code example defines a constructor for the Nucleo.JSObject object. JavaScript makes use of the new keyword to instantiate a new instance of the object’s definition. For performance reasons, it is recommended that all local variables be defined in the constructor’s definition, even if they are used in the prototype. In the .NET world, Nucleo would be the namespace and JSObject would be the class name; however, in plain-old JavaScript, this is defining one object as Nucleo.JSObject. Nucleo isn’t recognized as a namespace per-se. new In .NET, each class has a definition, where it defines properties, methods, and events. A similar approach for this exists in the ASP.NET AJAX framework through the use of a prototype. The prototype defines the class definition that a JavaScript object instance uses whenever it is instantiated through the constructor; the prototype construct is already supported in the JavaScript language. By its very nature, JavaScript doesn’t really support properties or events the same way .NET does. In JavaScript, most structural definitions use methods; the constructor is essentially a method, and the definition is a method as well. In the examples included, we’re going to be essentially working with methods, which use different constructs about that method. Let’s begin to look at these constructs. In the Figure 1 example, the code sample illustrates the definition of a constructor and class using JavaScript without the ASP.NET AJAX framework. Let’s take a look at our first example of how the ASP.NET AJAX library extends this to include support for namespaces: Let’s look at what is new: registerNamespace registerClass One item I didn’t include in the previous section was the use of Sys.Component. Sys.Component is a class that Nucleo.JSObject inherits from in this example (which most objects in the client library inherit from), although no inheritance is also allowed. The ASP.NET AJAX approach supports inheritance, as well as implementing interfaces. Both are applied using the registerClass method in the previous code segment; interfaces happen to be the third parameter (if supplied). Let’s take a look at an interface; this implementation requires some methods be implemented by the class that consumes it: Our Nucleo.JSObject class can now define Nucleo.IJSObject for the third parameter of the registerClass method, which states that the JSObject definition implements the required methods defined in IJSObject. In an interface definition, nothing gets implemented; rather, only the signature needs defined. Note that the methods of the interface throw an exception. One of the new features added is the ability to throw specific errors using the Error object. The Error object defines a preexisting set of exceptions at your disposal, each with different sets of parameters, specifying the message, value in error, etc. In JavaScript, the concept of a property is essentially a method with a get_ or set_ prefix. For instance, below is the definition of a property. Remember that previously I said that a property’s definition would be in the prototype. get_ set_ The body of the property getter/setter works with the _text local variable stored the underlying data. Remember that the _text variable is defined in the constructor. Using the property in JavaScript is shown below: _text Although JavaScript uses the get_ and set_ prefixes, these are omitted when referencing them from the server-side code (we’ll talk about this later). Let’s quickly look at enumerations, though I won’t delve too deeply into it: Values for the enumeration are in a key/value pair separated by a colon. This is pretty self explanatory, so I’ll skip the description. Events are exposable through a new object: EventHandlerList. Luckily, any component inheriting from Sys.Component (which includes any custom AJAX control or extender) can use the get_events property and use the ASP.NET AJAX events architecture. get_events The EventHandlerList object works like a dictionary object. Let’s look at an example of how an event handler gets registered, unregistered, and raised. This is the typical approach used in the prototype definition: As you can see, events use the same concept of defining a handler for an event; however, the approach taken to register for events is different. An event is similar to a dictionary, where the event is simply a string name in the dictionary, and the handler as the value. Events can be raised by getting an instance of the event, and triggering it, using the event signature. In the previous example, the myEvent event uses an object parameter (the sender or object raising the event), and an event argument (another new feature in the client library). myEvent I intentionally left out methods in the ASP.NET AJAX architecture, simply because everyone who develops JavaScript code should know what a method, or function, is. Taking this a step further, it’s possible to define static methods on objects as well. To define a static method, assign the method to the object definition itself and not the prototype. Below is an example of a static method: By separating the definition of properties, and providing the registration of classes, namespaces, interfaces, etc, the ASP.NET AJAX framework makes JavaScript come closer to the .NET framework. Depending on your view of the .NET framework determines whether you believe that’s a good thing or a bad thing, but here’s why I believe it’s a good thing: it’s creating a managed object model client-side JavaScript code and for ASP.NET controls. In the perspective of custom controls, these controls can manage themselves on the client side as well as the server side. There are many areas where these concepts can be applied to, not being limited to custom controls. Custom controls and extenders is a big topic that this can be applied to. In thinking about it from this aspect only, imagine your controls having the ability to tap into an event lifecycle on the client-side. Right now, many developers write manual JavaScript to change how the interface works. For instance, developers show or hide a control based upon a button click. This has to be coded manually, but in ASP.NET AJAX, this can be done using an extender, which makes the developer’s life easy. I’ve developed such a component with the greatest of ease. This is the goal of the ASP.NET AJAX framework, along with the AJAX Data Controls framework () that you’ve seen discussed here on. Over the next few years, there is going to be a large shift in control development in the Microsoft realm, leveraging more of this control architecture on the client side, rather than the server side. Let’s look at an example that puts this all in perspective. We are going to look at creating a component (that inherits from Sys.Component), and uses all of the features above. The script below starts by defining the namespace and class definition: The prototype is filled in later. For now, we have a skeleton of a MenuItem class that represents an item in a menu (theoretically in a web control, but for now I’m working very generically). This class defines a constructor that takes a text definition to show as the menu item’s text. Notice the call to initializeBase; this method is similar to using MyBase.New() in VB.NET; it calls the base class’s constructor. This class also inherits from the Sys.Component base class, which is already defined in the client library. initializeBase MyBase.New() I don’t know if you noticed, but in all my examples, the registerNamespace call is repeated over and over again. This is to be expected, and is the process that the ASP.NET AJAX framework, and the AJAX Control Toolkit, uses. Even though there are multiple registrations, this doesn’t cause a problem. To expand on the class further, let’s add some properties to the prototype: Each property has a getter and setter. The getter simply returns the value, while the setter compares the value for differences and raises a property changed event when there are changes to the value (this method is defined in the Sys.Component class). Next, we move to methods. The following method is defined for the class: However, let’s add some event features to this component. Whenever the toggleVisibility method gets executed, the visibilityToggled event will fire. To do this, we need some methods to work with events. Remember that the Sys.Component base class defines a get_events() property. This property is of type EventHandlerList that contains all of the handlers for the respective event. Below are the methods that work with events. Again, these methods are defined in the prototype. toggleVisibility visibilityToggled get_events() Now, outside objects can register for the visibilityToggled event, and can act upon it. In this case, the toggleVisibility method can raise it using the following definition: Let’s take this a step further by adding an interface. The following interface definition is below, and will be added to the registerClass call: This interface definition defines the same properties and method already defined in the MenuItem class. All that’s required of us is to add it to the class registration, shown below: And now we have an AJAX component that we can use in our .NET applications. To use this component in our ASP.NET pages, add the script to the ScriptManager’s script references, or include it with another component. This article is the first of a series of AJAX introduction articles. The building blocks you’ve seen in this article are built upon in subsequent articles. The ASP.NET AJAX framework defines new constructs for use in custom development.
http://dotnetslackers.com/articles/ajax/ASP-NET-AJAX-Development-Approach-Part-1.aspx
crawl-002
refinedweb
1,740
55.84
Java program reading from 2 files. so i'm creating a program for u.s population by state per 2010 census. Below is the question: create a program that will read from two files and fills two HashMaps to find the population (according to the 2010 census) of an individual state or the United States. There will be 2 text files in your program: The name of the states and their abbreviations, is a comma separated file - name,abbreviation. The abbreviations for each state and their respective population, is a tab separated file – abbreviation(tab) integer. You should read the first file into a HashMap that can handle the types. and then you should read the second file into a HashMap that can handle those types. You should create a menu driven system that requests either the name or the abbreviation for a state. If the user enters a name of the state, your program should find the abbreviation for that state (using the first HashMap) and then call the second HashMap to find the population. or if the user enters the abbreviation, your program should look up the population directly. Please display the results of the request in a clear and concise manner. If the user simply requests the population of the United States you may use the second HashMap and sum the values for all of the states and display that directly. i created a menu and then created 2 hash maps but i don't know how to get the user input to search only for one key. At this point my hash map reads everything from the file instead of just what the user enters. Print Menu: public class Main { //public static void main(String[] args) { public static int menu() { int selection; Scanner input = new Scanner(System.in); /***************************************************/ System.out.println(" Choose from these choices "); System.out.println(" -------------------------\n"); System.out.println("1 - Type 1 to Enter an Abbreviation of the state you would like the population for. "); System.out.println("2 - Type 2 to Enter Name of the State you would like the population for. "); System.out.println("3 - Type 3 to find the population for whole United States(Census 2010). "); System.out.println("4 - Type 4 to Exit the Program."); selection = input.nextInt(); return selection; } //Once you have the method complete you would display it accordingly in your main method as follows: public static void main(String[] args) { int userChoice; Scanner input = new Scanner(System.in); /*********************************************************/ userChoice = menu(); //from here I can either use a switch statement on the userchoice //or use a while loop (while userChoice != the fourth selection) //using if/else statements to do my actual functions for my choices. } } First Hash Map import java.util.Scanner; import java.io.BufferedReader; import java.io.File; import java.io.FileReader; import java.io.IOException; import java.util.*; public class hash1 { public static void main(String[] args) throws IOException { File fi1 = new File("StatePlusAbbrev.txt"); Scanner sc1 = new Scanner(fi1).useDelimiter("[,\n\r]+"); HashMap<String, String> states = new HashMap<>(); // HashMap<String, Integer> abbrevToPop = new HashMap<>(); String stateName; String stateAbbrev; while (sc1.hasNextLine()) { stateName = sc1.next(); stateAbbrev = sc1.next(); states.put(stateName, stateAbbrev); //System.out.println("The abbreviation for " +stateName + " is " + // stateAbbrev); } // Set<Entry<String,String>> pod = states.entrySet(); // iterate through the key set and display key and values Set<String> keys = states.keySet(); for (String key : keys) { // System.out.print("Key = " + key); System.out.println(" Value = " + states.get(key) +"\n"); // } //Set set = states.entrySet(); // check set values //System.out.println("Set values: " + set); // states.forEach((k, v) -> // System.out.println(k + "=\t" + v)); // // // String out = states.get("New York"); // System.out.println(out); } /** * */ public void menu() { } } Any help would be appreciated. the second hash map is very similar except it has the string and int to read the abbreviations and number of population
https://www.daniweb.com/programming/software-development/threads/501237/hash-map-reading-from-2-files
CC-MAIN-2018-39
refinedweb
639
60.21
Valery Reznic wrote: > --- Erez Zilber <erezz voltaire com> wrote: > > >> Hi, >> >> I'm building rpms for 2 different distributions >> (SuSE & RedHat) using a >> single spec file. In order to do that, I'm using >> "%package". The 2 >> packages contain more or less the same files, but >> the package name is >> different. >> > > You can make use %if and/or %? construction to make > you package name, rpm filename, filelist depend on > distribution. No need for subpackages. > > Valery > > Using something like: %if "%{_vendor}" == "suse" %if %sles_version == 10 %define _build_name_fmt %_arch/sles-name-%{version}.%{release}.%_arch.rpm %endif %else %if %_vendor == "redhat" %define _build_name_fmt %_arch/redhat-name-%{version}.%{release}.%_arch.rpm %endif %endif was problematic and results in behavior like having 2 debuginfo rpms in RedHat instead of 1 normal rpm and 1 rpm with debuginfo. Why don't you recommend using "%package"? Thanks, Erez > >> The problem that I'm trying to solve is that when I >> build the binary rpm >> from the src rpm, I want to create only the package >> for the distribution >> that I run on: if I build it on a SuSE machine, I >> want to create only >> the binary rpm for SuSE and not the binary rpm for >> RedHat. >> >> I know that if I don't have "%files <package name>" >> in the spec file, >> the rpm for that package will not be created. Can I >> do that dynamically? >> Is there another solution for this problem? >> >> Thanks, >> -- >> >> >> > ____________________________________________________________ > >> Erez Zilber | 972-9-971-7689 >> >> Software Engineer, Storage Team >> >> Voltaire – _The Grid Backbone_ >> >> __ >> >> <> >> >> >> >> _______________________________________________ >> Rpm-list mailing list >> Rpm-list redhat com >> >> >> > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > > > _______________________________________________ > Rpm-list mailing list > Rpm-list redhat com > >
http://www.redhat.com/archives/rpm-list/2007-January/msg00008.html
CC-MAIN-2015-14
refinedweb
285
64.41
Learn to use Visual Studio, Visual Studio Online, Application Insights and Team Foundation Server to decrease rework, increase transparency into your application and increase the rate at which you can ship high quality software throughout the application lifecycle As I have mentioned,. Modern application development focuses on improving the line of business development and developer experience. Highlights for the line of business developers include: LightSwitch. The developer experience improvements in the IDE include a new theme, Code Map debugger integration and performance in key areas areas including XAML design and Code Map rendering. The CTP Can be downloaded here: Please let us know if you have any issues! /> Advanced SharePoint 2010 Load Testing Visual Studio Update 1 released Visual Studio Update 2 has also added the extensibility of web and load testing. You can now parameterize the input data in an XML file. Examples where you might use this: tokens for authentication or columns lists. Please note this extensibility is only available for SharePoint Applications. Watch out for a detailed post on this new extensibility later early next month. TDD for Windows Phone Unit Tests Please see the entire article at: Windows Phone Unit Tests in Visual Studio 2012 Update 2 Improvements in Windows Store applications Unit Tests Visual Studio Update2 CTP adds two new features in the Microsoft.VisualStudio.TestPlatform.UnitTestFramework.AppContainer namespace to make unit testing Window Store apps even easier: Watch for a complete post on this topic early next month! Code Map Debugger Integration and Responsiveness Improvements Code Map has been enhanced to provide debugger integration enabling a visual overview of what you are debugging and stepping through to help understand and debug complex code without getting lost. Git as version control on Team Foundation Service Not really an Update 2 feature, since this is only a preview feature for the Team Foundation Service. But, since Brian Harry announced it today I feel I have to at least mention that fact Team Foundation Service now has Git version control. Please see the entire article at: Getting Started with Git in Visual Studio and Team Foundation Service Known Issues While there is a KB for Visual Studio Update 2 at - I wanted to highlight one issue I ran into: Problem: Team Explorer will not work properly if you have VS 2012 RTM and installs TFS Update 2 CTP 2 side by side. Resolution:1. Install VS Update CTP 2 in the machine. B. User has VS 2012 RTM and installs TFS Update 2 CTP 2 side by side and then un-installs TFS Update 2 CTP 2. How to fix the problem? 1. Un-install the TFS Update 2 CTP 2 Object Model (see instructions below). 2. From Add/Remove/Programs repair VS 2012 RTM. C. User has VS 2012 RTM and installs VS Update 2 CTP 2 and then un-installs VS Update 2 CTP2. How to un-install TFS Update 2 CTP 2 Object Model (OM)? Run the following commands (in the specified order) from a cmd window running as administrator (replace “C:\” with your system drive). In a 32-bit operation system: msiexec /x "C:\ProgramData\Package Cache\{722645F7-6EC1-3CED-A011-207435B9DA56}v11.0.60115\packages\TFSObjectModel\x86\enu\TFSObjectModelLP-x86_enu.msi" msiexec /x "C:\ProgramData\Package Cache\{583E6CA2-1620-328B-ACB0-82E32CD138DD}v11.0.60115\packages\TFSObjectModel\x86\Core\TFSObjectModel-x86.msi" In a 64-bit operation system: msiexec /x "C:\ProgramData\Package Cache\{DF3A8165-FCE9-360E-A050-63C2D7C4CE28}v11.0.60115\packages\TFSObjectModel\x64\enu\TFSObjectModelLP-x64_enu.msi" msiexec /x "C:\ProgramData\Package Cache\{EAD5D2E9-E465-3242-AFD9-2F1BA4EC90DE}v11.0.60115\packages\TFSObjectModel\x64\Core\TFSObjectModel-x64.msi" Question: How does the TFS team come up with these features? You guys told us to put our suggestions on the user voice site. And your customers have been voting for months on what they want, and it seems like our requests are getting very little love. Will we ever get any of the top TFS requests in these Quarterly Updates for 2012? Seeing how it seems like you seem to never update uservoice requests here are the top 13 TFS items with the most votes. 1 rename project in TFS - ALLEN: I REALLY want this feature as so many of my team projects are named the wrong this and we have to do painful migrations and we lose history/fidelity. Why is this so freakin hard? 2 VS11. Bring back the old "Pending Changes" window -ALLEN: The new pending changes window is so bad I go back to using VS 2010. I have devs who've tried 2012 and moved back to 2010 because of this one feature being so broken. 3 Enable distributed source control (DVCS) - ALLEN: Seems like you guys are working on this one...but there's so many issues with regular TFS Version Control I'm not clear on why you guys don't fix those first. I'm looking at trying to do diffs on "renamed" folders/items....ugly. 4 Allow for updating project templates on existing projects in TFS - ALLEN: I'd love to see this improved when changing templates say between scrum to agile, agile to CMMI, CMMI to custom, etc. Why is changing\customizing process templates and WITS so difficult? 5 make it possible to move a Team Project between Team Project Collections - ALLEN: Oh god, PLEASE, I want. Our business groups are always wanting to move these things around and we can't! 6 Provide build configuration dependencies in TFS Build - ALLEN: the latest status update says "planned", ok cool. This could be interesting if it's as easy to use as FinalBuilder/Team City. 7 Treat TFS as an Enterprise Symbol Server - ALLEN: meh! Sure fine...improve this slightly. It would be a nice to have. 8 create a central place to manage rights for TFS, Sharepoint and Reporting Services - ALLEN:+1 BILLION. Security is a NIGHTMARE with TFS, and we've had to invest weeks into building tools and using AD groups. FIX THIS!!!! 9 Switch between list view and tree view in pending changes window - ALLEN: Again see number 2. Seems like again people want the old pending changes experience. 10 Merge by Workitem -ALLEN: This would be a huge win...but Martin Woodward said it was some kind of anti-pattern on TFS rocks...Why is this an anti-pattern? 11 Hide Work Item Types (WITs) based on permission/security group - ALLEN: Why the hell do my Project Managers have to see Code Reviews when they create work items from Excel or Project? 12 New Work Item Control: Checkbox - ALLEN: Duh! How hard could this be?! Seriously! The comment says it all "Checkboxes introduced from the very beginning (Windows 1 actually, I checked. ;)" 13 develop an empowering Process Template editor!! -ALLEN: Why do we still have the terrible tool from VS 2005, go talk to the windows workflow team and build something YESTERDAY. (more) (continued) Alright so now let me review your features: 1.Web based Test Case Management - ALLEN: COOL! But let me guess I can't use it for UAT and I still have to have people buy MSDN with Test Pro. Licensing kills this feature so its DOA. Do my testers still have to go to MTM...I bet they do for video recording/screenshots? Why the two experiences? CONFUSING. 2.Code Map Debugger Integration and Responsiveness Improvements - ALLEN: Yawn. Requires Ultimate. Tell me when Premium gets it then I'll be interested. 3.Test Explorer group tests by Class - ALLEN: Yawn. You guys broke test lists that had been in the product since 2005, our beloved VSMDI files. Thanks for fixing what you broke? 4.Web based version control Explore and Compare - Nice I guess...but who uses this feature? We can't check-in code via the web or branch/merge, so who was this built for and why??? 5.Send work items in email - ALLEN: Another Yawn. You broke this and had to put it back. Thanks for fixing what you broke. 6.Email work items from backlogs ALLEN: Slightly interested. Let me get this straight, your devs took the code from item 5 and added the button on the backlog. Nice I guess? 7.Web version control Improvements - ALLEN: Again yawn..Why should my users care? What will they notice? 8.New Team Explorer Connection dialog display multiple servers, TPCs and projects - ALLEN: COOL! THANK THE DEAR SWEET LORD. Can you do work item queries across team projects yet? 9.Advanced SharePoint 2010 Load Testing - ALLEN: Yawn. Guess you had to make the office division happy. 10.Team Foundation Server Work Item Tagging - ALLEN: Slightly interested..is it as easy to use as bugzilla/Jirra tags? Can I do tag chains? Where do I control the list of tags...does this require new WITs (6.2?) be deployed again? 11.Extended cross browser support in Coded UI Testing - ALLEN: Slightly interested, we looked at Squish...way too expensive...how is this improved? 12.Microsoft SharePoint 2013 support in Coded UI Testing - ALLEN: Slightly interested, again we don't do sharepoint dev, so not a huge win...but I'm sure it fills a gap. 13.Test Explorer test details improvements for Coded UI Tests - ALLEN: Slightly interested I'll have to take a look, QTP just beats codedui hands down. 14.Improvements in Windows Store Applications Unit Testing - ALLEN: Yawn. Maybe in 3 years when Windows 8 takes off we'll need this. 15.Test Explorer test playlists - ALLEN: Yawn. WTF on the naming? Is this a rick roll? Never going to give you up.. Seriously "playlists”? The name should be Test Execution Lists like in other tools. 16.Symbol Loading improvements for IntelliTrace and Profiler - ALLEN: Slightly interested. 17.While not technically part of Update 2: Git as version control on Team Foundation Service - ALLEN: Yawn. If this is not part of Update 2 why list this here. Is this only for the cloud version of TFS if so double yawn. It would seem that Update 2 is lacking the features I want (uservoice) but you earned: 2 - COOL! 7 - Slightly interested 8 - Yawns @Allen drop me an email and I will set up a meeting with you and the person that owns the TFS back log (Works for Brian Harry)...That way you can answer your first question for me<g> My email is: Chass@microsoft.com I installed the VS2012 CTP update but now can't connect to TFS server. Get "Object reference not set to an instance of an object". Keith, please send me the details on what you doing so we can figure that out. buckh-microsoft-com I have the same problem as Keith. Cau you help me? I installed the CTP but I am unable to see the Test link to access the web based test case management. Any help is appreciated, thanks! @Keith Elder, there was a bug in this CTP that produced a null ref error when connecting to a TFS 2008 server. While compatibility between VS 2012 and TFS 2008 isn't officially supported, we did fix that bug for the next CTP. Hello Alicia, You need to go to control panel in TFS and give the user access. I will add screen shots for this in the TCM Web post at: blogs.msdn.com/.../light-weight-browser-based-test-management-and-execution.aspx Thanks, Chuck. I have the same problem as Keith: installed the VS2012 CTP update but now can't connect to TFS server. Get "Object reference not set to an instance of an object". Would you please address the issue here in the forum. Sorry, just saw Adam Barr's response, and yes, we still have TFS 2008 (due for upgrade soon). I appreciate that it is not officially supported, and yet all the VS 2012 and updates worked fine up to now, so this was a bit frustrating. Also, very difficult to find a specific reference to this issue...even above under 'Known Issues'. I am assuming that uninstalling the CTP is the only resolution until the next release? @Adam Barr: Your response about fixing the TFS 2008 null reference bug in the "next" CTP was dated 2/6, and there is a CTP 3 also published on that date...is that the one with the fix...because as far as I can tell that is the version I just downloaded and first encountered the error....(its a bit confusing because they are all KB2707250)? I see that there is TDD support extended for Windows Phone/emulator. Would coded ui recorder be able to identify the objects inside the emulator. Hello, I could not see the "tagging" feature from VS IDE. Just from TFS web access ... I am missing something? Regards!
http://blogs.msdn.com/b/visualstudioalm/archive/2013/01/30/first-ctp-for-visual-studio-update-2.aspx
CC-MAIN-2014-23
refinedweb
2,134
66.23
Adding dtrace probes to user code (part 2) By Darryl Gove-Oracle on Nov 27, 2007 Adam Leventhal pointed out in the comments to my post on adding dtrace userland probes that there is an improved approach to adding userland dtrace probes. He describes this approach on his blog. The approach solves two problems. First, that C++ name mangling makes it hard to add dtrace probes for that language. Second, that code with dtrace probes inserted in it will not compile on systems that do not have the necessary dtrace support. So going back to the example code, I'll try to show the problem and the solution. Here's app.cc: #include <stdio.h> #include <sys/sdt.h> void func(int a, int b) { DTRACE_PROBE2(myapp,func_call,a,b); printf("a=%i, b=%i\\n",a,b); } void main() { func(1,2); func(2,3); } When compiled with the C compiler the following symbols get defined: $ cc -c app.cc $ nm app.o app.o: [Index] Value Size Type Bind Other Shndx Name ... [10] | 0| 0|FUNC |GLOB |0 |UNDEF |__dtrace_myapp___func_call ... When compiled with the C++ compiler the following happens: $ CC -c app.cc $ nm app.o app.o: [Index] Value Size Type Bind Other Shndx Name ... [7] | 0| 0|FUNC |GLOB |0 |UNDEF |__1cbA__dtrace_myapp___func_call6FLL_v_ ... Because the call to the dtrace probe is not declared as being extern 'C' the compiler mangles the C++ function name. The new approach that Adam describes involves dtrace preprocessing the probe description file to generate a header file, and the including the header file in the source code. The big advantage of having the header file is that it's now possible to declare the dtrace probes to have extern 'C' linkage, and avoid the name mangling issue. The syntax for preprocessing the probe description file is: $ dtrace -h -s probes.d This generates the following header file: /\* \* Generated by dtrace(1M). \*/ #ifndef _PROBES_H #define _PROBES_H #include <unistd.h> #ifdef __cplusplus extern "C" { #endif #if _DTRACE_VERSION #define MYAPP_FUNC_CALL(arg0, arg1) __dtrace_myapp___func_call(arg0, arg1) #define MYAPP_FUNC_CALL_ENABLED() __dtraceenabled_myapp___func_call() extern void __dtrace_myapp___func_call(int, int); extern int __dtraceenabled_myapp___func_call(void); #else #define MYAPP_FUNC_CALL(arg0, arg1) #define MYAPP_FUNC_CALL_ENABLED() (0) #endif #ifdef __cplusplus } #endif #endif /\* _PROBES_H \*/ The other advantage is that the header file can protect the definitions of the dtrace probes with #if _DTRACE_VERSION; which enables the same source to be compiled on systems which do not support dtrace. The source code needs to be modified to support this syntax: #include &kt;stdio.h> #include "probes.h" void func(int a, int b) { MYAPP_FUNC_CALL(a,b); printf("a=%i, b=%i\\n",a,b); } void main() { func(1,2); func(2,3); } The rest of the process is the same as before.
https://blogs.oracle.com/d/tags/providers
CC-MAIN-2015-40
refinedweb
453
64.41
18 March 2008 11:15 [Source: ICIS news] LONDON (ICIS news)--Deutsche Bank on Tuesday upgraded its earnings forecasts and share price target for Linde after the German industrial gases group beat expectations with its fourth-quarter results.?xml:namespace> Linde increased its earnings before interest, tax, depreciation and amortisation (EBITDA) 22%, beating the analyst consensus forecast by 7%. Deutsche Bank increased its earnings per share (EPS) forecasts for 2008-2010 by 1-2% and its price target for Linde shares to €108 from €106. “The team is developing a track record for exceeding targets… while no new programmes have been announced we expect the focus on cost and efficiency to continue in 2009-2010,” said analyst Tim Jones. At 10:11 GMT, Linde shares were trading 2% up on Monday’s close at €88.37 on the Xet
http://www.icis.com/Articles/2008/03/18/9109135/linde-forecasts-raised-after-strong-q4-deutsche.html
CC-MAIN-2015-06
refinedweb
139
58.32
I PingBack from PingBack from Unix filesystems, anyone? Your ‘post’ (or indeed that which you quote) should really have given credit to the original implementors, or at least mentioned them, shouldn’t it? Windows users, eh. Yipee! Windows invented the symlink. Or did it already existed on Unix/Linux, something like 15 years ago? And yet another UNIX feature shows up in Windows as an "innovation". Are these symbolic links paths (relative or absolute) like the *nix implementation or are they richer implementations that are much more robust like the Mac OS’s aliases? create a symlink, something i always missed on windows. what about all the other missing features in a os, seen in bsd, linux, unix, etc. maybe start here: and, etc. bleh "Now why is this relevant to the SMB2 protocol? This is because, for symbolic links to behave correctly, they should be interpreted on the client side of a file sharing protocol (otherwise this can lead to security holes)" Are you nuts?! For security to be maintained the filesystem (NTFS) has to parse the link and handle it appropriately! A symlink only has meaning within the system running the filesystem. Either someone needs a big LARTing by a clue-by-four, I have seriously misred what you wrote, or you are … not quite suitable to do this work. I hope I seriously missed a point. I feel it’s worth mentioning to those who are not familiar with *nix, that symlinks have been available on various file systems in that space for… well, a very long time. It’s a long overdue addition to NTFS, and a very welcome one! Those who don’t understand UNIX are doomed to reinvent it, poorly So did you stole this from BSD too? >> Will there be SMB2.0 clients for Windows XP/2000/etc or will we need Vista in order for network clients to properly access symlinks? brilliant idea! WinFS = UNIX FS 1970 Seems, like so much duplicated efforts now by microsoft at their kernel level, too bad they don’t just do something like apple and use freebsd. At this point anyways, it seems like what microsoft is doing is essentially writing their own unix kernel. But how do the new NTFS symbolic links differ from NTFS junction points? Weren’t hard links already in NTFS? I already use them to remap entire folders from Program Files on another drive (e.g. D:) Hi, I’m a little bit confused :(. For me this seems a lot like hardlinks/junction points () Am I missing the point? Thanks in advance i never knew how the shortcuts in windows worked but i’m guessing that they don’t work as symbolic links then? Why does showing symlinks to a client that doesn’t understand them cause a security hole? Wouldn’t they just either not show up or show up as plain files with an unknown type? Why can’t symlinks be flagged with a "symlink" attribute that old clients ignore, and transparently point to the destination file when acted upon by an ignorant client, just like you said about applications ("Apps… will now open the target by default, unless they explicitly ask for the symbolic link…."). Can’t that apply to clients as well? Or are you saying that symlinks that the client doesn’t understand can lead to things like symlinking across hosts and causing people to open files they didn’t know they were opening? More of a "the user doesn’t know what their actions are doing" type thing, like when Windows hides file extensions and people think a VBS is a JPG? finally windows will get symlinks … That’s great, *nix has been using this successfully for years. Security issues should almost be a moot point as you can learn from others’ mistakes. Good to see you guys moving in the right direction…now if only vista would move away from the whole registry thing. 🙂 so does anyone else think Microsoft is trying to be more like linux now? It’s good that Windows is getting symlinks. They’re really useful – just imagine being able to switch over files or directories between versions just by moving a symlink? I was trying to remember how NFS handles them. Is that also client side? I can see the security hole of having a symlink evaluated server side. Point it to a file that you’re not meant to read and read the file. Though won’t normal security prevent that problem anyway? If you can’t read it, you can’t read it. Client side the symlink may not make sense and could also cause confusion for users. Any symlink pointing outside the "share" sounds dodgy. In this post, you state that, a symlink neads to be resolved clientside >> "This is because, for symbolic links to behave correctly, they should be interpreted" << the trueth (in my honest opinion though is), that everything, in a protocol accession the server, should actualy be handled By the server. because, of the ability to fake the symlink’s id to match an object request that otherwise should never be avail to that specific user) For example: lest say im running a windows Vista server for my clients in a way that compares to a unix-shell server. for this they are accessiong the system through SMB, what i would try to do if i wanted unauthorized access, Id try to reverse engeneer (like samba did) this protocol and fake a network brower’s response: telling the server that the symlike called: /user123/ (of wich i am alowed total control = read, write, change and delete)" isn’t actualy a link to: "%datadisk%/serverdocs/users/userver123/date" nut instead to %windir%/system32 (or even the data dir of another user). this by self COULD becoome quite a risk if the server itself doesn’t check whether this symlink is actualy a valid one… Now if the server DOES check this, both systems are doing exactly the same job, Wich in a way both make it redundant, yet also questionable wether it in fact is neaded or just unnesesairy load on your computers… Ofcaurse with just 1 computer it would hardly make a diference, but with over a 100 or even a 1000 clients it would)… ** based on how symlinks usualy work, they, as al files and folders, inherrit the access rights of thair parrent folder.. so if i gave Full-control to the …./userver123/ folder of where also this symlink whould be located. – it could get quite nasty if the administrator does not suspect this kind of exploits. so if in fact i indead would be righ regarding this. could you please shed some light on, what will be done to prevent this from actualy be possible, ______________________ with most kind regard, Edwin van uffelen, (IT-student) The Netherlands… Damn i had those in os/2 in 1999. Microsloth just got around to it? $ fortune -m ‘condemned’ %% (fortunes) Those who do not understand Unix are condemned to reinvent it, poorly. — Henry Spencer Nice… no more having to use scaaaaary junction points. To me, Kevin Owen’s question seemed to be more about finding out if there is some relationship between the SMB2 symbolic links and the currently implemented NTFS reparse/mount points. Can you expound on this? Why does everyone feel the need to post the EXACT same comment about UNIX. Ok – everybody knows that this new feature to Windows, is already part of UNIX, get over it. Thinking of a server that does symlinks server side – Apache on UNIX. Apache does have checks though to ensure that the symlink doesn’t point outside the document root. These checks are optional. Sometimes it makes sense, sometimes not. Depends on who has write access to the server. They were used in the infamous Mindcraft benchmarks to add more overhead to the Apache server. I can’t understand how this could create a security problem? Surely if a client machine logs into a share, then they know the log/pass for that share… if so then they’re likely to want access to all areas of that share. If they try to access files outside of that share (via an alias..er..’symlink’) then one would naturally assume that they’ll be greeted by a prompt asking them for the log/pass details for the user-account/directory/volume they’re trying to access (if the ‘symlink’ points to a path eminating from anywhere other than the current share’s root then it must be outside the share and subject to different permissions). …or am I missing something here? Windows, PAH! Funny how their technological "advances" and "innovations" just seem to be long-standing features of other OS’ but with added flakeyness. 😉 That is one thing that always bugged me about reparse points in NTFS, their inability to refer to network shares. I want to create a single filespace that I can work from to cover my entire network, but have to resort to using one drive for my local things, and another DFS root. Don’t they already exist in Windows as ‘Junctions’? I’ve been using these for some time. These only work with directories, but work on WinXP, Windows 2000 and Windows 2003: There is also a tool in the W2K Resource kit that will do it as well. A typical set of comments from typical *nix people. Get over your old OS… Or at least move up to Mac. Command line blows… the reason I’M having a hard time "getting over it" is because of bits like: "Note symbolic links are an NTFS feature" > Note symbolic links are an NTFS feature. I assume you’re saying "It’s implemented at the filesystem level", but this is certainly technology that exists far, far into the past in *nix systems. Don’t pretend to reinvent the wheel, come on out and say you are bringing in *nix features. Now if you could get a native port of the NFS client working properly. And integrated nicely with the "net use" command. Not like the poor implementation in the "Services for Unix" CD. Well the funny thing is that SMB already supported hardlinks somehow, which I guess hardly nobody found out. If you call CreateHardlink with a SMB source and destination, and the SMB location points to the same drive a ‘remote hardlink’ is created, so I deduce SMB supports remote creation of hardlinks. I have played a lot with hardlinks on NTFS, and so I am happy to hear NTFS6 will support symbolic links. In the meantime you can play with a few nifty tools related to hardlinks on Ciao Hermann At last! This is what they should have implemented originally instead of the horrible .lnk You guys should patent this symlink mechanism, that’s what I think. Don’t you know the file systems ext2/3, reiserfs, xfs, ufs. They all support symlinks, soft and hard. The only new thing is that NTFS now support it. I’m not saying it is bad that microsoft does something that already exists, in fact the *.lnk files did really suck, but it isn’t definitely something new! Bye. "That is one thing that always bugged me about reparse points in NTFS, their inability to refer to network shares. " In the betas of Windows 2000 (the older betas, back when it was named "NT 5") you could mount network shares to folders. The feature was killed, for some reason that’s not clear to me (because it’s the kind of thing that should have been *made to work*, because it’s extremely useful). The reparse point mechanism is, I believe, sufficiently general that one could write a simplistic symlink reparse point without much trouble at all. I’d think all you really need for symlinks is a generalization of junction points ("generalized" to allow them to point anywhere in the object manager namespace, and to allow them to be attached to files as well as directories). The only other thing you’d need to do is to change their "delete" semantics (it’s horrible and wrong that deleting a junction deletes the target, not the junction itself). So, correct me if I’m wrong, but the last 217 comments are pointing out that the Un*x implementation of SMB2.0 already has symbolic links. If so, why didnt MS just download the Un*x support library?! IM: Samba (how *nix does SMB) is free software. In order for it to use code from Samba, they would have to release Windows as free software. @silpheed: "the reason I’M having a hard time ‘getting over it’ is because of bits like: ‘Note symbolic links are an NTFS feature’" Gosh, that just couldn’t mean that it won’t work on FAT32. It must mean that Microsoft is claiming to invent it. Do you people even think before you post? Since this is Windows, which only natively supports NTFS and the FAT variants, the comment that symlinks are an NTFS feature implies that the feature is not available on FAT or FAT32. It does not mean that they are not a feature of any other file system for any other OS. So once again, get over it. This is truely mind-boggling… As the original creators of UNIX where in the pre-implementing stage, they already thought out the symlink idea on paper.. That was in the summer of 1969, it was implementen within the same year, or the beginning of 1970, as the diskpack for the original system were delivered. I believe we have to thank Dennis M. Richie for this one. (I’m not sure about that though) Those who do not learn from history are doomed to repeat it. Repeatedly. Symbolic links were invented in the Multics operating system in 1965-66, well before Unix was born. The Bell Labs Unix group chose not to implement them, instead providing much more problematic and limited (but easy to implement) "hard links". BSD Unix wisely adopted symbolic links over a decade later, correcting the error, and improving them to allow relative pathnames. It wasn’t until AT&T Unix System V disappeared that symbolic links became universally available in *nix systems. The hidebound AT&T attitude is why they weren’t in POSIX 1003.1, either. Hard links still present fundamental obstacles to hierarchy-oriented functions like quota management. Actually, NTFS has had symbolic links for years (maybe NT 3.0?), but they’ve been hidden from Windows users and accessible only through the POSIX subsystem. They’re also used to associate symbolic names (e.g., COM1) with real devices, but Windows users have no visibility into that. I’d guess that the "news" here is just that those API are being added to the Win32 API–in other words, little news and no novelty. I only hope that they don’t introduce gratuitous incompatibilities. Of course, NTFS has had Security Descriptors, Access Control Lists, and system level access auditing. Hey Unix bigot, want to explain why every Network Attached Storage device based on Unix needs to squash root access? Oh, giant gaping hole in the NFS protocol? Pot. Kettle. Black. Now, shut the f**k up. What inspires so many dipsticks to post so much inaccurate garbage here? Shutup. Nobody is impressed with stupid "Unix is better" rhetoric. Beavis: Actually you’re wrong. People are impressed by the "Unix is better" rhetoric because … well … UNIX IS BETTER. Unix is more stable, more secure, does more, scales better, runs on more hardware. Microsoft is just waking up to this reality. And time and time agein, they have to catch up with the leading *nix systems. many new "features" of Vista are already out there in Unix and Linux. This symlinks feature is just one example. As a side note, and back to the original post– If this is a new feature to NTFS, both Microsoft and the open source implementations of Windows file sharing should take it into consideration. But we all know that the open source community will find, patch and fix any holes before Microsoft realizes that hackers are exploiting them. PingBack from PingBack from PingBack from
https://blogs.technet.microsoft.com/windowsserver/2005/10/28/smb2-protocol-what-is-a-symbolic-link/
CC-MAIN-2017-39
refinedweb
2,714
70.84
Developing foreground applications Learn about the languages that are supported on Windows 10 IoT Core as well as the UWP and non-UWP app types that are supported on IoT Core. Note Visual Studio will generate a cryptic error when deploying to a RS5 (or RS4 with OpenSSH enabled) IoT image unless a SDK from RS4 or greater is installed that Visual Studio can access. Application Types Universal Windows Platform (UWP) Apps IoT Core is a UWP-centric OS and UWP apps are its primary app type. Universal Windows Platform (UWP) is a common app platform across all version of Windows 10, including Windows 10 IoT Core. UWP is an evolution of Windows Runtime (WinRT). You can find more information and an overview to UWP here. Visual Studio is the primary tool for writing UWP apps for IoT Core and in general. You can find a detailed listing of the compatibility requirements for Visual Studio here. Traditional UWP Apps UWP apps just work on IoT Core, just as they do on other Windows 10 editions. A simple, blank Xaml app in Visual Studio will properly deploy to your IoT Core device just as it would on a phone or Windows 10 PC. All of the standard UWP languages and project templates are fully supported on IoT Core. There are a few additions to the traditional UWP app-model to support IoT scenarios and any UWP app that takes advantage of them will need the corresponding information added to their manifest. In particular, the "iot" namespace needs to be added to the manifest of these standard UWP apps. Inside the attribute of the manifest, you need to define the iot xmlns and add it to the IgnorableNamespaces list. The final xml should look like this: <Package xmlns="" xmlns: Background Apps In addition to the traditional UI apps, IoT Core has added a new UWP app type called "Background Applications". These applications do not have a UI component, but instead have a class that implements the "IBackgroundTask" interface. They then register that class as a "StartupTask" to run at system boot. Since they are still UWP apps, they have access to the same set of APIs and are supported from the same language. The only difference is that there is no UI entry point. Each type of IBackgroundTask gets its own resource policy. This is usually restrictive to improve battery life and machine resources on devices where these background apps are secondary components of foreground UI apps. On IoT devices, Background Apps are often the primary function of the device and so these StartupTasks get a resource policy that mirrors foreground UI apps on other devices. The following sample shows the code necessary to build a C# Background App that blinks an LED: namespace BlinkyHeadlessCS { public sealed class StartupTask : IBackgroundTask { BackgroundTaskDeferral deferral; private GpioPinValue value = GpioPinValue.High; private const int LED_PIN = 5; private GpioPin pin; private ThreadPoolTimer timer; public void Run(IBackgroundTaskInstance taskInstance) { deferral = taskInstance.GetDeferral(); InitGPIO(); timer = ThreadPoolTimer.CreatePeriodicTimer(Timer_Tick, TimeSpan.FromMilliseconds(500)); } private void InitGPIO() { pin = GpioController.GetDefault().OpenPin(LED_PIN); pin.Write(GpioPinValue.High); pin.SetDriveMode(GpioPinDriveMode.Output); } private void Timer_Tick(ThreadPoolTimer timer) { value = (value == GpioPinValue.High) ? GpioPinValue.Low : GpioPinValue.High; pin.Write(value); } } } You can find in-depth information on Background apps here. Non-UWP Apps IoT Core supports certain traditional Win32 app types such as Win32 Console Apps and NT Services. These apps are built and run the same way as on Windows 10 Desktop. Additionally, there is an IoT Core C++ Console project template to make it easy to build such apps using Visual Studio. There are two main limitations on these non-UWP applications: - No legacy Win32 UI support: IoT Core does not contain APIs to create classic (HWND) Windows. Legacy methods such as CreateWindow() and CreateWindowEx() or any other methods that deal with Windows handles (HWNDs) are not available. Subsequently, frameworks that depend on such APIs including MFC, Windows Forms and WPF, are not supported on IoT Core - C++ Apps Only: Currently, only C++ is supported for developing Win32 apps on IoT Core. Programming Languages IoT Core supports a wide range of programming languages. In-Box languages Traditional UWP languages ship with support in Visual Studio by default. All of the In-Box languages support both UI and Background Applications - Languages - C# - C++ - JavaScript - Visual Basic Arduino Wiring Arduino Wiring requires the download of the "Windows IoT Core Project Templates" from the Visual Studio Tools->Extensions and Updates manager. Arduino Wiring supports only Background Applications. You can also build Windows Runtime Components using C#, C++, or Visual Basic and then reference those libraries from any other language. C# and Visual Basic (VB) C# and VB are both supported as UWP apps and have access to the portion of the .NET Framework available to UWP applications. They support UI apps built with Xaml as well as Background Apps. You can also build Windows Runtime Components that can be used from other supported languages. Samples: JavaScript You can use JavaScript to build both UI and Background Apps. The UI apps work the same way they do on all UWP editions. The Background Apps are new for IoT Core but are simple. The following sample code shows the output of the JS New Project Template: // The Background Application template is documented at (function () { "use strict"; // TODO: Insert code here for the startup task })(); C++ With C++ you can build Xaml or DirectX UI apps, as well as UWP Background projects and non-UI Win32 apps. Samples: Note For those who are planning to write their app in C++, you'll need to check the UWP C++ checkbox upon downloading. Arduino Wiring With Arduino Wiring support you can build apps in Arduino Wiring for many popular components and peripherals in the IoT ecosystem. Our Arduino Wiring Project Guide provides full instructions on how to get set up to build these apps. The samples copied and linked below will help you get started building your own. You can even build WinRT components in Arduino that can then be used from other languages. Blinky Sample Code The full sample code and docs are available on our samples page and you can find the full code below: void setup() { // put your setup code here, to run once: pinMode(GPIO5, OUTPUT); } void loop() { // put your main code here, to run repeatedly: digitalWrite(GPIO5, LOW); delay(500); digitalWrite(GPIO5, HIGH); delay(500); }
https://docs.microsoft.com/en-us/windows/iot-core/develop-your-app/buildingappsforiotcore
CC-MAIN-2021-49
refinedweb
1,068
60.24
In-Depth If your app requires the creation and manipulation of combinations of objects, the BigInteger structure in the Microsoft .NET Framework can offer huge advantages to combination functions. The BigInteger data type, introduced as part of the System.Numerics namespace in the Microsoft .NET Framework 4, enables big improvements in standard mathematical combination functions, which are fundamentally important in software engineering. A mathematical combination is a subset of items in which order does not matter. Take five animals, {ant, bat, cow, dog, elk}. One combination element of the five animals when taken two at a time is {ant, bat} and another is {cow, elk}. When taken three at a time, one combination is {ant, cow, dog}. Notice that combinations depend on the total number of items in the parent set (usually denoted by n) and the number of items in the subset (usually denoted by k). The screenshot in Figure 1 gives you an overview of where I'm headed in this article. The C# demo console application begins by displaying the int.MaxValue, which is more than 2 billion. Mathematical combinations often involve huge numbers that are much larger than int.MaxValue, and this is where the BigInteger type is useful. The next part of the demo output displays the number of ways to select 10 items from a set of 200 items, which is 22,451,004,309,013,280 -- much larger than int.MaxValue. The demo program then uses a successor method to display all 10 ways to select three items from a group of five items. Here the five items are just the numbers 0 through 4. The first combination element is {0,1,2} and the last element is {2,3,4}. Notice that an element like {0,2,1} is not listed; it's considered the same as {0,1,2} because, by definition, order doesn't matter for a combination. The demo program indicates that the underlying code has the ability to generate the successor to a given combination element. The code uses what is called lexicographical order. This means that if the parent items are the numbers 0..n-1, then if each combination element is interpreted as a single value, elements are listed from smallest to largest. The next part of the demo program computes and displays the 123,456,789,012,345th element of 200 items taken 10 at a time. This is an extremely difficult problem, but one that has a beautiful solution using the BigInteger type and something I call the combinadic of a number. The final part of the demo output shows the seventh combination element of five names and suggests that, although it's useful to be able to apply mathematical combinations to integers, in application programming you often want to work with combinations of strings. In this article, I'll present powerful C# code that you can integrate into your software apps, or use directly as standalone utilities. It's C# code, but you should have little trouble refactoring the code to Visual Basic .NET. Before I dive into the code, let me caution you that mathematical combinations are often confused with mathematical permutations. A permutation of n items is all possible rearrangements of the items. For example, if you have three items {0,1,2}, then there are six permutation elements: {0,1,2}, {0,2,1}, {1,0,2}, {1,2,0}, {2,0,1} and {2,1,0}. Permutations are just as important as combinations, and warrant a separate article. Overall Program Structure The overall structure of the program that generated the screenshot in Figure 1 is presented in Listing 1. I launched Visual Studio 2010 and created a C# console application program. In the Solution Explorer window, I renamed file Program.cs to the more descriptive CombinatoricsProgram.cs, which automatically renamed class Program. Next, I removed the unneeded template-generated using statements at the top of the program. Then I added a reference to the System.Numerics namespace, which is not accessible to a C# program by default. The BigInteger data type is defined in System.Numerics. I added a using statement that referenced the namespace, so I wouldn't have to fully qualify instances of BigInteger. The program houses a single public class named Combination. Depending on your application scenario, you may want to create a separate class library to house the Combination class. The Combination class has private members n (the parent size), k (the subset size) and data (to hold combination element items). I don't need properties on any of these members because all combination functionality is exposed through public methods. The Combination class has a single constructor that accepts parameters n and k. The Successor method accepts no arguments and returns a combination, which is the lexicographical successor to the current combination object. The Choose method is declared static because it isn't tied to a specific instance of a combination object. The Element method accepts a parameter m, which is a lexicographical index, and returns the combination object, which has that index for the current values of n and k. The LargestV method is a helper used by the Element method. The ApplyTo method accepts an array of strings, and returns the subset that corresponds to the combination element. Choose Method An important combination function computes the number of possible combination elements for n and k. This function is often called the Choose function, and sometimes abbreviated as C in mathematical literature. Before the arrival of the BigInteger data type, there was no satisfactory way to compute Choose in the .NET Framework. The BigInteger data type can represent arbitrarily large integer values. A surprising number of hideously bad examples of how to write a Choose function appear on the Web. The mathematical definition of Choose(n,k) is n! / (k!)(n-k)!, where the exclamation point means factorial. The factorial of a number v is v * (v-1) * (v-2) * . . . * 1. For example, 5! = 5 * 4 * 3 * 2 * 1 = 120. So Choose(5,3) could be computed as 5! / (3!)(5-3)! = 120 / (6)(2) = 10. That definition for Choose is extremely inefficient, however. It uses the factorial function, which is tricky to code properly. Without going into lengthy detail, the computation of Choose can be dramatically simplified. In general Choose(n, k) = Choose(n, n-k). For example, Choose(100,97) = Choose(100,3). Smaller numbers are much better when computing Choose. Also, there's no need to compute full factorials. Instead, it's possible to compute running products. For example, it turns out that Choose(100,3) = (100)(99)(98) / (3)(2)(1) = 161,700. Putting those simplifications together with the BigInteger type leads to the Choose implementation in Listing 2. Notice the input parameter value check if (n < k) in Choose. In some situations, you may want to treat this condition as an error. However, allowing this condition (and returning a value of 0 when it occurs) is needed by the Element method, which uses Choose as a helper. The statement ans = (ans * delta + i)) / i accumulates the running product. Because ans is type BigInteger, it will not overflow, so using the C# checked block isn't necessary. The first few lines in the Main method of the demo program show how to call Choose: static void Main(string[] args) { Console.WriteLine("\nBegin combinations with C# BigInteger demo\n"); BigInteger bi = Combination.Choose(200, 10); Console.WriteLine("MaxInt is " + int.MaxValue.ToString("#,###")); Console.Write("The number of ways to Choose from n=200 items k=10"); Console.WriteLine(" at a time is \n" + bi.ToString("#,###") + "\n"); Console.Write("Number of ways to Choose from 5 items 3 at a time is: "); Console.WriteLine(Combination.Choose(5, 3) + "\n"); . . . Notice that Choose is called as a static method of the Combination class. Combination Constructor and ToString Method The Combination constructor is very simple: public Combination(int n, int k) { if (n < 0 || k < 0) // normally n >= k throw new Exception("Negative param in Combination ctor"); this.n = n; this.k = k; this.data = new int[k]; for (int i = 0; i < k; ++i) this.data[i] = i; } After transferring input argument values n and k, the data array is allocated. The values assigned to data represent the first combination element in lexicographical order. Combination c = new Combination(5,3) creates the Combination of {0,1,2}. The ToString method is also simple: public override string ToString() { string s = "^ "; for (int i = 0; i < this.k; ++i) s += this.data[i].ToString() + " "; s += "^"; return s; } Here, each combination representation begins and ends with the "^" character to distinguish it from the representation of an ordinary numeric array. I use the carat character because it starts with the letter "c" and it reminds me that what I see is a combination (which also starts with the letter "c"). I use string concatenation because it's a bit cleaner -- though slightly less efficient -- than using the StringBuilder class. Successor Method The Successor method returns the next combination element relative to the current element. The code for the Successor method is short but rather tricky, as shown in Listing 3. One design issue in Successor is deciding how to deal with the Successor to the last combination element. If you refer to Figure 1, notice that the last element for a combination with n=5 and k=3 is {2,3,4}. In general, the last element is the only element that has value n-k in the [0] position of the data array. Here I decide to return null. Alternatives are to return the first combination element or throw an Exception. Another possibility is to define a LastElement method. Realistically, there are few scenarios in which you'll need to modify the code logic of the Successor method. The while loop uses an index i that starts at the right end of the current combination data array. Index i moves to the left until it reaches the key value in data. The key value is incremented, then the for loop creates an increasing sequence for every value in data to the right of i. The following lines of code in the Main method demonstrate how to use the Combination constructor, and the ToString and Successor methods: Console.Write("All combinations of 5 items 3 at a time "); Console.WriteLine("in lexicographical order are: \n"); Combination c = new Combination(5, 3); int i = 0; while (c != null) { Console.WriteLine(i + ": " + c.ToString()); c = c.Successor(); ++i; } mth Lexicographical Element Method Consider the problem of generating the mth lexicographical element of a combination. Wait, let me explain! Suppose n=5 and k=3; then, if m=7, the mth element is {1,2,4}, as shown in Figure 1. At first, this seems like an easy problem. Just start at the first element and iterate m times, calling the Successor method. However, as you've seen, the number of combination elements can be huge. I ran some timing tests to estimate how long it would take to determine the 123,456,789,012,345th element of a combination with n=200 and k=10 using a naive iterative approach. On my standard desktop PC, I came up with approximately 370 million seconds, which is roughly 12 years. You probably don't want to wait that long, so a different approach is needed. Some time ago, I discovered a clever math construct called the combinadic that can be used to directly compute the mth element of a combination. My original code implementation didn't use the BigInteger type because it wasn't available at the time. The code for an improved Element method that computes the mth lexicographical element of a combination is presented in Listing 4. Using this implementation of Element, I was able to compute the 123,456,789,012,345th element of a combination with n=200 and k=10 in less than 1 second, as shown in Figure 1. The Main method of the program that called Element and produced the screenshot in Figure 1 is the following: c = new Combination(200, 10); BigInteger m = BigInteger.Parse("123456789012345"); Combination e = c.Element(m); Console.Write("\nThe 123,456,789,012,345-th combination element "); Console.WriteLine("of C(200,10) is: \n"); Console.WriteLine(e.ToString()); The implementation of the Element method is tricky, but fortunately, you rarely have to modify the code. A detailed explanation of how Element works is available in my MSDN library article. Notice that Element calls method Choose and also calls a private helper method named LargestV: private static int LargestV(int a, int b, BigInteger x) { int v = a - 1; while (Choose(v, b) > x) --v; return v; } The LargestV helper method also calls Choose. My earlier versions of the Element method couldn't use the improved Choose method because the BigInteger type did not exist at the time. Applying Combinations to Strings In many application development scenarios, you'll want to apply combinations to collections of strings. Because a mathematical combination consists of a subset of integers between 0 and n-1, mapping combinations to arrays -- which are also indexed starting at 0 -- is easy. Take a look at method ApplyTo: public string[] ApplyTo(string[] a) { if (a.Length != this.n) throw new Exception("Bad array size in ApplyTo"); string[] result = new string[this.k]; for (int i = 0; i < result.Length; ++i) result[i] = a[this.data[i]]; return result; } The method accepts an array of strings and returns a subset array of strings that represents the current combination element. The idea is best explained by seeing how ApplyTo was called in the program that produced the output shown in Figure 1: string[] people = new string[] { "Abe", "Bob", "Cal", "Don", "Eve" }; Console.WriteLine("\nIf five people are:"); foreach (string s in people) Console.Write(s + " "); Console.WriteLine("\nThe 7th subset of the 5 people when taken 3 at a time is:\n"); c = new Combination(5, 3); c = c.Element(7); string[] subset = c.ApplyTo(people); foreach (string s in subset) Console.Write(s + " "); Console.WriteLine(""); Console.WriteLine("\nEnd demo\n"); The demo code sets up an array of five people: "Abe," "Bob," Cal," Don" and "Eve," indexed as {0,1,2,3,4}, and echoes their values to the screen. Then the demo creates a combination with n=5 and k=3, and uses the Element method to determine the seventh combination element, which is {1,2,4}. The ApplyTo method generates the seventh element by extracting the corresponding string values from the input array, giving "Bob," "Cal" and "Eve." The code presented here should allow you to efficiently handle most application programming scenarios that require the program¬matic creation and manipulation of combinations of objects. The code implementations of the Choose and Element methods are particularly efficient, in part because of the welcome addition of the BigInteger data type in the System.Numerics namespace. Happy combinatorializing! Printable Format > More TechLibrary I agree to this site's Privacy Policy. > More Webcasts
https://visualstudiomagazine.com/articles/2012/08/01/biginteger-data-type.aspx
CC-MAIN-2016-22
refinedweb
2,515
56.66
Date: Dec 5, 2012 1:54 PM Author: Michael Stemper Subject: Re: Re: Matheology § 170 In article <0e301358-0106-4609-b628-14da5781de11@4g2000yql.googlegroups.com>, WM <mueckenh@rz.fh-augsburg.de> writes: >On 5 Dez., 08:05, Virgil <vir...@ligriv.com> wrote: >> > This set has, like many mathematical entities, a geometrical and an >> > alytical property: >> >> > 0.1 >> > 0.11 >> > 0.111 >> > =A0... >> > It is a triangle and it is a sequence too. >> >> While parts of it can be triangular by reason of having 3 finite sides, >> the whole of it does not have a third finite side so is not a triangle. >> >> NOte that to be a triangle, it would also have to have three vertices, >> which is not the case. > >In mathematics a triangle is defined by one angle and its two sides. No, in mathematics a triangle is defined by either its three vertices or its three sides. Two rays with a common endpoint define an angle, but not a triangle. If you have two triangles, you can use the lengths of two sides and the angle between them to see if they are congruent. But, that only works if you start with two triangles. If you can't specify real coordinates for the vertices, they don't exist, and you do not have a triangle. -- Michael F. Stemper #include <Standard_Disclaimer> The FAQ for rec.arts.sf.written is at: Please read it before posting.
http://mathforum.org/kb/plaintext.jspa?messageID=7932878
CC-MAIN-2015-22
refinedweb
237
66.33
Good afternoon all, I'm brand new to using MySql. Can anyone tell me what .dll file I have to add as a reference to import MySql into application? Currently Imports MySql.Data.MySqlClient does not work, I'm missing a dll i'm sure. Any Ideas? Thanks all! jb Originally Posted by jcb1269 Currently Imports MySql.Data.MySqlClient does not work, I'm missing a dll i'm sure. Any Ideas? What error do you get when you try the import. I tried the import and got this Originally Posted by Warning Warning 1 Namespace or type specified in the Imports 'MySql.Data.MySqlClient' doesn't contain any public member or cannot be found. Make sure the namespace or the type is defined and contains at least one public member. Make sure the imported element name doesn't use any aliases. Interesting that it wasn't an actual error...just a warning. Have a look at the "MySQL Connector/Net — for connecting to MySQL from .NET" section from this MySQL Downloads link. Do you have all of that was missing the MySql.Data.dll. Nooob mistake. hehehe Thanks. jb Forum Rules Development Centers -- Android Development Center -- Cloud Development Project Center -- HTML5 Development Center -- Windows Mobile Development Center
http://forums.devx.com/showthread.php?173199-MySql-and-Vb.Net-2008&p=523121
CC-MAIN-2017-17
refinedweb
208
69.18
#include <tcl.h> ... Tcl_Interp *interp = Tcl_CreateInterp(interp): ...Then you can create new Tcl commands, link static variables from your C program to Tcl variables, but first of all have the power of Tcl at your fingertips ;-). I thought there was at least one more call - to some sort of locating the encoding files, etc. function - required?MJ - When embedding a complete Tcl install (e.g. with init.tcl) the following can be used to completely initialize the embedded Tcl interpreter (copied from TIP 66). Tcl_Interp * interp; /* Initialise the Tcl library thoroughly */ Tcl_FindExecutable( argv[0] ); /* Create the interp, evaluate "init.tcl" for the script level initialisation. */ interp = Tcl_CreateInterp(); if ( Tcl_Init( interp ) != TCL_OK ) { ... Report the error }
http://wiki.tcl.tk/3589
CC-MAIN-2018-05
refinedweb
114
59.4
Lesson 4 of 27By Avijeet BiswalLast updated on Jun 22, 202047338 PyCharm, created by the Czech company JetBrains, is a popular Integrated Development Environment (IDE) used in programming, particularly for the Python programming language. It is written in Java and Python, and its initial release was in February of 2010. PyCharm works with Windows, macOS, and Linux versions. It provides a graphical debugger, an integrated unit tester, coding assistance, support for web development with Django, and integration with Anaconda’s data science platform. This PyCharm tutorial will help you learn how to create new projects, add files to projects, customize the UI, run and debug Python code, and explore a lot of other features. You can download PyCharm from the JetBrains website. Learn the essentials of object-oriented programming, web development with Django, and more with the Python Training Course. Enroll now! Once you click on download, it will give you two options: Professional or Community. The professional edition of PyCharm requires a subscription, while the community edition is free. For this PyCharm tutorial, we will use the community edition. This will start downloading the .exe file. After the download is complete, run the file to install PyCharm. Click on Finish to complete the PyCharm Community edition setup. Now that the installation is complete, let us discuss the PyCharm UI in this PyCharm tutorial. In the PyCharm UI, your options include File, Edit, View, Navigate, Code, Tools, VCS, Window, and Help. If you want to make changes to the environment and the interface, click on File and go to Settings. Now, to write your Python script, go to File and select New Scratch File. Next in this PyCharm tutorial, let's create a new project by going to File and selecting New Project. Choose a name for the project and select create. You will see a pop-up asking how you want to open the project. I’ll choose This Window. It will take a while to create a virtual environment. Once it’s ready, start by naming your project, and then, right-click it, go to New and select Python file. Name the Python file you have created. In our example, we’ve named it PyCharm. At the bottom of the PyCharm window, you will find the Python Console, Terminal, and TODO options. Let us continue this PyCharm tutorial performing some basic mathematical operations by importing the math module. It uses a full function (abs) to return the total value of a number. You can see the Add Configuration option has changed to the name of the Python file. Next, click the green run button to execute your code. The Add Configuration will be reset every time you create a new project. There is a Run tab that comes up once your code executes successfully to display the output. In the Python console, you can run one line of code, which will generate output beneath it. PyCharm has a Refractor option, which allows you to rename a variable throughout the code. To do this, select the variable name, right-click and choose Refractor, and then click Rename. In the next section of this PyCharm tutorial, we will cover how to import the NumPy module in PyCharm. However, if we use the following lines of code to import NumPy, it will throw an error. To install NumPy on PyCharm, click on File and go to the Settings. Under Settings, choose your Python project and select Python Interpreter. Then, search for the NumPy package and click Install Package. Now, let's rerun the code, and you can see this time that it executed successfully. You can close your project by going to the File menu and selecting the Close Project. If you want to open any file in PyCharm, select Open from the File menu and browse to the correct one. It will ask, “How would you like to open the project?”. Select the method that works for you. You can edit the configuration of a project by selecting the name of the file and clicking on Edit Configurations. This allows you to edit the Environment variables, choose the type of Python interpreter you need, select the Working directory, and so on. PyCharm offers multiple ways to run the code: You can select each line of code, and it will show a red-colored point, which refers to the breakpoint. To debug a program, select the bug option present on the top right or click on Run from the menu bar and choose Debug. It will execute until the breakpoint, which can be continued until completion by pressing F8. Below is a small script that can help you understand more about how PyCharm works. Once you run this program, it will prompt you for inputs in the console. st = input(“Enter X and Y”) X, Y=[int(x) for x in st.split(‘,’)] If you try to give anything other than integer as inputs, it will throw an error. Let’s create a list and see how it works: st = input(“Enter X and Y”) X, Y=[int(x) for x in st.split(‘,’)] lst = [[0 for col in range(Y)] for row in range(X)] print(lst) It results in a nested list with four zeros in each list. Let's look at another example to see how you can loop through a list:) The following code will print the elements in a list horizontally:): for col in range(Y): print(lst[row][col]) The code below allows users to print each row with one underneath the other.[row]) Python has wonderful data visualization libraries, such as Matplotlib, that help you build interactive graphs and charts. Go to Settings and in Python Interpreter, search for Matplotlib, and click on Install Package. Here is a simple straight-line graph plotted using Matplotlib: import matplotlib.pyplot as plt plt.plot([1,2,3,4],[1,2,3,4]) plt.show() In the last section of this PyCharm tutorial, we will learn how to import a .csv file onto PyCharm using Pandas. Learn data operations in Python, strings, conditional statements, error handling, and the commonly used Python web framework Django with the Python Training course. Pandas is a widely popular library used for data analysis and manipulation in Python. You need to import the pandas package to use it. To do so, go to File menu >> Settings >> Python Interpreter >> Search for pandas >> install package. The program below will help you import a CSV file onto PyCharm and store it as a data frame using the pandas library. import pandas as pd path = “D:/cars.csv” df = pd.read_csv(path) print(df) If you want to see the first five records of the data frame, use the head() function. We hope this PyCharm tutorial helped you understand how to work with the Python language using the PyCharm IDE. You would've learned how to download and set up the PyCharm IDE, create a project and Python file, configure the settings, write simple code in Python, and run it in various ways. You got an idea of how to install packages, such as NumPy, Matplotlib, and pandas. Finally, you imported a .csv file onto PyCharm using pandas. That’s just some basics. If you want to learn more complex Python coding,
https://www.simplilearn.com/tutorials/python-tutorial/pycharm?source=sl_frs_nav_user_clicks_on_previous_tutorial
CC-MAIN-2020-40
refinedweb
1,216
73.17
Django is the most popular web development framework for the Python programming language. Its design facilitates rapid development without compromising on the standards of professionally built applications. It is free, open source, uses the Model-View-Template architectural pattern, and encapsulates a lot of boilerplates to enable developers churn out web apps quickly. In this tutorial, you will learn and demonstrate how to create a deployment pipeline to continuously deploy your Django apps to a hosting environment. Prerequisites To follow this post, a few things are required: - Basic knowledge of the Python programming language - Python (version >= 3) installed on your system and updated - A Heroku account - A CircleCI account - A GitHub account With all these installed and set up, you can begin the tutorial. Cloning and running the sample Django project To get started, you will be cloning a simple Django project you can use for the deployment demonstration. To clone the project, run: git clone --single-branch --branch base-project Once cloned, go into the project root ( cd cd-django-site) and run the following command to start up the project with the local Python server: python manage.py runserver This will boot up the server and run the application at the address. Load this address in your browser. Creating a Heroku App Your next step is to set up a Heroku application to host the application. Go to your Heroku dashboard and click New -> Create new app. Enter a name for the new application. Make a note of the name you just entered. You will need this later on in the tutorial. Next, locate your Heroku API key in the Account Settings section of your dashboard. You will also need this later in the tutorial. Setting up a CircleCI project for deployment To begin this process, you first need to push the project to a remote repository on GitHub. Make sure that this is the GitHub account connected to your CircleCI account. Next, go to the Projects page (click Projects on the vertical menu on the right) on the CircleCI dashboard. Add the project. Click Set Up Project to begin. Click Skip this step on the modal that pops up. We will be manually adding our CircleCI config later in this tutorial. On the Setup page, click Use Existing Config to indicate that you are adding a configuration file manually and not using the sample displayed. Next, you get a prompt to either download a configuration file for the pipeline or start building. Click Start Building. This build will fail because we have not set up our configuration file yet. one that is selected. Back on the Project Settings page, click Environment Variables on the side menu. On the Environment Variables page, click Add Environment Variable. Add the following environment variables: HEROKU_APP_NAMEis the name of your Heroku application (in this case the name is cci-cd-django). HEROKU_API_KEYis your Heroku account API key. You can copy and paste it from Heroku’s account page. Now that you have added environment variables, you have everything set up on your CircleCI console for deployment to Heroku. Automating the deployment of the Django app To finalize the process, you will need to set up the Django project for deployment on Heroku. Begin by installing the Gunicorn web server for Python. Gunicorn is Heroku’s preferred server for running Django apps in production. At the root of the project, install Gunicorn by running: pip install gunicorn Next, install psycopg2, which is the Postgres adapter for Python applications. Enter this command: pip install psycopg2-binary For a successful deployment, Heroku also requires the django-heroku package to be installed and configured. Install this package by running: pip install django-heroku When the installation is complete, import this latest package at the top of the my_django_project/settings.py file. Place it just below the line from pathlib import Path. import django_heroku Then at the bottom of the file, add the following line: django_heroku.settings(locals()) With all dependencies installed, update the requirements.txt file that tracks all dependencies. Enter this command: pip freeze > requirements.txt In the command above, pip freeze is used to get all the dependencies into the project and send the output of this command into the requirements.txt file. This step is required by Heroku. Next, create a file named Procfile at the root of the project (Heroku apps include a Procfile that specifies the commands that are executed by the app on startup) and add the following line: web: gunicorn my_django_project.wsgi This command instructs Heroku to run the application using the gunicorn server. You now have everything in place to make sure that the Django application is successfully deployed to Heroku. Now you can write the continuous deployment pipeline script that will ship the project from your local environment to Heroku’s remote hosting environment. At the root of your project, create a folder named .circleci with a file named config.yml inside it. In config.yml, enter this code: version: 2.1 orbs: heroku: circleci/heroku@0.0.10 workflows: heroku_deploy: jobs: - heroku/deploy-via-git In this configuration, the Heroku orb circleci/heroku@0.0.10 is imported, which automatically gives access to a set of Heroku jobs and commands that enable ease-of-use of the Heroku toolbelt. One of these jobs is the heroku/deploy-via-git, which deploys your application straight from your GitHub repo to your Heroku account. This job already takes care of installing the Heroku CLI, installing project dependencies, and deploying the application. It also picks up your environment variables to facilitate a smooth deployment to Heroku. Commit all changes to the project and push to your remote GitHub repository. This will automatically trigger the deployment pipeline. Success! Next, click the build (heroku/deploy-via-git), for details about the deployment. Getting that successful build is great, but you need to confirm that your app actually works error-free on Heroku. To do that, visit your Heroku-assigned application URL. The URL should be in the format https://[APP_NAME].herokuapp.com. For this tutorial, the URL is:. Fantastic! Conclusion Python developers love working with Django because of its feature-rich nature and easy-to-use API. Python itself is a developer-friendly language and Django makes using Python for web apps a great choice. If your team makes use of these tools, let them know about what you have learned in this tutorial. The many benefits of using a continuous deployment pipeline to automatically deploy your Django applications only increase when other team members have access to the information. Make deployments to Heroku one less thing for your team to worry about. Happy coding! Fikayo Adepoju is a full-stack developer, technical writer, and tech content creator proficient in Web and Mobile technologies and DevOps, with over ten .
https://circleci.com/blog/django-deploy/
CC-MAIN-2021-39
refinedweb
1,138
56.66
Indicator development - missing a key piece I'm still not sure when to use init and when to use next. The indicator I'm trying to build is simple. it's effectively: (i'm making up the actual formula, as this one is proprietary, but it's all regular variables off the data object) if the close > last close then x, if close < last close then y, if close == lastclose then 0 effectively, what i want is: x = self.data.close - self.data.close(-1) * self.data.high y = self.data.close(-1) - self.data.close * self.data.high self.lines.myindicator = x if close > close(-1) else y if close < close(-1) else 0 but that throws an error, and i'm really not sure where to start, there's so much about the indicators I don't really get at this point. bool should return bool, returned LineOwnOperation effectively - can I run boolean operations in init? if so, how? if not - then I guess I have to do this in next - but I run into trouble there too. never mind, solved by using next - backtrader administrators last edited by backtrader @cemdev said in Indicator development - missing a key piece: x = self.data.close - self.data.close(-1) * self.data.high y = self.data.close(-1) - self.data.close * self.data.high self.lines.myindicator = x if close > close(-1) else y if close < close(-1) else 0 (Use ``` to quote code blocks) The above is almost ok for __init__. The problem being that not everything in python can be overridden and if ... elif ... elseis one of those things. One needs to use backtrader.If(capital Ito distinguish it even with from backtrader import *) See here: Docs - Platform Concepts, specifically the section Some non-overriden operators/functions With that in mind self.lines.myindicator = bt.If(close > close(-1), x, bt.If(close < close(-1), y, 0)) Which effectively says: if close > close(-1) then x elif close < close(-1) then y else 0 oh, that is EXCELLENT. Many thanks for the responses, especially on a sunday. i have it working with next, but i'll refactor with this, seems much cleaner.
https://community.backtrader.com/topic/531/indicator-development-missing-a-key-piece
CC-MAIN-2020-50
refinedweb
363
63.7