cheers.

Index: initdb.c
===================================================================
RCS file: /projects/cvsroot/pgsql-server/src/bin/initdb/initdb.c,v
retrieving revision 1.4
diff -c -w -r1.4 initdb.c
*** initdb.c	13 Nov 2003 01:36:00 -0000	1.4
--- initdb.c	13 Nov 2003 04:15:33 -0000
***************
*** 2324,2329 ****
--- 2324,2333 ----
  	pqsignal(SIGTERM, trapsig);
  #endif

+ 	printf("The files belonging to this database system will be owned by user %s.\n"
+ 	       "This user must also own the server process.\n\n",
+ 	       effective_user);
+
  	/* clear this; we'll use it in a few lines */
  	errno = 0;

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?
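For context, the effective user name the patch prints is typically resolved from the effective UID; initdb does this in C via getpwuid(geteuid()). A minimal Python sketch of the same lookup (an illustration, not part of the patch):

```python
import os
import pwd

# Look up the passwd entry for the effective UID, the same idea as
# getpwuid(geteuid()) in C.
effective_user = pwd.getpwuid(os.geteuid()).pw_name

print("The files belonging to this database system "
      f"will be owned by user {effective_user}.")
print("This user must also own the server process.")
```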
https://www.mail-archive.com/pgsql-patches@postgresql.org/msg01438.html
NAME - SYNOPSIS - DESCRIPTION - OPTIONS - METHODS - SYNTAX HIGHLIGHTING - WRITING PLUGINS - CONTRIBUTIONS - AUTHOR - BUGS - TODO - SEE ALSO

NAME

Tk::CodeText - a TextUndo widget with syntax highlighting capabilities

SYNOPSIS

  use Tk;
  require Tk::CodeText;
  my $m = new MainWindow;
  my $e = $m->Scrolled('CodeText',
      -disablemenu => 1,
      -syntax      => 'Perl',
      -scrollbars  => 'se',
  )->pack(-expand => 1, -fill => 'both');
  $m->configure(-menu => $e->menu);
  $m->MainLoop;

DESCRIPTION

Tk::CodeText provides a TextUndo widget with syntax highlighting capabilities; highlighting plugins are currently available for Perl, Pod, HTML and Xresources.

OPTIONS

- Name: autoindent - Class: Autoindent - Switch: -autoindent
  Boolean: when you press the Enter key, should the next line begin at the same indentation as the current line? By default false.

- Name: commentchar - Class: Commentchar - Switch: -commentchar
  By default "#".

- Name: disablemenu - Class: Disablemenu - Switch: -disablemenu
  Boolean, by default 0. Set this if you don't want the menu under the right mouse button to pop up.

- Name: indentchar - Class: Indentchar - Switch: -indentchar
  By default "\t".

- Name: match - Class: Match - Switch: -match
  String of pairs for brace/bracket/curly etc. matching. If this description doesn't make anything clear, don't worry; the default setting will: '[]{}()'. If you don't want matching to be available, simply set it to ''.

- Name: matchoptions - Class: Matchoptions - Switch: -matchoptions
  Options list for the tag 'Match'. By default: [-background => 'red', -foreground => 'yellow']. You can also specify this option as a space-separated string, which might come in handy for your Xresource files: "-background red -foreground yellow".

- Name: not available - Class: not available - Switch: -rules

- Name: rulesdir - Class: Rulesdir - Switch: -rulesdir

- Name: syntax - Class: Syntax - Switch: -syntax
  Specifies the language for highlighting. At this moment the possible values are None, HTML, Perl, Pod and Xresources. By default None. Alternatively, it is possible to specify a reference to your independent plugin.
- Name: Not available - Class: Not available - Switch: -updatecall
  Here you can specify a callback that will be executed whenever the insert cursor has moved or text has been modified, so your application can keep track of position etc. Don't make this callback too heavy; the widget will get sluggish quickly.

There are some undocumented options. They are used internally. It is probably best to leave them alone.

METHODS

- doAutoIndent
  Checks the indentation of the previous line and indents the line where the cursor is equally deep.

- highlight($begin, $end);
  Does syntax highlighting on the section of text indicated by $begin and $end. $begin and $end are line numbers, not indexes!

- highlightCheck($begin, $end);

- highlightLine($line);
  Does syntax highlighting on line number $line.

- highlightPlug

- highlightPlugInit

- highlightPurge($line);
  Tells the widget that the text from line number $line to the end of the text is not to be considered highlighted any more.

- highlightVisual
  Calls visualEnd to see what part of the text is visible on the display, and adjusts highlighting accordingly.

- linenumber($index);
  Returns the line number part of an index. You may also specify indexes like 'end' or 'insert' etc.

- matchCheck
  Checks whether the character just before the 'insert' mark should be matched, and if so whether it should match forwards or backwards. It then calls matchFind.

- matchFind($direction, $char, $match, $start, $stop);
  Matches $char to $match, skipping nested $char/$match pairs, and displays the match found (if any).

- rulesEdit
  Pops up a window that enables the user to set the color and font options for the current syntax.

- rulesFetch
  Checks whether the file $text->cget('-rulesdir') . '/' . $text->cget('-syntax') . '.rules' exists, and if so attempts to load this as a set of rules.

- rulesSave
  Saves the currently loaded rules as $text->cget('-rulesdir') . '/' . $text->cget('-syntax') . '.rules'

- selectionComment
  Comments the currently selected text.
- selectionIndent
  Indents the currently selected text.

- selectionModify
  Used by the other selection... methods to do the actual work.

- selectionUnComment
  Uncomments the currently selected text.

- selectionUnIndent
  Unindents the currently selected text.

SYNTAX HIGHLIGHTING

CodeText will then mark positions 0 to 2 as 'Reserved', positions 2 to 3 as 'DEFAULT', positions 3 to 10 as 'Variable', etcetera.

WRITING PLUGINS

To write a highlighting plugin for Tk::CodeText, your plugin must be in the namespace Tk::CodeText::YourSyntax.

- The constructor is called 'new', and it should accept a reference to a list of rules as a parameter.
- The following methods will be called upon by Tk::CodeText: highlight, stateCompare, rules, setState, getState, syntax. More information about those methods is available in the documentation of Tk::CodeText::None and Tk::CodeText::Template.

Good luck, you're on your own now.

Inheriting Tk::CodeText::Template

For many highlighting problems Tk::CodeText::Template provides a nice basis to start from. Your code could look like this:

  package Tk::CodeText::MySyntax;
  use strict;
  use base('Tk::CodeText::Template');

  sub new {
      my ($proto, $wdg, $rules) = @_;
      my $class = ref($proto) || $proto;

Next, specify the set of hardcoded rules.

      if (not defined($rules)) {
          $rules = [
              ['Tagname1', -foreground => 'red'],
              ['Tagname2', -foreground => 'red'],
          ];
      };

Call the constructor of Tk::CodeText::Template.

CONTRIBUTIONS

If you have written a plugin, I will be happy to include it in the next release of Tk::CodeText. If you send it to me, please have it accompanied by the sample of code that you used for testing.

AUTHOR

BUGS

Unknown. If you find any, please contact the author.

TODO

- Add additional language modules. I am going to need help on this one.
- HTML and Xresources plugins need rewriting.
- The sample files in the test suite should be set up so that conformity with the language specification can actually be verified.
SEE ALSO

Tk::Text, Tk::TextUndo, Tk::CodeText::None, Tk::CodeText::Perl, Tk::CodeText::HTML, Tk::CodeText::Template, Tk::CodeText::Bash
https://metacpan.org/pod/distribution/Tk-CodeText/CodeText.pod
This C# program illustrates the Elapsed event. An event handler is attached to the Timer.Elapsed event; the program creates a timer and starts it, and the handler displays the SignalTime property each time the event is raised. Here is the source code of the program. It is compiled and executed with Microsoft Visual Studio, and its output is shown below.

/*
 * C# Program to Illustrate Elapsed Event
 */
using System;
using System.Timers;

public class Program
{
    private static System.Timers.Timer Tim;

    public static void Main()
    {
        Tim = new System.Timers.Timer(10);
        Tim.Elapsed += new ElapsedEventHandler(OnTimedEvent);
        Tim.Interval = 1000;
        Tim.Enabled = true;
        Console.WriteLine("Press Any Key to Exit else Elapsed Event will be Raised ");
        Console.ReadLine();
    }

    private static void OnTimedEvent(object source, ElapsedEventArgs e)
    {
        Console.WriteLine("The Elapsed event was Raised {0}", e.SignalTime);
    }
}

Here is the output of the C# program:

Press Any Key to Exit else Elapsed Event will be Raised
The Elapsed event was raised at 9/17/2013 7:24:15 PM
The Elapsed event was raised at 9/17/2013 7:24:16 PM
The Elapsed event was raised at 9/17/2013 7:24:17 PM

Sanfoundry Global Education & Learning Series – 1000 C# Programs. If you wish to look at all C# Programming examples, go to 1000 C# Programs.
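For comparison, here is a rough Python analogue of the same idea — my own sketch, not from the original article. Note one assumption worth flagging: Python's threading.Timer is one-shot, unlike System.Timers.Timer, which re-raises Elapsed every Interval milliseconds until disabled, so this version fires once and signals an event.

```python
import datetime
import threading

fired = threading.Event()

def on_timed_event():
    # Analogue of the Elapsed handler: report when the timer fired.
    print("The timer fired at", datetime.datetime.now())
    fired.set()

# threading.Timer runs its callback once after the given delay (seconds).
timer = threading.Timer(0.1, on_timed_event)
timer.start()
fired.wait(timeout=5)
```

To get recurring behavior like Timer.Elapsed, the callback would have to re-arm a new Timer before returning.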
http://www.sanfoundry.com/csharp-program-elapsed-event/
Test Run - Dive into Neural Networks By James McCaffrey | May 2012 An artificial neural network (usually just called a neural network) is an abstraction loosely modeled on biological neurons and synapses. Although neural networks have been studied for decades, many neural network code implementations on the Internet are not, in my opinion, explained very well. In this month’s column, I’ll explain what artificial neural networks are and present C# code that implements a neural network. The best way to see where I’m headed is to take a look at Figure 1 and Figure 2. One way of thinking about neural networks is to consider them numerical input-output mechanisms. The neural network in Figure 1 has three inputs labeled x0, x1 and x2, with values 1.0, 2.0 and 3.0, respectively. The neural network has two outputs labeled y0 and y1, with values 0.72 and -0.88, respectively. The neural network in Figure 1 has one layer of so-called hidden neurons and can be described as a three-layer, fully connected, feedforward network with three inputs, two outputs and four hidden neurons. Unfortunately, neural network terminology varies quite a bit. In this article, I’ll generally—but not always—use the terminology described in the excellent neural network FAQ at bit.ly/wfikTI. Figure 1 Neural Network Structure Figure 2 Neural Network Demo Program Figure 2 shows the output produced by the demo program presented in this article. The neural network uses both a sigmoid activation function and a tanh activation function. These functions are suggested by the two equations with the Greek letters phi in Figure 1. The outputs produced by a neural network depend on the values of a set of numeric weights and biases. In this example, there are a total of 26 weights and biases with values 0.10, 0.20 ... -5.00. 
After the weight and bias values are loaded into the neural network, the demo program loads the three input values (1.0, 2.0, 3.0) and then performs a series of computations as suggested by the messages about the input-to-hidden sums and the hidden-to-output sums. The demo program concludes by displaying the two output values (0.72, -0.88). I’ll walk you through the program that produced the output shown in Figure 2. This column assumes you have intermediate programming skills but doesn’t assume you know anything about neural networks. The demo program is coded using the C# language but you should have no trouble refactoring the demo code to another language such as Visual Basic .NET or Python. The program presented in this article is essentially a tutorial and a platform for experimentation; it does not directly solve any practical problem, so I’ll explain how you can expand the code to solve meaningful problems. I think you’ll find the information quite interesting, and some of the programming techniques can be valuable additions to your coding skill set. Modeling a Neural Network Conceptually, artificial neural networks are modeled on the behavior of real biological neural networks. In Figure 1 the circles represent neurons where processing occurs and the arrows represent both information flow and numeric values called weights. In many situations, input values are copied directly into input neurons without any weighting and emitted directly without any processing, so the first real action occurs in the hidden layer neurons. Assume that input values 1.0, 2.0 and 3.0 are emitted from the input neurons. If you examine Figure 1, you can see an arrow representing a weight value between each of the three input neurons and each of the four hidden neurons. Suppose the three weight arrows shown pointing into the top hidden neuron are named w00, w10 and w20. 
In this notation the first index represents the index of the source input neuron and the second index represents the index of the destination hidden neuron. Neuron processing occurs in three steps. In the first step, a weighted sum is computed. Suppose w00 = 0.1, w10 = 0.5 and w20 = 0.9. The weighted sum for the top hidden neuron is (1.0)(0.1) + (2.0)(0.5) + (3.0)(0.9) = 3.8. The second processing step is to add a bias value. Suppose the bias value is -2.0; then the adjusted weighted sum becomes 3.8 + (-2.0) = 1.8. The third step is to apply an activation function to the adjusted weighted sum. Suppose the activation function is the sigmoid function defined by 1.0 / (1.0 + Exp(-x)), where Exp represents the exponential function. The output from the hidden neuron becomes 1.0 / (1.0 + Exp(-1.8)) = 0.86. This output then becomes part of the weighted sum input into each of the output layer neurons. In Figure 1, this three-step process is suggested by the equation with the Greek letter phi: weighted sums (xw) are computed, a bias (b) is added and an activation function (phi) is applied. After all hidden neuron values have been computed, output layer neuron values are computed in the same way. The activation function used to compute output neuron values can be the same function used when computing the hidden neuron values, or a different activation function can be used. The demo program shown running in Figure 2 uses the hyperbolic tangent function as the hidden-to-output activation function. After all output layer neuron values have been computed, in most situations these values are not weighted or processed but are simply emitted as the final output values of the neural network. Internal Structure The key to understanding the neural network implementation presented here is to closely examine Figure 3, which, at first glance, might appear extremely complicated. But bear with me—the figure is not nearly as complex as it might first appear. 
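The three-step neuron computation just described (weighted sum, add bias, apply activation) is easy to verify with a few lines of Python — an illustrative sketch, not the article's C# code:

```python
import math

inputs = [1.0, 2.0, 3.0]
w = [0.1, 0.5, 0.9]   # w00, w10, w20 from the example
bias = -2.0

# Step 1: compute the weighted sum of the inputs.
weighted_sum = sum(i * wi for i, wi in zip(inputs, w))   # 3.8
# Step 2: add the bias value.
adjusted = weighted_sum + bias                           # 1.8
# Step 3: apply the sigmoid activation function.
output = 1.0 / (1.0 + math.exp(-adjusted))               # about 0.86
```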
Figure 3 shows a total of eight arrays and two matrices. The first array is labeled this.inputs. This array holds the neural network input values, which are 1.0, 2.0 and 3.0 in this example. Next comes the set of weight values that are used to compute values in the so-called hidden layer. These weights are stored in a 3 x 4 matrix labeled i-h weights where the i-h stands for input-to-hidden. Notice in Figure 1 that the demo neural network has four hidden neurons. The i-h weights matrix has a number of rows equal to the number of inputs and a number of columns equal to the number of hidden neurons. The array labeled i-h sums is a scratch array used for computation. Note that the length of the i-h sums array will always be the same as the number of hidden neurons (four, in this example). Next comes an array labeled i-h biases. Neural network biases are additional weights used to compute hidden and output layer neurons. The length of the i-h biases array will be the same as the length of the i-h sums array, which in turn is the same as the number of hidden neurons. The array labeled i-h outputs is an intermediate result and the values in this array are used as inputs to the next layer. The i-h outputs array likewise has length equal to the number of hidden neurons. Next comes a matrix labeled h-o weights where the h-o stands for hidden-to-output. Here the h-o weights matrix has size 4 x 2 because there are four hidden neurons and two outputs. The h-o sums array, the h-o biases array and the this.outputs array all have lengths equal to the number of outputs (two, in this example). The array labeled weights at the bottom of Figure 3 holds all the input-to-hidden and hidden-to-output weights and biases. In this example, the length of the weights array is (3 * 4) + 4 + (4 * 2) + 2 = 26. In general, if Ni is the number of input values, Nh is the number of hidden neurons and No is the number of outputs, then the length of the weights array will be Nw = (Ni * Nh) + Nh + (Nh * No) + No. 
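The weight-count formula at the end of the paragraph above can be sanity-checked in a couple of lines of Python (a sketch; the function name is mine):

```python
def num_weights(n_inputs: int, n_hidden: int, n_outputs: int) -> int:
    # Nw = (Ni * Nh) + Nh + (Nh * No) + No
    return (n_inputs * n_hidden) + n_hidden + (n_hidden * n_outputs) + n_outputs

# The demo network: 3 inputs, 4 hidden neurons, 2 outputs -> 26 weights and biases.
demo_count = num_weights(3, 4, 2)
```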
Computing the Outputs After the eight arrays and two matrices described in the previous section have been created, a neural network can compute its output based on its inputs, weights and biases. The first step is to copy input values into the this.inputs array. The next step is to assign values to the weights array. For the purposes of a demonstration you can use any weight values you like. Next, values in the weights array are copied to the i-h weights matrix, the i-h biases array, the h-o weights matrix and the h-o biases array. Figure 3 should make this relationship clear. The values in the i-h sums array are computed in two steps. The first step is to compute the weighted sums by multiplying the values in the inputs array by the values in the appropriate column of the i-h weights matrix. For example, the weighted sum for hidden neuron [3] (where I’m using zero-based indexing) uses each input value and the values in column [3] of the i-h weights matrix: (1.0)(0.4) + (2.0)(0.8) + (3.0)(1.2) = 5.6. The second step when computing i-h sum values is to add each bias value to the current i-h sum value. For example, because i-h biases [3] has value -7.0, the value of i-h sums [3] becomes 5.6 + (-7.0) = -1.4. After all the values in the i-h sums array have been calculated, the input-to-hidden activation function is applied to those sums to produce the input-to-hidden output values. There are many possible activation functions. The simplest activation function is called the step function, which simply returns 1.0 for any input value greater than zero and returns 0.0 for any input value less than or equal to zero. Another common activation function, and the one used in this article, is the sigmoid function, which is defined as f(x) = 1.0 / (1.0 + Exp(-x)). The graph of the sigmoid function is shown in Figure 4. Figure 4 The Sigmoid Function Notice the sigmoid function returns a value in the range strictly greater than zero and strictly less than one. 
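The step and sigmoid activation functions described above can be sketched as follows (Python for illustration, not the article's C#); the check that i-h sums [3] = -1.4 maps to about 0.20 matches the article's worked example:

```python
import math

def step(x: float) -> float:
    # Returns 1.0 for any input greater than zero, 0.0 otherwise.
    return 1.0 if x > 0.0 else 0.0

def sigmoid(x: float) -> float:
    # f(x) = 1 / (1 + e^(-x)); output is strictly between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

value = sigmoid(-1.4)   # about 0.20, as in the article's example
```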
In this example, if the value for i-h sums [3] after the bias value has been added is -1.4, then the value of i-h outputs [3] becomes 1.0 / (1.0 + Exp(-(-1.4))) = 0.20. After all the input-to-hidden output neuron values have been computed, those values serve as the inputs for the hidden-to-output layer neuron computations. These computations work in the same way as the input-to-hidden computations: preliminary weighted sums are calculated, biases are added and then an activation function is applied. In this example I use the hyperbolic tangent function, abbreviated as tanh, for the hidden-to-output activation function. The tanh function is closely related to the sigmoid function. The graph of the tanh function has an S-shaped curve similar to the sigmoid function, but tanh returns a value in the range (-1,1) instead of in the range (0,1). Combining Weights and Biases None of the neural network implementations I’ve seen on the Internet maintains separate weight and bias arrays; instead, they combine weights and biases into the weights matrix. How is this possible? Recall that the computation of the value of input-to-hidden neuron [3] resembled (i0 * w03) + (i1 * w13) + (i2 * w23) + b3, where i0 is input value [0], w03 is the weight for input [0] and neuron [3], and b3 is the bias value for hidden neuron [3]. If you create an additional, fake input [3] that has a dummy value of 1.0, and an additional row of weights that hold the bias values, then the previously described computation becomes: (i0 * w03) + (i1 * w13) + (i2 * w23) + (i3 * w33), where i3 is the dummy 1.0 input value and w33 is the bias. The argument is that this approach simplifies the neural network model. I disagree. In my opinion, combining weights and biases makes a neural network model more difficult to understand and more error-prone to implement. However, apparently I’m the only author who seems to have this opinion, so you should make your own design decision. 
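The weights-plus-biases folding described above is easy to demonstrate: appending a dummy 1.0 input whose weight is the bias value yields exactly the same sum. An illustrative Python check (not the article's code):

```python
inputs = [1.0, 2.0, 3.0]
w = [0.1, 0.5, 0.9]
bias = -2.0

# Separate bias: weighted sum plus an explicit bias term.
separate = sum(i * wi for i, wi in zip(inputs, w)) + bias

# Folded bias: a dummy input of 1.0 whose weight is the bias value.
aug_inputs = inputs + [1.0]
aug_w = w + [bias]
folded = sum(i * wi for i, wi in zip(aug_inputs, aug_w))
```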
Implementation

I implemented the neural network shown in Figures 1, 2 and 3 using Visual Studio 2010. I created a C# console application named NeuralNetworks. In the Solution Explorer window I right-clicked on file Program.cs and renamed it to NeuralNetworksProgram.cs, which also changed the template-generated class name to NeuralNetworksProgram. The overall program structure, with most WriteLine statements removed, is shown in Figure 5.

using System;
namespace NeuralNetworks
{
  class NeuralNetworksProgram
  {
    static void Main(string[] args)
    {
      try
      {
        Console.WriteLine("\nBegin Neural Network demo\n");
        NeuralNetwork nn = new NeuralNetwork(3, 4, 2);
        double[] weights = new double[] {
          0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2,
          -2.0, -6.0, -1.0, -7.0,
          1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0,
          -2.5, -5.0 };
        nn.SetWeights(weights);
        double[] xValues = new double[] { 1.0, 2.0, 3.0 };
        double[] yValues = nn.ComputeOutputs(xValues);
        Helpers.ShowVector(yValues);
        Console.WriteLine("End Neural Network demo\n");
      }
      catch (Exception ex)
      {
        Console.WriteLine("Fatal: " + ex.Message);
      }
    }
  }

  class NeuralNetwork
  {
    // Class members here
    public NeuralNetwork(int numInput, int numHidden, int numOutput) { ... }
    public void SetWeights(double[] weights) { ... }
    public double[] ComputeOutputs(double[] xValues) { ... }
    private static double SigmoidFunction(double x) { ... }
    private static double HyperTanFunction(double x) { ... }
  }

  public class Helpers
  {
    public static double[][] MakeMatrix(int rows, int cols) { ... }
    public static void ShowVector(double[] vector) { ... }
    public static void ShowMatrix(double[][] matrix, int numRows) { ... }
  }
} // ns

I deleted all the template-generated using statements except for the one referencing the System namespace. In the Main function, after displaying a begin message, I instantiate a NeuralNetwork object named nn with three inputs, four hidden neurons and two outputs. Next, I assign 26 arbitrary weights and biases to an array named weights. 
I load the weights into the neural network object using a method named SetWeights. I assign values 1.0, 2.0 and 3.0 to an array named xValues. I use method ComputeOutputs to load the input values into the neural network and determine the resulting outputs, which I fetch into an array named yValues. The demo concludes by displaying the output values.

The NeuralNetwork Class

The NeuralNetwork class definition starts: As explained in the previous sections, the structure of a neural network is determined by the number of input values, the number of hidden layer neurons and the number of output values. The class definition continues as: These seven arrays and two matrices correspond to the ones shown in Figure 3. I use an ih prefix for input-to-hidden data and an ho prefix for hidden-to-output data. Recall that the values in the ihOutputs array serve as the inputs for the output layer computations, so naming this array precisely is a bit troublesome. Figure 6 shows how the NeuralNetwork class constructor is defined.

public NeuralNetwork(int numInput, int numHidden, int numOutput)
{
  this.numInput = numInput;
  this.numHidden = numHidden;
  this.numOutput = numOutput;
  inputs = new double[numInput];
  ihWeights = Helpers.MakeMatrix(numInput, numHidden);
  ihSums = new double[numHidden];
  ihBiases = new double[numHidden];
  ihOutputs = new double[numHidden];
  hoWeights = Helpers.MakeMatrix(numHidden, numOutput);
  hoSums = new double[numOutput];
  hoBiases = new double[numOutput];
  outputs = new double[numOutput];
}

After copying the input parameter values numInput, numHidden and numOutput into their respective class fields, each of the nine member arrays and matrices are allocated with the sizes I explained earlier. I implement matrices as arrays of arrays rather than using the C# multidimensional array type so that you can more easily refactor my code to a language that doesn’t support multidimensional array types. 
Because each row of my matrices must be allocated, it’s convenient to use a helper method such as MakeMatrix. The SetWeights method accepts an array of weights and bias values and populates ihWeights, ihBiases, hoWeights and hoBiases. The method begins like this: As explained earlier, the total number of weights and biases, Nw, in a fully connected feedforward neural network is (Ni * Nh) + (Nh * No) + Nh + No. I do a simple check to see if the weights array parameter has the correct length. Here, “xxxxxx” is a stand-in for a descriptive error message. Next, I initialize an index variable k to the beginning of the weights array parameter. Method SetWeights concludes by copying values: Each value in the weights array parameter is copied sequentially into ihWeights, ihBiases, hoWeights and hoBiases. Notice no values are copied into ihSums or hoSums because those two scratch arrays are used for computation.

Computing the Outputs

The heart of the NeuralNetwork class is method ComputeOutputs. The method is surprisingly short and simple and begins: First I check to see if the length of the input x-values array is the correct size for the NeuralNetwork object. Then I zero out the ihSums and hoSums arrays. If ComputeOutputs is called only once, then this explicit initialization is not necessary, but if ComputeOutputs is called more than once—because ihSums and hoSums are accumulated values—the explicit initialization is absolutely necessary. An alternative design approach is to not declare and allocate ihSums and hoSums as class members, but instead make them local to the ComputeOutputs method. Method ComputeOutputs continues: The values in the xValues array parameter are copied to the class inputs array member. In some neural network scenarios, input parameter values are normalized, for example by performing a linear transform so that all inputs are scaled between -1.0 and +1.0, but here no normalization is performed. 
Next, a nested loop computes the weighted sums as shown in Figures 1 and 3. Notice that in order to index ihWeights in standard form where index i is the row index and index j is the column index, it’s necessary to have j in the outer loop. Method ComputeOutputs continues: Each weighted sum is modified by adding the appropriate bias value. At this point, to produce the output shown in Figure 2, I used method Helpers.ShowVector to display the current values in the ihSums array. Next, I apply the sigmoid function to each of the values in ihSums and assign the results to array ihOutputs. I’ll present the code for method SigmoidFunction shortly. Method ComputeOutputs continues: I use the just-computed values in ihOutputs and the weights in hoWeights to compute values into hoSums, then I add the appropriate hidden-to-output bias values. Again, to produce the output shown in Figure 2, I called Helpers.ShowVector. Method ComputeOutputs finishes: I apply method HyperTanFunction to the hoSums to generate the final outputs into class array private member outputs. I copy those outputs to a local result array and use that array as a return value. An alternative design choice would be to implement ComputeOutputs without a return value, but implement a public method GetOutputs so that the outputs of the neural network object could be retrieved. The Activation Functions and Helper Methods Here’s the code for the sigmoid function used to compute the input-to-hidden outputs: Because some implementations of the Math.Exp function can produce arithmetic overflow, checking the value of the input parameter is usually performed. The code for the tanh function used to compute the hidden-to-output results is: The hyperbolic tangent function returns values between -1 and +1, so arithmetic overflow is not a problem. Here the input value is checked merely to improve performance. The static utility methods in class Helpers are just coding conveniences. 
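To tie the pieces together, the whole feedforward pass can be condensed into a short Python port — my own sketch under the layout assumptions above, not Dr. McCaffrey's C# code. The flat weights array is unpacked in SetWeights order (i-h weights, i-h biases, h-o weights, h-o biases), and with the article's 26 demo weights the result reproduces outputs of roughly 0.72 and -0.88:

```python
import math

def compute_outputs(x, weights, n_in=3, n_hid=4, n_out=2):
    # Unpack the flat weights array in the same order as SetWeights.
    k = 0
    ih_w = [[0.0] * n_hid for _ in range(n_in)]
    for i in range(n_in):
        for j in range(n_hid):
            ih_w[i][j] = weights[k]; k += 1
    ih_b = weights[k:k + n_hid]; k += n_hid
    ho_w = [[0.0] * n_out for _ in range(n_hid)]
    for i in range(n_hid):
        for j in range(n_out):
            ho_w[i][j] = weights[k]; k += 1
    ho_b = weights[k:k + n_out]

    # Hidden layer: weighted sum plus bias, then sigmoid activation.
    ih_out = []
    for j in range(n_hid):
        s = sum(x[i] * ih_w[i][j] for i in range(n_in)) + ih_b[j]
        ih_out.append(1.0 / (1.0 + math.exp(-s)))

    # Output layer: weighted sum plus bias, then tanh activation.
    outputs = []
    for j in range(n_out):
        s = sum(ih_out[i] * ho_w[i][j] for i in range(n_hid)) + ho_b[j]
        outputs.append(math.tanh(s))
    return outputs

weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2,
           -2.0, -6.0, -1.0, -7.0,
           1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0,
           -2.5, -5.0]
y = compute_outputs([1.0, 2.0, 3.0], weights)  # roughly [0.72, -0.88]
```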
The MakeMatrix method used to allocate matrices in the NeuralNetwork constructor allocates each row of a matrix implemented as an array of arrays: Methods ShowVector and ShowMatrix display the values in an array or matrix to the console. You can see the code for these two methods in the code download that accompanies this article (available at msdn.microsoft.com/magazine/msdnmag0512). Next Steps The code presented here should give you a solid basis for understanding and experimenting with neural networks. You might want to examine the effects of using different activation functions and varying the number of inputs, outputs and hidden layer neurons. You can modify the neural network by making it partially connected, where some neurons are not logically connected to neurons in the next layer. The neural network presented in this article has one hidden layer. It’s possible to create more complex neural networks that have two or even more hidden layers, and you might want to extend the code presented here to implement such a neural network. Neural networks can be used to solve a variety of practical problems, including classification problems. In order to solve such problems there are several challenges. For example, you must know how to encode non-numeric data and how to train a neural network to find the best set of weights and biases. I will present an example of using neural networks for classification in a future article. Dr. James McCaffrey works for Volt Information Sciences Inc., where he manages technical training for software engineers working at Microsoft’s Redmond, Wash., campus. He has worked on several Microsoft products including Internet Explorer and MSN Search. He’s the author of “.NET Test Automation Recipes” (Apress, 2006), and can be reached at jammc@microsoft.com. 
Thanks to the following Microsoft technical experts for reviewing this article: Dan Liebling and Anne Loomis Thompson
https://msdn.microsoft.com/magazine/3791e878-be19-4e07-af65-574c02d1faf7
The thing I love most about programming is the aha! moment when you start to fully understand a concept. Even though it might take a long time and no small amount of effort to get there, it sure is worth it. I think that the most effective way to assess (and help improve) our degree of comprehension of a given subject is to try and apply the knowledge to the real world. Not only does this let us identify and ultimately address our weaknesses, but it can also shed some light on the way things work. A simple trial and error approach often reveals those details that had remained elusive previously. With that in mind, I believe that learning how to implement promises was one of the most important moments in my programming journey — it has given me invaluable insight into how asynchronous code works and has made me a better programmer overall. I hope that this article will help you come to grips with implementing promises in JavaScript as well. We shall focus on how to implement the promise core according to the Promises/A+ specification with a few methods of the Bluebird API. We are also going to be using the TDD approach with Jest. TypeScript is going to come in handy, too. Given that we are going to be working on the skills of implementation here, I am going to assume you have some basic understanding of what promises are and a vague sense of how they work. If you don’t, here is a great place to start. Now that we have that out of the way, go ahead and clone the repository and let’s get started.

The core of a promise

As you know, a promise is an object with the following properties:

Then
A method that attaches a handler to our promise. It returns a new promise with the value from the previous one mapped by one of the handler’s methods.

Handlers
An array of handlers attached by then. A handler is an object containing two methods, onSuccess and onFail, both of which are passed as arguments to then(onSuccess, onFail).
type HandlerOnSuccess<T, U = any> = (value: T) => U | Thenable<U>; type HandlerOnFail<U = any> = (reason: any) => U | Thenable<U>; interface Handler<T, U> { onSuccess: HandlerOnSuccess<T, U>; onFail: HandlerOnFail<U>; } State A promise can be in one of three states: resolved, rejected, or pending. Resolved means that either everything went smoothly and we received our value, or we caught and handled the error. Rejected means that either we rejected the promise, or an error was thrown and we didn’t catch it. Pending means that neither the resolve nor the reject method has been called yet and we are still waiting for the value. The term “the promise is settled” means that the promise is either resolved or rejected. Value A value that we have either resolved or rejected. Once the value is set, there is no way of changing it. Testing According to the TDD approach, we want to write our tests before the actual code comes along, so let’s do just that. Here are the tests for our core: describe('PQ <constructor>', () => { test('resolves like a promise', () => { return new PQ<number>((resolve) => { setTimeout(() => { resolve(1); }, 30); }).then((val) => { expect(val).toBe(1); }); }); test('is always asynchronous', () => { const p = new PQ((resolve) => resolve(5)); expect((p as any).value).not.toBe(5); }); test('resolves with the expected value', () => { return new PQ<number>((resolve) => resolve(30)).then((val) => { expect(val).toBe(30); }); }); test('resolves a thenable before calling then', () => { return new PQ<number>((resolve) => resolve(new PQ((resolve) => resolve(30))), ).then((val) => expect(val).toBe(30)); }); test('catches errors (reject)', () => { const error = new Error('Hello there'); return new PQ((resolve, reject) => { return reject(error); }).catch((err: Error) => { expect(err).toBe(error); }); }); test('catches errors (throw)', () => { const error = new Error('General Kenobi!'); return new PQ(() => { throw error; }).catch((err) => { expect(err).toBe(error); 
}); }); test('is not mutable - then returns a new promise', () => { const start = new PQ<number>((resolve) => resolve(20)); return PQ.all([ start .then((val) => { expect(val).toBe(20); return 30; }) .then((val) => expect(val).toBe(30)), start.then((val) => expect(val).toBe(20)), ]); }); }); Running our tests I highly recommend using the Jest extension for Visual Studio Code. It runs our tests in the background for us and shows us the result right there between the lines of our code as green and red dots for passed and failed tests, respectively. To see the results, open the “Output” console and choose the “Jest” tab. We can also run our tests by executing the following command: npm run test Regardless of how we run the tests, we can see that all of them come back negative. Let’s change that. Implementing the Promise core constructor class PQ<T> { private state: States = States.PENDING; private handlers: Handler<T, any>[] = []; private value: T | any; public static errors = errors; public constructor(callback: (resolve: Resolve<T>, reject: Reject) => void) { try { callback(this.resolve, this.reject); } catch (e) { this.reject(e); } } } Our constructor takes a callback as a parameter. We call this callback with this.resolve and this.reject as arguments. Note that normally we would have bound this.resolve and this.reject to this, but here we have used the class arrow method instead. setResult Now we have to set the result. Please remember that we must handle the result correctly, which means that, should it return a promise, we must resolve it first. class PQ<T> { // ... 
private setResult = (value: T | any, state: States) => { const set = () => { if (this.state !== States.PENDING) { return null; } if (isThenable(value)) { return (value as Thenable<T>).then(this.resolve, this.reject); } this.value = value; this.state = state; return this.executeHandlers(); }; setTimeout(set, 0); }; } First, we check if the state is not pending — if it is, then the promise is already settled and we can’t assign any new value to it. Then we need to check if a value is a thenable. To put it simply, a thenable is an object with then as a method. By convention, a thenable should behave like a promise. So in order to get the result, we will call then and pass as arguments this.resolve and this.reject. Once the thenable settles, it will call one of our methods and give us the expected non-promise value. So now we have to check if an object is a thenable. describe('isThenable', () => { test('detects objects with a then method', () => { expect(isThenable({ then: () => null })).toBe(true); expect(isThenable(null)).toBe(false); expect(isThenable({})).toBe(false); }); }); const isFunction = (func: any) => typeof func === 'function'; const isObject = (supposedObject: any) => typeof supposedObject === 'object' && supposedObject !== null && !Array.isArray(supposedObject); const isThenable = (obj: any) => isObject(obj) && isFunction(obj.then); It is important to realize that our promise will never be synchronous, even if the code inside the callback is. We are going to delay the execution until the next iteration of the event loop by using setTimeout. Now the only thing left to do is to set our value and status and then execute the registered handlers. executeHandlers class PQ<T> { // ... 
private executeHandlers = () => { if (this.state === States.PENDING) { return null; } this.handlers.forEach((handler) => { if (this.state === States.REJECTED) { return handler.onFail(this.value); } return handler.onSuccess(this.value); }); this.handlers = []; }; } Again, make sure the state is not pending. The state of the promise dictates which function we are going to use. If it’s resolved, we should execute onSuccess, otherwise — onFail. Let’s now clear our array of handlers just to be safe and not to execute anything accidentally in the future. A handler can be attached and executed later anyways. And that’s what we must discuss next: a way to attach our handler. attachHandler class PQ<T> { // ... private attachHandler = (handler: Handler<T, any>) => { this.handlers = [...this.handlers, handler]; this.executeHandlers(); }; } It really is as simple as it seems. We just add a handler to our handlers array and execute it. That’s it. Now, to put it all together we need to implement the then method. then class PQ<T> { // ... public then<U>( onSuccess?: HandlerOnSuccess<T, U>, onFail?: HandlerOnFail<U>, ) { return new PQ<U | T>((resolve, reject) => { return this.attachHandler({ onSuccess: (result) => { if (!onSuccess) { return resolve(result); } try { return resolve(onSuccess(result)); } catch (e) { return reject(e); } }, onFail: (reason) => { if (!onFail) { return reject(reason); } try { return resolve(onFail(reason)); } catch (e) { return reject(e); } }, }); }); } } In then, we return a promise, and in the callback we attach a handler that is then used to wait for the current promise to be settled. When that happens, either handler’s onSuccess or onFail will be executed and we will proceed accordingly. One thing to remember here is that neither of the handlers passed to then is required. It is important, however, that we don’t try to execute something that might be undefined. 
Also, in onFail when the handler is passed, we actually resolve the returned promise, because the error has been handled. catch Catch is actually just an abstraction over the then method. class PQ<T> { // ... public catch<U>(onFail: HandlerOnFail<U>) { return this.then<U>(identity, onFail); } } That’s it. Finally Finally is also just an abstraction over doing then(finallyCb, finallyCb), because it doesn’t really care about the result of the promise. Actually, it also preserves the result of the previous promise and returns it. So whatever is being returned by the finallyCb doesn’t really matter. describe('PQ.prototype.finally', () => { test('it is called regardless of the promise state', () => { let counter = 0; return PQ.resolve(15) .finally(() => { counter += 1; }) .then(() => { return PQ.reject(15); }) .then(() => { // wont be called counter = 1000; }) .finally(() => { counter += 1; }) .catch((reason) => { expect(reason).toBe(15); expect(counter).toBe(2); }); }); }); class PQ<T> { // ... public finally<U>(cb: Finally<U>) { return new PQ<U>((resolve, reject) => { let val: U | any; let isRejected: boolean; return this.then( (value) => { isRejected = false; val = value; return cb(); }, (reason) => { isRejected = true; val = reason; return cb(); }, ).then(() => { if (isRejected) { return reject(val); } return resolve(val); }); }); } } toString describe('PQ.prototype.toString', () => { test('returns [object PQ]', () => { expect(new PQ<undefined>((resolve) => resolve()).toString()).toBe( '[object PQ]', ); }); }); class PQ<T> { // ... public toString() { return `[object PQ]`; } } It will just return a string [object PQ]. Having implemented the core of our promises, we can now implement some of the previously mentioned Bluebird methods, which will make operating on promises easier for us. 
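The attach-then-flush mechanics above can be seen in isolation with a tiny standalone sketch. It is synchronous for brevity (a real promise defers with setTimeout or microtasks, as PQ does), and the names makeDeferred and executeHandlers are mine, not part of the article's PQ class:

```javascript
// Minimal illustration of the handler queue: callbacks can be attached
// before or after the value arrives; the queue is flushed once the
// value is known, and again whenever a new handler shows up.
function makeDeferred() {
  let state = 'pending';
  let value;
  let handlers = [];

  const executeHandlers = () => {
    if (state === 'pending') return;
    handlers.forEach((h) => h(value));
    handlers = []; // clear so nothing runs twice
  };

  return {
    then(onSuccess) {
      handlers.push(onSuccess);
      executeHandlers(); // runs immediately if already settled
    },
    resolve(val) {
      if (state !== 'pending') return; // a value can only be set once
      state = 'resolved';
      value = val;
      executeHandlers();
    },
  };
}

const d = makeDeferred();
const seen = [];
d.then((v) => seen.push(v)); // attached before settling
d.resolve(42);
d.then((v) => seen.push(v)); // attached after settling — still runs
console.log(seen); // → [ 42, 42 ]
```

The second resolve-like call being ignored is what makes the settled value immutable, mirroring the `this.state !== States.PENDING` guard in setResult.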
Additional methods Promise.resolve describe('PQ.resolve', () => { test('resolves a value', () => { return PQ.resolve(5).then((value) => { expect(value).toBe(5); }); }); }); class PQ<T> { // ... public static resolve<U = any>(value?: U | Thenable<U>) { return new PQ<U>((resolve) => { return resolve(value); }); } } Promise.reject describe('PQ.reject', () => { test('rejects a value', () => { return PQ.reject(5).catch((value) => { expect(value).toBe(5); }); }); }); class PQ<T> { // ... public static reject<U>(reason?: any) { return new PQ<U>((resolve, reject) => { return reject(reason); }); } } Promise.all describe('PQ.all', () => { test('resolves a collection of promises', () => { return PQ.all([PQ.resolve(1), PQ.resolve(2), 3]).then((collection) => { expect(collection).toEqual([1, 2, 3]); }); }); test('rejects if one item rejects', () => { return PQ.all([PQ.resolve(1), PQ.reject(2)]).catch((reason) => { expect(reason).toBe(2); }); }); }); class PQ<T> { // ... public static all<U = any>(collection: (U | Thenable<U>)[]) { return new PQ<U[]>((resolve, reject) => { if (!Array.isArray(collection)) { return reject(new TypeError('An array must be provided.')); } let counter = collection.length; const resolvedCollection: U[] = []; const tryResolve = (value: U, index: number) => { counter -= 1; resolvedCollection[index] = value; if (counter !== 0) { return null; } return resolve(resolvedCollection); }; return collection.forEach((item, index) => { return PQ.resolve(item) .then((value) => { return tryResolve(value, index); }) .catch(reject); }); }); } } I believe the implementation is pretty straightforward. Starting at collection.length, we count down with each tryResolve until we get to 0, which means that every item of the collection has been resolved. We then resolve the newly created collection. 
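This countdown technique does not depend on the PQ class; here is a standalone sketch using native promises (allSketch is an illustrative name). Note that an empty array needs special-casing, or the counter never reaches zero and the promise would never settle:

```javascript
// Standalone sketch of the countdown technique with native promises.
// Each item resolves into its original slot; when the counter reaches
// zero, every slot is filled and the whole collection can be resolved.
function allSketch(collection) {
  return new Promise((resolve, reject) => {
    if (!Array.isArray(collection)) {
      return reject(new TypeError('An array must be provided.'));
    }
    if (collection.length === 0) return resolve([]); // nothing to wait for

    let counter = collection.length;
    const results = [];

    collection.forEach((item, index) => {
      Promise.resolve(item).then((value) => {
        results[index] = value; // order preserved even if later items finish first
        counter -= 1;
        if (counter === 0) resolve(results);
      }, reject); // first rejection rejects the whole thing
    });
  });
}

// Usage: the slower first item still ends up first in the result.
allSketch([new Promise((r) => setTimeout(() => r(1), 20)), 2, 3])
  .then((values) => console.log(values)); // → [ 1, 2, 3 ]
```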
Promise.any describe('PQ.any', () => { test('resolves the first value', () => { return PQ.any<number>([ PQ.resolve(1), new PQ((resolve) => setTimeout(resolve, 15)), ]).then((val) => expect(val).toBe(1)); }); test('rejects if the first value rejects', () => { return PQ.any([ new PQ((resolve) => setTimeout(resolve, 15)), PQ.reject(1), ]).catch((reason) => { expect(reason).toBe(1); }); }); }); class PQ<T> { // ... public static any<U = any>(collection: (U | Thenable<U>)[]) { return new PQ<U>((resolve, reject) => { return collection.forEach((item) => { return PQ.resolve(item) .then(resolve) .catch(reject); }); }); } } We simply wait for the first value to resolve and return it in a promise. Promise.props describe('PQ.props', () => { test('resolves object correctly', () => { return PQ.props<{ test: number; test2: number }>({ test: PQ.resolve(1), test2: PQ.resolve(2), }).then((obj) => { return expect(obj).toEqual({ test: 1, test2: 2 }); }); }); test('rejects non objects', () => { return PQ.props([]).catch((reason) => { expect(reason).toBeInstanceOf(TypeError); }); }); }); class PQ<T> { // ... public static props<U = any>(obj: object) { return new PQ<U>((resolve, reject) => { if (!isObject(obj)) { return reject(new TypeError('An object must be provided.')); } const resolvedObject = {}; const keys = Object.keys(obj); const resolvedValues = PQ.all<string>(keys.map((key) => obj[key])); return resolvedValues .then((collection) => { return collection.map((value, index) => { resolvedObject[keys[index]] = value; }); }) .then(() => resolve(resolvedObject as U)) .catch(reject); }); } } We iterate over keys of the passed object, resolving every value. We then assign the values to the new object and resolve a promise with it. 
Promise.prototype.spread

describe('PQ.prototype.spread', () => {
  test('spreads arguments', () => {
    return PQ.all<number>([1, 2, 3]).spread((...args) => {
      expect(args).toEqual([1, 2, 3]);
      return 5;
    });
  });

  test('accepts normal value (non collection)', () => {
    return PQ.resolve(1).spread((one) => {
      expect(one).toBe(1);
    });
  });
});

class PQ<T> {
  // ...

  public spread<U>(handler: (...args: any[]) => U) {
    return this.then<U>((collection) => {
      if (Array.isArray(collection)) {
        return handler(...collection);
      }

      return handler(collection);
    });
  }
}

Promise.delay

describe('PQ.delay', () => {
  test('waits for the given amount of milliseconds before resolving', () => {
    return new PQ<string>((resolve) => {
      setTimeout(() => {
        resolve('timeout');
      }, 50);

      return PQ.delay(40).then(() => resolve('delay'));
    }).then((val) => {
      expect(val).toBe('delay');
    });
  });

  test('waits for the given amount of milliseconds before resolving 2', () => {
    return new PQ<string>((resolve) => {
      setTimeout(() => {
        resolve('timeout');
      }, 50);

      return PQ.delay(60).then(() => resolve('delay'));
    }).then((val) => {
      expect(val).toBe('timeout');
    });
  });
});

class PQ<T> {
  // ...

  public static delay(timeInMs: number) {
    return new PQ((resolve) => {
      return setTimeout(resolve, timeInMs);
    });
  }
}

By using setTimeout, we simply delay the execution of the resolve function by the given number of milliseconds.

Promise.prototype.timeout

describe('PQ.prototype.timeout', () => {
  test('rejects after given timeout', () => {
    return new PQ<number>((resolve) => {
      setTimeout(resolve, 50);
    })
      .timeout(40)
      .catch((reason) => {
        expect(reason).toBeInstanceOf(PQ.errors.TimeoutError);
      });
  });

  test('resolves before given timeout', () => {
    return new PQ<number>((resolve) => {
      setTimeout(() => resolve(500), 500);
    })
      .timeout(600)
      .then((value) => {
        expect(value).toBe(500);
      });
  });
});

class PQ<T> {
  // ...
  public timeout(timeInMs: number) {
    return new PQ<T>((resolve, reject) => {
      const timeoutCb = () => {
        return reject(new PQ.errors.TimeoutError());
      };

      setTimeout(timeoutCb, timeInMs);

      return this.then(resolve);
    });
  }
}

This one is a bit tricky. If the setTimeout executes faster than then in our promise, it will reject the promise with our special error.

Promise.promisify

describe('PQ.promisify', () => {
  test('works', () => {
    const getName = (firstName, lastName, callback) => {
      return callback(null, `${firstName} ${lastName}`);
    };

    const fn = PQ.promisify<string>(getName);

    const firstName = 'Maciej';
    const lastName = 'Cieslar';

    return fn(firstName, lastName).then((value) => {
      return expect(value).toBe(`${firstName} ${lastName}`);
    });
  });
});

class PQ<T> {
  // ...

  public static promisify<U = any>(
    fn: (...args: any[]) => void,
    context = null,
  ) {
    return (...args: any[]) => {
      return new PQ<U>((resolve, reject) => {
        return fn.apply(context, [
          ...args,
          (err: any, result: U) => {
            if (err) {
              return reject(err);
            }

            return resolve(result);
          },
        ]);
      });
    };
  }
}

We apply all the passed arguments to the function, plus the error-first callback as the last one.

Promise.promisifyAll

describe('PQ.promisifyAll', () => {
  test('promisifies an object', () => {
    const person = {
      name: 'Maciej Cieslar',
      getName(callback) {
        return callback(null, this.name);
      },
    };

    const promisifiedPerson = PQ.promisifyAll<{
      getNameAsync: () => PQ<string>;
    }>(person);

    return promisifiedPerson.getNameAsync().then((name) => {
      expect(name).toBe('Maciej Cieslar');
    });
  });
});

class PQ<T> {
  // ...

  public static promisifyAll<U>(obj: any): U {
    return Object.keys(obj).reduce((result, key) => {
      let prop = obj[key];

      if (isFunction(prop)) {
        prop = PQ.promisify(prop, obj);
      }

      result[`${key}Async`] = prop;

      return result;
    }, {}) as U;
  }
}

We iterate over the keys of the object, promisify its methods, and append the word Async to each method name.
Wrapping up Presented here were but a few amongst all of the Bluebird API methods, so I strongly encourage you to explore, play around with, and try implementing the rest of them. It might seem hard at first but don't get discouraged — it would be worthless if it were easy. Thank you very much for reading! I hope you found this article informative and that it helped you get a grasp of the concept of promises, and that from now on you will feel more comfortable using them or simply writing asynchronous code. If you have any questions or comments, feel free to put them in the comment section below or send me a message. Originally published on August 4, 2018.
https://www.freecodecamp.org/news/how-to-implement-promises-in-javascript-1ce2680a7f51/
What are static factory methods

Amit Ghorpade posted Apr 01, 2008:

Hi all, I know this question is too novice, but as this forum says, no question is too small to ask. Anyways, although I cleared SCJP 5, the only thing I came to know is that when we need a Date instance, we use the static factory method Calendar.getDateInstance(). As static factory methods were not on the exam, I just read on. But then I could not find more material on the topic. Could anyone please explain, and also why such methods exist in place of a constructor? Thanks in advance.

Anubhav Anand posted Apr 01, 2008:

This has a good explanation.

Manuel Leiria posted Apr 01, 2008:

Quoting from "Effective Java Programming Language" by Joshua Bloch:

"Item 1: Consider providing static factory methods instead of constructors

The normal way for a class to allow a client to obtain an instance is to provide a public constructor. There is another, less widely known technique that should also be a part of every programmer's toolkit. A class can provide a public static factory method, which is simply a static method that returns an instance of the class. Here's a simple example from the class Boolean (the wrapper class for the primitive type boolean). This static factory method, which was added in the 1.4 release, translates a boolean primitive value into a Boolean object reference:

public static Boolean valueOf(boolean b) {
    return (b ? Boolean.TRUE : Boolean.FALSE);
}

One advantage of static factory methods is that, unlike constructors, they have names. A static factory method with a well-chosen name can make a class easier to use and the resulting client code easier to read.
For example, the constructor BigInteger(int, int, Random), which returns a BigInteger that is probably prime, would have been better expressed as a static factory method named BigInteger.probablePrime. (This static factory method has since been added.) Because static factory methods have names, they do not share with constructors the restriction that a class can have only one with a given signature. In cases where a class seems to require multiple constructors with the same signature, you should consider replacing one or more constructors with static factory methods whose carefully chosen names highlight their differences.

A second advantage of static factory methods is that, unlike constructors, they are not required to create a new object each time they're invoked. This allows immutable classes to use preconstructed instances or to cache instances as they're constructed and to dispense these instances repeatedly so as to avoid creating unnecessary duplicate objects. The Boolean.valueOf(boolean) method illustrates this technique: it never creates an object. This technique can greatly improve performance if equivalent objects are requested frequently, especially if these objects are expensive to create.

The ability of static factory methods to return the same object from repeated invocations can also be used to maintain strict control over what instances exist at any given time. There are two reasons to do this. First, it allows a class to guarantee that it is a singleton (Item 2). Second, it allows an immutable class to ensure that no two equal instances exist: a.equals(b) if and only if a==b. If a class makes this guarantee, then its clients can use the == operator instead of the equals(Object) method, which may result in a substantial performance improvement. The typesafe enum pattern, described in Item 21, implements this optimization, and the String.intern method implements it in a limited form.
A third advantage of static factory methods is that, unlike constructors, they can return an object of any subtype of their return type. This gives you great flexibility in choosing the class of the returned object."

[April 01, 2008: Message edited by: Manuel Leiria]

Amit Ghorpade posted Apr 01, 2008:

Thank you for the replies. Reading the reply above and the blog, one more question comes to my mind: what is a singleton? The only thing I know about a singleton is that it has only a single instance. Apart from this, what is its significance, and how do you write a singleton? Thank you in advance.

Anubhav Anand posted Apr 01, 2008:

Singleton actually comes into the picture as a design pattern. In this pattern, only one single instance is available. As an example, you can make your database connection factory class a singleton, which helps ensure there is only one single instance of the database connection. You can read this article to learn more about how to implement the singleton pattern. Hope that helps.

Amit Ghorpade posted Apr 01, 2008:

Thanks A. Anand for the link. Looks like now I need to get my hands on design patterns. Thank you.
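Since the links in this thread have not survived, here is a minimal sketch of the pattern being discussed: a singleton handed out through a static factory method. The class name Registry is made up for illustration, and a production version would also have to consider serialization and reflection:

```java
// A minimal singleton: the constructor is private, so the only way to
// obtain an instance is the static factory method getInstance(), which
// always hands back the same preconstructed object.
class Registry {

    // Eagerly created single instance (the simplest thread-safe variant,
    // since class initialization is guaranteed to run only once).
    private static final Registry INSTANCE = new Registry();

    private Registry() {
        // private: no outside code can call "new Registry()"
    }

    public static Registry getInstance() {
        return INSTANCE;
    }
}
```

Because getInstance() always returns the same object, a == b holds for any two references obtained from it, which is exactly the identity guarantee Bloch mentions above.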
http://www.coderanch.com/t/409915/java/java/static-factory-methods
The following is a little program I'm working on to increase my knowledge of functions. It gives me 2 errors that I can't figure out: on lines 47 and 61, the compiler indicates there are too few arguments. What does this mean? Here is the code. T.I.A.

Code:

/*
Name: jCalc
Author: Jarrod Swart
Date: 22/09/03 19:05
Description: A simple 4 function calculator to improve my knowledge and understanding of functions.
*/
#include <iostream>
using namespace std;

// This begins the "graphical menu"
void menuText()
{
  cout << "[][][][][][][][][][][][][][][][][][][]\n";
  cout << "[]                                  []\n";
  cout << "[]  jCalc - 4 function calculator   []\n";
  cout << "[]  ----------------------------    []\n";
  cout << "[]  1. Add                          []\n";
  cout << "[]  2. Subtract                     []\n";
  cout << "[]  3. Multiply                     []\n";
  cout << "[]  4. Divide                       []\n";
  cout << "[]  5. About                        []\n";
  cout << "[]  ----------------------------    []\n";
  cout << "[]  Input your choice, press enter  []\n";
  cout << "[]                                  []\n";
  cout << "[][][][][][][][][][][][][][][][][][][]\n";
} // end menu

// begin menu option 5 - about
void aboutText()
{
  cout << "jCalc was created by Jarrod Swart as a means to improve understanding of functions and general C++ knowledge.\n";
} // end menu option 5

// a function to accept a value and go to the specified part of the program
// ie: if 5 is entered, it displays aboutText()
// currently only five is an option
int menuSelection(int menuOption)
{
  cin >> menuOption;
  if (menuOption == 5)
  {
    aboutText();
  }
}

int main()
{
  menuText();      //display graphic text
  menuSelection(); //allow user to input option
  system("pause");
  return 0;
}
http://forums.devshed.com/programming-42/functions-question-85851.html
The Copy Type refactoring allows users to copy a class, interface, struct, or enum from one namespace to another, or clone it within the same namespace. To copy a type - Select the class, interface, struct, or enum in Class View or Object Browser, or position the caret at its name in the editor. - Do one of the following: - On the main menu, click ReSharper | Refactor | Copy Type. - Right-click the type, and on the shortcut menu, click Refactor | Copy Type. - Press Ctrl+Shift+R, and then select Copy Type. - In the Name text box, specify the new name of the type copy. - In the Namespace text box, specify the namespace where the copy of the type is to be created. - Click Continue. If no conflicts are discovered, the copy of the type is created immediately.
http://www.jetbrains.com/resharper/webhelp50/Refactorings__Copy_Type.html
Using C++ in Mozilla code¶

C++ language features¶

Mozilla code only uses a subset of C++: run-time type information and exceptions are disabled, so do not use try/catch, nor throw any exceptions. Libraries that throw exceptions may be used if you are willing to have the throw instead be treated as an abort.

On the side of extending C++, we compile with -fno-strict-aliasing. This means that when reinterpreting a pointer as a differently-typed pointer, you don't need to adhere to the "effective type" (of the pointee) rule from the standard (aka. "the strict aliasing rule") when dereferencing the reinterpreted pointer. You still need to make sure that you don't violate alignment requirements and need to make sure that the data at the memory location pointed to forms a valid value when interpreted according to the type of the pointer when dereferencing the pointer for reading. Likewise, if you write by dereferencing the reinterpreted pointer and the originally-typed pointer might still be dereferenced for reading, you need to make sure that the values you write are valid according to the original type. This value validity issue is moot for e.g. primitive integers for which all bit patterns of their size are valid values.

As of Mozilla 59, C++14 mode is required to build Mozilla.

As of Mozilla 67, MSVC can no longer be used to build Mozilla.

As of Mozilla 73, C++17 mode is required to build Mozilla. This means that C++17 can be used where supported on all platforms.

The list of acceptable features is given below:

rvalue references: Implicit move method generation cannot be used.

Attributes: Several common attributes are defined in mozilla/Attributes.h or nscore.h.

Alignment: Some alignment utilities are defined in mozilla/Alignment.h. /!\ MOZ_ALIGNOF and alignof don't have the same semantics. Be careful of what you expect from them.

[[deprecated]]: If we have deprecated code, we should be removing it rather than marking it as such.
Marking things as [[deprecated]] also means the compiler will warn if you use the deprecated API, which turns into a fatal error in our automation builds, which is not helpful. Sized deallocation: Our compilers all support this (custom flags are required for GCC and Clang), but turning it on breaks some classes’ operator new methods, and some work would need to be done to make it an efficiency win with our custom memory allocator. Aligned allocation/deallocation: Our custom memory allocator doesn’t have support for these functions. Thread locals: thread_local is not supported on Android. C++ and Mozilla standard libraries¶ The Mozilla codebase contains within it several subprojects which follow different rules for which libraries can and can’t be used it. The rules listed here apply to normal platform code, and assume unrestricted usability of MFBT or XPCOM APIs. Warning The rest of this section is a draft for expository and exploratory purposes. Do not trust the information listed here. What follows is a list of standard library components provided by Mozilla or the C++ standard. If an API is not listed here, then it is not permissible to use it in Mozilla code. Deprecated APIs are not listed here. In general, prefer Mozilla variants of data structures to standard C++ ones, even when permitted to use the latter, since Mozilla variants tend to have features not found in the standard library (e.g., memory size tracking) or have more controllable performance characteristics. A list of approved standard library headers is maintained in config/stl-headers.mozbuild. Strings¶ See the Mozilla internal string guide for usage of nsAString (our copy-on-write replacement for std::u16string) and nsACString (our copy-on-write replacement for std::string). Be sure not to introduce further uses of std::wstring, which is not portable! (Some uses exist in the IPC code.) 
Mozilla data structures and standard C++ ranges and iterators¶

Some Mozilla-defined data structures provide STL-style iterators and are usable in range-based for loops as well as STL algorithms. Currently, these include:

Note that if the iterator category is stated as "missing", the type is probably only usable in range-based for. This is most likely just an omission, which could be easily fixed.

Useful in this context are also the class template IteratorRange (which can be used to construct a range from any pair of iterators) and the function template Reversed (which can be used to reverse any range), both defined in mfbt/ReverseIterator.h

Further C++ rules¶

Don't use static constructors¶

See the introduction to the "C++ language features" section at the start of this document.

Don't use Run-time Type Information¶

See the introduction to the "C++ language features" section at the start of this document. If you need runtime typing, you can achieve a similar result by adding a classOf() virtual member function to the base class of your hierarchy and overriding that member function in each subclass. If classOf() returns a unique value for each class in the hierarchy, you'll be able to do type comparisons at runtime.

Don't use the C++ standard library (including iostream and locale)¶

See the section "C++ and Mozilla standard libraries".

Use C++ lambdas, but with care¶

C++ lambdas are supported across all our compilers now. Rejoice! We recommend explicitly listing out the variables that you capture in the lambda, both for documentation purposes, and to double-check that you're only capturing what you expect to capture.

Use namespaces¶

Namespaces may be used according to the style guidelines in C++ Coding style.

Make header files compatible with C and C++¶

#include "oldCheader.h" ...

There are a number of reasons for doing this, other than just good style.
For one thing, you are making life easier for everyone else, doing the work in one common place (the header file) instead of all the C++ files that include it. Also, by making the C header safe for C++, you document that "hey, this file is now being included in C++". That's a good thing. You also avoid a big portability nightmare that is nasty to fix…

Use override on subclass virtual member functions¶

The override keyword is supported in C++11 and in all our supported compilers, and it catches bugs.

Always declare a copy constructor and assignment operator¶

Type scalar constants to avoid unexpected ambiguities¶

Non-portable code:

class FooClass {
  // having such similar signatures
  // is a bad idea in the first place.
  void doit(long);
  void doit(short);
};

void B::foo(FooClass* xyz) {
  xyz->doit(45);
}

Be sure to type your scalar constants, e.g., uint32_t(10) or 10L. Otherwise, you can produce ambiguous function calls which potentially could resolve to multiple methods, particularly if you haven't followed (2) above. Not all of the compilers will flag ambiguous method calls.

Portable code:

class FooClass {
  // having such similar signatures
  // is a bad idea in the first place.
  void doit(long);
  void doit(short);
};

void B::foo(FooClass* xyz) {
  xyz->doit(45L);
}

Use nsCOMPtr in XPCOM code¶

See the nsCOMPtr User Manual for usage details.

Don't use identifiers that start with an underscore¶

This rule occasionally surprises people who've been hacking C++ for decades. But it comes directly from the C++ standard! According to the C++ Standard, 17.4.3.1.2 Global Names [lib.global.names], paragraph 1: certain sets of names and function signatures are always reserved to the implementation; in particular, each name that contains a double underscore or begins with an underscore followed by an uppercase letter is reserved to the implementation for any use, and each name that begins with an underscore is reserved to the implementation for use as a name in the global namespace.

Stuff that is good to do for C or C++¶

Avoid conditional #includes when possible¶

Every object file linked into libxul needs to have a unique name. Avoid generic names like nsModule.cpp and instead use nsPlacesModule.cpp.

Turn on warnings for your compiler, and then write warning free code¶

Some compilers do not pack the bits when different bitfields are given different types.
For example, the following struct might have a size of 8 bytes, even though it would fit in 1:

struct {
  char ch : 1;
  int i : 1;
};

Don’t use an enum type for a bitfield
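A quick way to see the effect on your own compiler (struct names are invented; the exact sizes are implementation-defined, which is the point of the rule):

```cpp
#include <cassert>

// The same two 1-bit fields declared two ways. Whether they share a
// storage unit when their declared types differ is implementation-defined;
// giving every bitfield in the struct the same type makes packing reliable.
struct MixedTypes {
  char ch : 1;
  int  i  : 1;
};

struct SameType {
  unsigned char ch : 1;
  unsigned char i  : 1;
};
```

On common ABIs, SameType packs into a single byte while MixedTypes is padded up to the alignment of int.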
https://firefox-source-docs.mozilla.org/code-quality/coding-style/using_cxx_in_firefox_code.html
CC-MAIN-2020-50
refinedweb
1,300
62.68
NAME

ilogb - return an unbiased exponent

SYNOPSIS

#include <math.h>

int ilogb(double x);

DESCRIPTION

The ilogb() function returns the exponent part of x. Formally, the return value is the integral part of log_r |x| as a signed integral value, for non-zero x, where r is the radix of the machine's floating-point arithmetic. The call ilogb(x) is equivalent to (int)logb(x).

RETURN VALUE

Upon successful completion, ilogb() returns the exponent part of x. If x is 0, then ilogb() returns -INT_MAX. If x is NaN or ±Inf, then ilogb() returns INT_MAX.

ERRORS

No errors are defined.

EXAMPLES

None.

APPLICATION USAGE

None.

FUTURE DIRECTIONS

None.

SEE ALSO

logb(), <math.h>.
http://pubs.opengroup.org/onlinepubs/007908775/xsh/ilogb.html
CC-MAIN-2015-27
refinedweb
101
68.16
Passing commands to standard in on Windows (MPlayer related) [SOLVED]

Currently I have MPlayer running in the background and would like to send it commands during runtime. I am aware that I will need to use slave mode (-slave). However, I am not sure how I would pass these commands to standard in so that they can be read.

What I want to do is start MPlayer and then at some point in time (when a button is pressed), pass it the file to play. However, I have no idea of the best way to send commands to standard in from another program. Currently what I think I'll have to do is somehow write to "cin" from inside the program, so I was thinking of using ifstream to read from a file which contains commands written by ofstream. However, this seems very messy, so I was wondering if anyone has any other ideas? Does anyone have any useful information on where I go from here? I am presuming I have to use some sort of pipe, but as this is my first jump into Windows programming I am a bit lost. So any links or advice would be really helpful. Thanks in advance.

- SGaist Lifetime Qt Champion

Hi, That was for Linux but it's probably also applicable to Windows: use QProcess to start mplayer and communicate with it. Hope it helps

Yea that is what I currently have, my program is launched with QProcess, however because this application is on Windows I cannot just write to the file descriptor. Otherwise I would do something like this:

@
#include <unistd.h>
#include <string>
#include <sstream>
#include <iostream>
#include <thread>

int main() {
    std::thread mess_with_stdin([] () {
        for (int i = 0; i < 10; ++i) {
            std::stringstream msg;
            msg << "Self-message #" << i << ": Hello! How do you like that!?\n";
            auto s = msg.str();
            write(STDIN_FILENO, s.c_str(), s.size());
            usleep(1000);
        }
    });

    std::string str;
    while (getline(std::cin, str))
        std::cout << "String: " << str << std::endl;

    mess_with_stdin.join();
}
@

What I am looking for is the process needed to do the same on Windows, as I am having trouble finding a solution to this problem. The problem is not with the QProcess side but the actual "communication with it". Hopefully someone who has used named pipes on Windows might be able to point me in the right direction of a good tutorial.

- SGaist Lifetime Qt Champion

QProcess is a QIODevice, so you can write there directly

Ledge

Cheers, didn't think to check if QProcess was capable of doing this also.
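For what it's worth, the mechanism QProcess wraps can be sketched portably with popen() (POSIX; on Windows, QProcess::write() or the CRT's _popen do the equivalent). The helper names and the temp-file command below are invented purely so the round trip can be checked:

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

// Launch a command and write slave-mode style commands to its stdin.
// Opening the pipe with "w" means we write to the child's standard input.
bool send_commands(const std::string& cmd, const std::string& input) {
    FILE* child = popen(cmd.c_str(), "w");
    if (!child) return false;
    std::fwrite(input.data(), 1, input.size(), child);
    return pclose(child) == 0;   // closing the pipe sends EOF to the child
}

// Small helper so the example is verifiable.
std::string read_file(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream out;
    out << in.rdbuf();
    return out.str();
}
```

With QProcess the same idea is process.write("loadfile movie.mp4\n") after process.start(), since QProcess is a QIODevice.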
https://forum.qt.io/topic/45815/passing-commands-to-standard-in-on-windows-mplayer-related-solved
CC-MAIN-2018-34
refinedweb
416
70.13
I’ve (somewhat unexpectedly) become a serious fanboy of both Netlify and GatsbyJS. For the first time since I was discovering Rails I have felt like a tool (library, framework, etc.) had an answer to every question I was seeking. Nearly every time I reached a challenging hurdle without a clear solution, I would do a bit of googling and, sure enough, Netlify or Gatsby (depending on the challenge) would have the answer ready and waiting. That’s a seriously uplifting way to work and that’s going to keep me coming back to these two products. That being said, one area in which I’ve struggled to find easy answers is when building form handlers using Netlify. I did a lot of reading, along with a fair amount of trial and error, before I really found a solution that worked well. And while I came up with seven points I think you should know before working with Netlify forms, I wanted to take some time to share how these points translate to a Gatsby project. If you have not read the article, I’d suggest at least skimming it — it’ll help add some context as we pass through the following scenarios. For the rest of this article, we’re going to dive into some specific examples. Since there are many different means of approaching how you work with forms in Gatsby, I’ve provided multiple ways in which you can approach Netlify’s form handling, such that you can choose the best path forward for your project. Let’s dig in! Setup I’ve skipped boilerplate code — I’m assuming if you’re here you already know how to work with Netlify and Gatsby, and that you have a Git service like GitHub acting as the glue between the two. If not, here are docs on how to get rolling with Gatsby, and here they are for Netlify. These examples following haven’t done anything out of the ordinary beyond the basic Getting Started setup. (At the time of writing this, I’m working with Gatsby v2.3.25.) Once your Gatsby project is up and running locally, let’s start by building a basic form. 
For this example, along with those following, let's say we're going to create a contact form at /contact with two fields — an email address and a message. The first step is to create the page and add some boilerplate code:

src/pages/contact.js

import React from "react"

import Layout from "../components/layout"

const ContactFormPage = () => (
  <Layout>
    <h1>Contact</h1>
  </Layout>
)

export default ContactFormPage

Note that the location and behavior of the Layout is based on what you get from Gatsby out of the box. If you've changed this behavior, you'll want to adjust that component accordingly.

Basic Form

From here we can work on building the form. We can essentially just drop our form markup into the page and be done with it. Knowing the rest of your boilerplate code, here's what the ContactFormPage function looks like with a basic working form:

const ContactFormPage = () => (
  <Layout>
    <h1>Contact</h1>
    <form name="Contact Form" method="POST" data-netlify="true">
      <input type="hidden" name="form-name" value="Contact Form" />
      <div>
        <label>Your Email:</label>
        <input type="email" name="email" />
      </div>
      <div>
        <label>Message:</label>
        <textarea name="message" />
      </div>
      <button type="submit">Send</button>
    </form>
  </Layout>
)

I want to call your attention to four key points within this markup (these shouldn't be a surprise if you've read the must know article referenced above):

1. The form has a data-netlify="true" attribute, which tells Netlify to register the form while building your site.
2. The form has a name attribute describing it. This is the name Netlify will give the form when you deploy this code.
3. The form's name attribute is repeated in a hidden form-name field. This is absolutely necessary. If you omit this field or mistype the name, your entries will either throw an error or get lost somewhere in the internet abyss.
4. Every field has a name attribute. A field must have a name for that data to be persisted within Netlify.
Another point — which may seem obvious if you've worked with Netlify forms in the past — is that Netlify forms do not work in local development. When you first add this code, Netlify doesn't know about the form and it's not being submitted to Netlify. Instead, you'll have to deploy your code (via Netlify) to test the form. I highly recommend doing this via a Deploy Preview, which will enable you to test before the form goes into production. (As a bonus, as far as I've been able to tell, form submissions that originate from preview deploys are not counted toward your total monthly allotment.)

But, that's it! Commit and push your code to GitHub, kicking off a Netlify build (if everything is configured correctly), and see the form live after the site (or preview) is deployed. Then the form should submit properly and you should see your submissions through the Netlify UI.

Adding a Success Page

You may notice a bit of an undesirable effect with this out-of-the-box behavior: the success page is served from Netlify. That means:

- It likely doesn't match your site's design.
- You have to click a link to go back to the page the form was on.

Fortunately, you can specify the path to which you want users redirected after a successful form submission so you have more control over the message, behavior, and aesthetic. To do this, all you have to do is add an action attribute to your form, like so:

<form name="Contact Form" method="POST" data-netlify="true" action="/thank-you">

Notice here that I've added a value of /thank-you as the action on the form. This means that after a successful form submission, users will be redirected to /thank-you.
In other words, we'll need a page to receive those users:

src/pages/thank-you.js

import React from "react"

import Layout from "../components/layout"

const ThankYouPage = () => (
  <Layout>
    <h1>Contact</h1>
    <p>Thank you for your submission!</p>
  </Layout>
)

export default ThankYouPage

Notice this is just a basic page, but it gives some feedback to the user within the context of your site (and its design) so they feel more continuity after completing the form.
The updated form markup might look something like this: <form name='JSX Form' method='POST' data- <input type='hidden' name='form-name' value='JSX Form' /> <label>Your Email:</label> <input type='email' name='email' /> <br /> <label>Message:</label> <textarea name='message' /> <br /> <ReCAPTCHA sitekey="YOUR_SITE_KEY" /> <button type='submit'>Send</button> </form> Notice that I added YOUR_SITE_KEY as a placeholder for the reCAPTCHA key. I usually like to use an environment variable in this space. One really nice feature of Gatsby’s is that any environment variable beginning with GATSBY_ automatically gets picked up and is available on the process.env object. So if I set the environment variable GATSBY_RECAPTCHA_KEY to my key, then the ReCAPTCHA line looks more like this: <ReCAPTCHA sitekey={process.env.GATSBY_RECAPTCHA_KEY} /> Also, don’t forget that Netlify needs to verify this server-side for it to be valid. That means, for this to work, you are going to need to set SITE_RECAPTCHA_KEY and SITE_RECAPTCHA_SECRET variables within Netlify so that it will validate the reCAPTCHA. Once you have this all configured correctly it will (or should) just work! Async Submission Now, while we’ve had to work through a few GOTCHA! scenarios so far, it’s all relatively straightforward. As long as you follow the docs, everything works (or at least should work) swimmingly. But there’s one downside to the approaches we’ve taken — when submitting forms via HTTP, your browser actually sends a POST request to the action specified in the form, and it gets redirected upon success. The problem, in that case, is that Gatsby has to reload, so you’re going to take a performance hit on the success page. One way to keep the process acting smooth is to use AJAX or an XHR request to asynchronously submit the form data to the Netlify server, and then show your own custom feedback upon success. 
Major Changes, Summarized

To get started, we're going to see a few important changes:

- Bring in two more packages — axios for handling the request and query-string for encoding our data properly.
- Convert our page component to a class so that we can work with React state and give feedback to the user.
- Add an event listener to the form submission event so we can catch it and override it.
- Every field in the form gets a ref attribute matching the name of the field.

The skeleton of that component then looks like this:

src/pages/contact.js

import React from "react"
import axios from "axios"
import * as qs from "query-string"

import Layout from "../components/layout"

class ContactFormPage extends React.Component {
  constructor(props) {
    // Do intro stuff ...
  }

  handleSubmit(event) {
    // Do form submission stuff ...
  }

  render() {
    return (
      <Layout>
        <h1>Contact</h1>
        {this.state.feedbackMsg && <p>{this.state.feedbackMsg}</p>}
        <form ref={this.domRef} onSubmit={event => this.handleSubmit(event)}>
          {/* ... */}
          <input ref="email" type="email" name="email" />
          {/* ... */}
          <textarea ref="message" name="message" />
          {/* ... */}
        </form>
      </Layout>
    )
  }
}

export default ContactFormPage

Note the addition of the feedbackMsg state on the form. That will show a paragraph of text matching what we set the feedbackMsg state, and it won't show if there is nothing set.

The Constructor

Now let's step through our two methods we haven't filled in yet. First is the constructor, which is the method that gets run automatically when the class is instantiated. If overriding the constructor in an extended React component, you must call super(props) before doing anything else. We're going to perform two tasks in our constructor:

- Create a blank React DOM reference. This gets attached to our form on render (that's why you saw the ref={this.domRef} attribute on the form). This enables us to access the form through the DOM, which we'll use to clear the fields upon success.
- Set a default state, containing feedbackMsg as null.
This is what that method should look like now:

constructor(props) {
  super(props)
  this.domRef = React.createRef()
  this.state = { feedbackMsg: null }
}

Handling the Submission

The handleSubmit method is a bit more complicated. Instead of explaining it ahead of time, I've put the comments inline:

handleSubmit(event) {
  // Do not submit form via HTTP, since we're doing that via XHR request.
  event.preventDefault()

  // Loop through this component's refs (the fields) and add them to the
  // formData object. What we're left with is an object of key-value pairs
  // that represent the form data we want to send to Netlify.
  const formData = {}
  Object.keys(this.refs).map(key => (formData[key] = this.refs[key].value))

  // Set options for axios. The URL we're submitting to
  // (this.props.location.pathname) is the current page.
  const axiosOptions = {
    url: this.props.location.pathname,
    method: "post",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    data: qs.stringify(formData),
  }

  // Submit to Netlify. Upon success, set the feedback message and clear all
  // the fields within the form. Upon failure, keep the fields as they are,
  // but set the feedback message to show the error state.
  axios(axiosOptions)
    .then(response => {
      this.setState({
        feedbackMsg: "Form submitted successfully!",
      })
      this.domRef.current.reset()
    })
    .catch(err =>
      this.setState({
        feedbackMsg: "Form could not be submitted.",
      })
    )
}

You can see, there's a lot more going on when you want to submit your form asynchronously. But if you do it well enough, your users will have a better experience and your site will be snappier after a user submits a form.

A Note on reCAPTCHA

Note that I've removed reCAPTCHA from this example. You can certainly keep it in, but this solution I've outlined in the section above will not work properly as-is. If you add reCAPTCHA back, you won't have a reference to it as you are looping through your component's ref objects.
Instead, you’ll have to listen for changes to that reCAPTCHA and, upon success, store the value in state, and then include that value to the g-recaptcha-response field when submitting the form. As simple and powerful as Netlify forms are, and as awesome as the Gatsby framework is, implementing Netlify form handling within a Gatsby project can be tricky — I spent the better part of a week trying and failing to make all of this work. And if you make user experience a priority, it gets even trickier when considering feedback and performance. But, when you put all this together, your users are going to love it, and you will, too — you’ll be able to accept dynamic data from your users directly through Netlify! That’s pretty cool. (If you’ve found issues with the code or have questions, please do not hesitate to reach out to me.)
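The two framework-agnostic pieces of that flow, URL-encoding the form data and attaching a reCAPTCHA token, can be written as plain functions. These are illustrative sketches with invented names (in the component itself, qs.stringify does the encoding for you):

```javascript
// Encode key-value pairs the way Netlify expects
// (application/x-www-form-urlencoded), like qs.stringify does.
function encodeForm(data) {
  return Object.keys(data)
    .map((key) => encodeURIComponent(key) + "=" + encodeURIComponent(data[key]))
    .join("&")
}

// Merge a reCAPTCHA token (captured from the widget's onChange callback)
// into the payload under the field name Netlify checks server-side.
function withRecaptcha(formData, token) {
  return { ...formData, "g-recaptcha-response": token }
}
```

So in the async version, you would post encodeForm(withRecaptcha(formData, token)) instead of encodeForm(formData).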
https://cobwwweb.com/how-to-use-netlify-forms-with-gatsby
CC-MAIN-2019-22
refinedweb
2,375
62.17
0 FUTURECompEng 5 Years Ago

What is the running time for the following recursive function:

public static int Sum(int[] a, int start, int end) {
    if ((end - start) == 0)
        return a[end];
    else {
        int mid = (end + start) / 2;
        return (Sum(a, start, mid) + Sum(a, mid + 1, end));
    }
}

Can someone please explain step by step how I would figure this out. My attempt is that I only care about the if statement and int mid here (I think). "If" would be a simple n and would "int mid" be log_2 n, giving a running time of n log_2 n?

algorithms big java running time
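Not an official answer, just one way to check a guess empirically: instrument the function and count calls. The recurrence here is T(n) = 2T(n/2) + O(1), which solves to O(n): the call tree has n leaves (one per element) and n - 1 internal nodes, so exactly 2n - 1 calls, each doing constant work. Class and method names below are my own:

```java
// Instrumented copy of the function from the question, counting calls.
class SumCallCounter {
    static int calls;

    static int sum(int[] a, int start, int end) {
        calls++;
        if (end - start == 0) {
            return a[end];
        } else {
            int mid = (end + start) / 2;
            return sum(a, start, mid) + sum(a, mid + 1, end);
        }
    }

    // How many calls does summing n elements take? Always 2n - 1.
    static int countCalls(int n) {
        calls = 0;
        sum(new int[n], 0, n - 1);
        return calls;
    }
}
```

Doubling n doubles the call count (minus one), which is the linear growth the recurrence predicts.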
https://www.daniweb.com/programming/computer-science/threads/435522/big-o-running-time-example
CC-MAIN-2018-05
refinedweb
102
73
Log files are crucial when it comes to troubleshooting an application or finding the cause of a problem, yet it is easy to forget about logging at all. Not counting the lines containing only braces, only 1 line has any business value (the one in bold). The remaining 9 lines are there only to ensure that the progress of the method execution gets logged. And, this is only a single method. Real-life applications consist of thousands of them. That means a lot of lines which introduce no business value and make the code harder to read. This is where Log4PostSharp comes to the rescue. Check the second snippet; it shows a method that performs exactly the same.

If you want to quickly see some sample application that uses Log4PostSharp, you can download the demo project from here. Remember to download and install the latest version of PostSharp first, from here.

To achieve its goal, Log4PostSharp uses PostSharp, an excellent tool that allows injecting code into .NET assemblies. Normally, the injection occurs automatically after the project is compiled (the PostSharp installer configures a .NET build process to make this possible; for more details, please visit this website). To see how the injection works, inspect a woven class: it contains six new static fields and a static constructor that initializes them. The injected logging code reads the fields instead of invoking some log4net methods, to improve logging performance. The code that gets injected follows the log4net recommendations, and is optimized to achieve the best performance. In the static constructor, the logger is created for every class, as:

~log4PostSharp~log = LogManager.GetLogger(typeof(Program));

This means that the logger name is the same as the class name (including namespace). Log4PostSharp does not interfere with any manually added logging code, and requires that the developer configures log4net the usual way.

Note: Remember that Log4PostSharp caches some information (like indications whether loggers are enabled).
If you configure log4net by placing the "[assembly: XmlConfigurator(Watch = true)]" line somewhere in the AssemblyInfo.cs file and then enable/disable loggers while the application is already running, these changes will have no effect.

The Log attribute exposes a few properties that allow you to customize the code that gets injected.

Determines the text that gets logged. It can contain placeholders which then get replaced with actual values:

- {signature} – replaced with the signature of the method [weave-time],
- {@parameter_name} – replaced with the value of the specified parameter [runtime],
- {paramvalues} – replaced with the comma-separated list of values of all method parameters [runtime],
- {returnvalue} – replaced with the value that the method is about to return [runtime].

Note that placeholders are marked as being either weave-time or runtime. Values of the weave-time placeholders are determined when the code is being injected. Hence, the performance of the generated code is exactly the same as if the placeholders were not used (build may take a little longer though). On the other hand, values of the runtime ones cannot be determined until the method gets called (and may vary between two different method calls). Therefore, if runtime placeholders are specified, the injected code has to do extra work at run time; the ExceptionText property can contain only weave-time placeholders. In the ExitText, even when the method returns no value (void), the {returnvalue} placeholder still can be used, and its value is assumed to be null.

Adding the attribute to every method separately would be tedious, so the attributes can also be multicast over many methods at once (the order in which they appear in the file is irrelevant). Exclusion must always occur after inclusion, to make it work. For information on more multicast features, see the PostSharp documentation. Multicast attributes can be customized just like the ordinary ones.

If you have a project where you want to use automated logging, you have to follow a few simple steps.

- Download PostSharp from here and install it.
- Download Log4PostSharp from here and copy the Log4PostSharp.Weaver.dll and Log4PostSharp.psplugin files into the Plugins directory under your PostSharp installation folder.
- Copy the Log4PostSharp.dll into the directory where you store other libraries for your project.
- In the project, add references to the Log4PostSharp.dll and PostSharp.Public.dll (you can find this library on the ".NET" tab of the "Add reference..." dialog). Remember that these DLL files are required only by the compiler and the weaver. You do not need to deploy them with your application or library.
- Decorate the desired methods with the Log attribute, or just add the attribute to all the methods using the multicast feature.
- Compile and run the application to see that it works.

From this moment, you can start removing old logging code (or just stop adding new logging code). If you do not want to install Log4PostSharp into the PostSharp Plugins folder, you will need to provide a .psproj file for your project. For more details, please refer to the PostSharp documentation.

Log4PostSharp is published under the BSD license, which allows using it freely even in commercial products.
http://www.codeproject.com/KB/dotnet/log4postsharp-intro.aspx
crawl-002
refinedweb
802
57.37
Create, read, modify, write and execute WebAssembly (WASM) files from .NET-based applications.

A library able to create, read, modify, write and execute WebAssembly (WASM) files from .NET-based applications. Execution does not use an interpreter or a 3rd-party library: WASM instructions are mapped to their .NET equivalents and converted to native machine language by the .NET JIT compiler.

Available on NuGet at .

- WebAssembly.Module class to create, read, modify, and write WebAssembly (WASM) binary files. Module.ReadFromBinary reads a stream into an instance, which can then be inspected and modified through its properties. WriteToBinary on a module instance writes binary WASM to the provided stream.
- WebAssembly.Runtime.Compile class to execute WebAssembly (WASM) binary files using the .NET JIT compiler. Please report an issue if you encounter an assembly that works in browsers but not with this library.

using System;
using WebAssembly; // Acquire from
using WebAssembly.Instructions;
using WebAssembly.Runtime;

// We need this later to call the code we're generating.
public abstract class Sample
{
    // Sometimes you can use C# dynamic instead of building an abstract class like this.
    public abstract int Demo(int value);
}

static class Program
{
    static void Main()
    {
        // Module can be used to create, read, modify, and write WebAssembly files.
        var module = new Module(); // In this case, we're creating a new one.

        // Types are function signatures: the list of parameters and returns.
        module.Types.Add(new WebAssemblyType // The first added type gets index 0.
        {
            Parameters = new[]
            {
                WebAssemblyValueType.Int32, // This sample takes a single Int32 as input.
                // Complex types can be passed by sending them in pieces.
            },
            Returns = new[]
            {
                // Multiple returns are supported by the binary format.
                // Standard currently allows a count of 0 or 1, though.
                WebAssemblyValueType.Int32,
            },
        });
        // Types can be re-used for multiple functions to reduce WASM size.
        // The function list associates a function index to a type index.
        module.Functions.Add(new Function // The first added function gets index 0.
        {
            Type = 0, // The index for the "type" value added above.
        });

        // Code must be passed in the exact same order as the Functions above.
        module.Codes.Add(new FunctionBody
        {
            Code = new Instruction[]
            {
                new LocalGet(0), // The parameters are the first locals, in order.
                // We defined the first parameter as Int32, so now an Int32 is at the top of the stack.
                new Int32CountOneBits(), // Returns the count of binary bits set to 1.
                // It takes the Int32 from the top of the stack, and pushes the return value.
                // So, in the end, there is still a single Int32 on the stack.
                new End(), // All functions must end with "End".
                // The final "End" also delivers the returned value.
            },
        });

        // Exports enable features to be accessed by external code.
        // Typically this means JavaScript, but this library adds .NET execution capability, too.
        module.Exports.Add(new Export
        {
            Kind = ExternalKind.Function,
            Index = 0, // This should match the function index from above.
            Name = "Demo", // Anything legal in Unicode is legal in an export name.
        });

        // We now have enough for a usable WASM file, which we could save with module.WriteToBinary().
        // Below, we show how the Compile feature can be used for .NET-based execution.
        // For stream-based compilation, WebAssembly.Compile should be used.
        var instanceCreator = module.Compile<Sample>();

        // Instances should be wrapped in a "using" block for automatic disposal.
        // This sample doesn't import anything, so we pass an empty import dictionary.
        using (var instance = instanceCreator(new ImportDictionary()))
        {
            // FYI, instanceCreator can be used multiple times to create independent instances.
            Console.WriteLine(instance.Exports.Demo(0));  // Binary 0, result 0
            Console.WriteLine(instance.Exports.Demo(1));  // Binary 1, result 1
            Console.WriteLine(instance.Exports.Demo(42)); // Binary 101010, result 3
        } // Automatically release the WebAssembly instance here.
    }
}

Informational; there is no timeline for completion of these items.

- Replace System.Reflection.Emit.AssemblyBuilder-affiliated methods with replacements so that something like Mono.Cecil can be used to produce a DLL.
https://xscode.com/RyanLamansky/dotnet-webassembly
CC-MAIN-2022-05
refinedweb
633
52.76
Bugs item #874176, was opened at 2004-01-10 01:35
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:

Category: pythonwin
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Dmitry Dembinsky (xloggerz)
Assigned to: Nobody/Anonymous (nobody)
Summary: pythonwin debugger does not track changed files

Initial Comment:
I have encountered a problem while running Python scripts under the Pythonwin debugger. It looks like the debugger does not track changes in the imported files. Here's the sequence to reproduce the problem:

1. Create two files with the following contents and save them in the same directory:

File imptest.py:

def f():
    print "f ()"

File test.py:

import imptest

if __name__=='__main__':
    imptest.f ()

2. Open test.py in an IDE and select File -> Run... (either with or without debugger). "f ()" will be printed in an interactive window.

3. Now edit file imptest.py (no matter inside or outside of the IDE) and change the function f() so that it'd print something else, e.g.:

def f():
    print "f () changed"

Then, run test.py again. It will still print "f ()" instead of the expected "f () changed". The change in code would not be reflected, unless the IDE is restarted or "reload(imptest)" is manually invoked in an interactive window.

----------------------------------------------------------------------

You can respond by visiting:
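The behavior in the report is standard Python module caching rather than a Pythonwin-only quirk: an import executes the module once and stores it in sys.modules, so later imports return the stale object until reload() is invoked. A self-contained reproduction in modern importlib spelling (file and module names invented for the demo):

```python
import importlib
import os
import sys
import tempfile
import textwrap


def reproduce():
    """Write a module, import it, change it on disk, show the cache wins."""
    workdir = tempfile.mkdtemp()
    sys.path.insert(0, workdir)
    path = os.path.join(workdir, "imptest_demo.py")

    with open(path, "w") as fh:
        fh.write(textwrap.dedent("""
            def f():
                return "f ()"
        """))
    importlib.invalidate_caches()
    import imptest_demo
    first = imptest_demo.f()

    # Step 3 of the report: edit the module after it was imported.
    with open(path, "w") as fh:
        fh.write(textwrap.dedent("""
            def f():
                return "f () changed"
        """))
    stale = imptest_demo.f()        # cached module still runs the old code
    importlib.invalidate_caches()
    importlib.reload(imptest_demo)  # what the IDE would need to do per run
    fresh = imptest_demo.f()
    return first, stale, fresh
```

The fix the reporter found manually, calling reload(imptest), is exactly the last step above.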
http://sourceforge.net/p/pywin32/mailman/pywin32-bugs/thread/E1Af69j-0001RP-00@sc8-sf-web3.sourceforge.net/
CC-MAIN-2014-49
refinedweb
221
62.48
Lets first write a java class to delete a row from Example to show clone exception in java Example to show clone exception in java In this Tutorial we want to describe you a code... an example from clone exception. By Clone we have a method for duplication Basic Steps to Outsourcing Success, Steps to Success in Outsourcing Basic Steps to Outsourcing Success Introduction There are a few fundamental steps to ensure the best results for your outsourcing venture. These strategies... the decision making process in-house, these are the steps that need to be followed JDBC Connectivity Code In Java JDBC Connectivity Code In Java In this Tutorial we want to describe you a code that helps you in understanding JDBC Connectivity Code in Java. In this program, the code reg : the want of source code reg : the want of source code Front End -JAVA Back End - MS Access Hello Sir, I want College Student Admission Project in Java with Source code...) Available Seats and etc. plz Give Me Full Source code with Database Source Code cls - Java Beginners Source Code cls Dear RoseIndia Team, Thanks for your prompt reply to my question about alternate to cls of dos on command line source code. I have two submissions. 1. Instead of three lines if we simply write Hibernate Count Query Hibernate Count Query In this section we will show you, how to use the Count Query. Hibernate...) Here is the java code for counting the records from insurance table CRUD application in hibernate annotation CRUD application in hibernate annotation  ... using hibernate annotation. Table name: student CREATE TABLE... Follows the following steps for developing the CRUD application Show me the code for that Show me the code for that JVM on my machine? And how do I know whether its working show_source() example. show_source() The show_source() function works similar to highlight_file... 
Set this parameter to TRUE to make this function return the highlighted code Syntax show_source (filename, return) Note: This function displays the entire Writing First Hibernate Code running on localhost. Developing Code to Test Hibernate example Now we are ready... Writing First Hibernate Code  ...; is org.hibernate.dialect.MySQLDialect which tells the Hibernate that we are using MySQL code and specification u asked - Java Beginners from the resource bundle. We don't want to crash because of * a missing String...code and specification u asked you asked me to send the requirements in detail and the code too.so iam sendind you the specification i want code for these programs i want code for these programs Advances in operating system Laboratory Work: (The following programs can be executed on any available and suitable platform) Design, develop and execute a program using any jeetendradash - Hibernate jeetendradash how this code works?i just want to know when we execute session.getNamedQuery("Insname"), what happens? how the above code invoked? can anyone explain please?thnx HIBERNATE IN CONSOLE & SERVLET ; -------- In the earlier tutorial, we had seen how we can install hibernate,& about... by us is suitable for the database used by us. For our demo, we choose... let us compile the player.java. After compiling, we create the following  Show Clippings are providing you an example where we will show you the clippings. In the example... Show Clippings In this section, you will study how to show the clip. Clip is an art which Hibernate Annotation Example Hibernate Annotation Example In this section we will read about how to create a hibernate application using annotation. Here we will see an example... hibernate mapping XML file. 
In this example we will use the Eclipse IDE 1 - Hibernate Hibernate 1 what is a fetchi loading in hibernate?i want source code?plz reply Hibernate Max() Function (Aggregate Functions) Hibernate Max() Function (Aggregate Functions) In this section, we will show you, how to use the Max() function. Hibernate supports multiple aggregate functions. When Hibernate Min() Function (Aggregate Functions) Hibernate Min() Function (Aggregate Functions) In this section, we will show you, how to use the Min() function. Hibernate supports multiple aggregate functions. When Hibernate Avg() Function (Aggregate Functions) Hibernate Avg() Function (Aggregate Functions) In this section, we will show you, how to use the avg() function. Hibernate supports multiple aggregate functions. When < JavaScript Show Date the current date. Here is the code: <html> <h2>Show... JavaScript Show Date..., we are going to display the current date using JavaScript. You can see Hibernate Tutorials will learn all the aspects of hibernate in easy steps which will be explained with easy...Hibernate Tutorials Hibernate is popular open source object relational mapping... to providing mapping of Java classes to database tables, Hibernate also Hibernate code Hibernate code programm 4 hibernate Hi, Read hibernate tutorial at Thanks please read at the the link Hibernate Book . This code is often complex, tedious and costly to develop. Hibernate does... object oriented code and the relational database. Hibernate Quickly gives you all you.... Similarly, if you have some familiarity with Hibernate 2 and now want to learn Tools : Downloading Hibernate Tools In this we will show you how to download... we will explain the features of Hibernate Tools for Eclipse IDE... 
Update Site In this section we will show you how to download hibernate code - Hibernate hibernate code How to store a image in mysql or oracle db using struts &hibernate need java code - Java Beginners need java code i want a program for an algorithm in java which... ,the unmatched letter is E so the difference is 1 and so we can merge it .so... the difference is 1 .since diff is one we can merge it .so it becomes B2C1G2 i.e(B=2,C=1,G=2 Implementation code inside interfaces code than in interfaces, but I want to show you what is possible with inner...Implementation code inside interfaces 2001-01-25 The Java Specialists' Newsletter [Issue 006] - Implementation code inside interfaces Author: Dr. Heinz JavaScript Show Hide table ; In this section, we are going to show and hide table on clicking the button using the JavaScript. In the given example, we have created a table. The method... JavaScript Show Hide table Hibernate 4 Annotations Tutorial features. This video tutorial explains you the steps and the code for creating such applications. We have used the Eclipse IDE. What is Hibernate Annotations... program. Video tutorial explained you the steps needed to create Hibernate example how to show random image in ASP.net? how to show random image in ASP.net? hello bros i saw in many websites..there is a programming of random image changing.... i want to use this in my ASP.NET websites, can any one suggest me code or any kind of help..so that i Example to show ArrayoutofboundException in java Example to show ArrayoutofboundException in java... with an index that is outside array defined boundaries. Understand with Example The code... ArrayoutofboundException . 
In this example java program we print the command line arguments Hibernate code - Hibernate Hibernate code example code of how insert data in the parent table and child tabel coloums at a time in hibernate Hi friend, I am...: Thanks want a project want a project i want to make project in java on railway reservation using applets and servlets and ms access as database..please provide me code and how i can compile and run struts and hibernate integration struts and hibernate integration i want entire for this application using struts and hibernate integration here we have to use 4 tables i.e... in student_course table please send me the code Please visit
http://www.roseindia.net/tutorialhelp/comment/9429
CC-MAIN-2013-48
refinedweb
2,540
65.93
02 April 2009 23:31 [Source: ICIS news]

WASHINGTON (ICIS news)--Global construction of coal-fired power plants over the next ten years will add 3bn tonnes of carbon dioxide (CO2) to the atmosphere, wiping out emissions cuts planned by the US and the EU, a consulting firm said on Thursday.

The McIlvaine Co said its study of utility projects worldwide indicates that coal-fired electric power capacity will grow from 1.759m megawatts (MW) in 2010 to 2.384m MW by 2020. That increase will add 625,000 MW of new coal-fired electric capacity. McIlvaine said that another 80,000 MW of new coal-burning electric generation will be added as well, but this capacity will be built to replace older units being taken out of service.

“Coal-fired power in

The coal-fired power additions in

“So even if the US and Europe were to cut CO2 emissions by far more than the targeted 20%, the total CO2 increase from Asia will offset it by a wide margin,” he said.

The US Congress is considering a cap-and-trade emissions reduction mandate that would cut the nation’s CO2 levels to 14% below 2005 levels by 2020 or roughly 20% below current volumes.

In addition to the 59% increase in Asian coal-burning power capacity, McIlvaine said that many of the new coal-fired plants in

Both

Coal will continue to be the principal electric generating fuel for Asia and

“Since planning for new coal-fired power plants occurs as much as a decade in advance, there is not likely to be a major change in the forecast through 2020,” McIlvaine said.

The complete study, “Coal-fired Boilers; World Analysis and Forecast,” is available from the McIlvaine
http://www.icis.com/Articles/2009/04/02/9205664/global-coal-fired-utility-gains-offset-us-eu-cuts-study.html
CC-MAIN-2015-22
refinedweb
292
54.86
Python for Coders

This introduction to Python is designed for people who have had some experience writing code in other languages.

Part 0 Python

In case you are wondering, we are using Python 3.4. Python 2.7 is very similar in terms of the code you write, but we won't worry about the differences in this session. There are some 3rd party libraries which do not yet support Python 3, but that list is growing very small very quickly.

Shell

As you go through the material please type out the examples in your command line shell or using IDLE. You can try other things and if you have questions we will be happy to answer them. To access the command line shell open a terminal and type python. You should be presented with a Python prompt which should look like this:

```
Python 3.4.2 (...) [GCC 4.2.1 (...)] on ...
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

Part 1 The Basics

Let's start with some simple math. Python is an interpreted language which means you can execute expressions interactively at a prompt. Everything you see in a block can be evaluated at your open Python prompt.

```python
1 + 1
```

Python supports integers across different bases, floating point and complex numbers. There is a builtin library for decimal numbers and an extensive 3rd party library for scientific numeric processing called NumPy. Here are a few Python numeric literals.

```python
-42 # Strictly this isn't a numeric literal, it is a composition of the - operator and a literal
3.1415
42e10 # Scientific notation for floats
1 / 2 # Gives us the right answer!
```

Most of the math operators work the way you expect. There is also a power operator, 2**4, and a modulo operator: 5%3

Strings

Strings in Python are surrounded by either single quotes or double quotes.

```python
"Hello, World!"
```
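A quick aside before going further with strings: the builtin library for decimal numbers mentioned above is the decimal module. Here is a minimal sketch of why you might reach for it — the example values are our own, not from the original tutorial:

```python
from decimal import Decimal

# Binary floats accumulate tiny representation errors...
print(0.1 + 0.1 + 0.1)     # 0.30000000000000004

# ...while Decimal keeps exact decimal digits.
print(Decimal("0.1") * 3)  # 0.3
```

Note that Decimal is usually constructed from a string; Decimal(0.1) would inherit the float's inexact binary value.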
```python
'<a href="">Link</a>' # Single quoted strings are handy for including double quotes in a string
"<a href=\"\">Link</a>" # You can use \ to escape any quote, but that is a pain
```

There are also special triple quoted strings which are useful when you want to preserve whitespace.

```python
"""Triple quoted strings allow new lines, which is handy for
template strings. Python also uses them for source
documentation. I have seen people use them for comments but I
think that is a terrible practice. There are no multiline
comments but most editors can prepend # to a group of selected
lines."""
```

When we execute a single value in the interpreter, Python returns the representation of the internal value; if you want to see how triple quoted strings look when they are printed, use the builtin print function.

```python
string = """This is a triple quoted string
set to a variable named string"""
print(string)
```

Variables

In Python variables are like buckets (dump trucks?). You can put anything you want in them. Just give them a name and you can use them in place of the literal value or objects.

```python
variable = 200.00
print(variable)
variable = "Hello, World!"
print(variable)
```

The value a variable has only depends on what it was last assigned. The variable doesn't have a type, but the object it points to is strongly typed.

Errors in Python

When you write something Python doesn't understand the interpreter throws an error and tries to explain what went wrong, as best it can. Let's see some examples by running these code blocks.

```python
# run or copy and paste each line into your interpreter to see the kinds of errors Python can raise
gibberish
*adsflf_
print('Hello'
1v34"
```

If you run into an error you don't understand please ask a tutor.

Exercises

Try to fix the two blocks of code so that they run successfully.

```python
# Block 1
print('Hello, World!)
```

```python
# Block 2
aswer = 3 * 8
print(answer)
```

Part 2 Variables, objects, and helpful introspection

Variables don't have types, that is why they are sometimes called names. Python uses duck typing.
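Tying back to the errors section above, here is one more error you will hit constantly: using a name before it has been assigned. This example is our own addition; it catches the error so you can read the message Python produces:

```python
try:
    print(undefined_variable)  # never assigned, so Python raises NameError
except NameError as err:
    # The message names the offending variable, which makes typos easy to spot.
    print("Caught:", err)
```

Reading the last line of a traceback (the error type and message) is usually the fastest way to figure out what went wrong.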
If something looks like a duck or quacks like a duck, it is a duck. If Python tries to do something assuming a variable is a duck and finds out it isn't, it will generally throw an exception. Some might say Python is weakly typed. It is true the variables aren't typed, but the objects a variable refers to are strongly typed. (Unlike C where you can take bytes and call them a string or an integer.) Python is also very object oriented in the way most everything is an object, and though you can create your own classes, you don't have to. Strings are objects which have methods, and we can use dir and help to learn what they are.

```python
dir("Hello, World!") # list all methods on a string
```

Consider the names with the double underscores as private. There are still a lot of methods. Let's look at upper and lower which change the string's case.

```python
"Hello, World!".upper()
```

We can also use the builtin help function to access documentation about a function or object.

```python
help("Hello, World".upper)
```

dir and help are very helpful builtin functions that allow you to explore Python via the command line prompt. Change "upper" in the above example to other methods listed to see what they do. You can type q to exit the help screen.

Strings also support some standard operations including + to concatenate and * to multiply.

```python
"Hello," + " World!"
"MEOW " * 5
```

Formatting

There are a few different ways to format strings with Python. In this session we will focus on the format method.

```python
your_name = "Albert O'Connor"
string = "Hello, {0}!"
print(string.format(your_name))
```

We can also use literal strings:

```python
print("Hello, {0}!".format("Albert O'Connor"))
```

{0} is the default positional placeholder for format, but that can be customized.

Indexed by Zero

For better or worse (and practically it is better most of the time), everything in Python is indexed by 0.

Part 3 If Else

The literal values in Python for true and false are True and False.
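Before moving on, here is a quick sketch of how the format placeholders mentioned above can be customized — the names and values in this example are our own:

```python
# Placeholders can be named instead of numbered...
print("Hello, {name}!".format(name="Albert"))

# ...and can carry a format spec after a colon.
print("{0:>10}".format("hi"))     # right-aligned in a field 10 characters wide
print("{0:.2f}".format(3.14159))  # 3.14, rounded to two decimal places
```

The part after the colon is called the format specification; dir and help on str.format are good starting points for exploring it.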
```python
False == False
True == True
True == False
True == False
```

Boolean operators include >, <, <=, >=, ==, not, and, or, in and is. Strings also have some helpful boolean methods.

```python
1 > 2
"Cool".startswith("C")
"Cool".endswith("C")
"oo" in "Cool"
42 == 1
```

In order to write an "if" statement we need code that spans multiple lines:

```python
condition = True
if condition:
    print("Condition is True")
else:
    print("Condition is False")
```

Some things to notice. The if condition ends in a colon (":"). In Python blocks of code are indicated with a colon (":") and are grouped by white space. Notice the else also ends with a colon (":"), "else:".

About that white space, consider the following code:

```python
if condition:
    print("Condition is True")
else:
    print("Condition is False")
print("Condition is True or False, either way this is outputted")
```

Since the last print function isn't indented it gets run after the if block or the else block. You can play with this. Try indenting the last print function below and see what happens.

```python
condition = True
if condition:
    print("Condition is True")
else:
    print("Condition is False")
print("Now it is indented when will this statement be executed?")
```

It is handy to note that a single = in a condition like below is actually a syntax error!

```python
if 1 = 3:
    print("True!")
```

Whitespace

On the surface the use of whitespace in Python is kind of controversial. The benefit is there are not endless debates about how to format code, there is basically one right way, and it makes code more readable. The downside is you can run into errors with whitespace. A good editor configured to use spaces instead of tabs and an indent size of 4 should keep you safe.

Part 4 Tuples, Lists, Loops and Dicts

Lists are the first container type we will look at.

```python
[] # The empty list
["Milk", "Eggs", "Bacon"]
[1,2,3]
```

List literals are all about square brackets ("[ ]") and commas (","). You can create a list of literals by wrapping them in square brackets and separating them with commas.
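A quick look back at Part 3 before continuing with lists: the boolean operators listed there (and, or, not) can combine several checks in a single if. This example is our own:

```python
word = "Cool"

# Both sub-conditions must hold for "and".
if word.startswith("C") and "oo" in word:
    print("Both checks passed")

# Only one sub-condition needs to hold for "or".
if not word.endswith("C") or len(word) > 100:
    print("At least one check passed")
```

Both messages are printed here: the first because both checks are true, the second because the not check alone is true.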
You can even mix different types of things into the same list; numbers, strings, booleans.

```python
[True, 0, "Awesome"]
```

We can put variables into a list and set a variable to a list.

```python
your_name = "Albert O'Connor"
awesome_people = ["Eric Idle", your_name]
print(awesome_people)
```

Like strings, lists have methods which we can see with dir.

```python
dir([])
```

"append" lets you add an item to the end of a list.

```python
your_name = "Albert O'Connor"
awesome_people = ["Eric Idle", your_name]
awesome_people.append("John Cleese")
print(awesome_people)
```

We use square brackets ("[]") again with the variable of the list to access individual elements.

```python
awesome_people[0]
```

If you want the last element, use:

```python
awesome_people[-1]
```

Finally, if you need to know how many elements a list contains, use:

```python
len(awesome_people)
```

Tuples

Tuples are similar to lists, except they are immutable. You can create a new tuple out of existing tuples, but you can not append to an existing tuple. Strings are actually tuples of characters and are also immutable. You can make a new string out of an existing string, but you can't change a Python string in place. If you need a buffer there are builtin libraries that provide that.

```python
() # empty tuple
(1,) # tuple with one element in it, the extra comma is needed to make it clear it isn't just brackets
('Awesome', 42) # tuples can also contain mixed types
1, 2, 3 # you don't actually need the brackets in some cases.

pair = ('+', 'plus')
sign, name = pair # tuples can be unpacked
print(sign + ': ' + name)
```

Tuples are great for passing around data in a group or returning more than one value from a function.

```python
a = 1
b = 2
a, b = b, a # You can implement swap in one line!
print(a)
print(b)
```

Let's see what else tuples can do:

```python
dir(())
```

Tuples don't have many methods mostly because they are an immutable primitive type.
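One concrete example of a function returning more than one value as a tuple is the builtin divmod:

```python
# divmod returns the quotient and remainder as a single tuple...
result = divmod(17, 5)
print(result)  # (3, 2)

# ...which you can unpack directly, just like the pair example above.
quotient, remainder = divmod(17, 5)
print(quotient, remainder)  # 3 2
```

Returning a tuple and unpacking it at the call site is the idiomatic Python way to get multiple results out of a function.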
Loops

Python's main loop is foreach style and looks like this:

```python
list = [1, 2, 3]
for item in list:
    print(item) # Do any action per item in the list
```

item is the variable name for each item in the list. The list can be any iterable including a list or a tuple or the results of a database query. Let's see it in action with our list.

```python
your_name = "Albert O'Connor"
awesome_people = ["Eric Idle", your_name]
awesome_people.append("John Cleese")
for person in awesome_people:
    print(person)
```

If you really want an index style for loop you can use the builtin function range or the builtin function enumerate.

```python
# Try range out by itself first.
range(0,10)

for number in range(0,10):
    print("{0} squared is {1}".format(number, number*number))
```

We can explicitly index a list, but you generally don't need to.

```python
list = ['bacon', 'eggs', 'ham']
for i in range(3):
    print(list[i]) # you can do this but why would you want to?
```

If you want the index values associated with a list you can use the builtin enumerate.

```python
list = ['bacon', 'eggs', 'ham']
for i, value in enumerate(list): # Sweet tuple unpacking
    print(str(i) + ": " + value) # Use enumerate if you want to operate based on the current index.
```

While

Though it isn't used too often, there is a while loop. One nice property of a foreach style loop is you can't create an infinite loop without trying really really hard. With a while loop it is as easy as in any language. This is the syntax:

```python
# Don't copy this, if some_condition were true you would get an infinite loop :)
while some_condition:
    print("Something")
```

Sometimes it is useful to have an infinite loop from which you escape when some condition is satisfied.

```python
while True:
    a = 2
    if a == 2:
        break
```

Dictionaries

Dictionaries are another container like lists, but instead of being indexed by a number like 0 or 1 a dictionary is indexed by a key which can be almost anything. The name comes from being able to use it to represent a dictionary. List literals use square brackets ("[]") but dictionaries use braces ("{}").
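Here is a slightly more realistic use of break than the loop above — scanning a list and stopping at the first match. The list and condition are our own example:

```python
names = ["Eric Idle", "John Cleese", "Terry Gilliam"]
found = None
for name in names:
    if name.startswith("John"):
        found = name
        break  # stop looping as soon as we have a match
print(found)  # John Cleese
```

Without the break, the loop would keep going through the rest of the list even after finding what it was looking for.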
Let's see what the literal dictionary looks like.

```python
{"Python": "An awesome programming language", "Monty Python": "A british comedy troupe"}
```

In a dictionary the key comes first followed by a colon (":") then the value then a comma (",") then another key and so on. This is one situation where a colon doesn't start a block. We can assign a dictionary to a variable and we can index it by keys to get the values (definitions) out.

```python
our_dictionary = {
    "Python": "An awesome programming language",
    "Monty Python": "A british comedy troupe"
}
our_dictionary["Python"]
```

We can loop over the keys in a dictionary to list all of our definitions...

```python
for key in our_dictionary:
    print('The Key is "{0}" and the value is "{1}"'.format(key, our_dictionary[key]))
```

Part 5 Functions

A function looks like this:

```python
def add(a, b):
    return a + b
```

def is the keyword to define a function. add in the above example is the name. All functions require a parameter list surrounded by an open bracket "(" and close bracket ")" even if there are no parameters. return is also a keyword, which is required to return a value; if it isn't provided, None is returned. Function bodies are blocks, like if statements and for loops.

None

None is Python's null object. There is only one, created when the interpreter started. None evaluates to False.

```python
None
bool(None)
```

That is a lot of information about functions, so let's try out some of it.

```python
def the_answer(): # Even when there are no parameters you still need the brackets
    return 42

the_answer # Outputs information about the function object, see everything is an object
```
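Since functions really are objects, you can even store them in the containers from Part 4 and call them later. This sketch is our own example:

```python
def the_answer():
    return 42

# A function object can be a dictionary value like anything else...
actions = {"answer": the_answer}

# ...and calling it is just a matter of looking it up and adding brackets.
print(actions["answer"]())  # 42
```

Dispatch tables like this dictionary are a common alternative to long if/else chains.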
So far we have only seen positional arguments. Keyword agruments have default values. def add_5_to(value=0): return value + 5 print(add_5_to()) print(add_5_to(15)) print(add_5_to(value=20)) Keyword arguments can be called explicitly by keyword or implicitly by position, which can be a bit confusing. In the function definition positional arguments have to come before keyword arguments. When processing argument the same rule applies. Positional arguments are pocessed until the first keyword argurment is reached. def awesome(positional, default="default", flag=True): return positional, default, flag awesome("positional string") awesome('pos', 'def', 'flag') # positional arguments are applied in order awesome('pos', flag='flag') # defalt keyword can be skipped, but only if flag is used explicitly awesome('pos', flag='flag', default='def') # if you are using explicit keywords order doesn't matter awesome('pos', default='def', 'flag') # This doesn't work though awesome(default='def') # This also doesn't work, the positional argument must be provided Not all argments need to be named. It is possible to capture extra positional and keyword arguments using a special and a bit odd bit of syntax with "*" and "**". Math Basic mathematical functions are supplied via the math module. import math print(math.cos(0)) print(math.log(27)) print(math.log(27,3)) Try help(math.log) to see why the log function can take a variable number of arguments. Next Steps Go through the Setup Python instructions if you haven't already to make sure python is installed, pip is installed, and you have a good editor. If there is still time, help out some one else who is a bit further behind. You can also do some addition reading. Here are some choice selections form the Python Tutorial, but you can also just start from the beginning. - - - - - -
http://watpy.ca/learn/introduction/Python%20for%20Coders.md
CC-MAIN-2018-13
refinedweb
2,623
71.24
Open Source Low Code Development Platform (LCDP)

LCDP for rapid web application development and sharing of machine learning models.

Overview

Low code platforms do not remove the need to write code entirely, but they provide a framework that makes it very easy to develop a full-fledged web application.

LCDP for Web Application Development

Appsmith

Appsmith is a JavaScript-based visual development platform to build and launch internal tools quickly. Drag-and-drop pre-built widgets, and connect them using JavaScript to create interactive pages. Connect the UI to your APIs and databases to build complex workflows in minutes.

- UI Components: Table, Chart, Form, Map, Image, Video, and many more.
- API Support: REST APIs, OAuth 2.0, cURL
- Database Support: PostgreSQL, MongoDB, MySQL, Firestore, S3, Redshift, Elastic Search, DynamoDB, Redis, and MSFT SQL Server
- Hosting: Cloud-hosted & On-premise

Lowdefy

Lowdefy is an open-source (Apache-2.0) low-code framework that lets you build web apps with YAML configuration files. It is great for building admin panels, BI dashboards, workflows, and CRUD apps. User interfaces in Lowdefy are built using blocks, which are React components. Lowdefy provides a set of default block types with the essentials needed to build an app, but you can also create your own custom blocks. Lowdefy uses webpack module federation to import these blocks as micro front-ends.

Frappe

Frappe, pronounced fra-pay, is a full-stack, batteries-included web framework written in Python and Javascript with MariaDB as the database. It is the framework that powers ERPNext. It is pretty generic and can be used to build database-driven apps.

Flask-AppBuilder

Flask-AppBuilder is a simple and rapid application development framework, built on top of Flask. It includes detailed security, auto CRUD generation for your models, Google charts, and much more.

- Automatic menu generation.
- Automatic CRUD generation.
- Multiple actions on database records.
- Big variety of filters for your lists.
- Various view widgets: lists, master-detail, list of thumbnails, etc
- Select2, Datepicker, DateTimePicker
- Google charts with an automatic group by or direct values and filters.

Django

Django is a high-level Python Web framework that makes it very easy for rapid web development. Django is definitely one of the most widely used frameworks for Python developers. With some Python basics, you should be able to create a full-fledged app using Django.

react-admin + strapi

react-admin is a front-end framework for building data-driven applications running in the browser on top of REST/GraphQL APIs, using ES6, React, and Material Design. strapi is an open-source headless CMS to build powerful APIs with no effort. With these 2 combined, you can quickly build your front-end and back-end easily.

ToolJet

ToolJet is an open-source low-code framework to build and deploy internal tools quickly. You can connect to your data sources such as databases (PostgreSQL, MongoDB, MySQL, Elasticsearch, Firestore, DynamoDB, and more), API endpoints (ToolJet supports importing OpenAPI spec & OAuth2 authorization), and external services (Stripe, Slack, Google Sheets, etc) and use pre-built UI widgets to build internal tools. It provides a visual app builder with widgets and supports mobile and desktop layouts. For deployment, you can deploy it using Docker, Kubernetes, Heroku, and more.

LCDP for Machine Learning

Plotly Dash

To data scientists and machine learning engineers, Dash is definitely no stranger. It abstracts away all of the technologies and protocols that are required to build an interactive web-based analytics application. Dash is simple enough that you can bind a user interface around your Python code in an afternoon. Built on top of Plotly.js, React, and Flask, Dash ties modern UI elements like dropdowns, sliders, and graphs directly to your analytical Python code.
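Returning to Lowdefy from earlier: since its apps are written as YAML configuration files, a minimal page might look roughly like the sketch below. This is our own illustration — the exact keys and block types are assumptions and should be checked against the Lowdefy documentation:

```yaml
# Hypothetical minimal Lowdefy app definition
name: my-admin-panel
pages:
  - id: home
    type: PageHeaderMenu   # a default Lowdefy page block type (assumed)
    blocks:
      - id: welcome_title
        type: Title
        properties:
          content: Welcome to my app
```

Each block's type maps to a React component, which is how the configuration file ends up rendering a working user interface.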
Gradio

Gradio allows you to quickly create customizable UI components around your TensorFlow or PyTorch models, or even arbitrary Python functions. With just a few lines of code, you can generate a user interface for your machine learning model.

import gradio as gr

def recognize_digit(img):
    # ... implement digit recognition model on input array
    # ... return dictionary of labels and confidences
    pass

gr.Interface(fn=recognize_digit,
             inputs="sketchpad",
             outputs="label").launch()

Streamlit

Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. With just a few lines of code, you can create an impressive application to share your machine learning model.

LCDP for Business Intelligence

Metabase

Metabase is an open-source low-code business intelligence platform. It supports a wide range of relational and cloud databases. Using Metabase, you can easily develop beautiful dashboards which provide meaningful insights into your data.

Superset

Apache Superset is a data visualization and data exploration platform. Superset can query data from any SQL-speaking datastore or data engine (e.g. Presto or Athena) that has a Python DB-API driver and an SQLAlchemy dialect. It provides an intuitive interface for data visualization, ready-to-use visualizations, the ability to develop custom plugins, and many more features.

NocoDB

And lastly, let's also look into NocoDB, which is an interesting application that turns any MySQL, PostgreSQL, SQL Server, SQLite & MariaDB database into a smart spreadsheet. It provides a rich spreadsheet interface for you to search, sort, and filter table columns, display images, and create customized views for the tables. You can even create and automate workflows using MS Teams, Slack, Discord, Email, SMS, WhatsApp, or any 3rd-party APIs. Programmatic API access is also provided using REST or GraphQL APIs.

You may also want to check out these articles!
Serving Machine Learning Models (DCGAN, PGAN, ResNext) using FastAPI and Streamlit
https://alpha2phi.medium.com/open-source-low-code-development-platform-lcdp-96e0cd08c8f4
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project. [PATCH][commit/obvious] Fix argument type on gdbsim_detach prototype. RE: [+rfc] Re: [patch v6 00/21] record-btrace: reverse [commit 1/2] Move processing of .debug_gdb_scripts to auto-load.c [commit 1/3] py-breakpoint.exp cleanups: use with_test_prefix [commit 2/2] Move processing of .debug_gdb_scripts to auto-load.c [commit 2/3] py-breakpoint.exp cleanups: reformat to 80 columns [commit 3/3] py-breakpoint.exp cleanups: unique test names [COMMIT PATCH] get_prev_frame, outer_frame_id and unwind->stop_reason checks are redundant. [COMMIT PATCH] get_prev_frame, stop_reason != UNWIND_NO_REASON, add frame debug output. [COMMIT PATCH] get_prev_frame, UNWIND_NULL_ID -> UNWIND_OUTERMOST [COMMIT PATCH] Make the maint.exp:'maint print objfiles' test less fragile. [COMMIT PATCH] register: "optimized out" -> "not saved". [commit/ARI] gdb_ari.sh: Remove entries for dirent.h and stat.h. [commit/obvious] Remove gdb_string.h from gdbarch.sh [commit/testsuite] mi-language.exp: Check "langauge-option" in -list-features output. [commit] Add some comments to configure.ac [commit] breakpoint.c (breakpoint_cond_eval): Fix and enhance comment. [commit] cli/cli-script.c (multi_line_command_p): New function. [commit] Dandling memory pointers in Ada catchpoints with GDB/MI. [commit] fix errors running arm-bl-branch-dest.exp [commit] Fix long line in earlier ChangeLog entry. [commit] fix typo in py-type.exp [commit] fix whitespace in py-symbol.exp [commit] gdb.base/ena-dis-br.exp: Add missing quote. [commit] gdb.python/py-arch.exp: Tweak test name for bad memory access test. [commit] linux-low.c (linux_set_resume_request): Fix comment. [commit] linux-low.c (resume_status_pending_p): Tweak comment. [commit] Make build tree rm -rf'able again [commit] py-auto-load.c (source_section_scripts): Move comment. 
[commit] py-frame.c: Delete FIRST_ERROR, superfluous [commit] python/py-frame.c (frapy_block): Fix error message text. [commit] Remove trailing whitespace in python/* [commit] Rename breakpoint_object to gdbpy_breakpoint_object. [commit] restore testing of source foo.py if python not configured in [commit] test name tweaks for py-value.exp [commit] unique test names, minor cleanups, in py-symbol.exp [commit] Work around gold/15646 [COMMITTED PATCH 0/2] "set debug frame 1" and not saved registers (was: Re: [PATCH 08/12] Replace some value_optimized_out with value_entirely_available) [COMMITTED] Eliminate dwarf2_frame_cache recursion, don't unwind from the dwarf2 sniffer (move dwarf2_tailcall_sniffer_first elsewhere). [committed] gdb.cp/derivation.exp: s/perrro/perror/ [COMMITTED] Make use of the frame stash to detect wider stack cycles. (was: Re: [PATCH] Don't let two frames with the same id end up in the frame chain. (Re: [PATCH 1/2] avoid infinite loop with bad debuginfo)) [FYI/www] Fix link to Internals Manual for master branch. [FYI] MAINTAINERS (Write After Approval): Add myself to the list. [OB] Simplify dwarf2-frame.c:read_addr_from_reg. (was: [RFA] Rename "read_reg" into "read_addr_from_reg" in struct dwarf_expr_context_funcs) [obv] gdb/NEWS: Fix typo [obv][patch] Fix up some old ChangeLog entries [PATCH 0/2] enable ptype/whatis for fortran types/modules [PATCH 0/2] Fix "info frame" in the outermost frame. [PATCH 0/2] fix multi-threaded unwinding on AArch64 Re: [PATCH 0/2] GDB process record and reverse debugging improvements for arm*-linux* [PATCH 0/3 V3] Cache code access for disassemble [PATCH 0/3] Cleanup mi-support.exp [PATCH 0/3] More perf test cases [PATCH 0/3] Use target_read_code in skip_prologue [PATCH 0/4 V4] GDB Performance testing Re: [PATCH 0/7] More run control cleanup. 
[PATCH 0/8] trivia [PATCH 00/10 V2] Cache code access for disassemble [PATCH 01/10] Remove last_cache Re: [PATCH 01/10] vla: introduce new bound type abstraction adapt uses [PATCH 01/11] Use 'struct varobj_item' to represent name and value pair [PATCH 02/10] Don't update target_dcache if it is not initialized Re: [PATCH 02/10] type: add c99 variable length array support [PATCH 02/11] Generalize varobj iterator [PATCH 03/10] Move target-dcache out of target.c Re: [PATCH 03/10] vla: enable sizeof operator to work with variable length arrays [PATCH 03/11] Iterate over 'struct varobj_item' instead of PyObject Re: [PATCH 03/12] Mark optimized out values as non-lazy. [PATCH 04/10] Don't stress 'remote' in "Data Caching" in doc Re: [PATCH 04/10] vla: enable sizeof operator for indirection [PATCH 04/11] Remove #if HAVE_PYTHON [PATCH 05/10] Invalidate or shrink dcache when setting is changed. RE: [PATCH 05/10] vla: allow side effects for sizeof argument [PATCH 05/11] Rename varobj_pretty_printed_p to varobj_is_dynamic_p [PATCH 06/10] Add REGISTRY for struct address_space. Re: [PATCH 06/10] vla: update type from newly created value [PATCH 06/11] Use varobj_is_dynamic_p more widely [PATCH 07/10] Associate target_dcache to address_space. Re: [PATCH 07/10] test: evaluate pointers to C99 vla correctly. [PATCH 07/11] MI option --available-children-only [PATCH 08/10] Don't invalidate dcache when option stack-cache is changed Re: [PATCH 08/10] test: multi-dimensional c99 vla. [PATCH 08/11] Iterator varobj_items by their availability [PATCH 09/10] set/show code-cache Re: [PATCH 09/10] test: basic c99 vla tests [PATCH 09/11] Delete varobj's children on traceframe is changed. [PATCH 1/1] Documentation for MPX. [PATCH 1/2] avoid infinite loop with bad debuginfo [PATCH 1/2] fortran: enable ptype/whatis for user defined types. 
Re: [PATCH 1/2] GDB process record and reverse debugging improvements for arm*-linux* [PATCH 1/2] Make "set debug frame 1" use the standard print routine for optimized out values. [PATCH 1/2] New OPTIMIZED_OUT_ERROR error code. [PATCH 1/2] Update doc on displayhint in command -var-list-children [PATCH 1/2] Use mi_create_floating_varobj [PATCH 1/3] Remove 'whatever' in lib/mi-support.exp [PATCH 1/3] Renaming in target-dcache.c [PATCH 1/3] Test on disassemble [PATCH 1/3] Use target_read_code in skip_prologue (i386) [PATCH 1/4] New make target 'check-perf' and new dir gdb.perf [PATCH 1/8] make symtab::filename const Re: [PATCH 10/10] test: add mi vla test [PATCH 10/10] Use target_read_code in disassemble. [PATCH 10/11] Match dynamic="1" in the output of -var-list-children [PATCH 11/11] Test case [PATCH 2/2] Check has_more in mi_create_dynamic_varobj [PATCH 2/2] Doc 'dynamic' for command -var-list-children [PATCH 2/2] Fix "info frame" in the outermost frame. [PATCH 2/2] fortran: enable ptype/whatis for modules. Re: [PATCH 2/2] GDB process record and reverse debugging improvements for arm*-linux* [PATCH 2/2] handle an unspecified return address column [PATCH 2/2] Make "set debug frame 1" output print <not saved> instead of <optimized out>. Re: [PATCH 2/2] Read memory in multiple lines in dcache_xfer_memory. [PATCH 2/3] Fix format issues in lib/mi-support.exp Re: [PATCH 2/3] New field 'la_natural_name' in struct language_defn [PATCH 2/3] set/show code-cache [PATCH 2/3] skip_prolgoue (amd64) [PATCH 2/3] Test on single step [PATCH 2/4] Perf test framework [PATCH 2/8] make symtab::dirname const [PATCH 3/3] Perf test case: skip-prologue [PATCH 3/3] Remove unnecessary '\'. Re: [PATCH 3/3] Remove varobj_language_string, languages and varobj_languages [PATCH 3/3] Test on backtrace [PATCH 3/3] Use target_read_code in disassemble. 
[PATCH 3/4] Mention perf test in testsuite/README [PATCH 3/8] put the psymtab filename in the filename bcache [PATCH 4/4] Test on solib load and unload [PATCH 4/8] remove some stale FIXMEs Re: [PATCH 5/5] set/show code-cache NEWS and doc [PATCH 5/8] pack partial_symtab for space [PATCH 6/8] remove unnecessary declaration [PATCH 7/8] remove objfile_to_front [PATCH 8/8] update free_objfile comment [PATCH OBV] Fix typo [PATCH OBV] Fix typo "checksm" [PATCH OBV] s/see @pxref/@pxref in doc [PATCH v1 1/1] Fix PR16193 - gdbserver aborts. [PATCH v2 0/6] introduce common.m4 [Patch v2 00/10] C99 variable length array support [PATCH v2 00/16] use gnulib more heavily [Patch v2 01/10] vla: introduce new bound type abstraction adapt uses [PATCH v2 01/16] link gdbreplay against gnulib [Patch v2 02/10] type: add c99 variable length array support [PATCH v2 02/16] change how list of modules is computed [Patch v2 03/10] vla: enable sizeof operator to work with variable length arrays [PATCH v2 03/16] import strstr and strerror modules [Patch v2 04/10] vla: enable sizeof operator for indirection [PATCH v2 04/16] remove gdb_string.h [Patch v2 05/10] vla: update type from newly created value [PATCH v2 05/16] don't check for string.h or strings.h [Patch v2 06/10] vla: print "dynamic length" for unresolved dynamic bounds [PATCH v2 06/16] import gnulib dirent module [Patch v2 07/10] test: multi-dimensional c99 vla. [PATCH v2 07/16] remove gdb_dirent.h [Patch v2 08/10] test: evaluate pointers to C99 vla correctly. [PATCH v2 08/16] don't check for stddef.h [Patch v2 09/10] test: basic c99 vla tests [PATCH v2 09/16] stdlib.h is universal too [PATCH v2 1/1] Documentation for MPX. [PATCH v2 1/1] Fix PR16193 - gdbserver aborts. 
[PATCH v2 1/6] introduce common.m4 [Patch v2 10/10] test: add mi vla test [PATCH v2 10/16] don't check for unistd.h [PATCH v2 11/16] sys/types.h cleanup [PATCH v2 12/16] import gnulib sys/stat.h module [PATCH v2 13/16] remove gdb_stat.h [PATCH v2 14/16] import gnulib sys_wait module [PATCH v2 15/16] conditionally define __WCLONE [PATCH v2 16/16] remove gdb_wait.h [PATCH v2 2/6] remove link.h checks [PATCH v2 3/6] use gdb_string.h in m32c-tdep.c [PATCH v2 4/6] gdb configure updates [PATCH v2 5/6] fix a comment in configure.ac [PATCH v2 6/6] remove unused gdbserver configury [PATCH v2] Events when inferior is modified [PATCH v2] gdb/dwarf2read.c: Sanity check DW_AT_sibling values. [PATCH v2] gdb: fix cygwin check in configure script [PATCH v2] Resurrect gdb-add-index as a contrib script [PATCH v2] S390: Fix TDB regset recognition [PATCH v3 00/13] use gnulib more heavily Re: [PATCH v3 00/17] test suite parallel safety [PATCH v3 01/13] link gdbreplay against gnulib [PATCH v3 02/13] change how list of modules is computed [PATCH v3 03/13] import strstr and strerror modules [PATCH v3 04/13] remove gdb_string.h [PATCH v3 05/13] don't check for string.h or strings.h [PATCH v3 06/13] import gnulib dirent module [PATCH v3 07/13] remove gdb_dirent.h [PATCH v3 08/13] don't check for stddef.h [PATCH v3 09/13] stdlib.h is universal too [PATCH v3 1/1] Fix PR16193 - gdbserver aborts. 
[PATCH v3 10/13] don't check for unistd.h [PATCH v3 11/13] sys/types.h cleanup [PATCH v3 12/13] import gnulib sys/stat.h module [PATCH v3 13/13] remove gdb_stat.h [PATCH v3] Resurrect gdb-add-index as a contrib script Re: [PATCH v4 2/9] add "this" pointers to more target APIs Re: [PATCH v4 3/9] add target method delegation Re: [PATCH v4 7/9] make dprintf.exp pass in always-async mode Re: [PATCH v4 8/9] fix py-finish-breakpoint.exp with always-async Re: [PATCH v4 9/9] enable target-async [PATCH v4] Resurrect gdb-add-index as a contrib script [PATCH v5] Resurrect gdb-add-index as a contrib script RE: [patch v6 18/21] record-btrace: extend unwinder RE: [patch v6 21/21] record-btrace: add (reverse-)stepping support RE: [PATCH V7 0/8] Intel(R) MPX register support [patch v7 00/24] record-btrace: reverse [patch v7 01/24] btrace, linux: fix memory leak when reading branch trace [patch v7 02/24] btrace: uppercase btrace_read_type [patch v7 03/24] gdbarch: add instruction predicate methods [patch v7 04/24] frame: add frame_is_tailcall function [patch v7 05/24] frame: artificial frame id's [patch v7 06/24] btrace: change branch trace data structure [patch v7 07/24] record-btrace: fix insn range in function call history [patch v7 08/24] record-btrace: start counting at one [patch v7 09/24] btrace: increase buffer size [patch v7 10/24] record-btrace: optionally indent function call history [patch v7 11/24] record-btrace: make ranges include begin and end [patch v7 12/24] btrace: add replay position to btrace thread info [patch v7 13/24] target: add ops parameter to to_prepare_to_store method [patch v7 14/24] record-btrace: supply register target methods [patch v7 15/24] frame, backtrace: allow targets to supply a frame unwinder [patch v7 16/24] record-btrace, frame: supply target-specific unwinder [patch v7 17/24] record-btrace: provide xfer_partial target method [patch v7 18/24] record-btrace: add to_wait and to_resume target methods. 
[patch v7 19/24] record-btrace: provide target_find_new_threads method [patch v7 20/24] record-btrace: add record goto target methods [patch v7 21/24] record-btrace: extend unwinder [patch v7 22/24] btrace, gdbserver: read branch trace incrementally [patch v7 23/24] record-btrace: show trace from enable location [patch v7 24/24] record-btrace: add (reverse-)stepping support Re: [PATCH V7 5/8] Add MPX support to gdbserver. Re: [PATCH V7 8/8] Add MPX feature description to GDB manual. [PATCH, doc RFA] Allow CLI and Python conditions to be set on same breakpoint Re: [PATCH, remote] Handle 'k' packet errors gracefully [patch, sim] Fix simulator Makefile [PATCH, testsuite] Prevent warnings due to dummy malloc calls. [PATCH] [commit PR cli/16122] Unify interactivity tests to use input_from_terminal_p Re: [PATCH] [DOC] shell startup files, clarifications and fixes. [PATCH] [PR gdb/15224] Enable "set history save on" by default [PATCH] [PR gdb/16123] Modify GDB testsuite to always disable history saving [PATCH] [remote/gdbserver] Don't lose signals when reconnecting. Re: [PATCH] [SPARC64] Figure out where a longjmp will land [PATCH] Add d_main_name to dlang.c [PATCH] Add MIPS UFR support [PATCH] constify to_detach [PATCH] Debug Methods in GDB Python [PATCH] Delegate to target_ops->beneath to read cache lines [PATCH] Delete interp_exec_p [PATCH] Don't evaluate condition for non-matching thread [PATCH] Don't let two frames with the same id end up in the frame chain. (Re: [PATCH 1/2] avoid infinite loop with bad debuginfo) [PATCH] Eliminate dwarf2_frame_cache recursion (move dwarf2_tailcall_sniffer_first elsewhere) [PATCH] Fix completion for pascal language Re: [patch] fix for checking the command ambiguousness. [PATCH] Fix for PR tdep/15653: Implement SystemTap SDT probe support for AArch64 [PATCH] Fix GDB crash with upstream GCC due to memcpy(NULL, ...) [PATCH] Fix GDB crash with upstream GCC due to qsort(NULL, ...) 
Fwd: Re: [PATCH] Fix gdb.base/shreloc.exp: (msymbol) relocated functions have different addresses fail in cygwin Re: [PATCH] Fix Gold/strip discrepancies for PR 11786 [PATCH] fix grammar oddity in the manual [PATCH] Fix loading libc longjmp probes when no custom get_longjmp_target exists [PATCH] fix multi-arch-exec for parallel mode Re: [PATCH] Fix PR 12702 - gdb can hang waiting for thread group leader (gdbserver) [PATCH] fix PR c++/16117 [PATCH] Fix PR remote/15974 Re: [PATCH] fix PR-12417 [PATCH] gdb.dwarf2/dwzbuildid.exp: Avoid reserved variable name [PATCH] gdb.mi/mi-info-os.exp: Fix cross-debugger testing [PATCH] gdb/arm-tdep.c: Remove "Infinite loop detected" error message. [PATCH] gdb/arm-tdep.c: Use filtered output in arm_print_float_info. [PATCH] gdb/dwarf2read.c: Sanity check DW_AT_sibling values. [PATCH] gdb: fix cygwin check in configure script Re: [PATCH] hardware watchpoints turned off, inferior not yet started Re: [PATCH] Improve MI inferior output check / mi-console.exp [PATCH] include/gdb/section-scripts.h: New file. Re: [PATCH] Let gdbserver doesn't tell GDB it support target-side breakpoint conditions and commands if it doesn't support 'Z' packet [PATCH] Make "backtrace" doesn't print python stack if init python dir get fail [PATCH] make GDB can handle the binary that psymbol table has something wrong [PATCH] Move "types deeply equal" code from py-type.c to gdbtypes.c [PATCH] mt set per-command remote-packets on|off Re: [PATCH] New "make check-headers" rule. [PATCH] New "make check-headers" rule. (was: Re: [RFA/commit 1/3] language.h: Add "symtab.h" #include) [PATCH] off-by-one fix for py-linetable.c Re: [PATCH] PR 15520 - GDB step command crashed on non-stop mode [PATCH] Print entirely unavailable struct/union values as a single <unavailable>. (Re: [PATCH 06/12] Delete value_bits_valid.) 
[PATCH] print summary from "make check" [PATCH] Resurrect gdb-add-index as a contrib script [PATCH] S390: Fix TDB regset recognition Re: [PATCH] sim/arm: Prevent crash when running sim with no binary. Re: [PATCH] sim/arm: Prevent NULL pointer dereference in sim_create_inferior. [PATCH] simplify bpstat_check_breakpoint_conditions Re: [PATCH] single-stepping over unconditional branches with zero offset Re: [PATCH] testsuite/gdb.dwarf2: dw2-case-insensitive.exp: p fuNC_lang fails on arm Re: [PATCH] testsuite/gdb.dwarf2: Fix for dw2-ifort-parameter failure on ARM [PATCH] testsuite: introduce index in varobj child eval. [PATCH] Tighten regexp in gdb.base/setshow.exp [PATCH] update comment in dw2-bad-cfi.S. [PATCH] use error, not internal_error, in dwarf2-frame.c [PATCH] Use gdb_produce_source [PATCH] VFP, SIMD and coprocessor instructions recording for arm*-linux* targets. Re: [PATCH][commit/obvious] Fix argument type on gdbsim_detach prototype. Re: [patch][python] 1/3 Python representation of GDB line tables (Python code) Re: [patch][python] 2/3 Python representation of GDB line tables (Testsuite) Re: [patch][python] 3/3 Python representation of GDB line tables (Documentation) Re: [patch][python] Fix python/14513 Re: [patch][python] Fix python/15747 (provide access to COMPLETE_EXPRESSION [PATCH][RFC] symfile.c:find_separate_debug_file: additional path Fwd: [PATCH]Add symbol whose field 'has_type' has been set to partial symbol table Re: [ping] [PATCH] Skip VDSO when reading SO list (PR 8882) [PING][PATCH] gdb.mi/mi-info-os.exp: Fix cross-debugger testing [pushed] [PATCH v3 1/1] Fix PR16193 - gdbserver aborts. [pushed] [PATCH V7 0/8] Intel(R) MPX register support [pushed] Fix type of not saved registers. (was: Re: [PATCH 2/2] Make "set debug frame 1" output print <not saved> instead of <optimized out>.) 
Re: [pushed] Plug target side conditions and commands leaks [pushed] Plug target side conditions and commands leaks (was: Re: [PATCH] Let gdbserver doesn't tell GDB it support target-side breakpoint conditions and commands if it doesn't support 'Z' packet) Re: [python][patch] Add temporary breakpoint features to Python breakpoints. [RCF 00/11] Visit varobj available children only in MI [RFA 1/2] New GDB/MI command "-info-gdb-mi-command" [RFA 2/2] Add "undefined-command" error code at end of ^error result... [RFA 2/3] New function cli-utils.c:extract_arg_const [RFA GDB/MI] Help determine if GDB/MI command exists or not [RFA/Ada(v2) 1/3] Add command to list Ada exceptions [RFA/Ada(v2) 2/3] Implement GDB/MI equivalent of "info exceptions" CLI command. [RFA/Ada(v2) 3/3] Document "info exceptions" and "-info-ada-exception" new commands. [RFA/commit 1/3] language.h: Add "symtab.h" #include [RFA/commit+doco] Add "language-option" to -list-features [RFA/commit] Fix filestuff.c build error if RLIMIT_NOFILE not defined. [RFA/Python] Fix int() builtin with range type gdb.Value objects. [RFA] crash while re-reading symbols from objfile on ppc-aix. [RFA] Fix c++/14819 (implicit this) [RFA] Fix DW_OP_GNU_regval_type with FP registers Re: [RFA] Fix namespace aliases (c++/7539, c++/10541) [RFA] Fix PR 16201: internal error on a cygwin program linked against a DLL with no .data section [RFA] Rename "read_reg" into "read_addr_from_reg" in struct dwarf_expr_context_funcs Re: [RFC 00/12] Merge value optimized_out and unavailable Re: [RFC 1/6 -V2] Fix display of tabulation character for mingw hosts. Re: [RFC 2/6] Avoid missing char before incomplete sequence in wchar_iterate. [RFC 3/3] GDB/MI: Add new "--language LANG" command option. Re: [RFC 5/6] Handle "set print sevenbit-strings on" in print_wchar Re: [RFC 6/6] Fix remaining failures in gdb.base/printcmds.exp for mingw hosts. 
[RFC/Ada 1/2] Add command to list Ada exceptions [RFC/Ada 2/2] Implement GDB/MI equivalent of "info exceptions" CLI command. [RFC] Add ada-exception-catchpoints to -list-features command output. [RFC] Allow to find parameters in completion for case insensitive settings Re: [RFC] Debug Methods in GDB Python [RFC] New GDB/MI command "-info-gdb-mi-command" [RFC] Python's gdb.FRAME_UNWIND_NULL_ID is no longer used. What to do with it? Re: [RFC] Use Doxygen for internals documentation [RFC][PATCH] GDB->Python API changes in preparation for Guile support Re: C99 variable length array support Re: Extending RSP with vCont;n and vCont;f Fix for bugzilla/16152 Fix for bugzilla/16168 Fix for pr16196: Honor fetch limit for strings of known size FYI: gdbserver crash due to: [pushed] [PATCH V7 0/8] Intel(R) MPX register support Re: gdb.texinfo is getting too big gdb.threads/*.exp failures RE: gdbserver crash due to: [pushed] [PATCH V7 0/8] Intel(R) MPX register support Re: gdbserver, aarch64: Zero out regs in aarch64_linux_set_debug_regs guile scripting for gdb Re: I think permanent breakpoints are fundamentally broken as is Inconsistency between cli and python breakpoints for ignore count tracking New ARI warning Tue Nov 19 01:53:09 UTC 2013 Re: Old config.guess in binutils-gdb git Re: PING: Re: [RFC 00/12] Merge value optimized_out and unavailable PowerPC64 ELFv2 trampoline match Publishing binary interfaces [was Re: [PATCH] Move "types deeply equal" code from py-type.c to gdbtypes.c] pushed: [RFA 2/3] New function cli-utils.c:extract_arg_const pushed: [RFA/Ada(v2) 1/3] Add command to list Ada exceptions pushed: [RFA/commit 1/3] language.h: Add "symtab.h" #include pushed: [RFA/commit+doco] Add "language-option" to -list-features pushed: [RFA/Python] Fix int() builtin with range type gdb.Value objects. pushed: [RFA] crash while re-reading symbols from objfile on ppc-aix. 
pushed: [RFA] Rename "read_reg" into "read_addr_from_reg" in struct dwarf_expr_context_funcs Python API Retention Policy [was Re: [RFC] Python's gdb.FRAME_UNWIND_NULL_ID is no longer used. What to do with it?] Question regarding gdb pkg Re: Regression for gdb.pascal/* [Re: [RFA 4/4] Constify parse_linesepc] Re: Release 2.24 release-related minor questions (post switch to git) Rename gdb.dwarf2/dw2-bad-cfi.* to gdb.dwarf2/dw2-unspecified-ret-addr.*. (was: Re: [PATCH] update comment in dw2-bad-cfi.S.) RFA/Ada (v2) new CLI + GDB/MI commands to list Ada exceptions RFC new CLI + GDB/MI commands to list Ada exceptions Re: Sim hangs on new target at dup_arg_p() in infinite loop. Re: supporting all kinds of partially-<unavailable> enum target_object types
http://www.sourceware.org/ml/gdb-patches/2013-11/subjects.html
Hi,

Using a for loop, I have written a program that displays a "6 times table" multiplication table:

public class TimesTable3_4a {
    public static void main(String[] args) {
        int table = 6;
        for (int row = 1; row <= 12; row++) {
            System.out.println(row + " * " + table + " = " + (table * row));
        }
    }
}

How could I adapt the above program so that instead of a "6 times" table, the user chooses which table is displayed?

Thanks in advance.

The user can give the program info in the following way:

java TimesTable3_4a 6

This would give a 6 to the application. Parameters passed to applications in such a way end up in the String[] args variable. (The order in which they appear is the same as the order in which they were entered on the command line.)

This should send you in the right direction...
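Following the hint in the reply above, here is a minimal sketch of the adapted program. The class name TimesTableArg is illustrative, and the default of 6 is kept for when no argument is supplied:

```java
public class TimesTableArg {
    // The multiplication itself, kept in a helper so it can be checked separately
    static int product(int table, int row) {
        return table * row;
    }

    public static void main(String[] args) {
        int table = 6; // fall back to the 6 times table when no argument is given
        if (args.length > 0) {
            table = Integer.parseInt(args[0]); // e.g. "java TimesTableArg 9"
        }
        for (int row = 1; row <= 12; row++) {
            System.out.println(row + " * " + table + " = " + product(table, row));
        }
    }
}
```

Running "java TimesTableArg 9" prints the 9 times table, while "java TimesTableArg" still prints the 6 times table. Note that Integer.parseInt throws a NumberFormatException if the argument is not a whole number, so a more robust version would catch that.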
http://forums.devx.com/showthread.php?140655-duplicate-entry-problem&goto=nextnewest
We are releasing an update to Windows Azure WebJobs SDK, introduced by Scott Hanselman here.

Download this release

You can use the WebJobs SDK in a console application project. You can install or update to these packages from the NuGet gallery using the NuGet Package Manager Console, like this:

- Install-Package Microsoft.WindowsAzure.Jobs -Pre
- Install-Package Microsoft.WindowsAzure.Jobs.Host -Pre

What is WebJobs SDK?

The WebJobs feature of Windows Azure Web Sites provides a way to run programs such as background tasks alongside your site. Without the WebJobs SDK, connecting and running background tasks requires a lot of complex programming; the SDK takes care of that plumbing for you.

Scenarios

- When the web site needs to get work done, it pushes a message onto a queue. A backend service pulls messages from the queue and does the work. This is a common producer-consumer pattern.
- Your web site creates files (such as log files) and you want to do analysis on them. Or you might want to schedule a task to run weekly to clean up old log files.

Goals of the SDK

- Provide a simple way to write background processing code, with bindings that make Azure Storage queues, blobs, and tables easy to work with.

Features of the SDK

Triggers: functions run in response to events. For example, the following function is triggered whenever a new message arrives on the queue "longqueue". For more details on triggers please see this post.

public static void ProcessQueue([QueueInput("longqueue")] string output)
{
    Console.WriteLine(output);
}

A JobHost object (which lives in Microsoft.WindowsAzure.Jobs.Host) discovers all the functions you have in your program, reads the bindings, listens on the triggers, and invokes the functions. In the following example, you create an instance of JobHost and call RunAndBlock(), which will cause the JobHost to listen for any triggers on any functions that you define in this host.

static void Main(string[] args)
{
    JobHost host = new JobHost();
    host.RunAndBlock();
}

Sample: in the ImageResizeAndWaterMark sample, when a new image blob is written to the "images1-input" container, the SDK will trigger the WaterMark function. WaterMark will process the image and write to the "images2-input" container, which will trigger the Resize function. The Resize function will resize the image and write it to the "images2-output" blob container. The following code shows the WebJob described above.
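To make the two-step pipeline concrete, here is a rough skeleton of what the two functions could look like. Treat it as pseudocode rather than the original sample: the Stream parameter types, the {name} path segments, and the "images1-input" container name are assumptions; only the attribute-binding style (as in the QueueInput example) and the "images2-input"/"images2-output" container names come from the description above.

```csharp
// Skeleton only -- not the original ImageResizeAndWaterMark sample code.
public static void WaterMark(
    [BlobInput("images1-input/{name}")] Stream input,    // assumed input container
    [BlobOutput("images2-input/{name}")] Stream output)  // feeds the Resize trigger
{
    // Read the image from 'input', draw the watermark, write the result to 'output'.
}

public static void Resize(
    [BlobInput("images2-input/{name}")] Stream input,
    [BlobOutput("images2-output/{name}")] Stream output) // final resized image
{
    // Read the watermarked image, resize it, write the result to 'output'.
}
```

Because the SDK triggers a function when a blob appears in its input container, chaining the output container of WaterMark to the input container of Resize is what produces the two-step pipeline.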
For a full working sample, please see the sample here.

When you run the WebJob in Azure, you can view the WebJobs Dashboard by clicking the logs link of the "ImageResizeAndWaterMark" job in the WEBJOBS tab of the Windows Azure Websites portal. Since the Dashboard is a SiteExtension, you can access it by going to the url:. You will need your deployment credentials to access the SiteExtension. For more information on accessing Site Extensions, see the documentation on the Kudu project.

Function execution details

Invoke & Replay

Samples

Samples for WebJobs SDK can be found at:

List of articles on WebJobs and WebJobs SDK

Deploying WebJobs with SDK

If you don't want to use the WebJobs portal page to upload your scripts, you can use FTP, git, or Web Deploy. For more information, see How to deploy Windows Azure WebJobs and Git deploying a .NET console app to Azure using WebJobs.

If you want to deploy your WebJobs along with your Websites, check out the following Visual Studio extension.

Known Issues from 0.1.0-alpha1 to 0.2.0-alpha2

The Dashboard will only work for WebJobs deployed with 0.2.0-alpha2. If you had a WebJob deployed with 0.1.0-alpha1, you will see an error in the Dashboard. To work around this error, please update your WebJob to use the 0.2.0-alpha2 NuGet package and redeploy your WebJob.

Give feedback and get help

Use the tag Azure-WebJobsSDK on StackOverflow.

Join the conversation

Is there a way to ensure only one copy of a web job runs if there are multiple web site instances?

@Brian, you cannot restrict a job to have only one running instance. However, if you have multiple instances of the same job and you are listening for queue messages, only one instance of the job will pick up a specific queue message. The same applies to blobs.

Hello all, great job with WebJobs. Is there any possibility of WebJobs on-premise? Tks!

@Brian there is some support for it. Please see github.com/…/Web-jobs There is no User Interface to set it yet in the Azure portal.
As Victor mentioned, if you are using Queues, then only one instance of the WebJob will pick up a given Queue message.

@Luiz I can take this feedback back to the team.

Hi. I understand the purpose of the SDK, but I would like to know if something like the following is in line with the design, especially the scheduler bit:

class Program
{
    static void Main(string[] args)
    {
        var host = new JobHost();

        // Will run the queue worker trigger on background thread
        host.RunOnBackgroundThread();

        // Will run the scheduler in the main thread
        host.Call(typeof(Workers).GetMethod("Scheduler"));
    }
}

public class Workers
{
    [NoAutomaticTrigger]
    public static void Scheduler(IBinder binder)
    {
        while (true)
        {
            // Do something on a regular basis
            Thread.Sleep(5000);
        }
    }

    public static void QueueWorker(
        [QueueInput("workerqueue")] WorkItem workItem,
        IBinder binder)
    {
        // Do something when triggered
    }
}

If there are better ways to do it, please advise. Thx

@lnaie: A better approach is to use "RunAndBlock" instead of "RunOnBackgroundThread". That way, you don't need the "Scheduler" function because RunAndBlock will keep the process alive.

I've been playing with this as a solution. I really like WebJobs so far. One behavior I noticed though: WebJobs seem to start back up if they were interrupted mid-process. It really became evident when debugging on a local machine and stopping several debug sessions in the middle of the job. The next time I ran a debug session, the WebJob started back up saying it received a message in the queue, even though the queue was empty. Is there a way to see if a WebJob is pending and will restart on the next run via a .NET library?

You can check the status of the WebJob in the dashboard. In your case, since you were debugging, you might have stopped the WebJob midway through the processing, so the SDK would have put the message back on the Queue. The next time you started debugging, since the Queue now has a message, it will trigger the function again.
If you are running tests, you can call the CloudQueue Clear method to clear the queue before you start the test.

Ah yes, it appears that the SDK puts messages back on the queue after a while to retry them. Is there a way to control this behavior?

Currently no. The reason the SDK puts the message back in the Queue is that if the WebJob stopped in the middle of processing, the message did not finish processing, so when the WebJob comes back up the next time, the function will be triggered and the processing can happen again.

Try this: blog.smarx.com/…/deleting-windows-azure-queue-messages-handling-exceptions

If there is a binding error and the job fails, then the message is deleted from the queue. Clearly it shouldn't be dequeued if the job fails; the invocation log shows the job failing, but the message is deleted from the queue. I am getting this error and the message is deleted, so how do I report a fault?

Error while binding parameter #0 'TheClassLibrary.Certificate Certificate': Binding parameters to complex objects (such as 'Certificate') uses Json.NET serialization. 1. Bind the parameter type as 'string' instead of 'Certificate' to get the raw values and avoid JSON deserialization, or 2. Change the queue payload to be valid JSON. The JSON parser failed: Error parsing boolean value. Path '', line 0, position 0.

@John M, Thank you for reporting this issue. We have opened an issue to track this.

Any guidance on the pricing for WebJobs? Obviously this is in Alpha, but it would be worth knowing if architecting a solution that runs every minute is going to hit me in the wallet hard, or whether I should look at different options.

I have heterogeneous objects in my table. How can I access them from the IDictionary implementation? I've tried string as the value type, thinking I could access the raw JSON, but the JobHost throws an error. Thanks, Josh
https://blogs.msdn.microsoft.com/webdev/2014/03/27/announcing-0-2-0-alpha2-preview-of-windows-azure-webjobs-sdk/?replytocom=40564
01 June 2012 09:51 [Source: ICIS news]

By Ong Sheau Ling

Prices headed south soon after the news of an unplanned cracker shutdown by a major naphtha buyer hit the market on Thursday, and naphtha nosedived to its 18-month low.

At 0806 GMT, open-spec naphtha was at $782-809/tonne (€633-655/tonne) CFR (cost and freight) Japan for the second half of July, with July Brent at $107.32/bbl on 31 May and the July crack spread at $26.60/tonne.

"This is a major decision that the FPCC has taken. This shows how bad the downstream demand is," a South Korean cracker operator said.

FPCC, a major buyer, will skip spot naphtha purchases for loading in the second half of July, a source close to the company said.

"Looking at the margins, more crackers may reduce their run rates or even shut the plants. If so, supply will turn even longer," a trader said.

The recent slump in olefins spot prices has squeezed the margins of cracker operators, pressuring them to cut cracker run rates or even shut down. For instance, YNCC has reduced its operations. However, YNCC may still need to buy spot second-half-of-July loading cargoes, depending on the duration of the reduced operations, sources close to the company said.

According to market sources, it is not just the regional naphtha supply in Asia that will turn long; arbitrage volumes from Europe, where prices are also weak, will add to the length.

Consequently, spot discussions by Asian cracker operators were muted.

"Until there is a clearer direction of how margins will perform, end-users will not be ready to buy," another trader said.

"If we do see some price stability in the downstream products, perhaps the buying appetite of the [cracker operators] will renew. However, in the long run, demand is still dependent on the macroeconomic situation," a Singapore-based trader said.

Bearish market sentiment has also led tender premiums to fall below $20/tonne, compared with levels above $20/tonne last week.
State-owned Indian Oil Corp (IOC) has sold by tender 35,000 tonnes of naphtha to PetroChina at a premium of $18.75/tonne.

However, the premium offered by ADNOC on one-year term naphtha supplies for July 2012 to June 2013 was high at $26.00-27.50/tonne, and the company has no intention of reducing its premium, given its limited volumes.

"It is hard to say whether [naphtha] prices will bottom out soon, but looking at the low crack [spread between naphtha and ICE Brent], this low pricing can't last for too long," another Singapore-based trader said.
http://www.icis.com/Articles/2012/06/01/9566141/asia-naphtha-at-18-month-low-on-cracker-shutdown-reduced-ops.html
Tayss wrote:
> Ok, now being sober...
>
> mike420 at ziplip.com wrote:
>
>>Why do you need a macro for that? Why don't you just write
>>
>>def start_window(name) :           # 1
>>    app = wxPySimpleApp()          # 2
>>    frame = MainWindow(None, -1, name)  # 3
>>    frame.Show(True)               # 4
>>    app.MainLoop()                 # 5
>>    return (app, frame)            # 6
>
> Remember that we lose this game if lines #2-5 are executed out of
> order. The system crashes.
<snip>
> Your example above is sensible, because we don't have the power to do
> anything more meaningful. So we're stuck with some trivial function
> that does nothing really important. Boa Constructor, the great Python
> IDE, pops up three windows of different kinds and names on startup,
> but /we/ are stuck with the trivial ability to customize one window's
> name.

This isn't really the case, I think; we just have a different idiom for
doing something more meaningful. The solutions in the wxPython world
that I have seen do something like this:

class Application:
    def __init__(self, name):
        app = wxPySimpleApp()
        self.name = name
        try:
            self.createApplication()
            app.MainLoop()
        except Exception:
            # report on application failure
            # and close down properly
            raise

    def createApplication(self):
        """create your application here"""
        # note, the window name is name
        raise NotImplementedError

So now the usage is:

class MyApp(Application):
    def createApplication(self):
        frame = self.frame = wxFrame(None, -1, self.name)
        frame.Show()

app = MyApp("test")

So there are non-macro solutions to these issues that are equally
expressive in some cases. This is just more verbose than the
corresponding macro solution; I still think that it is quite readable
though.

app.app = application
app.frame = main window

Brian.
https://mail.python.org/pipermail/python-list/2003-October/226545.html
Scikit-learn for Python is a library for machine learning. It has many algorithms for regression, classification, and clustering, including SVMs, gradient boosting, k-means, random forests, and DBSCAN. It is designed to work with NumPy and SciPy in Python.

The scikit-learn project started as scikits.learn, a Google Summer of Code (also known as GSoC) project by David Cournapeau. The "scikit" part of the name marks it as a SciPy toolkit, a third-party extension to SciPy.

Python Scikit-learn

Scikit-learn is written mostly in Python, and some of its core algorithms are written in Cython for better performance. Scikit-learn is used to construct models; since better frameworks are available for the purpose, it is not recommended for reading, manipulating, and summarizing data. It is open source and is licensed under BSD.

Scikit-learn (Sklearn) is the most useful and robust library for machine learning in Python. It provides a selection of efficient tools for machine learning and statistical modeling, including classification, regression, clustering, and dimensionality reduction, via a consistent interface in Python. This library, which is largely written in Python, is built upon NumPy, SciPy, and Matplotlib.

Install Scikit Learn

Scikit-learn assumes that you have Python 2.7 or above running on your computer, with the NumPy (1.8.2 and above) and SciPy (0.13.3 and above) packages installed. Once we have these packages, we can continue with the installation. For pip installation, run the following command in the terminal:

pip install scikit-learn

Then import it in Python:

import sklearn

Scikit Learn Loading Dataset

Let's begin by loading a dataset with which to play. Let's load a straightforward dataset called Iris. It is a flower dataset and includes 150 observations of various measurements of the flower. Using scikit-learn, let's see how to load the dataset.
# Import scikit learn
from sklearn import datasets

# Load data
iris = datasets.load_iris()

# Print shape of data to confirm data is loaded
print(iris.data.shape)

gives us:

(150, 4)

Scikit Learn SVM – Learning and Predicting

Now that we have the data loaded, let's try to learn from it and predict new data. We have to construct an estimator and then call its fit method.

from sklearn import svm
from sklearn import datasets

# Load dataset
iris = datasets.load_iris()
clf = svm.LinearSVC()

# learn from the data
clf.fit(iris.data, iris.target)

# predict for unseen data
clf.predict([[5.0, 3.6, 1.3, 0.25]])

# Parameters of the model can be inspected via the attributes ending with an underscore
print(clf.coef_)

Running this script prints the fitted coefficients (clf.coef_).

Scikit Learn Linear Regression

Creating various models is rather simple using scikit-learn. Let's start with a simple example of regression.

# import the model
from sklearn import linear_model
reg = linear_model.LinearRegression()

# use it to fit data
reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])

# Let's look into the fitted data
print(reg.coef_)

gives us:

[0.5 0.5]

kNN

Let's try a simple classification algorithm.

from sklearn import datasets

# Load dataset
iris = datasets.load_iris()

# Create and fit a nearest-neighbor classifier
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier()
knn.fit(iris.data, iris.target)

# Predict and print the result
result = knn.predict([[0.1, 0.2, 0.3, 0.4]])
print(result)

gives us:

[0]

K-means clustering

This is the simplest clustering algorithm. The set is divided into 'k' clusters and each observation is assigned to a cluster. This is done iteratively until the clusters converge.
from sklearn import cluster, datasets

# load data
iris = datasets.load_iris()

# create clusters for k=3
k = 3
k_means = cluster.KMeans(n_clusters=k)

# fit data
k_means.fit(iris.data)

# print results
print(k_means.labels_[::10])
print(iris.target[::10])

Ending Note

If you liked reading this article and want to read more, continue to follow the site! We have a lot of interesting articles upcoming in the near future. If you are new to any of these concepts, we recommend you take up tutorials concerning these topics.
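One natural extension, not part of the original tutorial: the examples above fit and predict on the same data. A minimal sketch of evaluating a classifier on a held-out test set instead (the split size and random seed here are my own choices):

```python
# Sketch: hold out a test set and measure accuracy (not in the original article)
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()

# Keep 25% of the rows aside; fix the seed so the split is reproducible
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0)

knn = KNeighborsClassifier()
knn.fit(X_train, y_train)

# Accuracy on data the model has never seen
acc = accuracy_score(y_test, knn.predict(X_test))
print(acc)
```

On this split, accuracy typically lands well above 0.9, which is a more honest estimate of performance than scoring on the training data itself.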
https://www.codegigs.app/free-data-science-course/1581-2/
XA datasource with JBM problem
Peris Brodsky, Aug 19, 2008 10:58 AM

Following the advice in the wiki page, I've almost got it working, except for the intended behavior, which I'm not seeing. What I want is for a server component, when a certain event occurs, to send a message. However, if the transaction that is active when the event occurs is rolled back, the send of the message should be rolled back as well. I assume that this is a legitimate goal. The problem is, I am not seeing this behavior. In my test, the originating Tx is rolled back, but the message is still delivered.

Now for the specifics: the server component involved is an entity EJB, and more specifically, the code that is doing the messaging is a custom Interceptor written for the entity. The purpose is to intercept certain mutator method invocations on the bean and to log them to a message queue. However, if the transaction is rolled back, so are the mutations, and so should be the log messages. The code is using the JMS JCA resource adapter named java:/JmsXA to create the JMS connection. Is there anything else I need to do to get the destination enlisted as an XA resource with the EJB container's TransactionManager?

-Peris

1. Re: XA datasource with JBM problem
Andy Taylor, Aug 19, 2008 11:39 AM (in response to Peris Brodsky)

Firstly, what versions are you using? Secondly, if you think it's not working as required, can you provide a test case?

2.
Re: XA datasource with JBM problem
Peris Brodsky, Aug 19, 2008 1:16 PM (in response to Peris Brodsky)

I am on 4.0.2. Here is the Interceptor:

package test;

import org.jboss.invocation.Invocation;
import org.jboss.ejb.Interceptor;
import org.jboss.ejb.Container;
import org.jboss.ejb.EntityContainer;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.jms.*;
import javax.ejb.EJBObject;
import javax.transaction.TransactionManager;
import java.lang.reflect.Method;
import java.util.Arrays;

public class TestInterceptor implements Interceptor {

    public void setNext(Interceptor next) {
        this.next = next;
    }

    public Interceptor getNext() {
        return next;
    }

    public void setContainer(Container container) {
        this.container = (EntityContainer) container;
    }

    public void create() throws Exception {
        name = container.getBeanMetaData().getEjbName();
        if (!name.equals("Foo")) // only operate on entity Foo
            return;
        Context ctx = new InitialContext();
        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("java:/JmsXA");
        conn = qcf.createQueueConnection();
        queue = (Queue) ctx.lookup("queue/DDSListenerQ");
        session = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
        conn.start();
    }

    public Object invokeHome(Invocation invocation) throws Exception {
        Object returnValue = getNext().invokeHome(invocation);
        return returnValue;
    }

    public Object invoke(Invocation invocation) throws Exception {
        Object returnValue = getNext().invoke(invocation);
        if (!name.equals("Foo"))
            return returnValue;
        Method method = invocation.getMethod();
        if (method != null) {
            String mname = method.getName();
            if (mname.startsWith("set")) {
                QueueSender sender = session.createSender(queue);
                TextMessage tm = session.createTextMessage(
                        invocation.getId() + "," + mname.substring(3) + "," + invocation.getArguments()[0]);
                sender.send(tm);
                sender.close();
            }
        }
        return returnValue;
    }

    public void start() throws Exception {
    }

    public void stop() {
    }

    public void destroy() {
        //session.close();
        //conn.close();
    }

    private Interceptor next;
    private EntityContainer container;
    private String name;
    private QueueConnection conn;
    private QueueSession session;
    private Queue queue;
}

Here is the remote client test driver snippet (there is an EJB wrapper framework in place; the Context object wraps UserTransaction, FooHomeRemote, etc.):

Context ctx = ContextRemoteImpl.instance();
ctx.begin();
Foo foo = ctx.FooFactory().findByOID(333l);
foo.setBar("Bat");
ctx.rollback();

Finally, here is an MDB that listens to the Queue, and receives a message even though the client rolls back:

package test;

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.ejb.EJBException;
import javax.jms.MessageListener;
import javax.jms.Message;

public class TestMDBImpl implements MessageDrivenBean, MessageListener {

    public void ejbCreate() {
    }

    public void setMessageDrivenContext(MessageDrivenContext mdc) {
    }

    public void onMessage(Message message) {
        System.err.println("GOT MESSAGE:" + message);
    }

    public void ejbRemove() throws EJBException {
    }
}

3. Re: XA datasource with JBM problem
Andy Taylor, Aug 20, 2008 4:01 AM (in response to Peris Brodsky)

Which version of JBoss Messaging are you using? Can you try with the latest versions (JBoss 4.2.2 with JBM 1.4) to make sure it is still a problem?

4. Re: XA datasource with JBM problem
Tim Fox, Aug 20, 2008 4:38 AM (in response to Peris Brodsky)

Looks like you're creating a non-transacted session. There are lots of examples around of doing this correctly. I suggest you find one of those and copy it.

5. Re: XA datasource with JBM problem
Peris Brodsky, Aug 20, 2008 9:16 AM (in response to Peris Brodsky)

So can I take it that what I am trying to do should be doable?

ataylor: I am using what comes bundled with 4.0.2. I can try moving to 4.2.2 just to know, if you're saying that this pattern is not supported under 4.0.2, but 4.0.2 is still our production environment.

timfox: The example I used came from the 4.0.2 admin guide. Since my last post, I noticed the transacted-session problem, said "aha!", and changed the first argument to createQueueSession() to true. This had no effect :{(>

Where might I find working examples of this, please? In the forum? I've read the admin guide, the wiki, the users guide, and failed. I didn't really expect to find a working example of an Interceptor, so my first question is "should this work?". The JBMXADataSource wiki entry leads me to believe that it can; it just lacks a working example.

6. Re: XA datasource with JBM problem
Tim Fox, Aug 20, 2008 9:25 AM (in response to Peris Brodsky)

Ah, I just noticed you're using an interceptor; I thought you were using an EJB, which is an extremely common pattern. The JMS JCA resource adapter requires a managed environment (read: servlet or EJB), otherwise transactions won't get enlisted automatically. But this is app server stuff and not part of JBoss Messaging. I've never seen anyone do what you're doing before, so no idea whether it would work or not. Best to ask the JCA/AS guys. As I say, none of this is part of JBM.

7. Re: XA datasource with JBM problem
Peris Brodsky, Aug 20, 2008 9:50 AM (in response to Peris Brodsky)

So you're saying the enlistment of the JMS Session as an XA resource has nothing to do with JBM (or is completely orthogonal to it)? If not, is there anything explicit I can do to make this work?

Just curious: the connection factory name java:/JmsXA I'm using returns a JmsConnectionFactory. Snooping the source, I noticed the class JmsManagedConnectionFactory. What would be the purpose of this?

8. Re: XA datasource with JBM problem
Tim Fox, Aug 20, 2008 9:55 AM (in response to Peris Brodsky)

"perisb" wrote:
So you're saying the enlistment of the JMS Session as an XA resource has nothing to do with JBM (or is completely orthogonal to it)?

Yes, the resource enlistment is done by the JCA adapter, not JBM.

"perisb" wrote:
Just curious: the connection factory name java:/JmsXA I'm using returns a JmsConnectionFactory. Snooping the source, I noticed the class JmsManagedConnectionFactory. What would be the purpose of this?

That's a JCA resource adapter. The JCA spec will explain what all those things mean. The JCA adapter is not part of JBM; it's handled by the JCA team (which is part of the AS).

10. Re: XA datasource with JBM problem
Tim Fox, Aug 20, 2008 1:35 PM (in response to Peris Brodsky)

"perisb" wrote:
If not, is there anything explicit I can do to make this work?

If you can get a reference to the transaction, then you can always enlist the resources yourself. I.e. don't use JmsXA; use XAConnectionFactory, create an XAConnection, create an XASession, and enlist the XAResource in the JTA tx manually. The tx should be associated with the current thread. Have a look at the JTA API to see how to do this.
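To make the last suggestion concrete, here is a rough sketch of manual enlistment (illustrative only, not from the thread; the JNDI names "java:/XAConnectionFactory" and "java:/TransactionManager" are assumptions for a JBoss-style setup, error handling is omitted, and it cannot run standalone without a JMS provider and an active JTA transaction):

```java
// Sketch of manual XAResource enlistment, per the suggestion above.
import javax.jms.*;
import javax.naming.InitialContext;
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;
import javax.transaction.xa.XAResource;

public class ManualEnlistExample {
    public void sendInTx(String text) throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI names; use whatever your server actually binds
        XAConnectionFactory xacf =
            (XAConnectionFactory) ctx.lookup("java:/XAConnectionFactory");
        TransactionManager tm =
            (TransactionManager) ctx.lookup("java:/TransactionManager");

        XAConnection conn = xacf.createXAConnection();
        try {
            XASession xaSession = conn.createXASession();

            // Enlist the JMS XAResource in the tx bound to this thread
            Transaction tx = tm.getTransaction();
            tx.enlistResource(xaSession.getXAResource());

            Session session = xaSession.getSession();
            Queue queue = (Queue) ctx.lookup("queue/DDSListenerQ");
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));

            // Delist before closing so the tx manager can complete the branch
            tx.delistResource(xaSession.getXAResource(), XAResource.TMSUCCESS);
        } finally {
            conn.close();
        }
    }
}
```

With this in place, the send participates in the caller's JTA transaction, so a rollback of that transaction also rolls back the message.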
https://developer.jboss.org/thread/129094
Let's dive straight into the code. As we will see, this GameObject will have a lot in common with the GameObject class from the previous project. The most significant difference will be that this latest GameObject will of course draw itself using a handle to the GL program, primitive (vertex) data from a child class, and the viewport matrix contained in viewportMatrix.

Create a new class, call it GameObject, and enter these import statements, noting again that some of them are static:

import android.graphics.PointF;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import static android.opengl.GLES20.GL_FLOAT;
import static android.opengl.GLES20.GL_LINES;
...
https://www.oreilly.com/library/view/android-game-programming/9781785280122/ch09s04.html
Flutter + Source Generation: The birth of a Magical Widget [Part 2]

In the first article, I shared with you my experience as a Flutter developer, and what carried me into creating the magical widget. And at the end of the article, I showed you the magic that this widget can perform. With just a simple enum definition, complex UI interactions and state management could be automatically orchestrated.

No need to read the first article in order to understand this one, because this article is the main one that discusses the technicalities of the magical widget and how to use it. However, I urge you to go back and read the last paragraph of the first article entitled Magical Widget. This will show you what this package is capable of, and hopefully will make you read this article with more enthusiasm.

The GitHub repository of the Magical Widget package can be found here. The repository also contains a detailed README file and the code for the example discussed here.

Prerequisites

The magical widget package depends on source_gen to automatically generate code. It also implements the BLoC pattern, so it needs the rxdart package to operate properly. Therefore, after creating your Flutter project, you need to include the following dependencies in your pubspec.yaml file:

dependencies:
  rxdart: ^0.20.0
  magical_widget: 1.0.1
  ...
dev_dependencies:
  magical_widget_generator: 1.0.1
  build_runner: ^1.1.2

The Annotation

With dependencies out of the way, we can discuss the concept behind the annotation. The magical widget extends the GeneratorForAnnotation class provided by the source_gen package. This means that the build_runner will generate my custom code whenever it encounters a specific annotation. In our case, the annotation is called Alakazam, and it should be used to annotate an enum. If you try to annotate something else, the code will not generate and Flutter will complain. This annotation takes an optional argument called withProvider; its default value is true.
This argument, if set to true, will generate the code for an inherited widget that will help to pass the generated BLoC through the widget tree.

The elements of the annotated enum need to follow a specific format, which is: element_name$element_type$default_value. So basically, three fields exist, and they are separated by a $ sign. Now, the only required field is the first one, element_name. If the other two fields are omitted, then default settings will apply. Supported types are String, bool, and num. If the element_type is not present, then the type will be defaulted to String. The default_value, if not specified, depends on the type of the element in question:

- If it is a String, the default value is an empty string
- If it is a bool, the default value is false
- If it is a num, the default value is 0

To start demonstrating, we will structure our project. Usually, I like to create a src folder inside the lib folder, and inside src I create different folders to keep my code logically separated. For our toy example, I have created this structure:

The blocs folder will contain our bloc dart files. The pages folder will contain the page dart files. And clearly, the widgets folder will contain the UI widgets of my project. My main.dart file contains nothing other than a call to a page in the pages folder. The code of main.dart is:

import 'package:example/src/pages/mypage.dart';
import 'package:flutter/material.dart';

void main() => runApp(ExampleApp());

class ExampleApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        body: MyPage(),
      ),
    );
  }
}

Now let us go back to the annotation and the enum. Usually, you want to create your file in the blocs folder. So let us create one that is called first_page_bloc.dart. This file will contain the enum. The enum should contain controls to manipulate your UI widget.
These controls are specific to your app; you can add as many controls as you want, and the code will be correctly generated. The file will look something like:

import 'package:magical_widget/magical_widget.dart';
import 'package:flutter/material.dart';
import 'package:rxdart/rxdart.dart';
import 'package:quiver/iterables.dart';

part 'first_page_bloc.g.dart';

@Alakazam() // The default: @Alakazam(withProvider = true)
enum firstPageControls {
  enableFirstBtn$bool$true,
  enableSecondBtn$bool, // this defaults to enableSecondBtn$bool$false
  txtField1Input, // this defaults to txtField1Input$String$
  txtField2Input$String$Magic
}

For our toy example, I only want these controls, but I believe that you are already familiar with the idea here. It all depends on your app logic. If you are a bit overwhelmed, keep going and it will all make sense later on.

You may have noticed the imports; they are all required in order for the generated code to properly work. However, if you set withProvider to false, then no inherited widget will be created, and it will no longer be necessary to import material.dart. The part 'your_file_name.g.dart' directive is required for the build_runner to properly generate the code; if you forget it, don't expect to see any generated code. Ignore the error when you write this part syntax; it will go away when you generate for the first time.

Now, take a deep look at what we did so far. Can you see how simple it is? Our code is just an enum with elements following a specific syntax. OK, that is all you need to do. That is literally all that is required on your behalf in order to have a proper BLoC implementation that will work for you as you want it to.
Generating the Code

Now that we have our annotated enum, we can generate the code by running this command in the terminal (the working directory should be the repository of our Flutter project):

flutter packages pub run build_runner build

This command will trigger a process that will search for the Alakazam annotation in our code, then create the generated file where the generated code sits, and finally terminate. So when you make another change to the enum, you need to trigger the process again using this method. Another command that could generate the code is:

flutter packages pub run build_runner watch

This command will trigger a process that does just as the one above; however, it will not terminate when it finishes. Rather, it will sit there listening to any new changes you make to the annotated enums in your project, and then it will reflect the changes directly. So basically you run this command one time regardless of the changes you make, unlike the previous command.

After the generation, we will have another dart file, with the .g.dart extension, next to our original one.

The Generated Code

Before discussing the generated code, just a few words about BLoC for those who are new to it. The BLoC pattern is a reactive programming pattern that will let you handle events and interactions in a 100% reactive and asynchronous manner. I can't really teach the BLoC pattern or its fundamentals here, but for what it matters, when you are using the magical widget package, you won't care much. The BLoC code will be automatically generated, and you only need to know how to employ it in your widgets (usually through a StreamBuilder).

In the simplest form, you can think of a BLoC as a stream of events. Observers (like your widgets) listen to this stream, and whenever a new event is added to the stream, the observers are notified and they can react accordingly and asynchronously. Just this bit about BLoC is enough to use the magical widget.
Now, if you go to the generated file, you will see that it first creates an enum. The name of the enum is the same as your original enum but prefixed with MAGICAL_. This generated enum will be used later on to tell the package what control you want to change.

enum MAGICAL_firstPageControls {
  txtField1Input,
  enableFirstBtn,
  enableSecondBtn,
  txtField2Input,
}

As you can see, the generated enum is deduced directly from the enum we entered at the beginning. It has the same elements but without the type and value information.

Other than the enum, you will always find a class called MagicalController. This class is the type of the events that will fly through the stream. The generated code for this class contains a lot of boilerplate, and you will not need to explicitly use this class. You will only reference it when you use the StreamBuilder widget, and you can use its empty constructor MagicalController() if you ever want to employ the initialData property of the StreamBuilder. Basically, a StreamBuilder is the main Flutter widget that you will employ whenever you want to use the BLoC pattern for state management, and the initialData property is used to initialize your UI widgets that depend on the stream, in case the stream is still empty.

Next, we have the main class, which is MagicalBloc. This class contains the BehaviorSubject that you will use to manipulate your UI widgets asynchronously. If you don't know what a behavior subject is, don't worry; with the magical widget package it doesn't really matter. You can just think of it as the stream that will contain our MagicalController events. In the MagicalBloc class, there are two properties and two methods that you will want to use.

- Property 1 is called magicalValue, and it gives you access to the current value of the stream
- Property 2 is called magicalStream, and it lets you reference the stream. You will only want to use this property to fill the stream property of the StreamBuilder.
- Method 1 is called changeUIElement(value, control). Through this method, you can change any control you initially created in your enum. The controls are basically the elements of the generated enum that is prefixed with MAGICAL_
- Method 2 is called changeUIElements(values, controls). It is the same as the method above, but it lets you submit a list of values and controls to change your controls in batches. The size of the values list needs to equal the size of the controls list, otherwise an exception will be thrown.

The two methods above are the most important API of the magical widget package. These will let you change anything you want by using the same method over and over. And since we are using the BLoC pattern, the changes will be automatically and asynchronously reflected in your UI widgets. It is just like magic 😉.

At this point, you are ready to take off; the BLoC class is available, and you can just instantiate it and use it as you wish in your widgets. Before showing the example code, we need to cover one last case. Remember the withProvider argument? If it is set to false, then no further code will be generated. However, if it is set to true, then an inherited widget will be generated for you. The inherited widget is called MagicalWidget, and it contains a MagicalBloc called magicalBloc as an instance variable. The MagicalWidget is created for your convenience so you can easily propagate the magicalBloc through the widget tree. You can always get this BLoC by calling the static of() method of the inherited widget class.

Take a look back at the enum that we just wrote; it is only a simple enum, yet it took us many lines to discuss what it generated. So, we agree it is simple, but it is powerful. Now, we will continue the example to demonstrate this power.
In the pages folder, I only have one dart file, called mypage.dart; it contains the following:

import 'package:example/src/widgets/Button_one.dart';
import 'package:example/src/widgets/button_two.dart';
import 'package:example/src/widgets/txt_field_one.dart';
import 'package:example/src/widgets/txt_field_two.dart';
import 'package:flutter/material.dart';
import 'package:example/src/blocs/first_page_bloc.dart' as first_bloc;

class MyPage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return first_bloc.MagicalWidget(
      child: Column(
        mainAxisAlignment: MainAxisAlignment.spaceEvenly,
        crossAxisAlignment: CrossAxisAlignment.center,
        children: [
          BtnOneWidget(),
          BtnTwoWidget(),
          TxtFieldOneWidget(),
          TxtFieldTwoWidget(),
        ],
      ),
    );
  }
}

This file contains a column, which in turn contains the different widgets. These widgets exist in the widgets folder as shown in the picture. The main thing to notice in this file is that we wrapped the column with the automatically created MagicalWidget. Now all our widgets within this tree will have access to the MagicalBloc provided by this MagicalWidget, as we will see next.

Also pay attention to the import of first_page_bloc.dart; I gave it an alias, and this is a good practice to follow. The magical widget package will always create classes with the same names (they are MagicalController, MagicalBloc, and MagicalWidget), so aliases are the only way to differentiate between them in case you imported more than one magical widget into the same file.

Now we develop our application logic. Going back to our enum definition, we want button one to be enabled by default, button two to be disabled, text field one to have an empty string, and text field two to have the word Magic by default.
We also want:

- When we click on button one, we enable button two
- When we click on button two, we change the text of input field one
- The text of button two is what we write in input field two
- When we write enable in text field one, button one is enabled, but if we write disable then it is disabled

These are just dummy interactions to demonstrate the package. The state and the interactions will depend on your specific app. Even though these are dummy and simple, good luck writing the code from scratch. If you use setState() your code will easily become gibberish, and if you use any other state management technique and want to write everything from the beginning, then you have a lot to do. Let's use the magical widget in our case, and let it do the trick. Remember, we only wrote a simple enum for our BLoC implementation, nothing else.

In Flutter, whenever we want to declare our widget as a listener to a stream, we use the StreamBuilder widget; this way, if something changes in the stream, our widget will be updated. So let's use StreamBuilder to declare our widgets as listeners on specific controls, and let's change these controls as we stated above.

Look at the code of button one:

First, as good practice states, I imported first_page_bloc.dart with an alias called first_bloc. Second, in the overridden build method, we get the MagicalBloc instance that is provided by the MagicalWidget. You almost always want to do this to get a reference to this instance. We can do this because we wrapped our main column widget with a MagicalWidget in mypage.dart; you can recheck it above. Next, we want to return a button, but this button is enabled or disabled according to our enableFirstBtn control. So this button is dependent on the stream and therefore we wrap it with a StreamBuilder. The generic type provided to this builder, as you can see, is first_bloc.MagicalController. This tells the builder that the events in the stream are of this type.
This is done on this line: return StreamBuilder<first_bloc.MagicalController>(...);. You need to provide a stream and a builder property for the StreamBuilder; initialData is optional. However, you can use initialData and call an empty MagicalController constructor to initialize the state of your widgets. The stream property is always bloc.magicalStream; we are just referencing the stream of MagicalController. Then your custom logic is written in the builder property. You need to provide a function to this property. This function takes two parameters (context, snapshot) and returns a widget. context is the context of the page, and snapshot gives you access to the current event in the stream through its data property (so snapshot.data will return the current MagicalController in the stream). In our code, we are getting the enableFirstBtn field from the MagicalController event, and we are using it on the onPressed property of the RaisedButton. If enableFirstBtn is true, then we assign _onPressedFirstBtn to onPressed; otherwise we assign it null so the button will be disabled. The _onPressedFirstBtn callback, as you can see, sets the enableSecondBtn control to true, using the changeUIElement method. That's it: now any change in enableFirstBtn will update the state of this widget immediately, because it is wrapped in a StreamBuilder and the BLoC pattern is already implemented for you on the other side. No need to add any other code. I will add the code for the other widgets, and let you understand them. I bet it is not that hard right now. It is just a logical flow of events, as stated by our requirements above. The second button depends on the stream to set it to enabled or disabled, but also to change its shown text. When button two is pressed, it changes the text of input field one. The code for the first text field is shown above. Basically, its text depends on the control txtField1Input. This control is changed in button_two.dart when we press the button.
Then we test the entered text here, and if it is 'disable' then we disable button one, else if it is 'enable' then we enable button one. I am also showing how to use changeUIElements to change more than one control at a time. I didn't wrap the text field with a StreamBuilder, because its state does not depend on changes coming from interactions with other widgets. And if you noticed, I used the magicValue variable to reference the current value of the stream; this is only useful when we first build the widget, to get the default value, which is 'Magic' as stated by our original enum. Now if we run the app, as expected we will have this screen: You can see that everything is as expected. Every widget has the default value that we specified in our original enum. The first button is enabled by default. The second one is disabled and its text equals the content of the second text field, which is 'Magic'. The first text field is empty, and as expected, the second text field is defaulted to 'Magic'. Now, if you implemented the code all along, try to play with these widgets on your device or emulator and see if the interactions are as expected. Check the GIF below to see these interactions in action. I hope this article gave you an idea about what you could do with the magical widget package. With just a simple enum, you can now manipulate your UI widgets and manage their states with ease. The number of controls in the enum depends on the complexity of your app, but you should add as many as you need; don't use one control for two things, for example; after all, you are not writing the code yourself. The type of interactions that you could achieve is really limitless; it is up to you what controls to include and how to use them. At the end, I want to say that this package saved me a lot of time developing my app, and it will save you time too.
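For reference, putting the earlier description of button one together, its widget might look roughly like this. This is a sketch reconstructing the snippet the article refers to: the call used to obtain the bloc from the MagicalWidget, the ControlType enum name, and the argument order of changeUIElement are assumptions based on the prose, not the package's documented API:

```dart
import 'package:flutter/material.dart';
import 'package:example/src/blocs/first_page_bloc.dart' as first_bloc;

class BtnOneWidget extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    // Get the MagicalBloc instance provided by the wrapping MagicalWidget
    // (retrieval call assumed; see note above).
    final first_bloc.MagicalBloc bloc = first_bloc.MagicalWidget.of(context);
    return StreamBuilder<first_bloc.MagicalController>(
      stream: bloc.magicalStream,
      builder: (context, snapshot) {
        final enabled = snapshot.data?.enableFirstBtn ?? true;
        return RaisedButton(
          child: Text('Button One'),
          // Assigning null to onPressed disables the button.
          onPressed: enabled ? () => _onPressedFirstBtn(bloc) : null,
        );
      },
    );
  }

  void _onPressedFirstBtn(first_bloc.MagicalBloc bloc) {
    // Clicking button one enables button two.
    bloc.changeUIElement(true, first_bloc.ControlType.enableSecondBtn);
  }
}
```

The other widgets follow the same shape: wrap whatever depends on a control in a StreamBuilder, read the control from snapshot.data, and push changes back through changeUIElement.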
And if you didn’t see the first article, then go back and just check the last paragraph, to see how much a simple enum could offer you in a real app. If you missed part one, you can read it here:
https://medium.com/flutter-community/flutter-source-generation-971a4144a2ac?source=rss----86fb29d7cc6a---4
This patch renames AffineParallelNormalize to AffineLoopNormalize to make it more generic and be able to hold more loop normalization transformations in the future for affine.for and affine.parallel ops. Eventually, it could also be extended to support scf.for and scf.parallel. As a starting point for affine.for, the patch also adds support for removing single iteration affine.for ops to the pass.

This needs to be defined in the mlir namespace (this was a bug I introduced originally). This also needs to be defined in the mlir namespace: void mlir::normalizeAffineFor(...

This should only be defined in the mlir namespace if it was declared in it, which does not seem to be the case. For functions scoped in a translation unit, MLIR uses static instead of anonymous namespaces.

Thanks for the review! Ok, I'll fix it before committing it. Ok, I'll make it static before committing. We can make it public once it implements all the functionality.

I'd prefer if this was part of the public interface so that I can use it in downstream projects. Shouldn't this then be in the mlir namespace? I think if you want to use this as a utility you probably want to invoke promoteIfSingleIteration directly, since it's the only thing this method is doing for now. If we make normalizeAffineFor public when it's actually not normalizing the loop bounds, it's going to be a bit confusing for external users, I think.
https://reviews.llvm.org/D90267
Adding Instance Properties

Base Example

There may be data/utilities you'd like to use in many components, but you don't want to pollute the global scope. In these cases, you can make them available to each Vue instance by defining them on the prototype:

Vue.prototype.$appName = 'My App'

Now $appName is available on all Vue instances, even before creation. If we run:

new Vue({
  beforeCreate: function() {
    console.log(this.$appName)
  }
})

Then "My App" will be logged to the console!

The Importance of Scoping Instance Properties

You may be wondering: "Why does appName start with $? Is that important? What does it do?"

No magic is happening here. $ is a convention Vue uses for properties that are available to all instances. This avoids conflicts with any defined data, computed properties, or methods.

"Conflicts? What do you mean?"

Another great question! If you set:

Vue.prototype.appName = 'My App'

Then what would you expect to be logged below?

new Vue({
  data: {
    // Uh oh - appName is *also* the name of the
    // instance property we defined!
    appName: 'The name of some other app'
  },
  beforeCreate: function() {
    console.log(this.appName)
  },
  created: function() {
    console.log(this.appName)
  }
})

It would be "My App", then "The name of some other app", because this.appName is overwritten (sort of) by data when the instance is created. We scope instance properties with $ to avoid this. You can even use your own convention if you'd like, such as $_appName or ΩappName, to prevent even conflicts with plugins or future features.

Real-World Example: Replacing Vue Resource with Axios

Let's say you're replacing the now-retired Vue Resource. You really enjoyed accessing request methods through this.$http and you want to do the same thing with Axios instead.
All you have to do is include axios in your project:

<script src=""></script>

<div id="app">
  <ul>
    <li v-for="user in users">{{ user.name }}</li>
  </ul>
</div>

Alias axios to Vue.prototype.$http:

Vue.prototype.$http = axios

Then you'll be able to use methods like this.$http.get in any Vue instance:

new Vue({
  el: '#app',
  data: {
    users: []
  },
  created() {
    var vm = this
    this.$http
      .get('')
      .then(function(response) {
        vm.users = response.data
      })
  }
})

The Context of Prototype Methods

In case you're not aware, methods added to a prototype in JavaScript gain the context of the instance. That means they can use this to access data, computed properties, methods, or anything else defined on the instance.

Let's take advantage of this in a $reverseText method:

Vue.prototype.$reverseText = function(propertyName) {
  this[propertyName] = this[propertyName]
    .split('')
    .reverse()
    .join('')
}

new Vue({
  data: {
    message: 'Hello'
  },
  created: function() {
    console.log(this.message) // => "Hello"
    this.$reverseText('message')
    console.log(this.message) // => "olleH"
  }
})

Note that the context binding will not work if you use an ES6/2015 arrow function, as they implicitly bind to their parent scope. That means the arrow function version:

Vue.prototype.$reverseText = propertyName => {
  this[propertyName] = this[propertyName]
    .split('')
    .reverse()
    .join('')
}

Would throw an error:

Uncaught TypeError: Cannot read property 'split' of undefined

When To Avoid This Pattern

As long as you're vigilant in scoping prototype properties, using this pattern is quite safe - as in, unlikely to produce bugs. However, it can sometimes cause confusion with other developers. They might see this.$http, for example, and think, "Oh, I didn't know about this Vue feature!" Then they move to a different project and are confused when this.$http is undefined. Or, maybe they want to Google how to do something, but can't find results because they don't realize they're actually using Axios under an alias.
The convenience comes at the cost of explicitness. When looking at a component, it's impossible to tell where $http came from. Vue itself? A plugin? A coworker? So what are the alternatives?

Alternative Patterns

When Not Using a Module System

In applications with no module system (e.g. via Webpack or Browserify), there's a pattern that's often used with any JavaScript-enhanced frontend: a global App object. If what you want to add has nothing to do with Vue specifically, this may be a good alternative to reach for. Here's an example:

var App = Object.freeze({
  name: 'My App',
  version: '2.1.4',
  helpers: {
    // This is a purely functional version of
    // the $reverseText method we saw earlier
    reverseText: function(text) {
      return text
        .split('')
        .reverse()
        .join('')
    }
  }
})

If you raised an eyebrow at Object.freeze, what it does is prevent the object from being changed in the future. This essentially makes all its properties constants, protecting you from future state bugs.

Now the source of these shared properties is more obvious: there's an App object defined somewhere in the app. To find it, developers can run a project-wide search. Another advantage is that App can now be used anywhere in your code, whether it's Vue-related or not. That includes attaching values directly to instance options, rather than having to enter a function to access properties on this:

new Vue({
  data: {
    appVersion: App.version
  },
  methods: {
    reverseText: App.helpers.reverseText
  }
})

When Using a Module System

When you have access to a module system, you can easily organize shared code into modules, then require/import those modules wherever they're needed. This is the epitome of explicitness, because in each file you gain a list of dependencies. You know exactly where each one came from. While certainly more verbose, this approach is definitely the most maintainable, especially when working with other developers and/or building a large app.
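As an illustration of that last point, the reverseText helper from the App object could live in its own module and be imported explicitly wherever it is used (a sketch using CommonJS-style modules; the file layout is an assumption, not part of the original cookbook):

```javascript
// helpers.js - shared, Vue-agnostic utilities live in their own module.
function reverseText(text) {
  return text
    .split('')
    .reverse()
    .join('');
}

module.exports = { reverseText };

// In a component file you would then import it explicitly, e.g.:
//   const { reverseText } = require('./helpers')
// so anyone reading the component knows exactly where it came from.
```

Each consuming file now declares its dependency at the top, which is the explicitness the cookbook is arguing for.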
http://semantic-portal.net/vue-cookbook-adding-instance-properties
Assignment: Read a micro-controller data sheet. Program your board to do something, with as many different programming languages and programming environments as possible. For extra credit experiment with other architectures.

There is often some confusion between these two, which I will try to clarify: A microprocessor is an integrated circuit or "computer chip" which typically contains the central processing unit (CPU) or "brain" of the laptops, tablets and smartphones we all use. Typically these chips rely on external Random Access Memory (RAM), Read-Only Memory (ROM), and other peripherals that work together as part of a larger computer system. A microcontroller (or MCU, for microcontroller unit) is a simpler computer, which contains a CPU, a fixed amount of RAM, ROM and other peripherals all embedded onto a single chip. It's basically a micro computer on a single chip. While MCUs vary in memory and processing speed, these are typically programmed to execute specific, less complex tasks. Here is the ATtiny44 (enlarged, it actually is teeny!) which we will be programming this week: Most microcontrollers are based on the Harvard architecture, where program memory and data memory are kept separate, while microprocessors are based on the von Neumann model, where program and data are stored in the same memory module. A Harvard architecture generally looks like this: As we will be learning to program Atmel's AVR family of micro-controllers, these use the Harvard architecture with RISC (Reduced Instruction Set Computer) because it is better for real-time performance. RISC-based systems generally have lower cycles per instruction (CPI) than a complex instruction set computer. AVR (which is now owned by Microchip) is only one of many families of processors, but there are a few reasons why we are using the AVR family for FabAcademy: Here is the ATtiny44 pin layout: The ATtiny44 is a low-power CMOS 8-bit micro-controller based on the AVR enhanced RISC architecture.
These tiny little brains are actually quite powerful, and as Neil says none of the software in the world will make any sense until we understand how the hardware works. So let's dissect the ATtiny44. First of all some basic specs. The ATtiny44 provides:

- 2K/4K bytes of In-System Programmable Flash
- 128/256 bytes EEPROM
- 128/256 bytes SRAM
- 12 general purpose Input/Output lines
- 32 general purpose working registers. All 32 registers are directly connected to the Arithmetic Logic Unit (ALU), allowing two independent registers to be accessed in one single instruction executed in one clock cycle.
- One 8-Bit Timer/Counter with Two Pulse Width Modulation Channels. The timer/counter allows accurate program execution timing, event management, and wave generation. It has Pulse Width Modulation support. Chapter 11
- One 16-Bit Timer/Counter with Two Pulse Width Modulation Channels. Chapter 12
- One 10-bit Analog to Digital Converter. It converts an analog input voltage to a 10-bit digital value. The minimum value represents GND and the maximum value represents the reference voltage. The voltage reference can be selected between the VCC supply, a specific pin or an internal 1.1V voltage reference. Chapter 16
- One Analog Comparator. It compares two analog input voltages and outputs a signal level indicating which of the inputs is greater or lesser. Chapter 15
- A Universal Serial Interface for I2C communications. It provides the basic hardware resources needed for serial communication, allowing significantly higher transfer rates and less code than software solutions. Chapter 14

The AVR architecture has two main memory spaces, the Data memory and the Program memory. In addition, the ATtiny44 features an EEPROM memory for data storage. Data Memory (Static and Dynamic RAM): SRAM memory is used for storing the data that is processed during run time (including also the registers); this is volatile memory. Flash memory is where your program is stored; it is non-volatile.
Endurance: 10,000 Write/Erase Cycles. EEPROM memory can be used for storing non-volatile data that is changeable during run-time (for example: setting values). Endurance: 100,000 Write/Erase Cycles.

So in a nutshell we have: A/D: reads in analogue voltages. Comparator: compares voltages. D/A: writes out voltages. These are all essential for controlling lights and motors. USART and USB communicate with the outside world.

The ATtiny is an 8-bit processor. In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor. The word size, or number of bits in a word (most embedded MCUs use 8, 16, 32 or 64 bits), is an important characteristic of any specific processor design or computer architecture. According to Neil, there is a common misunderstanding about word sizes. An 8-bit processor can execute millions of instructions per second. Say you want to play audio, like playing back a .WAV file. Audio runs at thousands of samples per second, so the processor can execute thousands of instructions per audio sample. So an 8-bit processor can use 64-bit numbers, but it takes more than one instruction to do it.

ISP stands for In-System Programming, just in case. Back in week 5 we made our own FabISP programmer, which is basically an ATtiny44 micro-controller set up to load programs into other micro-controllers. Which begs the question, who programmed the first micro-controller? But I digress… So how does this work? The computer tells the FabISP via a USB connection how to load the program, and it gets transferred into the board I am going to program via the 6-pin ISP header. This setup looks something like this: Typically we are gonna use the ISP header to program the board but there are many ways to do this. To load the program you need a programmer.

Here I did a blink test with the Arduino IDE using the Port Direction Registers I found here: Leaving registers aside, programming the ATtiny44 via the Arduino IDE is very straightforward. HighTechLowTech has a very easy tutorial on how to install the right libraries and configure the ATtiny to run at the correct clock speed.

C is basically a general-purpose language that we are using in FabAcademy to program our micro-controllers. It is straightforward to compile, provides low-level access to a processor's memory, and maps efficiently to machine instructions. As you can see from this graph, there is a huge difference in speed performance between a RaspberryPi running C and the same one using Python. Here is Hello World in C:

#include <stdio.h>

int main() {
    // printf() displays the string inside quotation marks
    printf("Hello, World!");
    return 0;
}
Hex looks something like this, and if I am not mistaken each character represents an 8bit unit unit of 1s and 0s. Shall I compare thee to a summer’s day? :020000020000FC :1000000028C019E11A95F1F708950AE0309508948F :1000100010F4D99802C0D99A0000F3DFF2DF3695C8 :100020000A95B1F7089509E0C899FECFEADFE9DF44 :10003000E8DF8894C89908940A9511F04795F7CF9E :100040000895C895302D303019F0DFDF3196F9CFA3 :10005000089510E820E016BD26BD11E01EBF1FE583 :100060001DBFD99AD19ADFDFEEE6F0E0EADF6865DE :100070006C6C6F2E667464692E34342E6563686F01 :100080002E61736D3A20796F7520747970656420E4 :0C0090000000342FBADF3AE0B8DFE5CF03 :00000001FF Since I had already programmed the Hello Board from week 7 to blink using the Arduino IDE, I decided I would test how my computer communicates with the board over Serial. For this I managed to program the Hello Board with the hello.ftdi.44.echo.c example provided by Neil using my FabISP programmer. He provides a cryptic little tutorial here. Because we will be talking to the board via an FTDI cable, it was recommended that we install libFTDI which is an an open source library to talk to FTDI chips. Crazy right. For this the Homebrew package manager again to the rescue: $ brew install libFTDI Then check to which USB port your board is connected with: $ ls -l /dev/cu.usb* This gives me: crw-rw-rw- 1 root wheel 19, 15 18 Mar 18:32 /dev/cu.usbserial-A505DVC9 Then I hooked up my FabISP to the computer via USB, connected the ISP ribbon cable between the FabISP and my Hello World board using the two six-pin headers, and connected the Hello board to the other USB using the FTDI cable. 
CHECK to make sure that the ground line (usually black) corresponds with your circuit ground connection, like so: Now:

$ make -f hello.ftdi.44.echo.c.make
$ make -f hello.ftdi.44.echo.c.make program-usbtiny-fuses
$ make -f hello.ftdi.44.echo.c.make program-usbtiny

So to interface between the Hello World board and the computer, we are using serial communication and a sweet little term.py program Neil wrote in Python. The command for this is:

$ python term.py /dev/tty.usbserial-A505DVC9 115200

In the C code we can see that 115200 is the baud rate, or the speed at which we will be communicating with the host computer over the FTDI cable. And as we checked above, cu.usbserial-A505DVC9 is the USB port to which it is connected. So here I started getting syntax errors, of the sort:

File "term.py", line 57
    widget_text.insert(INSERT,'\n')
    ^
TabError: inconsistent use of tabs and spaces in indentation

I initially thought it was something to do with the code, but actually on my Mac I use Python with the Anaconda open-source distribution and Conda package management. This helps keep all the various Python versions, packages and dependencies up to date and easy to find. Anaconda, however, runs on the latest version of Python 3.6, and this meant I couldn't run Neil's little Hello World interface; he wrote it eons ago in Python 2.7. To avoid pip installing overlapping versions of Python, I just created a new environment and installed Python 2.7 like so:

$ conda create -n py27 python=2.7 anaconda

This installed Python 2.7 with all its packages (probably overkill, I know) and then all I had to do is activate my py27 environment in the terminal:

$ source activate py27

You can always check what version you are running with:

$ python --version

which gave me:

Python 2.7.14

And to go back to the default Python 3.6, just hit:

$ source deactivate

Then I installed PySerial, which provides backends for Python running on Windows, OSX and Linux to access your computer's serial port.
$ conda install pyserial

That done, I could proceed with:

$ python term.py /dev/tty.usbserial-A505DVC9 115200

YAY! ATtiny speaks back. This was a little more complicated, but I finally got it working. Here is the code below, which I modified from Neil's original Hello Board, changing a few variables for my input and output pins:

#define led_port PORTA
#define led_direction DDRA
#define led_pin (1 << PA7)
#define serial_pins PINA
#define button_port PORTA
#define button_direction DDRA
#define button_pin (1 << PA3)

Here below is my main function:

// initialize LED pin
//
clear(led_port, led_pin);
output(led_direction, led_pin);
input(button_direction, button_pin);
//
// main loop
//
while (1) {
    if (pin_test(serial_pins, button_pin)) {
        set(led_port, led_pin);
        led_delay();
    } else {
        clear(led_port, led_pin);
        led_delay();
    }
}
}

Then I ran the programmer with:

make -f led_button.c.make
make -f led_button.c.make program-usbtiny

You can download all my C code files for week 9 from my Gitlab repository.
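As a footnote to the word-size point earlier (an 8-bit processor can use 64-bit numbers, it just takes more than one instruction): here is an illustrative plain-C sketch, runnable on a PC rather than the AVR, of what that means in practice. It adds two 64-bit values one byte at a time with an explicit carry, the way an 8-bit ALU has to:

```c
#include <stdint.h>

/* Add two 64-bit numbers using only 8-bit chunks, propagating the
   carry by hand - one byte-wide add per iteration, eight in total. */
static uint64_t add64_bytewise(uint64_t a, uint64_t b) {
    uint64_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t ab = (uint8_t)(a >> (8 * i));
        uint8_t bb = (uint8_t)(b >> (8 * i));
        unsigned sum = (unsigned)ab + bb + carry;  /* 8-bit add with carry */
        carry = sum >> 8;
        result |= (uint64_t)(sum & 0xFF) << (8 * i);
    }
    return result;
}
```

On an AVR this is essentially what the compiler emits as a chain of add/add-with-carry instructions; on a 64-bit CPU the same addition is a single instruction.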
http://fab.academany.org/2018/labs/barcelona/students/nicolo-gnecchi/embedded-programming/
exit() is used to exit the program as a whole; in other words, it returns control to the operating system. After exit(), all buffers and temporary storage areas are flushed out and control passes out of the program. In contrast, the return statement is used to return from a function, giving control back to the calling function. A program terminates at the first exit() call that executes, whereas a function can have any number of return statements; in other words, there is no restriction on the number of return statements that can be present in a function. An exit() call is typically placed at the point where the program should finish, since nothing after it is executed. A return statement, in contrast, can appear anywhere in a function; it need not be the last statement of the function. It is important to note that whenever control is passed to a function and then comes back from it, some value gets returned. Only if one uses a return statement does the correct value get returned from the called function to the calling function. For instance, consider the program:

#include <stdio.h>

int f1(int a1)
{
    int a2;
    a2 = 5 * a1;
    printf("a2=%d", a2);
    return a2;
}

int main(void)
{
    int a;
    int b = 5;
    a = f1(b);
    printf("a=%d", a);
    return 0;
}

The value of a2 calculated in function f1() is returned to the calling function using the return statement, and hence the output of the program is:

a2=25 a=25
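To make the contrast concrete, here is a small illustrative example (not from the original text): one function hands its result back with return, while the other abandons the caller entirely via exit():

```c
#include <stdio.h>
#include <stdlib.h>

/* return: hands a value back to the caller, and execution
   continues in the calling function. */
static int times5(int n) {
    return 5 * n;
}

/* exit: never goes back to the caller at all - buffers are
   flushed and control returns to the operating system. */
static int safe_divide(int a, int b) {
    if (b == 0) {
        fprintf(stderr, "division by zero, terminating program\n");
        exit(EXIT_FAILURE);   /* the whole program ends here */
    }
    return a / b;             /* only this function ends here */
}
```

Calling safe_divide(10, 0) would therefore end the entire program with a failure status, whereas safe_divide(10, 2) simply returns 5 and the caller carries on.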
http://ecomputernotes.com/what-is-c/function-a-pointer/how-does-the-exit-and-return-differ
Opened 2 years ago Last modified 1 year ago

At the moment, any test made from t.trial.unittest.TestCase will have a two minute default timeout. This is incorrect. The default timeout for tests should be specified by the runner, not by the Trial API. By default, tests should not timeout.

I'm not sure I agree with: "By default, tests should not timeout." but then I'm not sure what you're referring to. I think I agree with the general sentiment here of who should decide and why, but the default configuration of the default runner should specify a reasonable timeout.

Like glyph, most of this ticket makes sense to me, but the last sentence of the description also confuses me. This test suite:

from twisted.trial.unittest import TestCase
from twisted.internet.defer import Deferred

class HangingTests(TestCase):
    def test_hangs(self):
        return Deferred()

should not result in trial test_hangs requiring a ^C (or other signal) to cause it to complete.

Right. The default configuration of the default runner should timeout tests. What I meant was that HangingTests('test_hangs').run(TestResult()) should hang unless an explicit timeout-ing API is called.

It would also be very useful if the timeout could be specified as a command line argument to the trial program.

I kind of want this for a number of reasons, so I'm bumping the priority.
http://twistedmatrix.com/trac/ticket/2675
| Join Last post 04-30-2009 4:57 AM by Sandy1234. 80 replies. Sort Posts: Oldest to newest Newest to oldest I just spent the day figuring out why my ScriptResources.axd file wasn't being rendered and thus causing the "Sys is undefined" message in IE. My problem had to do with inheriting from a custom base class (which in turn inherited from System.Web.UI.Page). The "OnPreRenderComplete" event was being overridden but it wasn't making a call to "base.OnPreRenderComplete(e)." Once I put the call to the base event on "Page", the ScriptResources.axd file was being output correctly. This was tricky, we're using a master page with the content pages using lots of custom user controls. The user controls contain the Ajax enabled components. I am having this same "sys is not defined" problem implementing into DotNetNuke 4.4.0 ... It works fine locally running windows 2003, but is giving me the page error when used on the windows 2003 production server. The only difference i can see is that VS.NET is not installed on the production server. Update: I just checked and it appears that in DotNetNuke, although the base system.web.ui.page class is inherited into DotNetNuke.Framework.PageBase, we do not override the base OnPreRenderComplete function, so i do not think that is the problem. I can see in view source that there is script trying to access the Sys namespace, but I am not sure what to be looking for to affirm that the asp.net ajax is loading, other than the error is still there. I have added the web.config changes as mentioned, and it works without error locally... hmm For all 'sys is undefined' error people, (who have gone to RC1 from beta) Simply change THIS: < TO THIS: Cheers I am still having this problem on server 2003 / .net /w ajax 1. the files are in the gac, the web.config is correct. works fine on my local box, but not production. only difference is that prod does not have vs.net installed. can anyone else help PLEASE? this is really frustrating! 
What else is required to implement this.... Bueller?? You can see it at and view source.... Hello, And if you turn off ViewState? Second, I had this strange thing suddenly.. Maybe your time setting of your server? Both a shot in the dark, because sys is undefined can have lots (and lots) of causes.Most of them are allready mentioned here and on the forum, but you never know.. Marchu This is my experience with this issue: When browser requested ScriptResource.axd it was returning a 404 not found error and it was being caused by a bad configuration on my web application with isapi mappings. The site where I found the solution explained it as follows: ------------------------------------------------------------------------------------------------------------="?[snip - long query string]" type="text/javascript"></script> <script src="?. I had a wildcard mapping that with the "Verify is file exists" activated and that was the problem. I deactivated it and finnally I stopped getting the 'Sys' is undefined error (and other similar JS errors such as 'AjaxControlToolkit' is undefined, etc) If this still didn't solve the problem, check this site out which had other very helpful tips:> this is now FIXED by uncommenting and turning off compression in the web.config setting: <!-- <jsonSerialization maxJsonLength="500"> <converters> <add name="ConvertMe" type="Acme.SubAcme.ConvertMeTypeConverter"/> </converters> </jsonSerialization> <authenticationService enabled="true" requireSSL = "true|false"/>" /> </ thecrispy1> That wasn't my problem since I had that uncommented from the beggining. My problem was that browser requested ScriptResource.axd and it always returned a 404 Error due to the "Verify that file exists" option being enabled in my ISAPI wildcard mapping None of this is working for me, I have a host. Everything was great until you guys decided to put it in the GAC How about making that optional. I love it when MS decides how I should run my code. 
ScottG said this was done for 'windows update' reasons - well, that is another mess of crap. Perhaps I don't want Windows Update clobbering my code as well. So... I still get this error and I have no fix for it - I had a great site running, now it's hosed. hello, I have this 'Sys Undefined' error when i access the website after deploying in our test Server (Windows 2003 Server running IIS6). I have done all the changes mentioned in this thread plus some of the suggesstion metioned in the google search. Problem still exists. 1. No problem in the VS2005 dev enviroment. AJAX extended control works fine. 2. But got problem after deploying the web app in the test server with windows 2003 server with IIS6. AJAX extender control is not working now and i get the 'Sys' undefined error message. The browser i use to access the web app is IE6-sp1. Any help would be appreciated. Thanks. Whooooooooooooo...... Got my above problem resolved!!!!!! The problem is in configuring IIS6 for your web site. In the application configuration for your website, make sure the "Verify file exist" oiption is disabled for .axd. Just made this only change and boom got my AJAX control working like a charm... Now iam a happy camper.!!!!!!!!!!!!! -Suresh. It seems that the "Sys not found" problem seems to be caused generically by IIIS being unable to load the AXD file. I have just upgraded to IIS7/Vista, and an existing website under my old OS (XP) will not run any AJAX app due to the "Sys not found". Initially it wouldn't run any ASP.Net 2.0 app, until I discovered that the applicationHost.config file was missing entries for some ASP.Net 2.0 mappings for ASPX files. I wonder if it's also missing entries for AXD files. I tried copying an entry for the 1.1 AXD mapping and adjusting it to point to 2.0 framework, but that didn't change anything. 
<add name="ASPNET-ISAPI-2.0-AXD" path="*.axd" verb="GET,HEAD,POST,DEBUG" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.Net\Framework\v2.0.50727\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv1.1,bitness32" responseBufferLimit="0" />

I also notice that if I look at the browser source for an AJAX website that works (any public AJAX website) and copy the relative URL of any included AXD file and paste it into the browser, it loads that AXD file. But on my dev server, if I do the same, I get a 404 error. Some configuration setting is missing to support loading these AXD files into the browser.

Hi, I was having this issue on Windows Vista + IIS7 and AJAX 1.0. In web.config under system.webServer/handlers I had the ScriptResource entry without a preCondition, but then I changed it and added preCondition="integratedMode" like so:

<add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

And it works!
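Pulling the fixes in this thread together, a complete handler section in that integrated-mode style might look like the sketch below. This is only a reference shape, not official guidance; the version number and public key token are the ones quoted in the posts above (ASP.NET AJAX 1.0), so adjust them to match the System.Web.Extensions assembly actually installed on your server.

```xml
<!-- web.config: register the AJAX script handler for IIS7 integrated mode -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="ScriptResource"
           preCondition="integratedMode"
           verb="GET,HEAD"
           path="ScriptResource.axd"
           type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </handlers>
  </system.webServer>
</configuration>
```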
http://forums.asp.net/p/1055400/1577471.aspx
512Kb SRAM expansion for the Arduino Mega (software)

The final part of this series of blog posts will present some open source software that you can use to exploit the new memory space. You can download the library source code from my downloads page.

The xmem library

I've put together a collection of functions into a library that you can use to manage access to the extended memory. The aim of the library is to enable the existing memory allocation functions such as malloc, free, new and delete to work with the extended memory. The xmem functions are contained in a namespace called xmem, so you need to prefix your calls to these functions with xmem::, for example xmem::begin(…). The functions are declared in xmem.h, so you must include that header file like this:

#include "xmem.h"

Installation instructions for the Arduino IDE

- Download the library zip file from my downloads page.
- Browse to the libraries subdirectory of your Arduino installation. For example, for me that would be C:\Program Files (x86)\arduino-1.0.1\libraries.
- Unzip the download into that directory. It should create a new folder called xmem with two files in it.
- Start up the Arduino IDE and use the Tools -> Import Library -> xmem option to add the library to your project.

xmem functions

void begin(bool heapInXmem_)

This function must be called before you do anything with the extended memory. It sets up the AVR registers for external memory access and selects bank zero as the current bank. If you set the heapInXmem_ parameter to true (recommended) then the heap used by malloc et al. will be located in external memory should you use it.

xmem::begin(true);

void setMemoryBank(uint8_t bank_, bool switchHeap_ = true);

Use this function to switch to another memory bank. bank_ must be a number between zero and seven. If switchHeap_ is true (the default) then the current state of the heap is saved before the bank is switched and the saved state of the new bank is made active.
This heap management means that you can freely switch between banks, calling malloc et al. to make optimum use of the external memory.

xmem::begin(true);            // use the memory in bank zero
xmem::setMemoryBank(1, true); // use the memory in bank one

SelfTestResults selfTest();

This is a diagnostic function that can be used to check that the hardware is functioning correctly. It will write a bit pattern to every byte in every bank and will then read back those bytes to ensure that all is OK. Because it overwrites the entire external memory space, you do not want to call it during normal program operation. This function returns a structure that contains the results of the self test. The structure is defined as follows.

struct SelfTestResults {
  bool succeeded;
  volatile uint8_t *failedAddress;
  uint8_t failedBank;
};

If the self-test succeeded then succeeded is set to true. If it fails, it is set to false and the failed memory location is stored in failedAddress together with the failed bank number in failedBank.

xmem::SelfTestResults results;

xmem::begin(true);
results = xmem::selfTest();

if (!results.succeeded)
  fail();

Some common scenarios

In this section I'll outline some common scenarios and how you can configure the external memory to achieve them.

Default heap, default global data, external memory directly controlled

In this scenario the malloc heap and global data remain in internal memory and you take direct control over the external memory by declaring pointers into it.

Memory layout for direct access

Your code could declare pointers into the external memory and use them directly. This is the simplest scenario.
#include <xmem.h>

void setup() {
  // initialise the memory
  xmem::begin(false);
}

void loop() {
  // declare some pointers into external memory
  int *intptr = reinterpret_cast<int *>(0x2200);
  char *charptr = reinterpret_cast<char *>(0x2200);

  // store integers in bank 0
  xmem::setMemoryBank(0, false);
  intptr[0] = 1;
  intptr[10000] = 2;
  intptr[20000] = 3;

  // store characters in bank 1
  xmem::setMemoryBank(1, false);
  charptr[0] = 'a';
  charptr[10000] = 'b';
  charptr[20000] = 'c';

  delay(1000);
}

Default global data, heaps in external memory

In this scenario we move the malloc heap into external memory. Furthermore we maintain separate heap states when switching banks, so you effectively have eight independent 56Kb heaps that you can play with.

Memory layout for external heaps

This is a powerful scenario that opens up the possibility of using memory-hungry libraries such as the STL that I ported to the Arduino.

Example 1, using C-style malloc and free to store data in the external memory.

#include <xmem.h>

byte *buffers[8];

void setup() {
  uint8_t i;

  xmem::begin(true);

  // setup 8 50K buffers, one per bank
  for(i=0;i<8;i++) {
    xmem::setMemoryBank(i,true);
    buffers[i]=(byte *)malloc(50000);
  }
}

void loop() {
  uint16_t i,j;

  // fill each allocated bank with some data
  for(i=0;i<8;i++) {
    xmem::setMemoryBank(i,true);
    for(j=0;j<50000;j++)
      buffers[i][j]=0xaa+i;
  }

  delay(1000);
}

Example 2. Using the standard template library to store some vectors in external memory. This is the slowest option but undoubtedly the most flexible. vectors only scratch the surface of what's possible. maps and sets are all perfectly feasible with all this memory available.
#include <xmem.h>
#include <iterator>
#include <vector>
#include <new.cpp>

byte *buffers[8];

void setup() {
  xmem::begin(true);
}

void loop() {
  std::vector<byte> *testVectors[8];
  uint16_t i,j;

  for(i=0;i<8;i++) {
    xmem::setMemoryBank(i,true);
    testVectors[i]=new std::vector<byte>();
    for(j=0;j<10000;j++)
      testVectors[i]->push_back(i+0xaa);
  }

  for(i=0;i<8;i++) {
    xmem::setMemoryBank(i,true);
    for(j=0;j<10000;j++)
      if((*testVectors[i])[j]!=0xaa+i)
        fail();
    delete testVectors[i];
  }

  delay(1000);
}

/*
 * flash the on-board LED if something goes wrong
 */
void fail() {
  pinMode(13,OUTPUT);
  for(;;) {
    digitalWrite(13,HIGH);
    delay(200);
    digitalWrite(13,LOW);
    delay(200);
  }
}

Other articles in this series

Schematics, PCB CAD and Gerbers

I don't have any more boards left to sell, but you can now print and build your own by downloading the package from my downloads page. Good luck!
http://andybrown.me.uk/2011/08/28/512kb-sram-expansion-for-the-arduino-mega-software/
-27-2014 05:49 AM

Although Debian is an unsupported platform for the Xilinx tools, it is a widely used distribution if we count its derivatives (Ubuntu, et al). After some work I have been able to use Vivado on Debian. Here is what I have done, in case somebody is in the same situation.

1) Set up bash as the default shell instead of dash

Xilinx uses some bashisms in its scripts and does not specify bash in the scripts :/. To solve this issue run:

dpkg-reconfigure dash

and select bash as your default shell.

2) Use the system JVM instead of the provided JVM.

For some reason the provided JVM segfaults (SIGSEGV). This can be solved by installing openjdk-7-jdk on your system and running:

mv /opt/Xilinx/Vivado/2013.4/tps/lnx64/jre/lib/amd64/server/libjvm.so /opt/Xilinx/Vivado/2013.4/tps/lnx64/jre/lib/amd64/server/libjvm.so.old
ln -s /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libjvm.so /opt/Xilinx/Vivado/2013.4/tps/lnx64/jre/lib/amd64/server/libjvm.so

3) Replace udev_device_new_from_syspath from udev

When the tool is checking the license of my system it segfaults. To solve it I have created a library that replaces the function udev_device_new_from_syspath with an empty one. I LD_PRELOAD it before calling the tool.
The library:

#define _GNU_SOURCE
#include <sys/ioctl.h>
#include <dlfcn.h>
#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <signal.h>
#include <execinfo.h>

void *udev_device_new_from_syspath(void *null, char *name) {
    return NULL;
}

How to build:

gcc -shared -o lib.so lib.c -fPIC -O2 -Wall -Werror -Wstrict-prototypes -Wall -ldl

01-27-2014 07:37 AM

04-09-2014 01:38 AM

Hello everybody, I am trying to install Vivado 2013.2 on Ubuntu 13.10. I have followed the steps described:

sudo apt-get install openjdk-7-jre
sudo mv /opt/Xilinx/Vivado/2013.2/tps/lnx64/jre/lib/amd64/server/libjvm.so /opt/Xilinx/Vivado/2013.2/tps/lnx64/jre/lib/amd64/server/libjvm.so.old
sudo ln -s /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libjvm.so /opt/Xilinx/Vivado/2013.2/tps/lnx64/jre/lib/amd64/server/

but when I try to "add IP" and I choose a block I have this error:

ERROR: [Vivado 12-106] *** Exception: java.lang.NumberFormatException: For input string: "1,08846" (See /home/sabeur/vivado_pid3993.debug)

Can someone help me? Thanks a lot. Best regards

04-09-2014 09:56 AM

Hi, try setting the environment variables LANG and LC_ALL to your locale. Thanks

04-16-2014 03:50 AM

06-04-2014 01:10 PM

You can't have customers using Debian; your product does not work out of the box there :). There are many distros based on Debian; you could get much more compatibility just by supporting it. If you don't plan to support it, at least it would be nice to fix the problems reported. Cheers

06-05-2014 08:21 AM

I am running Vivado 2013.4 and 2014.1 using Ubuntu 13.10 and 14.04 in my "Zynq Design From Scratch" blog without any problems so far. Sven

06-27-2014 11:33 AM

Over in this thread I describe a problem installing Vivado 2014.2 on an up-to-date Debian Jessie system. Actually the same problem happens when I try to run vivado (2014.1) or run the installer for 2014.2.
I see the small splash window, then the large window appears, but it remains blank white, and the system sits idle forever with zero CPU usage until the process is killed. I cannot explain why I was able to install 2014.1 but three months later could not run it. The biggest change on my system was going from the GNOME desktop environment to a stripped-down X plus tiling window manager (highly recommended, btw), but my best guess is that it has something to do with Java. The OP in this thread recommends changing Java from the Xilinx-installed version to the Debian-provided one. My question is: if the Xilinx Java doesn't play well with Debian, how did you get Vivado installed in the first place? I would like to try this, but the installer runs Java from a directory off of /tmp and I don't see how to change that. Any other Debian users experiencing the zero-CPU-freeze problem? It really has me baffled.

07-14-2014 03:15 PM

vijayak wrote:

Hi, the supported platforms are:

Microsoft Windows Support
• Windows XP Professional (32-bit and 64-bit), English/Japanese
• Windows 7 Professional (32-bit and 64-bit), English/Japanese
• Windows Server 2008 (64-bit)

Linux Support
• Red Hat Enterprise Workstation 5 (32-bit and 64-bit)
• Red Hat Enterprise Workstation 6 (32-bit and 64-bit)
• SUSE Linux Enterprise 11 (32-bit and 64-bit)

Additional support of OSes is customer-usage driven. We don't have many customers using Debian.

Correction, Vivado 2014.1 and 2014.2 support:

Microsoft Windows Support
• Windows XP Professional (32-bit and 64-bit), English/Japanese
• Windows 7 and 7 SP1 Professional (32-bit and 64-bit), English/Japanese
• Windows 8.1 Professional (64-bit), English/Japanese

Linux Support
• Red Hat Enterprise Workstation 5.8 - 5.10 (32-bit and 64-bit)
• Red Hat Enterprise Workstation 6.4 - 6.5 (32-bit and 64-bit)
• SUSE Linux Enterprise 11 (32-bit and 64-bit)
• Cent OS 6.4 and 6.5 (64-bit)

(UG973 Release Notes user guide)
https://forums.xilinx.com/t5/Installation-and-Licensing/Run-Vivado-2013-4-on-Debian/m-p/407359
Example

To run the example project, clone the repo, and run pod install from the Example directory first.

Requirements

Installation

WordWrapLabel is available through CocoaPods. To install it, simply add the following line to your Podfile:

pod 'WordWrapLabel'

Description

A UILabel subclass that really makes sure that every word in the label's text fits into one line. The standard UILabel can automatically adjust the scale of the font size to make the whole text fit the label's bounds, but it doesn't take into account the actual length of the individual words of the given text. This results in words being broken up across multiple lines because they are too long to fit one line. The WordWrapLabel finds a suitable font size to make every word of the given text fit into one line. It also makes sure that the text fits into the whole bounds of the label when there is a line-number or height restriction.

Usage

To use WordWrapLabel just import the module in your code.

import WordWrapLabel

Then use it as the custom UILabel subclass in Interface Builder or initialize it in code. You can define a minimum and maximum font size which will be taken into account when determining the right font size for the label. The search will start at the maximumFontPointSize and then reduce the font size until either every word fits one line or minimumFontPointSize is reached. maximumFontPointSize and minimumFontPointSize can be set in Interface Builder or in code.

wordWrapLabel.maximumFontPointSize = 25 // Default is 40
wordWrapLabel.minimumFontPointSize = 10 // Default is 1

Author

Philipp, [email protected]

License

WordWrapLabel is available under the MIT license. See the LICENSE file for more info.
Latest podspec

{
  "name": "WordWrapLabel",
  "version": "1.0.0",
  "summary": "A short description of WordWrapLabel.",
  "description": "A UILabel subclass that really makes sure that every word in the labels text fits into one line.",
  "homepage": "",
  "license": {
    "type": "MIT",
    "file": "LICENSE"
  },
  "authors": {
    "Philipp": "[email protected]"
  },
  "source": {
    "git": "",
    "tag": "1.0.0"
  },
  "swift_version": "4.2",
  "platforms": {
    "ios": "8.0"
  },
  "source_files": "WordWrapLabel/Classes/**/*"
}

Mon, 25 Mar 2019 11:00:19 +0000
https://tryexcept.com/articles/cocoapod/wordwraplabel
yet another namedtuple alternative

Project description

Yet another namedtuple alternative for Python

compose.Struct is something like an alternative to namedtuple, attrs and now dataclasses in Python 3.7. To create a new struct, you simply:

class Foo(compose.Struct):
    bar = ...
    baz = 'spam'

This generates a class like this:

class Foo:
    __slots__ = 'bar', 'baz'

    def __init__(self, bar, baz='spam'):
        self.bar = bar
        self.baz = baz

You can, naturally, implement any other methods you wish. You can also use type annotation syntax for positional arguments:

class Foo(compose.Struct):
    bar: int
    baz: str = 'spam'

If the name = ... syntax is used in combination with type annotation syntax for positional arguments, all positional arguments with annotations will come before positional arguments without. However, this should be considered an implementation detail. Best practice is to not mix the two styles. Use typing.Any if you are using type annotations and don't want one of the arguments to care about type.

How's this different from attrs and dataclasses?

A few ways. Aside from the use of ellipsis to create positional parameters, another difference that can be seen here is that everything is based on __slots__, which means your attribute lookup will be faster and your instances more compact in memory. attrs allows you to use slots, but Struct only uses slots. This means that attributes cannot be dynamically created. If a class needs private attributes, you may create additional slots with the usual method of defining __slots__ inside the class body.

Another important distinction is that compose.Struct doesn't define a bunch of random dunder methods. You get your __init__, __repr__, and to_dict and that's it [1]. It is the opinion of the author that sticking all attributes in a tuple and comparing them usually is not what you want when defining a new type. However, it is still easy to get more dunder methods, as you will see in the following section.
Interfaces

Perhaps the most significant difference between our structs and alternatives is that we emphasize composition over inheritance. A struct isn't even able to inherit in the normal way! It's an outrage! What about interfaces!? What about polymorphism!? Well, what compose provides is a simple way to generate pass-through methods to attributes.

from compose import Struct, Provider

class ListWrapper(Struct):
    data = Provider('__getitem__', '__iter__')
    metadata = None

So this will generate pass-through methods for __getitem__ and __iter__ to the data attribute. Certain Python keywords and operators can be used as shorthand for adding dunder methods as well.

class ListWrapper(Struct):
    data = Provider('[]', 'for')
    metadata = None

Here, '[]' is shorthand for item access and implements __getitem__, __setitem__ and __delitem__. 'for' implements the __iter__ method. A full list of these abbreviations can be found below in the Pre-Defined Interfaces section.

Going even deeper, interfaces can be specified as classes. Wrapper methods will be created for any method attached to a class which is given as an argument to Provider. The following code is more or less equivalent to subclassing collections.UserList, but no inheritance is used.

from collections import abc

class ListWrapper(Struct):
    data = Provider(abc.MutableSequence)
    metadata = None

An instance of this class tested with isinstance(instance, abc.MutableSequence) will return True because wrapper methods have been generated on self.data for all the methods in abc.MutableSequence. Note that ``abc.MutableSequence`` does not actually provide all of the methods a real list does. If you want ALL of them, you can use ``Provider(list)``.

You cannot implicitly make pass-through methods for __setattr__ and __getattribute__ by passing in a class that implements them, since they have some rather strange behaviors. You can, however, pass them explicitly to Provider to force the issue.
In the case of __setattr__, this invokes special behavior. See __setattr__ hacks for details.

All methods defined with a provider can be overridden in the body of the class as desired. Methods can also be overridden by other providers. It's first-come, first-serve in that case. The Provider you want to define the methods has to be placed above any other interfaces that implement the same method.

Mix-in Classes vs. Inheritance

There is no inheritance with Structs. Because of metaclass magic, a class that inherits from Struct is not its child. It is always a child of object. Provider is a way to implement pass-through methods easily. Mix-in classes bind methods from other classes directly to your class. It doesn't go through the class hierarchy and rebind everything, only methods defined directly on the mix-in class. Inheriting from normal Python classes may have unpredictable results.

compose provides one mix-in class: Immutable, which is implemented like this:

class Mutability(Exception):
    pass

class Immutable:
    def __setattr__(self, attr, value):
        raise Mutability(
            "can't set {0}.{1}. type {0} is immutable.".format(
                self.__class__.__name__, attr, value
            ))

It can be used like this:

from compose import Struct, Immutable

class Foo(Struct, Immutable):
    bar = ...
    baz = ...

When an instance of Foo is created, it will not be possible to set attributes afterwards in the normal way. (Though it is technically possible if you set it with object.__setattr__(instance, 'attr', value).) Attempting to do foo.bar = 7 will raise a Mutability error.

If you need a struct to look like a child of another class, I suggest using the abc module to define abstract classes. This allows classes to look like children for the purposes of type-checking, but without actually using inheritance.

Order

This is the order of priority for where methods come from:

- Struct generates a unique __init__ method for each class it creates. This cannot be overridden.
Alternative constructors should be implemented as class methods.

- Methods defined in the body of the struct get next dibs.
- Any attributes defined on your mix-ins will be defined on the class if they don't already exist.
- Only then are Provider attributes allowed to add any methods which haven't yet been defined.

*args and **kwargs

Though it is not especially recommended, it is possible to implement *args and **kwargs for your constructor.

>>> from compose import Struct, args, kwargs
>>> class Foo(Struct):
...     items = args
...     mapping = kwargs
...
>>> f = Foo('bar', 'baz', spam='eggs')
>>> f
Foo(*items=('bar', 'baz'), **mapping={'spam': 'eggs'})

This breaks the principle that the object's repr can be used to instantiate an identical instance, but it does at least give the option and still makes the internal structure of the class transparent. With Provider parameters, simply pass in compose.args or compose.kwargs as arguments to the constructor.

>>> class MySequence(Struct):
...     data = Provider('__getitem__', '__iter__', args)
...
>>> s = MySequence('foo', 'bar', 'baz')
>>> s
MySequence(*data=('foo', 'bar', 'baz'))
>>> for i in s:
...     print(i)
...
foo
bar
baz

Caveats

This library uses code generation at class-creation time. The intent is to optimize the performance of instances at the cost of slowing class creation. If you're dynamically creating huge numbers of classes, using compose.Struct might be a bad idea. FYI, namedtuple does the same. I haven't looked at the source for attrs too much, but I did see some strings with source code there as well.

Pre-Defined Interfaces

This is the code that implements the expansion of interface abbreviations for dunder methods. Any key in the interfaces dictionary may be used to implement the corresponding dunder methods on an attribute with the Provider() constructor.
interfaces = {
    '+': 'add radd',
    '-': 'sub rsub',
    '*': 'mul rmul',
    '@': 'matmul rmatmul',
    '/': 'truediv rtruediv',
    '//': 'floordiv rfloordiv',
    '%': 'mod rmod',
    '**': 'pow rpow',
    '<<': 'lshift rlshift',
    '>>': 'rshift rrshift',
    '&': 'and rand',
    '^': 'xor rxor',
    '|': 'or ror',
    '~': 'invert',
    '==': 'eq',
    '!=': 'ne',
    '>': 'gt',
    '<': 'lt',
    '>=': 'ge',
    '<=': 'le',
    '()': 'call',
    '[]': 'getitem setitem delitem',
    '.': 'get set delete set_name',
    'in': 'contains',
    'for': 'iter',
    'with': 'enter exit',
    'del': 'del',
    'await': 'await'
}
interfaces = {k: ['__%s__' % n for n in v.split()] for k, v in interfaces.items()}

__setattr__ hacks

If you choose to create an attribute wrapper for __setattr__, the default will look like this so you won't hit a recursion error while accessing pre-defined attributes:

def __setattr__(self, attribute, value):
    try:
        object.__setattr__(self, attribute, value)
    except AttributeError:
        setattr(self.wrapped_attribute, attribute, value)

If you want to override __setattr__ with a more, eh, "exotic" method, the attributes defined in the class body will be set properly when the instance is initialized, but will use your method at all other times, including in other methods, which may break your stuff.
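To make the pass-through idea above concrete without installing the package, here is a rough plain-Python sketch of what a Provider-style attribute could generate. This is an illustration of the mechanism only, not the compose implementation; the `provide` decorator and all names in it are invented for the example.

```python
# Sketch: generate pass-through (wrapper) methods to a wrapped attribute,
# roughly what a field like Provider('__getitem__', '__len__') does.
def provide(attr, *names):
    def decorate(cls):
        for name in names:
            def passthrough(self, *args, _name=name, **kwargs):
                # forward the call to the same method on self.<attr>
                return getattr(getattr(self, attr), _name)(*args, **kwargs)
            setattr(cls, name, passthrough)
        return cls
    return decorate

@provide('data', '__getitem__', '__len__', '__contains__')
class ListWrapper:
    def __init__(self, data, metadata=None):
        self.data = data
        self.metadata = metadata

w = ListWrapper(['a', 'b', 'c'])
print(w[1])        # b
print(len(w))      # 3
print('c' in w)    # True
```

Because the generated dunders live on the class, Python's special-method lookup finds them just as it would for a real subclass, which is what makes the composition approach feel like inheritance from the outside.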
https://pypi.org/project/compose-struct/
Your message dated Thu, 06 Nov 2014 22:45:55 +0200 with message-id <address@hidden> and subject line Re: bug#18961: gud Cannot find bounds of current function, but gdb works has caused the debbugs.gnu.org bug report #18961, regarding gud Cannot find bounds of current function, but gdb works to be marked as done. (If you believe you have received this mail in error, please contact address@hidden)

-- 18961: GNU Bug Tracking System Contact address@hidden with problems

--- Begin Message ---

I tried to debug this simple C++ code (from here):

#include <iostream>
#include <boost/tokenizer.hpp>
#include <string>

int main() {
    using namespace std;
    using namespace boost;
    string s = "This is, a test";
    tokenizer<> tok(s);
    for (tokenizer<>::iterator beg = tok.begin(); beg != tok.end(); ++beg) {
        cout << *beg << "\n";
    }
}

compiled with "g++ -Wall -ggdb test.cpp". Using "Next Line" I reach

for (tokenizer<>::iterator beg = tok.begin(); beg != tok.end(); ++beg)

and then I use "Step Line" to step into class iterator_facade in /usr/include/boost/iterator/iterator_facade.hpp. After that "Step Line" stops working, and gud says "Cannot find bounds of current function". bt shows:

#0 0x00007fffffffdd40 in ?? ()
#1 0x00007ffff7ddb678 in std::string::_Rep::_S_empty_rep_storage () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libstdc++.so.6
#2 0x0000000000000000 in ?? ()

But if I run the same binary in gdb without emacs mediation and use step on the same line, all works fine; I can step until the end of the program. Also a simple script like this reaches the end of main without any problems in plain gdb:

br main
run
while true
step
end

-- /Evgeniy

--- End Message ---

--- Begin Message ---

> Date: Thu, 6 Nov 2014 23:45:05 +0300
> From: Evgeniy Dushistov <address@hidden>
> Cc: address@hidden
>
> On Thu, Nov 06, 2014 at 06:12:43PM +0200, Eli Zaretskii wrote:
> > I used GDB 7.8, so I suggest that you upgrade your GDB and try again.
> > Debugging of C++ programs gets significant improvements with each GDB
> > release, so using the latest one (GDB 7.8.1 was released a few days
> > ago) is recommended.
>
> Thanks, gdb 7.8.1 solved the problem for me.

OK, closing the bug.

--- End Message ---
https://lists.gnu.org/archive/html/emacs-bug-tracker/2014-11/msg00064.html
Will a Captcha Block Spam?

from django.newforms import ModelForm
from aprilandjake.captcha import CaptchaField
from aprilandjake.blog.models import EntryComment

class EntryCommentForm(ModelForm):
    class Meta:
        model = EntryComment
        exclude = ('active', 'entry')

    captcha = CaptchaField(label='Captcha',
                           options={'fgcolor': '#0099ff',
                                    'bgcolor': '#efefef'})

Also seen are just a few of the many options for the CaptchaField. You can apply these for every captcha field you write, or globally in settings.py:

CAPTCHA = {
    'fgcolor': '#000000',     # default: '#000000' (color for characters and lines)
    'bgcolor': '#ffffff',     # default: '#ffffff' (color for background)
    'captchas_dir': None,     # default: None (uses MEDIA_ROOT/captchas)
    'upload_url': None,       # default: None (uses MEDIA_URL/captchas)
    'captchaconf_dir': None,  # default: None (uses the directory of the captcha module)
    'auto_cleanup': True,     # default: True (delete all captchas older than 20 minutes)
    'minmaxvpos': (8, 15),    # default: (8, 15) (vertical position of characters)
    'minmaxrotations': (-30, 31),  # default: (-30,31) (rotate characters)
    'minmaxheight': (30, 45),      # default: (30,45) (font size)
    'minmaxkerning': (-2, 1),      # default: (-2,1) (space between characters)
    'alphabet': "abdeghkmnqrt2346789AEFGHKMNRT",  # default: "abdeghkmnqrt2346789AEFGHKMNRT"
    'num_lines': 1,           # default: 1
    'line_weight': 3,         # default: 3
    'imagesize': (190, 55),   # default: (200,60)
    'iterations': 1,          # default: 1 (change to a high value (200 is a good choice)
                              # for trying out new settings
                              # WARNING: changing this value will lead to as many images
                              # in your "captchas" directory!)
}

When I implemented this for aprilandjake.com, Josh helped me out and showed me where the code needed to be tweaked. On line 174:

def value_from_datadict(self, data, name):

was changed to

def value_from_datadict(self, data, files, name):

Voila! It worked like a charm. But then I uploaded it to my server and tried it out -- no dice. I couldn't figure it out.
A couple days later, I was determined to find the problem... After a frustrating while and some good ol' debugging print statements, I determined the cause: because I put my project under version control and didn't bother to remove .svn folders from production, they were being pulled in by the captcha code, which treated each entry as an option for a font in the 'fonts' subdirectory of the module. To get around the problem, I needed to change the code around line 110 from:

fontdir = path.join(cs['captchaconf_dir'], 'fonts')
fontnames = [path.join(fontdir, x) for x in listdir(fontdir)]

to

import glob
# ...
fontdir = path.join(cs['captchaconf_dir'], 'fonts', '*.ttf')
fontnames = glob.glob(fontdir)  # glob already returns full paths

Using the wildcard *.ttf protected against non-font files being handed to PIL as ImageFont objects. listdir() doesn't support wildcards, so glob.glob() was required.

Take that, spam!
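As a side note, the listdir-versus-glob difference is easy to demonstrate on its own; the sketch below uses a throwaway temp directory (not the captcha module itself) to show why the `.svn` folder sneaks into a plain directory listing but not a `*.ttf` glob:

```python
# listdir() returns every entry; glob() lets you filter by pattern.
import glob
import os
import tempfile

fontdir = tempfile.mkdtemp()
os.mkdir(os.path.join(fontdir, '.svn'))                # version-control residue
open(os.path.join(fontdir, 'arial.ttf'), 'w').close()  # stand-in "font" file

everything = sorted(os.listdir(fontdir))           # picks up .svn too
fonts = glob.glob(os.path.join(fontdir, '*.ttf'))  # full paths, .ttf only

print(everything)                                  # ['.svn', 'arial.ttf']
print([os.path.basename(f) for f in fonts])        # ['arial.ttf']
```

The pattern filter is the whole fix: anything that isn't a `.ttf` file simply never reaches the code that tries to load it as a font.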
https://jaketrent.com/post/will-captcha-block-spam/
Using C11 and function pointers:

#include <stdio.h>
#include <stdlib.h>

double deriv(double (*f)(double), double x);
double cube(double x);

int main(int argc, char **argv)
{
    printf("%f\n", deriv(cube, 2));
    printf("%f\n", deriv(cube, 3));
    printf("%f\n", deriv(cube, 4));
    exit(0);
}

double deriv(double (*f)(double), double x)
{
    const double dx = 0.0000001;
    const double dy = f(x + dx) - f(x);
    return dy / dx;
}

double cube(double x)
{
    return x * x * x;
}

Using a slightly different formula. Results using the same example as Rutger.

Here's another Haskell version, similar to Josef's. This example shows that you can use it with various numeric types. For example, using the "constructive reals" (CReal) from the "numbers" package we can specify an arbitrary precision. (Also, note that ^3 is shorthand for the cube function, \x -> x^3.)

MUMPS V1:

EXTR ; Test of calculating derivatives
 N I F I=0:1:9 W !,I," --> ",$$DERIV("CUBE",I,.0000001)
 Q
 ;
CUBE(N) ; Cube a number
 Q N*N*N
 ;
DERIV(FUNC,VAL,DX) ;
 Q @("$$"_FUNC_"(VAL+DX)")-@("$$"_FUNC_"(VAL)")/DX

MCL> D ^EXTR
0 --> 0
1 --> 3.0000003
2 --> 12.0000006
3 --> 27.0000009
4 --> 48.0000012
5 --> 75.0000015
6 --> 108.0000018
7 --> 147.0000021
8 --> 192.0000024
9 --> 243.0000027
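For comparison, here is the same idea in Python using a central difference, (f(x+dx) - f(x-dx)) / (2*dx), which converges faster than the one-sided formula used in the solutions above; the step size 1e-6 is a judgment call, not a canonical choice.

```python
# Central-difference numerical derivative.
def deriv(f, x, dx=1e-6):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def cube(x):
    return x * x * x

for x in (2, 3, 4):
    print(x, deriv(cube, x))   # close to 12, 27, 48
```

The one-sided formula's error shrinks linearly with dx, while the central difference's error shrinks quadratically, which is why its results land much closer to the exact values 3x² for the same step size.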
https://programmingpraxis.com/2017/05/05/calculating-derivatives/
I'm concerned about the licensing of cdrkit[1,2] aka debburn, which was recently forked from cdrecord. The current license seems to be GPLv2 + additional restrictions, which IMHO is not right because GPLv2 doesn't allow any such additional restrictions. An example from libscg/scsi-linux-ata.c[3]:

<skip>
/*
 * If you changed this source, you are not allowed to
 * return "schily" for the SCG_AUTHOR request.
 */
case SCG_AUTHOR:
    return (_scg_auth_cdrkit);
case SCG_SCCS_ID:
    return (ata_sccsid);
<skip>

Another example from cdrecord/cdrecord.c[4]:

/*
 * Warning: you are not allowed to modify or to remove this
 * version checking code!
 */
vers = scg_version(0, SCG_VERSION);
auth = scg_version(0, SCG_AUTHOR);
<SKIP over 20 lines of that code>

I mentioned this problem over a week ago[5] at debburn-devel but didn't get any response. Recently Nathanael Nerode mentioned this problem again[6] and Albert Cahalan answered[7]:

On 9/12/06, Nathanael Nerode <neroden at fastmail.fm> wrote:
> (1).).

[1] [2] [3] [4] [5] [6] [7] [8]

-- Markus Laire
https://lists.debian.org/debian-legal/2006/09/msg00078.html
Steve Ballmer - Quick chat with Microsoft's CEO - Posted: Jul 07, 2005 at 4:24 PM - 306,809 Views - 176

Steve Ballmer is not only the head of a $300 billion organization, but is personally worth >$10 billion. Yet he really looks very passionate about what he's doing and the company. Maybe it's just his personality, but I'm astounded. He sits down and talks to the video camera just like any other guy. Congrats on the scoop though!

He's pretty funny, I'll say that. You always read about the banging of fists and clapping of hands etc. - but you never really see it (except for the famous dance routine). Good work Steve - thanks for talking to Scoble and Channel 9. Get a blog - or maybe vlogging would be better for you - so people can feel slightly uncomfortable wondering if you're a brilliant CEO - or a raving lunatic, haha.

How long will Longhorn take to go mainstream? MS seems to be moving towards standards more these days; has MS given up on the "platform that wins" (obviously on the server side) and gone with the platform that co-exists?

Some comments on this interview: Good to see a CEO who trusts his employees! What's with the whole "winning" thing? As a developer I am not so much interested in a battle between competing vendors as I am in evolving future-proofed standards. If I can write HTML I could care less if the browser is Microsoft or Mozilla or Mac. I don't want to worry about Microsoft-HTML vs. Mozilla-HTML vs. Mac-HTML.

Some suggestions for future videos: Harder questions? You said the interview got cut short. What questions didn't you get to ask? Transcripts will solve the long-interview problem - especially if they have timestamps associated with them. Make it a volunteer project if you don't want to assign staff.
If I had more time, I'd ask him more about some of the famous software industry stories. Like being told he was wrong by Larry Osterman and still hiring him anyway.

Steve Ballmer! Steve Ballmer! Steve Ballmer! Steve Veselinovic! Steve. Steve.

By the way, I wore my London Underground T-shirt to the interview, the one I got a couple of weeks ago. Thought that would be appropriate because of this morning's events.

I love the "Who are you?" you ask in the beginning, Scoble. Lol. Great work.

Lol, what happened when you asked him about blogs? He gets this energy kick and then starts talking normally again? Anyway, great interview. But who is this guy? =)

Best video yet. I haven't seen much of Steve, and after seeing him on the video I can see why he is CEO. On the subject of video length, recently I have been snoozing through the videos. I used to like the wandering nature of the content and I still love the human factor that brings, but a lot of the time now I wish some of the guys had prepared a little better before you arrived with the camera. And that questions had more direct answers. I know you guys have a no-editing policy, but maybe it is time to review that. P.S. I bet Bill feels left out now that you have interviewed Steve. You don't want him to feel left out, do you!?

I'll take to heart that you liked this style. Keep in mind that Steve is also an awesome communicator. Not very many people I've met can get so much into a 10-minute interview. The last few questions were made up on the spot by me too.

I think Steve Ballmer should do Microsoft keynotes at CES and other events instead of Bill. He has great enthusiasm which surpasses that of Steve Jobs and would help give Microsoft a much hipper look.

Scoble is the best.

Loadsgood. Keep up the good work Robert.

During the interview, Ballmer talked about being big and bold. While he was saying that, I kept looking back at the decisions made to cut back Longhorn. Moving WinFS to a later release.
Although it was never confirmed, it was highly rumored that Office 12 was going to be an LH-only Office release that took advantage of the LH platform. While I do see innovation coming out of MS, it has become a big ship. Innovation is hard to put into the system when that innovation must support 20 to 30 years of legacy. Although that support is what has brought the system this far. Sometimes I wonder what LH would look like from an OS point of view if it could fully innovate. If it could truly come out and say, from an OS point of view, that legacy support is not guaranteed. DOS 1.0 to 6.22 will not run on the platform except in a virtual machine. If MS was in a position to alienate all of its past customers to bring a new system forward (which is what Apple basically did with OS X), it will be an interesting 10 years. But it will take most of that 10 years to get to the vision that MS has for the new platform built on .NET. It will be at least Office 13 before we see more than just Infospace built in Avalon technologies. It will be Blackcomb before we see the full implementation of a platform described as Cairo in 1995; although there have been a few technology changes, that is basically what Blackcomb will be: the fulfillment of the Cairo vision. Looking forward to seeing what MS brings forward. Monad is very interesting. Avalon brings a fairly simple windowing UI platform. Indigo brings in the communication platform that allows apps to work with each other. My only regret is that when MS went to Windows 1.0 and beyond, I didn't learn to program that model while I could still fully program in DOS. With the .NET Framework, Avalon, Indigo, and finally WinFS when it shows up, I find that I can understand the platform again and work with it. It has made my interest in programming much stronger again, but I find that I have a lot to relearn.
Douglas

(Sorry for the username - which is via BugMeNot... I was too lazy to register to make a single post.)

I would agree with Robert that MS getting involved in these areas is a good thing. I do not think that they will kill off these other apps and techs; I think they will make them better. I happen to know that companies like Adobe base their application model around companies like MS. They say to themselves, OK, there is an app out there; how can we make the experience that much better for the user? And I think Steve hit it on the button: we have to innovate and we have to have developers out there that write good code and take it to the next level and make that experience that much better. I think that if MS makes one developer go out and say "hey, look at what MS did, I can do the same thing and make the user experience that much better" for that developer's users, then Microsoft has done more than make a great application; it has allowed the next generation of applications to be that much better. And that's why I think MS getting involved in these areas is a great thing.

Steve is utterly, unambiguously amazing. I'm name-dropping here, but every time he sees me, he says "Hi Larry" and we talk a bit about our kids (his eldest is one week younger than my eldest, his second is a couple of weeks off from my second). Think about that - he runs a 60,000 person company and he will still talk to a random IC (individual contributor) whenever he runs into him. That man is special.

"I think developers want to know: Are you gonna win?" Exactly!

Edit: I do have to say, though. He kept it to 10 minutes. I'd hate to see how packed his schedule is.

Umm. You haven't been paying attention to Adam's blog, have you... Monad's a totally different paradigm for a shell than anyone's ever seen - yeah, it's vaguely like the various *nix shells but its ability to composite complex structures is totally new. If someone built a shell out of mock-lisp, it might be similar...

passion Passion, PASSION!!!
Since Steve (or even Bill) may in fact read this thread, I'll try not to:

a) help it turn into a flame war
b) keep it veering wildly off tangent
c) bring up obscure articles for no good reason
d) insert picture jokes
e) mention recursion
f) bash Scoble's blog
g) bring up my meddlings with Linux
h) mention how this will surely get slashdotted
i) feed the Slashdot trolls when they show up
j) help this thread break all posting records

In short, I'll be an all-around nice guy.

Having played with Monad since way before beta 1: Monad is in no way like any existing shell. Here is a secret, though: LH will include most of the Unix-based shells, which are all text-based, by the way, and cannot work with objects. Monad is a new beast built on top of an object-based platform, not the text-based platform of Unix.

douglas

I wish he would go fully bald though; that Mr. Burns haircut isn't doing him any good.

"Halo 3" eh? Pretty sweet. Too bad you couldn't show us the view out the windows. Maybe a Buzzcast would be better next time.

"Who's Bill Gates?" - Steve Ballmer.

Loadsgood.

Does Steve allude to a more frequent Windows release schedule in the future? Perhaps a bit more like Apple's system of a major release like OS X - but then frequent major updates like Tiger, Panther etc. every 18 months where new features are introduced to enhance the platform? I didn't like the Apple system at first - I thought they were ripping off their customers by asking for $129 every 18 months. But look at all of the improvements and new features that have been added to a 4-5 year old OS. Unfortunately Win XP still looks like it did when it first came out. The only real improvement has been better security. Perhaps this is the future for Longhorn? A major release followed by regular updates that are more than just patches and bug fixes?
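The contrast drawn in the Monad comments above - classic text-based Unix pipes versus pipelines that pass structured objects - can be sketched in a few lines. This is a toy illustration only, not Monad's actual API; the `Process` record and the filtering steps are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    pid: int
    memory_mb: float

# A text pipeline works on flat strings, so each downstream stage
# must re-split and re-parse the columns it cares about.
text_output = "notepad 4242 12.5\nsqlservr 310 890.0"
big_text = [line.split()[0] for line in text_output.splitlines()
            if float(line.split()[2]) > 100]

# An object pipeline passes typed records instead, so each stage can
# use fields directly -- the idea behind Monad's object model.
processes = [Process("notepad", 4242, 12.5), Process("sqlservr", 310, 890.0)]
big_objs = [p.name for p in processes if p.memory_mb > 100]

print(big_text)  # ['sqlservr']
print(big_objs)  # ['sqlservr']
```

Both produce the same answer here, but only the object version survives a change in column order or formatting, which is the point the commenters are making.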
Not only will he stop to chat with a LarryO --- he'll take a moment to respond to an every-now-and-then random email from a guy who worked at Microsoft for brief periods way back when.

SteveB: the thought of your kids possessing the same energy is smile-inductive. -- stan

We have to get Longhorn out first. Then we'll talk about what we'll do in the future.

This is definitely the best yet. Next stop, Bill!!

I wrote my first ever email to Bill back in about 93/94: it was about the future.

And Bob, you DO have that much energy etc.

For this video I only used Windows Media Encoder (it's a free tool available at ). I didn't do any editing. Just started the camcorder and stopped it at the end. On other videos, if I need to do any editing, I use Windows Movie Maker. The camcorder I used is a Panasonic PV-150. It costs about $750 at Best Buy (probably less now).

George, MSDN Webcasts

I would like to know what his definition of "innovative" is. Thanks, Scob.

That sounds very good! I'm waiting...

Examples:

* Being a VB3 developer, MS promised me you'd get rid of the ridiculous menu control (didn't happen until VB.NET)
* You promised that VB would be real OO in VB5 (didn't happen until VB.NET)
* You promised that ActiveX would be the solution to all the problems with components (we are getting rid of DLL Hell with LH)
* You promised that .NET would have an application server (never happened, you have to use the ooold COM+)

I don't really believe you.

Blackcomb is the future. LH was always a stepping stone to it when it was announced. LH just kept getting more and more of the Blackcomb features. There will be some interesting announcements at PDC. Whistler combined the consumer and business code bases. LH starts the transition based on that code base (it almost had v1 of most of the tech; WinFS was delayed, for good reason - again, PDC). Blackcomb will be the first version that fully implements the vision
laid out in a platform called Cairo in 1995, although it has greatly expanded since that time. Not to belittle LH - it is like the middle film in a trilogy :) It has a lot to keep people interested enough to make it to the third movie. And it will be compelling. Even more so when the WinFS API is added to it. But it is still a lead-in to a more complete platform, code name Blackcomb.

douglas

Very nice! Now that's service.

XP is a much, much better OS than 95. Then there's the Tablet PC. Windows Media Center. If you haven't had your hands on those you aren't really qualified to talk about what major OS innovations have been done in the past decade.

My point is that there is plenty of innovation in open source. This isn't to say that there isn't any at Microsoft. I agree that the Tablet is incredibly innovative, and so is the likes of OneNote. I don't want to start flame wars because I make use of the benefits of both open and closed source.

Another thing that I have to comment on (like others have) is how passionate about his work he is. It's great to see an executive who isn't just another boring man in a suit. I honestly think that he would still be as passionate about the company if he wasn't earning as much money.

Since English isn't my native language, I needed to read the transcript to see that what he said - 'if I wanted to allow blogging to happen or not' - was not what I was hearing. Now I understand what happened =)

A lot of copying, and improving. Improving security and stability of things that MS and other non-OS companies do.

Developers... Developers... Developers... Developers... Developers... Developers... Developers... Developers... Developers... Developers... Developers... Developers... Developers...

I'm surprised you think that. Microsoft wasn't the first company to employ a WIMP interface in its OS, so surely MS is the one doing the copying. Linux was in no usable desktop state at that time; hell, it's still got a long way to go before it becomes good for the masses.
It has been doing the rounds on the India MVP mailing list.

Steve Ballmer talks like any other CEO. And that part about innovations wasn't very informative. He mentioned only products but not the innovations... He didn't persuade me that Microsoft is a very innovative company and that IBM or Oracle haven't had any new things to show... So, when are we going to see the video with Bill Gates?

Orbit86, the NT kernel has existed since 1989. Nice try spreading FUD.

95, 98, ME
NT kernel: NT 3.51, NT 4, 2000 (NT 5), XP/2003 (NT 5.1)

Apple doesn't want the headache of doing drivers for the thousands of different configurations of PC hardware out there. They will only support their own machines, according to my brother-in-law who works on the Mac team. That said, I imagine that Windows will run great on Apple machines. Apple could, if they play their cards right, become the biggest Windows OEM overnight.

Robert, when is the BillG interview coming?

Great interview! The best part about the Channel 9 executive interviews is their unscripted nature. Otherwise we are seeing the professional press or well-rehearsed demo keynotes. So this really gives us a more personal view of the folks at the top of Microsoft. I'd like to see more focus on what drives Microsoft executives. What about their business innovations? We know about product plans thanks to corporate transparency, but a lot less about those people and how they like to run the firm. For example, the press are talking about the excruciating Microsoft interview process of the past and how that might change. What would Steve like to do with that situation? Steve touched on a vision of Microsoft success for the next ten years. What does he think is important for that success, business-wise? How does the partner program expansion mesh with the traditional developer support? And so on. Channel 9 just keeps getting better and better.

P.S. Reading the comments has shown me the error of my ways. Someone else is wearing the tuxedo!
Damn, now I have to upload another avatar!

Sometimes, with the grin Steve has on his face, he can be a bit creepy to look at... Is it just me or what?

He wants to do one, but getting on his schedule is harder than getting on Ballmer's. Hopefully soon.

10 minutes of Steve's time is worth a lot of money. What you should have asked him was "What will Longhorn be called?" because I bet Steve, Bill and a very small few others know exactly what it will be named. I'm sure they have a handful of names to choose from, from Windows 2007 to eXPedition, but I bet they're already set on one of them. How far between the beta or release candidates and the RTM of XP was the code name change of Whistler to XP? Will we first hear of it during PDC? Or early next year during the RC stages? The marketing team and graphic designers all need time to start designing logos and branding to be ready for this time next year. Let alone the guy who needs to run a name check through the code to change any Longhorn references to the new title. Does the M in Project M give something away? I'm sure Longhorn Server will be called Windows Server 2007. I'm sure Internet Explorer in Longhorn will be called Internet Explorer 7 Enhanced, but what about Windows? Can anyone guess? What words begin with M that would fit the bill?

If I had more time with Steve, that's actually what I'd like to go into. What makes a great leader, that kind of stuff as well. Good questions! I'll definitely come back to this thread next time we get some time with Steve.

I liked your questions on the fly - they encouraged Steve to share his enthusiasm for the future. I especially liked his answer to your question "Why do you allow blogging?" 10 minutes is a short amount of time to share his vision of the company with developers, employees, customers and the world at large. Yet his hopes for the future and how he wants to be remembered - both in business and as a father - come through loud and clear.
Gee, I was unaware that the Internet was even a "competition". Actually, I had always thought of it more nearly as a "cooperation", but obviously that attitude is alien to MS. I do not want MS to "win" the Internet. I have been using the Net since way before MS even existed, and not only does it not need MS (or anyone else, for that matter, including Sun) to "win" it and control it, but in fact it's better if no one does so. To a certain extent, Hailstorm was MS's attempt to "win" the Internet, by coming between the end users and the content providers and other commercial businesses (no doubt with the ambition that eventually the users would regard the MS middleman as their single point of contact, thereby enabling MS to assume the back-end role themselves and then take over whatever business area they intended to "win"), but that failed, and I'm glad it did. Palladium / DRM is another approach to MS's "winning" the traffic between the end users and the content providers / retail businesses, and I hope that initiative fails too, for the same reason. So Mr. Ballmer is mistaken in his belief that developers want, most of all, MS to "win". Perhaps developers **really** want MS to conform to agreed-upon standards, so then the developers' apps will succeed, regardless of which platform provider "wins"!

Ah, so Blackcomb will be "The Godfather Part III". Now, that will really be worth waiting for! I think instead I'll peek in and see what's playing on the next screen over, here at the multiplex. Pass the popcorn, Mr. Jobs!

I'd love to get him on my PodTech.net show. Keep this going... this is what users want... real interviews.

An interview with Bill G will only be useful if it can be somewhere between the length of the Steve Ballmer interview and the Allchin interview. I like the longer ones; Allchin was fascinating, almost as interesting as Bill Hill (whom I admire quite a lot).

Q: Hey, where are we?

A: Heyyyyy, how are you guys doing?

Q: Who are you?

A: Steve Ballmer.
Who are you?

Q: I'm Robert Scoble.

A: I know that. You're in my office at Microsoft Corporation.

Q: We're in your office? I heard there's an Xbox up here, is that true?

A: There's no Xbox. There are some suggestions, from my son, on Halo 2 and 3 on the board.

Q: I'm on the evangelism team here, why do we have an evangelism team?

A: Well, really helping developers understand what we got available for them to use, not just frankly in Windows, but in Office and our Server products, what they can take advantage of, exploit that. How they can save their time. Save their energy. How they can do in some senses better applications. Applications that integrate better with other people's applications. You know you've gotta communicate. Some of that has to happen on the Web site. Some of that has to happen in face-to-face meetings. People here have to be listening and creating code samples. All of that we call "evangelism."

Q: What is your call to action for developers?

A: Well, I think that one of the key things that people have to understand is that the PC is an important part of the overall ecosystem that people are using. It's intelligence at the edge of the Internet. And I think that people got very excited, appropriately, about Internet and HTML and browsers, but I think there's gonna be two places of innovation for developers over the next few years. One is gonna be taking advantage of intelligence at the edge of the network. The PC and other intelligent edge (mobile phones) devices and the other is going to be using Web services and XML to glue things together: applications, services, across the Internet, inside the datacenter, inside a developer's applications. Between those two phenomena I think there's plenty for developers to think about.

Q:.

Q: Now time for some tough questions.

A: OK. End of the softballs.

Q: On the blogs there are those who say that Microsoft doesn't innovate anymore. Can you give us some examples of where you see innovation?
A: I think we're doing a ton of innovative work. If you take a look at the stuff we're doing with interactive television I think it is super innovative. I think the Tablet stuff has been a little slower to take off than we had hoped, but I think it's super innovative stuff. If you take a look at what we're doing right now in the Office world with our next generation, with this generation, and the next generation of Office products. The stuff we're doing with Live Communicator and the real-time stuff I think it's very good and innovative work. Take a look at what we're doing in Visual Studio and Systems Center and the DSI, the management issue, I think it's very innovative work. If you take a look at MSN Messenger, I think it's very innovative work. We have other things coming to the fore. Longhorn and a bunch of important -- the Xbox 360 -- even before that. Very innovative stuff. I do think that people miss. There was a big gap between the last major release of Windows and this one and people kind of miss that. They want more frequent releases. We got that message. That's important. But, I look out at the world and I say 'who is doing the innovative stuff over the last few years?' Did IBM out-innovate us? I don't think so. I don't think they've done much interesting at all. What about Oracle? I don't think they've done much innovative at all. What about the open source guys? Ah, the business model is interesting but we haven't seen much in the way of technical innovation. People cite Google. Google has done some interesting stuff. We've done some interesting stuff. Peace. There are going to be some other companies that do some innovative work. And our job is to go out and do what we're gonna do which is to out-innovate them going forward. Which is what we will do, even in their prime domain of search.

Q: Coming up with tough questions for you is pretty hard. If you were in my position, what tough questions would you be asking the CEO of Microsoft?
A: I think developers have to ask the following basic questions. Number one, are you guys going to create opportunities, not just for me to write programs more simply, but are you gonna create opportunities where my program somehow works with another guy's programs and one plus one equals three. Windows has been that. You get to use multiple applications at the same time with some level of data interchange. I think the work we're doing in Longhorn. The work we're doing with the file system in Longhorn. The work we're doing with Avalon in Longhorn. All falls into that category and I feel very very good about that. I think you have to ask us are you gonna give us a way to have one plus one be three with other applications in terms of the way they communicate and work out on the Internet. We're working hard on strategies to facilitate that. With MSN and some of the other things we're doing. I think that's an important area. I think at the end of the day developers, though, more than almost anything wanna know "are you guys gonna win?" Because the technical stuff is interesting, very interesting, very important, but people want to bet on platforms that are going to win because platforms that win get more support. Get more management tools. They get more of everything. I think our track record has shown a consistent track record of winning and even in areas, like the Internet, where people said we weren't gonna win, we came back to win. So, I'd say to us "are you guys still committed to winning?" Of which the obvious answer is "absolutely we are" and that success of our platforms benefits our developers.

Q: To end it up, since a lot of Microsoft employees watch Channel 9 too, what would you say to all the Microsoft employees around the world who work at Microsoft?

A: I'd say the same thing to our developer customers as I would say to our employees. There has never been a better opportunity than today to make a real difference in the world.
The next 10 years are going to be as exciting in computing and information technology as the last 10. Don't be confused. Even though more than 10 years ago most people didn't have PCs. They didn't have cell phones. They didn't know what the Internet was. The next 10 are gonna be every bit as good. Whether it is Web services. Whether it is intelligence at the edge. Whether it is service-based applications. Whether it is next generation user interfaces. Whether it is mobility. The next 10 years are going to be very exciting. The key is to set big bold goals for yourself. Whether it is the skills that you develop individually. It is the projects that you work on with others and seek to go after. I think we have got to be able to be big and bold. I tell our people let's be big and bold in our ambitions. I tell developers who use our platforms and tools to be big. Be bold. Be ambitious and count on it - the future is so bright you gotta wear shades.

Q: I wore jeans right into the CEO's office.

Q: What do you want to be remembered as?

A: Now you're asking deep, profound questions. Mostly I want to be remembered by my three sons as a great dad and a great husband. But when you get past that I wanna be, you know, kind of remembered as a guy who helped build a company that did great innovative work that was able to continue to do great and innovative work long after its founding. The company is 30 years old. We started out as the beginner in this industry. We're about the only company that has not only survived, but thrived through the whole period and 30 years from now when I'm long gone I want this company to be still knocking out the innovative hits.

Q: Thanks very much Steve.

That was a very pleasant interview. I rather enjoyed it. I work for your company, and I'm very proud to do so. Also, if you could make me the boss of the entire Windows division, I'd appreciate it. Otherwise, it was very cool of you to spend a few minutes chatting on video.
Sincerely,

- Rory -

Exactly, that's why somewhere around 30 minutes is good. That's why I said somewhere between Ballmer and Allchin. The Allchin interview was VERY long if I remember correctly. (Maybe it wasn't, hmmm...)

I wish you had gotten to ask about MSN Search - I remember that developer speech, and I've heard him speak about getting more relevant search results than Google, but I don't understand why on earth Microsoft doesn't rally the developer community around search. .Search is already 100 times more powerful than Google Mini and it might even prove better than Google Applications, but the MSN Search team is attempting to brute-force their way past Google, when all we have to do is leave it to developers to do it, by allowing 3rd parties to create better ranking modules and add-ins, to make it super easy to search any application. Or the Internet, if you have enough servers or cheap PCs. I did it, why can't Microsoft? I'd give anything for 5 minutes with Ballmer to ask him why Microsoft isn't putting part of the search game in the hands of developers, and how he could do it with almost no effort. So if you get a second interview someday, in a couple of years, ask him if Microsoft will ever take a revolutionary initiative and give the .NET community some tools and products that would blow Google out of the water, and how come he hasn't hired me yet.

Paul from Nata1

Although the whole "intelligence at the edges" of the net thing got me thinking back to when AOL chatrooms were just starting out... lots of "edges" but not a lot of ...

There was that namespace that disappeared, System.Search - whatever happened to that?

Well, search is not something that you would ever want to be in the framework; it would have to be an outside API.
There should be a single API where you can spider a couple million pages if you have 10 gigs of free SQL space - and if you want to override text extraction, the developer can write their own module to plug in. Want to implement your own ranking algorithm? Plug in a module. Want to use your own stemmer? Plug a module in.

Personally, I'm going with Community Server by Telligent 100%, even from WinForms apps - because of the module architecture and data provider architecture. Like if I ship a product and include my help files, I can use the HTML generator, or I could use a much more powerful system that uses an embedded web server running on the CD-ROM - and I could ask full-text questions, and when I'm authoring this help, I could quickly alter the default ranking, sticky certain pages so they always come up first, etc., and I could keep my entire corpus in XML.

In order to beat Google at relevancy, it's going to take getting developers on their side, to develop solutions as they arise. Whoever heard of tagging a year ago? I didn't - or social search? Or the next cool blog feature of tomorrow that allows relevancy to take off. Yes, relevancy is about a bunch of algorithms. Google's got 'em, and Microsoft is throwing dozens of C++ programmers at the problem (and hiring some awesome gurus, some of my personal heroes like Selberg), but they're missing the point: it's not about how many C++ developers you throw at the problem, it's the tools that you give to the .NET community, letting them innovate as new 'things' arise. Google and MSN relevancy right now is all about SEO and changing some constants in the ranking algorithms rather than going after the real problem. The REAL problem is that we do have the capability to handle relevancy. You and I. But we're not given the chance. It shouldn't be Google vs MSN Search.
It should be Google vs MSN Search solutions - where MSN just takes care of the hefty work of basic spidering and indexing of the entire Internet and serving up search results, but where WE have control over our own ranking if we want, where we can use that huge corpus in our own apps in different ways than your typical web search - where dozens of smaller companies start popping up that would use the MSN Search service, pull a thousand pre-ranked hits (with basic ranking control), and then WE rank ourselves - so tomorrow when some new social search innovation comes out of nowhere, new companies can use MSN Search and implement their own ranking solution. I've already proven that it's possible, and Selberg works for Microsoft now - he created MetaCrawler, which is very similar in concept, back in '96. So there's a lot that Microsoft could be doing that it's not doing and that it probably won't be doing, but we can hope. Sorry for the rant!

Great interview Robert. Channel 9 videos are great. Keep up the good work. Ian

The Windows Media Center Show

Of course I got here from a Slashdot posting. In about 2 weeks I will be purchasing a Power Mac. My first ever Apple. I have Visual Studio on every computer I use and around 3 months ago I got my company to start using C# for an application I am the lead developer on. So I'm not anti-Microsoft. However, I think Microsoft is really missing the "average user" need on the desktop. If Apple can find a strategy that allows them to get into more homes (the Intel switch could help) then I say Microsoft is in real trouble in the home-user market. Backwards compatibility is great, but in the home it's only useful for 3-5 years (IMHO). So, instead we get the same bloated OSes that power business computers. The home OS needs to be A LOT more nimble and forward-thinking. Unfortunately I'm not sure the monolith that is M$ can actually recognize this need and effectively deal with it. I think a new strategy is needed in the home.
What about a site license for the home as well? A big factor for me was when I looked at upgrades from one version of an Apple OS for 3-5 computers (which is what I normally have at home) compared to an upgrade for those same 3-5 computers from one version of Windows to the next; the cost differs by something like a factor of 3-4. Of course there are a lot of factors there. Apple has a more frequent release schedule, but that means I can deal with the cost yearly instead of every three years. Helps the financial impact seem smaller. Wow, this turned into quite a ramble. I guess that's what happens at 7:28 A.M. when hurricane Dennis is threatening to take you out.

Hi guys, how are you? My name is Vincenzo and I am Italian. I write from Sheffield (UK) where I study marketing. I was watching the Steve Ballmer interview. He positively impressed me. He didn't look like a CEO; it was like I had known him for a long time. Actually, at first Steve Ballmer looked like my uncle Nicola in Italy and I said "UNCLE NICOLA IS MICROSOFT CEO AND I DIDN'T KNOW ANYTHING ABOUT THAT?! He can probably help me with my dissertation!" But then I realised that it wasn't my uncle. I will have to cope with this sorrow for the rest of my life, eh eh eh :) Joking aside, I enjoyed the interview. As a marketing student I have to appreciate Ballmer's natural communication skills in promoting his company. Also, I think it is great when a CEO finds the time to listen to his employees, especially in a big company like Microsoft. I would like to add something to what Ballmer said about innovation. I think that innovation is not only interactive television, Messenger, Live Communicator and the real-time stuff. Don't get me wrong guys, these things are important. But they cannot be the only form of innovation we value. Let me give an example. During the master's in marketing that I am currently attending I have participated in a project with IBM UK.
We have studied the technology that IBM has developed to help people with disabilities (blind, hearing impaired, with low vision and so on) use IT applications that we can all use today. I think that this technology doesn't have a great impact on IBM's revenue but is still "technology that matters"…to society. It is still great innovation. Ballmer said about IBM, "I don't think they've done much interesting at all". Mr Ballmer, let me disagree with you. I think that technology that helps people with disabilities is very interesting. I don't work for IBM and, believe it or not, I have no interest in promoting this company; yes, I am studying this company for my dissertation, but I consider myself a free thinker like you guys. By the way, I like your blog. Keep it up guys, and I hope to participate more in the discussions on Channel 9, eh eh eh eh eh.

Vincenzo Graziano

What a cool guy, a ball of energy with a passion for his company. *sniff* I could cry, I love Microsoft! Go Steve!

They built that monopoly by building great software; it's a common side-effect of being very successful.

Slashdot has a lot of nerve linking to a video stream. It seems like the only purpose of Slashdot is to conduct distributed DoS attacks and then belittle the content.

>what tough questions would you be asking the CEO of Microsoft?

Apart from that question, nice interview. IMHO, tough questions are:
- How can I make sustainable business on your platform over the next 3 - 5 years?
- What are models for me to collaborate and compete with your 50,000+ workforce?

BTW: Maybe in the future, could you post some of those hilarious BillG and SteveB videos they show at the company meetings?

Remember last week when Ballmer was talking about how they were going to overtake Google in relevancy in 6 months?
I have to unfortunately work right next to Mac heads (small company) and listen to that hype on a daily basis; they get a kick out of watching Slashdot threads after anything Microsoft (sorry, M$) related gets posted! Well, the Slashdot posting really gets dull after a while; in fact, I think I could write a little chatbot that can produce the same output. The whole 20,000 replies to the Ballmer/Google speech (you know, the one where he was in Australia talking to partners) were the same old junk! Nothing new! And no one even once brought up the fact that MSN Search won't overtake Google relevancy because they don't take social search seriously. I should seriously release an anti-Microsoft chatbot and see if anyone could tell it was a bot. Slashdot posters are a bore.

I hear this quite a bit about Tablets, but rarely is it ever backed up with a concrete example. I'll grant that some of the hardware is pretty cool -- *love* this, for instance -- but Microsoft doesn't make the hardware, so they can't take any credit for that. As far as Windows XP Tablet PC Edition goes, there are precious few things that make it any different than standard XP, and I'm not sure they qualify as innovative in most cases anyway. Handwriting recognition is certainly nothing new. The Ink input system is OK, but Apple's Inkwell is similar, just to name one example (yes, I realize Apple doesn't make a Tablet). OneNote is a very good application, and MS should be justifiably proud of it, but that isn't even Tablet-specific. So what are all these great things going on in the Tablet world?

Borland? Haven't they been squeezed out of the market yet? What was the last thing they created? Turbo Pascal?!?

'Architect, Architect, Architect, Architect'----------L.S
You as boss can fire all the developers and keep only a few architects, put the architects to work on what the developers did, and your company still works.

Ok, so why would he do that? What is *your* vision for the Windows platform?
Shaun McDonnell

As for you...platform zealot? What's wrong with you people? If people like Microsoft and they like what they do, why do you always take the time to come on boards like this and cry about it? What's wrong with appreciating the strengths? I love Apple, I have Debian on my laptop, but I also like what MS is doing with Longhorn and the newer technologies. All the negativity and anti-MS BS is getting tiresome. You have a picture of Jobs in your avatar; he's never given anything away for free, so I'm missing your point about free software there. If free software does continue to grow and it destroys our software industry, what then? There'll be no programmers willing to take up a career that pays nothing. Your vision of freedom doesn't work well in the society we live in; people need paying.

That part made me laugh.

1. You manage your data
2. They manage their data
3. Data is easily communicated between the two of you

Three "applications of use" for two codebases. I'll ask around here and see if there are any other ideas on the equation.

Right on. That was just an example of the "1 + 1 = 3" concept, not exactly what Steve was talking about. I'm not quite sure what you're asking, though. Here's an article on J2EE and .NET Web Services integration, demonstrating what I meant in my example.

Question - Did you prime Steve for the "If you were in my position what tough questions would you ask Microsoft's CEO" question?

What's amazing is you can't tell where we switched from the prepared questions to the unprepared ones. He answers them all with the same speed and confidence. It's an amazing skill. It's also why I asked that question that way. I had been told by others that even if I asked a really probing, really embarrassing question he'd answer it his own way without even skipping a beat. So, I didn't try. The Slashdotters gave me some heck about that, though. So next time I'll have to ask some go-for-the-throat type of questions just to show I can ask those.
But, I bet the answers would turn out the same. It's sorta like when you ask a politician a debate question and they say "that's an interesting question, but the real issue is..." and they switch into what they wanted to talk about anyway. Steve gets asked the world's toughest questions by the world's best journalists, so there wasn't much of a chance that I was gonna get him to reveal anything new in my 10-minute interview with him.

Great job. I concur with the others, Steve should get a blog.

Microsoft unofficially answered this question months ago, but I forget where I read it; it was straight from the MS OS teams. MS has already said they WILL release Longhorn in late 2006 and they will remove (more) features if necessary in order to make this timeframe. Ballmer does not have some magic date just sitting on his desk that the public doesn't know about. If you have ever worked on (or watched) development of a large project, you will see how difficult it can be to forecast a magic date. MS is hoping for sooner rather than later, and if they can finish Longhorn in August 2006, then they would be wasting everybody's time if they told everybody Nov 30th. It's better to just answer the question in quarters and let the project decide when it will be done.

As far as your "platform that wins" question, I think you are looking at Windows Server from a very narrow view. The only reason 2003 Server is a "platform that co-exists" now more so than previous OSes is because MS finally has the time to add more bells and whistles to their NT5 kernel, because MS spent all their time developing NT4 and 2000 Server just to do everything it needed to do, and not enough time to add "what would be nice". I'd also like to point out that NT4 had support for Unix compatibility (something with printing) and 2000 added even more support with better integration with a Novell network (which 2000 has quickly replaced anyways).
I could spend a lot more time detailing my points, but I think you should read up on your 2000 and 2003 white papers, along with taking into consideration what it takes to write an entire OS from scratch, before you make blanket statements like these. MS has spent the past 8-9 years adding all the features every other server OS has had (for the most part), and now MS is spending their time adding bells & whistles and more innovative things (ex: Longhorn server). Nothing has changed in MS's goals for their server OSes; it's business as usual. If you have some specific points you'd like to discuss on how 2003 appears to have different goals or a different business model, please post them, as I'd welcome the discussion.

Did you even read this article? It mentions 2 features in Longhorn.

1. The bootup can now detect if a critical hardware change will cause your system to lock up. One that I've seen myself is a motherboard change: if you make a drastic change in motherboards, Windows will BSOD during bootup and you HAVE to re-install the entire OS to get Windows to work again. This is a life-saving feature for upgraders.

2. Windows has a built-in benchmark tool along with the ability to enable/disable features (ex: services, runtime applications) on the fly. So a game can run and it can disable anything you wouldn't want running anyways, such as the File Indexer. I mean, do you really want an optional service running in the background chewing up memory and network speed while you are playing a game? Obviously this implies all your settings are returned to normal when the game quits. Although by the sound of it it's up to the game to trigger this optimize event, I sure hope Windows does this by default when a full-screen DirectX or OpenGL game runs.

So what part of this article claims that Longhorn is stealing my PC or screwing it up?
It sounds to me like Windows is just getting smarter and flipping settings a smart gamer would (and should) flip anyways before running a game.

Let's see, I seem to recall MS launching an OS that was so user-friendly that it was the sole reason PCs became as popular as owning a TV. Ironically, that same OS also made the internet just as popular. MS offered the first server OS with a graphical UI that was also so admin-friendly it took over 50% of all servers in less than 10 years, and this number is still growing. And despite the myths, MS did many innovative things as far as web app development went. They were ahead of Sun Java on many fronts, and here's a timeline to back this up.

1) 1996 Microsoft releases ASP; in 1998 Sun releases JSP
2) 1997 Microsoft releases ADSI; in 1998 Sun releases JNDI
3) 1997 Microsoft releases MSMQ; in 1998 Sun releases JMS
4) 1997 Microsoft releases Microsoft Transaction Server; in 1998 Sun releases EJB
5) 1998 Microsoft releases MSXML; in 2001 Sun releases JAXP
6) 2000 Microsoft releases Queued Components; in 2001 Sun releases Message Driven Beans
7) 2000 Microsoft releases XML Web Services; in 2001 Sun releases Java Web Services Developer Pack

But if you want to focus on just the past couple of years, you are correct: MS did not invent internet searching, satellite maps or IM. I'm sorry you're so disappointed, but I'll take a 2003 SP1 server any day over the newest solutions Oracle/IBM/Sun are selling.

Your questions were really good; this was actually one of the best videos I've seen on Channel 9 in a long time. I love the "short and to the point" of it. I rarely have time to watch 30 or 45 minute videos where there's too much giggling and tangent discussion.

I wouldn't be so quick to feed the /. crew; they are very inflammatory in nature, and many, being slightly on the egotistical side, would never be satisfied even if you did ask the questions they want.
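The game-time tuning described a few posts up (stop the indexer and other background services, play, then put everything back) can be sketched as a small context manager. Everything here is an assumption for illustration: the service names (`cisvc` was the XP-era Indexing Service) and the use of `net stop`/`net start` are stand-ins, not how Longhorn's optimize event actually works.

```python
# Sketch of "flip settings before a game, restore them after".
# Service names and the net stop/start commands are illustrative
# assumptions; nothing here reflects Longhorn's actual mechanism.
import subprocess
from contextlib import contextmanager

def stop_cmd(service):
    """Build the command that stops a Windows service."""
    return ["net", "stop", service]

def start_cmd(service):
    """Build the command that restarts a Windows service."""
    return ["net", "start", service]

@contextmanager
def game_mode(services=("cisvc",), run=subprocess.call):
    """Stop the listed services, yield to the game, restart in reverse order."""
    stopped = []
    try:
        for svc in services:
            run(stop_cmd(svc))
            stopped.append(svc)
        yield
    finally:
        # Restore everything even if the game crashes mid-session.
        for svc in reversed(stopped):
            run(start_cmd(svc))

# Usage sketch (hypothetical game executable):
# with game_mode(("cisvc", "wuauserv")):
#     subprocess.call(["game.exe"])
```

The `finally` block is the point the posts above were making: the settings come back automatically when the game quits, however it quits.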
I think I explained this in a previous post just 2 posts up: asking for release dates is silly. If a date is available it will be told at a conference or leaked, but don't probe management for dates of projects, because they don't exist. Even if you try to put him on the spot, expect a "when it's ready" answer. I mean really, if Ballmer gave a hard date and MS released early or late, any /.'er could twist it to say MS doesn't know what they are doing.

PS: I'm an avid /. user.

No, he's CEO because he knows how to talk to people. He met Bill Gates at Harvard, big deal. If he shuffled through his sentences slowly, looked at his feet, mumbled, yet had incredible business sense, do you really think he'd have been given the media-intensive position of CEO? Not on your life. You want your most charismatic and passionate person at the head of your company. In fact, you'll find that CEOs of a lot of big corps are very suave and energetic people. How else are you going to promote your business, woo investors and generally make more money? Someone has to be in the media spotlight. It should damn well be the person who can handle it best. Stop looking at things so black-and-white (read: being a blind fanboy). Business is not like that. Oh, and on the argument about "open source eventually winning" - whilst OSS is completely uncoordinated, without a central body of leadership and with no public promotion, it's going nowhere. Period. And that's not FUD, it's FACT. [Edit: Slight edit 'cause I can't spell]

There is something in that, but please don't go MTV about this, keeping it short for shortness' sake. Don't script if it's not absolutely necessary and don't over-edit; leave in the spontaneity that makes C9. So if it takes an hour, it takes an hour. If you can do it in less than a minute, do that. But in the end, take in the comments and trust your gut feeling. It created and sustained C9 and it will continue doing that.
Last year I went to a Technet session where Steve was interviewed as part of the keynote. Because of him being there, the parking garage under the congress center was closed, everyone was (supposed to be) checked upon entry and you couldn't carry bags etc. with you. Closing the parking garage was a bit silly, in my opinion, and impractical. The few streets with free parking spaces in the neighborhood were packed with cars. What did they expect? Someone driving in a car full of explosives because of Steve? Unlikely. But that is just me.

If you want to inspire me, you don't go all excited; you take an hour or so, explaining, in detail, the logic and rationale of whatever the subject is. There would need to be a lot of 'hmmm', 'yes, I see' and 'how does that work, exactly' going on with me. The first time I saw Steve he was part of the keynote of a Technet day. He was nice and all, but it wasn't my thing. You'd expect that any moment he'd go like 'stand up and applaud for yourself, yes, yes, yes, go on, stand up'. The thoughtful parts in that video (and 90% of it was) were for me.

The "take home" message of the Steve Ballmer video is that the next 10 years, for developers, will be as good as or better than the last 10 years. It is left to the viewer to decide what "better" means. Perhaps it means more interesting work. Perhaps it means higher volume of work in general. Perhaps it means increased gross receipts for developers. What Mr. Ballmer thinks when he thinks "developer" remains a mystery. What aids to developers Microsoft plans to offer over the next 10 years is unspecified. I come away from his brief comments with the same confused "what was THAT all about?" that I used to experience at half-time when coach would give us a pep talk. Nevertheless, it is extremely good to have the big cheeses honor us with a few comments.
I don't think this qualifies as a contribution to the still-missing dialogue between Microsoft management and developers, but it is a start. (I know, management believes there is an ongoing, long-term, in-depth dialogue that makes the world safe for innovation. In my neck of the woods, the means to advise Microsoft management is non-existent.) Good show!

Market share is less important to Linux than it is to Microsoft. Linux can and has survived and grown with nearly 0% market share. You paint the picture of Microsoft with, say, 10% market share. They operate in slightly different markets but, more importantly, from different angles. They dance around each other and that is good. If they come a bit closer they might see the other isn't as bad as they thought it was. The FUD comes from both sides (and Steve has done and is doing his share of that and is often not much different from, say, your average /.er), but I find that if you get one-on-one and face-to-face with any of them, you'll find them quite reasonable. That's what I saw with Steve when I looked over the wall of 'Go go Microsoft, the competition is silly and doesn't amount to much' (I was going to use a one-word expletive but thought the better of it).
Call me a stickler but posts like these are borderline flaming imho, nothing productive comes out of them. Did you know it's a fireable offense for anyone working on Windows at Microsoft to look at GPL code? So, I doubt it. I agree, It's nice to see top-level management in-tune with what's really going on as well as actively interested in it. I met with a couple of PGs a few months ago out in Redmond and I was amazed at how willing everyone was to sit down with you and ask "what do you think of this", or "how can we improve..." It was very interesting. Does this clause apply to the SFU/SUA teams as well? Since GPL code such as Linux is regarded as being that "cancerous", I suppose MS people wanting to recycle *NIX code will have to dig into their 16-bit archives and pull out a listing for one of MS's first products, namely XENIX. MS still owns some of the rights to that, don't they, or is the source code now owned by SCO (present or previous incarnation thereof)? XENIX incorporated some BSD features into the ATT code base, so Linux fans might find a few worthwhile tidbits in it, albeit it ones that you could easier write from scratch than extract. (I'm assuming it would be of very limited use, unless MS has plans to release Longhorn for the 8086 or 80286, that is!) Everyone needs a hobby I think he means to say that Linux isn't cancerous, per se, the GPL is. A lot of OSS devs tend to just release their code under the GPL without thinking what it means for other developers. If you really want your program to be used in any application (closed source or not), then don't release it under GPL. For the record, I've had XP installed for a few years now, and I've never had a BSOD. Ever (although my memory could be faulty, but I doubt it). I've had those god-forsaken "Foo Program has encountered an error and needs to close" crap, but I would argue that's the program developer's fault, not Windows'. 
Then again, I could be wrong At the same time I don't leave my machine on 24/7 since I do most of my work on a laptop. I also run Gentoo on my desktop at home - go figure I'm looking forward to trying out OS X on an Intel. I have a lot of respect for Mac and the only thing stopping me buying one in the past was the price. If the price comes down, I'm willing to part with a grand or two! One last thing: why is it that when people think "innovation" it always has to be something totally new? Why can't you take an old idea/technology and improve upon it in ways people hadn't thought of - is that not innovative also? I would argue that it is.. You will be able to run Windows on a Mac. You won't be able to run Mac OS X on a non-Apple PC. Simple as that. 3 years I made the mistake of hiring about 6 guys like you to do the graphic arts and video editing at my shop. They're Mac, Linux guys, I'm open minded, they said they didn't mind .Net in their interviews. For the last 3 years my work has been ridiculed, crapped on, all the Nights, Days, and Weekends I spent trying to make a name for my company by working on the most involved open source search engine framework to ever exist, that wasn't good enough for your type. Getting made fun of almost every other day because it's written in .Net and just isn't going to ever be 'cool' because it just isn't this or that. Well that stuff used to get to me. And then there came a point in my life where I realized that you guys are just full of it. Not to listen to anything you guys say because its always a bunch of Hogwash. 
Now I'm laughing inside because some day soon I'll be in Redmond, if things go right, and I'll never have to listen to guys like you ever again, because I don't have to read your sensless and stupid posts, I don't have to read slashdot posts, and someday soon I'll be working with real people who are rational and whos work is based on Merit not on the stupidity that your type of people spew out that at one point in time I used to take seriously. Someday when I work for Microsoft I won't have to wear headphones all day to drown out the Linux/Mac drivel. Just reading your slashdotish posts, man that drivel, after all these years, now all sounds the same. Your not out to make a point, your just out to be cool, to put other people down if they're not like you. Its a redneck mentality and I've come to realize that it is in fact mental illness. There are linux and mac gurus who are merit based and then there are others who are all talk I have to second this, while he seems like a reasonably smart guy, he's spending as much time as all of us posters put together to make half arsed attempts to bash MS. While I may not be happy with everything MS has done in the past/present/future, you seem to obsess only on the (small amount) of negative ones. If they're such a crappy company why waste your time commenting on them. Go write a better IDE for Linux, or improve the interface or help get XML based display to the UI (ala Avalon or what MasOS10 offers). Any fool can stand by the side of the road watching everybody pass and poke fun of "those silly enough to keep driving". Personal attacks are weak, people resort to them because they can't control their temper or have simply ran out of sensible points to make against something. .............. But despite my agreement with you Orbit86, your posts are pretentious and have, so far, lacked any factual point behind them, the exception being the most recent post you made just before this one. 
Instead you keep displaying a "I can't wait till MS gets in" attitude that is often associated with the linux/mac zealots, making you guilty of the same crime you are accusing others of. If you want to blame somebody for a language not being .Net-a-fied then blame that language's team, community, and if your a coder in that language, yourself UPDATE: Python is available for Visual Studio although I can't tell if it compiles Python to .Net code or if this just lets you use VS as an IDE for normal Python code. Well this post has had a more than normal amount of users because it's actually been one of the best video's we've seen for a long time and people were hoping for some decent stimulating conversation on the topic. Obviously people can't help but read your posts. The worst part is that it appears your posts have thread capped this discussion. Perhaps you should add a disclaimer on the top of your post that says... DISCLAIMER: I'm about to poke fun of MS just for the sake of it, if you don't want to engage in flames and pointless abuse, please skip this message. You can't expect everybody to know what what they're getting into by reading some of the garbage you typed. Heck if you had that disclamer on your posts, I would spend more time defending you against the other posters than attacking you myself because at least they knew what they were getting into and you were man enough to admit your biaseness against MS. The fact you use their products don't mean you're not biased, I mean I am biased against ASP.Net 1.1 because I feel 1.1 wasn't ready for mainstream use until 2.0, but that's a whole different subject. I don't sit here and randomly interject something like "What innovation, ASP.Net 1.1 sux!!!" I shouldn't comment on this thread any further, I think we've all flushed it down the toilet pretty far by now. Back on topic now - I was thrilled to see he likes MSoft employees blogging, because it helps the customer relationship, however he said it. 
Ask me a year ago if I would ever work with Microsoft, I would have said "probably not" But all the blogs I read from MSofter like Schobel, for me moreso from the MSN Search team, that's huge - now I'm glued to every post I read from MSofties, and I've learned that MSoft is just a company, there's nothing magical, nothing scary, its a real company - with people like me. What sets it aside is the people who work there, for one, the structure for another. I used to work for IBM, and I have to say that I do think that Microsoft is an innovator - without any previous product knowledge, based on the structure of the company, at IBM we were squashed into tiny compartmentalized boxes, and if we stepped outside the box, well you just didn't do that - at Microsoft, you can step outside of the box, to the perfect degree (like there are still rules, still a box), but to the right degree where inovation happens. Heck - MSofters can blog even, that's a sign to me that there is the freedom to innovate. At IBM, you could have had the greatest idea in the world, but you had to keep it to yourself. And that worked great. But it isn't the way you innovate. Now can we get this thread back OT? The only way to do that is not to respond to sensless /. arguments. If a babies crying and nothing is wrong with it, every good parent knows that if you reinforce the bad behavior you just get more bad behavior. So for me, that's the last time I'll make a reference to the you know what. I can't remember if he talked about that in the video or if that was an article I read. Trying to track down more info on what he was saying about business AI there's a transcript of this one floating around there somewhere right? 
Dang that makes me want to get a PhD so I can get in there someday I remember at the 2001 PDC in LA, the Research guy (when are they gonna get an interview with him, that guy is rad) - showed off this thing that was way cooler than a search engine - he typed in "where is Osama Bin Laden" and the result he got back was "in a cave in afghanastan". Now that was pretty frickin cool, combine a search engine with AI like that, god knows what they're working on now, but it sucks that thing never made it into the product groups (yet) Anyone read B Gates books? he talks alot about MSoft research, about how much it costs, but how its worth it, or infact, necessary for MSoft to survive. I didn't know that about Xerox! Its good to listen to videos like this to know that innovation is one of the primary (the?) goals of the company. Where's that MSoft research site again? Amazing that some of that stuff is public, but it gives us an idea what to prepare for. Speaking of which - Balmer doesn't do this, but Gates does - he kinda does this thing you see in baseball, where you call out where the ball is going to go and it goes there, like in the book before business at the speed of thought Bill was saying "in ten years, computers are going to be the size of a notebook, and you'll be able to write on them like a notebook" and then they make it happen. I didn't take it too seriously at the time - "yeah right that's going to happen", but now... if only B Gates said "In 5 years, hailstorm is going to take off" ROFL - I was banking my retirement on that one Then on top of that an empolyee at Xerox tried to create the worlds first true digital document (as word processor files are specific to their not-for-free app and HTML wasn't very feature rich and too easy to alter) so he created the PDF. Although I don't know if he left Xerox to make a business out of PDF because it was a good idea or beacuse Xerox had no interest in non paper documents. Somehow I think it was a little of both. 
I give a lot of credit to MS for embracing R&D as some really neat things have been floating around their R&D that will be required for upcoming technology. Once in awhile Channel9 posts a video of some group in the R&D showing off new technology concepts, such as the big screen interface that appeared here a few months ago. "For example, "information overload" is becoming a serious drag on productivity" I was about to ask why my channel9 email notifications don't go through - but on second though Unfortunetly the Osama question isn't fact, it's speculation but conceptually, the example you mentioned has existed for at least 4 months now as far as I've known. it did ok - I was asking it who the president of the united states is, (answered correctly), what the president of the united states was (said george bush), and where ... was (said born in texas) From that link you gave, the BGates link, he was saying that in the future search will be able to tell what repository your looking to search - I think we're just seeing the beginning of free text queries I can't believe I didn't even know that answer engine existed! Man I'm out of it. Thanks Travis! When I get out of the loop, stop going to PDCs, etc. I start forgetting how cool the future is. Another neat feature most people don't know about is Google's book search trick (this is different from Google Print, a project I'm also really hyped about). When in Google type "books about keyword" and you will get a special list (top 3) of books on that item and the keyword does not have to be in the title or category of the book. If you search for "books about textbox" you will get .Net books (and some java). Sorry MSN, you're still falling short here. 
(Love the Google Print btw - it still seems like the same thing you get when you search for "books about textbox" though) When google started taking different corpuses other than the Web, and if there was enough relevance, showing the little cluster of that corpuses search results at the top of the web search results, that was pretty frickin cool. So google is doing Maps, Books, and that sponsored search results thing goes way back. Whoever thought that one, I'd love to shake that guys hand - like the guy who invented the paperclip - in my lifetime if I came up with a 'paperclip' ... Surely you're not talking about clippy, right? mVPstar ROFL - I just copied this from some website, searched for "who invented the paperclip" - whoever made clippy, that was Not innovation ! --- Temple Robert<The Genius of China 3000 years of Science, discovery, and invention> I'd say. Although you might want to widen it somewhat to include the rest of Asia, espicially when it comes to philosophy. Anyway, we (Europe/the Western world) did it to ourselves. Not wanting to start a social-theological debate, let me put it this way: the lid was closed on science, philosophy, spirituality and art, the achievements of the Greeks denounced and it took some time until renaissance freed Europe from the chains the clergy put it. They showed the Greeks were way ahead of us, even after 1500 years because of stagnation in the 4 fields I mentioned. Had that not happened, flying to Alpha Centauri would be considered a daily commuters routine. yes,absolutely you(Europe/the Western world) did it to yourselves,but we(China/the East world) affect what you did.(search "who invented the paper and printing").the communication is a way to affect the Western world,the Silk Road. Today,you(Europe/the Western world) affect us(China/the East world) in many fields include you meationed. 
The idea of having "something" there to aid you is a good idea, but there's something annoying about a bouncy little paperclip that aggressively suggests what you want to do. You could say that Office 2003's "Task Pane" is partially similar as instead of animating and suggesting, it merely has lists of things you probably want to do, and they are also categorized into many pages within the pane. I might be stretching this too much, but I just felt like it was a point worth exploring. Enough complaining from me though. Great vid! Ballmer actually came out with something really intelligent and an insight into what I really agree with and am actually basing my business on. Namely: Integration hooking up applications with other 3rd party applications. How can I connect this to this... How can I interface with that... That really made sense to me. Sure Windows has granted a level of interoperability which puts technologies like X to shame but it's not nearly what I want. We work in web design and are dealing with a website with over 600 articles, what's the one thing that we're working on that we feel should be a lot easier? Integration. We're dealing with documents that are still in HTML 4.2 Transitional that contain content and layout with no seperation, Databases which use different packages, a huge Media database and an Outlook powered calendar system. We're charged with the task of putting these together and forming a web solution which will be easy to edit. If Microsoft are seriously looking at making Windows applications more interoperable with each other, it's the first step to making our job easier, that way the left hand can know more about what the right hand's doing. Ballmer's clearly picked up on this, Microsoft has too, Longhorn is going to look like a serious investment for our clients when the time comes but we're getting excited about it. 
You might have understood me as saying that Europe made a lot of original inventions (although I might have misunderstood, etc. etc.). Anyway, what I meant is that Europeans could have been original inventors but we kept ourselves backward. So what happened is that whatever was invented, the Asians and especially the Chinese were first 9 times out of 10. The printing press is a good example. In school we learned a Dutchman invented it, then I found out Gutenberg beat us to it, and much later found that the Chinese were first. Only after the renaissance started to undo the shackles of religion was Europe starting to get somewhere. Europe from about 500 to 1500 AD was not a great place to be for anyone with original thoughts. An all-time classic! Who can resist the temptations of the Cardfile and Reversi? Still want to know if he really has an X-Box hidden somewhere in that office. <grin>. That was funny There have almost always been people amongst developers that have considered Microsoft to be secretive about their development. (Like what IBM was to developers in the pre-Microsoft era.) I think that Steve has realized that openness is the best proven practice to stick with. To belong to a group fulfills a very basic human need. That is vital for any successful longtime biz. All major companies have enthusiasm and innovation drive in the beginning, and as the company gets bigger the communications get more and more restricted. Historically that will always lead to a downturn and in the end… The End. When I look at a successful company I almost always see a CEO that recognizes the end customer, and understands that communication, both internally and externally, should be as open as possible. When the leaders of a company stop communicating because of focusing on the top-tier customers, the company won't hear the day-to-day facts presented by the lower-tier customers. And when that happens the days are numbered for that company.
It seems that Steve has understood that and acknowledges both his own developers and the external community developers. The way Microsoft does this is historically unique. Historically it was impossible to communicate with lower-tier customers as the company grew. Now Microsoft uses blogging, web seminars, video presentations, FAQs, third-party vendors and Channel9 etc. I really believe that Microsoft can continue to be THE major Software Company for a very long time to come. I'm usually not that pro-Microsoft but I can't help acknowledging their passion and striving in the right directions. Go Steve!! Regards TheSWELinker Wowza, I've never seen or heard Steve Ballmer in real life, and was taken aback a bit by his energy. Talk about contrast to Bill Gates eh! cheers tom I think with the advances in virtual machine technology, Microsoft can/should/might quite likely cut more and more legacy support in the OS and move forward, and indeed I believe they should. I also quite believe they consider this very heavily. Already you see how Vista Enterprise is bundled with Virtual PC Express (although it's a bit funny that Vista Enterprise has VPCe as a selling point when you can easily use the Virtual PC full version for free and run it in all Vista versions...) as a way to be able to run legacy business applications. I think MS will be able to drop legacy support in the OS sooner than we think, at least I hope so.
http://channel9.msdn.com/blogs/scobleizer/steve-ballmer-quick-chat-with-microsofts-ceo
Hello all, I'm trying to script the -Check command from a Python script. I want the output to be written to a temporary text file. I got it to work, then it suddenly stopped working. Here is the code (note that tempfile needs to be imported):

import tempfile
import rhinoscriptsyntax as rs

filename = tempfile.mktemp(suffix=".txt")
cmd = "-Check selid " + str(mesh) + " File " + filename + " _Enter"
commandOK = rs.Command(cmd, True)

The output is: "Unknown command: File" Then the command prompt asks for "Text destination (HistoryWIndow File Clipboard Dialog)". Is there something wrong in my cmd? If not, can I do something to help Rhino understand it? Thanks for your help!
https://discourse.mcneel.com/t/check-command-to-a-file/71918
Android Interface Definition Language (AIDL) and Remote Service

We also mentioned that, for security reasons, each Android application runs in its own process, and cannot normally access the data of another application running in a different process. So mechanisms to cross process boundaries have to go through well-defined channels. To allow one application to communicate with another running in a different process, Android provides an implementation of IPC through the Android Interface Definition Language (AIDL). The actual AIDL mechanism should be a familiar one to Java developers: you provide an interface, and a tool (the aidl tool) will generate the necessary plumbing in order for other applications (clients) to communicate with your application (service) across process boundaries. Time for a concrete example. Say we want to implement a phone book service so that other Android applications can do a lookup by name and get a list of corresponding phone numbers. We start by creating a simple interface to express that capability, by writing an IPhoneBookService.aidl file in our source directory:

package com.ts.phonebook.service;

/* PhoneBook remote service: provides a list of matching
   phone numbers given a person's name. */
interface IPhoneBookService {
    List<String> lookUpPhone(String name);
}

As we save this .aidl file in our Android project in Eclipse, Android's aidl tool automatically generates a corresponding IPhoneBookService.java stub file in the "gen" directory of our project. The next step is for us to actually implement our service, and we'll create a concrete PhoneBookService class.
Here's the skeleton code:

package com.ts.phonebook.service;

import java.util.ArrayList;
import java.util.List;

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.os.RemoteException;

public class PhoneBookService extends Service {

    @Override
    public IBinder onBind(Intent intent) {
        // IPhoneBookService.Stub is the helper class generated by Android
        return new IPhoneBookService.Stub() {
            public List<String> lookUpPhone(String name) throws RemoteException {
                // our service implementation
                List<String> phoneList = new ArrayList<String>();
                // populate the list above by looking up the name in our phone book database
                // ... code here
                // return the list of phone numbers corresponding to the name parameter
                return phoneList;
            }
        };
    }
}

What we did was:

- Make our service extend Service and implement its onBind method, which will be called once clients send requests (i.e. bind to our service). We are using a Bound Service, which means that our phone book service will not run indefinitely in the background, but come alive and remain active only for the time needed to service client requests.
- The onBind method returns an IBinder, which represents our service's remote implementation. We saw earlier that Android generated a gen/IPhoneBookService.java stub file, so what we are doing is using that generated helper class to return a concrete stub to the client that contains our service method implementation.

Our client applications can now call our lookUpPhone method to get a list of 0, 1, or more phone numbers in our database matching the name they supplied. We have implemented our service, but we still need to register it in the AndroidManifest.xml file of our application:

<application ...>
    ...
    <service android:name=".PhoneBookService">
        <intent-filter>
            <action android:name="com.ts.phonebook.service.IPhoneBookService" />
        </intent-filter>
    </service>
    ...
</application>

Here we are providing the intent-filter to which our service will respond. Our phone book service is now ready.
Some questions the reader may have at this point:

- The previous article mentioned Parcels for IPC. Where exactly does that fit in here? Where are those Parcelable objects?
- Do I have to use AIDL for IPC, or are there any other alternative(s)?

Here, our very basic remote service simply returns a List of String objects. String types are supported out-of-the-box by AIDL, and a List of supported elements is also supported. But suppose we want to implement a more complex remote service, one that not only returns phone numbers, but a whole set of user data, like age, occupation, sex, salary etc... If we want to return a custom User object with all those attributes to our callers, our User class will need to be Parcelable. So we would write a User class implementing the Parcelable interface the same way we did in the previous article:

package com.ts.userdata.service;

import android.os.Parcel;
import android.os.Parcelable;

public class User implements Parcelable {
    // code here @see previous article
}

And also write a User.aidl file containing just two lines:

package com.ts.userdata.service;
parcelable User;

Why a second .aidl file? The AIDL contract requires that we create a separate .aidl file for each class declared as Parcelable that we wish to use in our service. That's it. We declared User as Parcelable, and implemented it as such so it can be marshaled/unmarshaled across process boundaries. This is what our service AIDL interface in IUserDataService.aidl will look like:

package com.ts.userdata.service;

import com.ts.userdata.service.User; // needed here

/* User data remote service: provides all available info
   about an individual given their id number */
interface IUserDataService {
    User lookUpUser(long userid);
}

The actual implementation will again use the generated Stub() but return a User type instead of a List of String types. Note the import statement.
In the IUserDataService.aidl file, we still need to import our User.aidl definition file even when it's in the same package. As for the second question, the answer is yes, we can do IPC without AIDL, using a Messenger. In that case, the client will not call methods on the service, but instead send messages to it. We will explore the Messenger alternative in an upcoming article. We also haven't talked about the client side yet, i.e. create a client in a different application (in a separate Android project) that will communicate with our remote service. From Tony's Blog
https://dzone.com/articles/android-interface-definition
std::list::sort
From cppreference.com

Sorts the elements in ascending order. The order of equal elements is preserved. The first version uses operator< to compare the elements, the second version uses the given comparison function comp.

If an exception is thrown, the order of elements in *this is unspecified.

Parameters

comp - comparison function object which returns true if the first argument is less than (i.e. is ordered before) the second

Return value

(none)

Complexity

Approximately N log N comparisons, where N is the number of elements in the list.

Example

#include <iostream>
#include <functional>
#include <list>

std::ostream& operator<<(std::ostream& ostr, const std::list<int>& list)
{
    for (auto &i : list) {
        ostr << " " << i;
    }
    return ostr;
}

int main()
{
    std::list<int> list = { 8,7,5,9,0,1,3,2,6,4 };

    std::cout << "before:     " << list << "\n";
    list.sort();
    std::cout << "ascending:  " << list << "\n";
    list.sort(std::greater<int>());
    std::cout << "descending: " << list << "\n";
}

Output:

before: 8 7 5 9 0 1 3 2 6 4
ascending: 0 1 2 3 4 5 6 7 8 9
descending: 9 8 7 6 5 4 3 2 1 0
https://en.cppreference.com/w/cpp/container/list/sort
A pass through system with input u and output y = u. More... #include <drake/systems/primitives/pass_through.h> A pass through system with input u and output y = u. This is mathematically equivalent to a Gain system with its gain equal to one. However, this system incurs no computational cost. The input to this system directly feeds through to its output. This system is used, for instance, in PidController which is a Diagram composed of simple framework primitives. In this case a PassThrough is used to connect the exported input of the Diagram to the inputs of the Gain systems for the proportional and integral constants of the controller. This is necessary to provide an output port to which the internal Gain subsystems connect. In this case the PassThrough is effectively creating an output port that feeds through the input to the Diagram and that can now be connected to the inputs of the inner subsystems to the Diagram. A detailed discussion of the PidController can be found at. This class uses Drake's -inl.h pattern. When seeing linker errors from this class, please refer to. Instantiated templates for the following kinds of T's are provided: They are already available to link against in the containing library. Constructs a pass through system ( y = u). Constructs a pass through system ( y = u). Scalar-type converting copy constructor. See System Scalar Conversion. Sets the output port to equal the input port. Returns true if there is direct-feedthrough from the given input_port to the given output_port, false if there is not direct-feedthrough, or nullopt if unknown (in which case SystemSymbolicInspector will attempt to measure the feedthrough using symbolic form). By default, LeafSystem assumes there is direct feedthrough of values from every input to every output. This is a conservative assumption that ensures we detect and can prevent the formation of algebraic loops (implicit computations) in system Diagrams.
Systems which do not have direct feedthrough may override that assumption in two ways: Reimplemented from LeafSystem< T >. Returns the sole input port. Returns the sole output port.
http://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_pass_through.html
I was searching for some information on S3 and CloudFront and found this little gem mentioned in a comment on a discussion forum somewhere. CloudBerry Explorer makes managing files in Amazon S3 storage EASY. By providing a user interface to Amazon S3 accounts, files, and buckets, CloudBerry lets you manage your files in the cloud just as you would on your own local computer. Visit the CloudBerry S3 Explorer homepage I've started using this tool in favor of the S3Fox plugin for Firefox. It's simply brilliant! And best of all, it's FREE! Djangonauts of Dallas rejoice! We are joining forces to have our very first Django Sprint event for the upcoming 1.1 release. For more info: When: Saturday, April 18, 2009 at 9:00am to Sunday, April 19, 2009 at 5:00pm Where: Cohabitat 2517 Thomas Ave. Dallas, TX Facebook Event Page: The jQuery minitabs plugin allows the quick and easy creation of tabbed widgets anywhere on a page. The plugin includes a detailed example and sample CSS to get you started. The plugin supports: - specifying the first active tab - specifying transition speed for tab changes - a callback hook for when a tab is changed This plugin is based on simpleTabs () which was originally developed by Jonathan Coulet. This version is simplified a bit and requires simpler HTML. This is my first foray into the world of writing jQuery plugins so feedback is greatly appreciated. Download from: A while back, I had posted a template tag on djangosnippets which generates UUIDs on the fly. I figured that I'd share the same snippet here and explain why I did it. My rationale for writing this: I needed a quick way to generate random IDs to assign to dynamically generated HTML elements and then use jQuery to wire them all up.

from django.template import Library, Node, TemplateSyntaxError
from uuid import uuid4

register = Library()

class UUIDNode(Node):
    """
    Implements the logic of this tag.
    """
    def __init__(self, var_name):
        self.var_name = var_name

    def render(self, context):
        context[self.var_name] = str(uuid4())
        return ''

def do_uuid(parser, token):
    """
    The purpose of this template tag is to generate a random UUID
    and store it in a named context variable.

    Sample usage:

        {% uuid var_name %}

    var_name will contain the generated UUID.
    """
    try:
        tag_name, var_name = token.split_contents()
    except ValueError:
        raise TemplateSyntaxError, "%r tag requires exactly one argument" % token.contents.split()[0]
    return UUIDNode(var_name)

do_uuid = register.tag('uuid', do_uuid)

Okay. So I was working on some view code in a Django project and I noticed something weird. The view started rendering as if the user was no longer logged in. Odd thing was that it was only doing that for that one view. I banged my head for a while and then I realized that I had populated a variable called 'user' in the RequestContext for render_to_response. E.g.,

def my_view(request):
    ...
    user_obj = User.objects.get(id = user_id)
    data = { 'user': user_obj }
    ...
    return render_to_response(template, data,
                              context_instance = RequestContext(request))

This caused the default user object that gets set in the request context by django.core.context_processors.auth to be overridden. So stuff like user.is_authenticated stopped working in the templates. This was my first major Django gotcha.
What you need to get started - Create an Amazon AWS account - Get your AWS access keys - Install the Elasticfox plugin for Firefox - Install PuTTY - Download the Elasticfox tutorial That's pretty much it. I followed the instructions from the tutorial and everything worked as expected. However, there was one caveat. I had to install the full PuTTY distribution on my Windows system, otherwise stuff like the key pair generation and SSH access would not work at all, and Elasticfox would not give you any indication as to why not. This little factoid was not mentioned in the tutorial. I was able to get a Windows server instance up and running in about 15 minutes or so. Thanks Amazon! This is my first blog post, ever. I am kind of a late adopter to the whole blogging thing. So for my first post, I will be testing out the neat syntax highlighting plugin for WordPress. In classic computer science fashion, I will write the quintessential 'Hello World' example in some popular programming languages to test this baby out.

Python

    print "Hello World"

PHP

    echo "Hello World!";

C#

    public class HelloWorld
    {
        public static void Main()
        {
            System.Console.WriteLine("Hello, World!");
        }
    }
http://www.nomadjourney.com/page/2/
Xerces-C is a validating XML parser written in a portable subset of C++. It provides high performance, modularity, and scalability. Source code, samples and API documentation are provided with the parser. For portability, care has been taken to make minimal use of templates, no RTTI, no C++ namespaces and minimal use of #ifdefs. WWW: NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered. No installation instructions: this port has been deleted. The package name of this deleted port was: xerces_c No options to configure Number of commits found: 21 No more supported upstream, consider using xerces-c2 or xerces-c3 Remove more tags from pkg-descr files of the form: - Name em@i.l or variations thereof. While I'm here also fix some whitespace and other formatting errors, including moving WWW: to the last line in the file. -remove MD5 - s,INSTALLS_SHLIB,USE_LDCONFIG,g - these include security/ sysutils/ textproc/ maintained by ports@ PR: ports/101916 Submitted by: Gea-Suan Lin <gslin_AT_gslin dot org> - Add SHA256 Nuke default LDCONFIG_DIRS. It should have been spelt %%PREFIX%% anyway. - Unbreak on amd64 - While I'm here: pet runConfigure script PR: ports/77023 Submitted by: Johan van Selst <johans(at)stack.nl> BROKEN on amd64: Does not build. Unbreak by adding respect for PTHREAD_LIBS and PTHREAD_CFLAGS. PR: 72916 Submitted by: Simon Barner <barner@in.tum.de> FORBIDDEN on 5.x: does not respect PTHREAD_{CFLAGS,LIBS} Bump PORTREVISION on all ports that depend on gettext to aid with upgrading. (Part 1) SIZEify. Fix MASTER_SITES and MASTER_SITE_SUBDIR. PR: 58943 Submitted by: Palle Girgensohn <girgen@pingpong.net> De-pkg-comment. Use MASTER_SITE_APACHE*. PR: 47984 Submitted by: Kimura Fuyuki <fuyuki@hadaly.org> Fixed mastersite, directory structure has changed PR: ports/46479 Submitted by: "Bjoern A.Zeeb" <bzeeb+freebsdports@zabbadoz.net> Fix PORTCOMMENTs that were killing INDEX builds.
105 pointy hats to: me Approved by: pat make portlint happy update to 1.7.0 and fix some bugs both in the previous port and in the xerces codebase itself. although this commit is a combination of all three PRs, i didn't take every PR verbatim (and left out some smaller parts of the first two PRs). any mistakes in the merging of these PRs is mine and if the original submitters would like to generate diffs after this commit, i'll look at those as well. Gregory Bond gets credit for spotting some particularly nasty problems in the old port. Remember folks, just because it compiles doesn't mean it works.. PR: ports/36248, ports/37016, ports/37619 Submitted by: Hidekazu Kuroki <hidekazu@pc88.gr.jp>, Daniel Lang <dl@leo.org>, Gregory Bond <gnb@itga.com.au> Add xerces-c, an Apache XML Processor PR: 33313 Submitted by: Alex Kiesel <kiesel@schlund.de>
http://www.freshports.org/textproc/xerces-c/
I'm about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. 2017年02月19日50分11秒 Personally, I've played with several of the built-in options on an XML-heavy project and have settled on pulldom as the best choice for less complex documents. Especially for small simple stuff, I like the event-driven theory of parsing rather than setting up a whole slew of callbacks for a relatively simple structure. Here is a good quick discussion of how to use the API. What I like: you can handle the parsing in a for loop rather than using callbacks. You also delay full parsing (the "pull" part) and only get additional detail when you call expandNode(). This satisfies my general requirement for "responsible" efficiency without sacrificing ease of use and simplicity. 2017年02月19日50分11秒 ElementTree has a nice pythony API. I think it's even shipped as part of python 2.5 It's in pure python and as I say, pretty nice, but if you wind up needing more performance, then lxml exposes the same API and uses libxml2 under the hood. You can theoretically just swap it in when you discover you need it. 2017年02月19日50分11秒 Dive Into Python has a chapter. Can't vouch for how good it would be though. 2017年02月19日50分11秒 There are 3 major ways of dealing with XML, in general: dom, sax, and xpath. The dom model is good if you can afford to load your entire xml file into memory at once, and you don't mind dealing with data structures, and you are looking at much/most of the model. The sax model is great if you only care about a few tags, and/or you are dealing with big files and can process them sequentially. 
The xpath model is a little bit of each -- you can pick and choose paths to the data elements you need, but it requires more libraries to use. If you want straightforward and packaged with Python, minidom is your answer, but it's pretty lame, and the documentation is "here's docs on dom, go figure it out". It's really annoying. Personally, I like cElementTree, which is a faster (c-based) implementation of ElementTree, which is a dom-like model. I've used sax systems, and in many ways they're more "pythonic" in their feel, but I usually end up creating state-based systems to handle them, and that way lies madness (and bugs). I say go with minidom if you like research, or ElementTree if you want good code that works well. 2017年02月19日50分11秒 I've used ElementTree for several projects and recommend it. It's pythonic, comes 'in the box' with Python 2.5, including the c version cElementTree (xml.etree.cElementTree) which is 20 times faster than the pure Python version, and is very easy to use. lxml has some performance advantages, but they are uneven and you should check the benchmarks first for your use case. As I understand it, ElementTree code can easily be ported to lxml. 2017年02月19日50分11秒 Since you mentioned that you'll be building "fairly simple" XML, the minidom module (part of the Python Standard Library) will likely suit your needs. If you have any experience with the DOM representation of XML, you should find the API quite straightforward. 2017年02月19日50分11秒
Then, since my data structure is a (possibly nested) dictionary, I create a string that turns this dictionary into <key>value</key> items. This is a task that recursion makes simple, and I end up with the right structure. This is all done in python code, and is currently fast enough for production use. You can also (relatively) easily build lists as well, although depending upon your client, you may hit problems unless you give length hints. For me, this was much simpler, since a dictionary is a much easier way of working than some custom class. For the books, generating XML is much easier than parsing! 2017年02月19日50分11秒 It depends a bit on how complicated the document needs to be. I've used minidom a lot for writing XML, but that's usually been just reading documents, making some simple transformations, and writing them back out. That worked well enough until I needed the ability to order element attributes (to satisfy an ancient application that doesn't parse XML properly). At that point I gave up and wrote the XML myself. If you're only working on simple documents, then doing it yourself can be quicker and simpler than learning a framework. If you can conceivably write the XML by hand, then you can probably code it by hand as well (just remember to properly escape special characters, and use str.encode(codec, errors="xmlcharrefreplace")). Apart from these snafus, XML is regular enough that you don't need a special library to write it. If the document is too complicated to write by hand, then you should probably look into one of the frameworks already mentioned. At no point should you need to write a general XML writer. 2017年02月19日50分11秒 You can also try untangle to parse simple XML documents. 2017年02月19日50分11秒 I personally think that chapter from Dive into Python is great. Check that out first - it uses the minidom module and is a pretty good piece of writing. 2017年02月19日50分11秒 I recently started using Amara with success. 
2017年02月19日50分11秒 Python comes with ElementTree built in library, but lxml extends it in terms of speed and functionality (schema validation, sax parsing, XPath, various sorts of iterators and many other features). You have to install it, but in many places it is already assumed to be part of standard equipment (e.g. Google AppEngine does not allow C-based Python packages, but makes exception for lxml, pyyaml and few others). Your question is about building XML document. With lxml there are many methods and it took me a while to find the one, which seems to be easy to use and also easy to read. Sample code from lxml doc on using E-factory (slightly simplified): The E-factory provides a simple and compact syntax for generating XML and HTML: >>> from lxml.builder import E >>> html = page = ( ... E.html( # create an Element called "html" ... E.head( ... E.title("This is a sample document") ... ), ... E.body( ... E.h1("Hello!"), ... E.p("This is a paragraph with ", E.b("bold"), " text in it!"), ... E.p("This is another paragraph, with a", "\n ", ... E.a("link",link</a>.</p> <p>Here are some reserved characters: <spam&egg>.</p> </body> </html> I appreciate on E-factory it following things Readibility counts. Supports stuff like: e.g.: from lxml import etree from lxml.builder import E lst = ["alfa", "beta", "gama"] xml = E.root(*[E.record(itm) for itm in lst]) etree.tostring(xml, pretty_print=True) resulting in: <root> <record>alfa</record> <record>beta</record> <record>gama</record> </root> I highly recommend reading lxml tutorial - it is very well written and will give you many more reasons to use this powerful library. The only disadvantage of lxml is, that it must be compiled. See SO answer for more tips how to install lxml from wheel format package within fraction of a second. 2017年02月18日50分11秒 I assume that the .Net-way of processing XML builds on ´som version of MSXML and it that case I assume that using for example minidom would make you feel somewhat at home. 
However, if it is simple processing you are doing any library will probably do. Me too prefers working with ElementTree when dealing with xml in Python, it is a very neat library. 2017年02月19日50分11秒 If you're going to be building SOAP messages, check out soaplib. It uses ElementTree under the hood, but it provides a much cleaner interface for serializing and deserializing messages. 2017年02月19日50分11秒 I would recommend lxml. 2017年02月19日50分11秒 I strongly recommend SAX - Simple API for XML - implementation in the Python libraries. They are fairly easy to setup and process large XML by even driven API, as discussed by previous posters here, and have low memory footprint unlike validating DOM style XML parsers. 2017年02月19日50分11秒 I think you should use lxml for this functionallity 2017年02月19日50分11秒
http://www.91r.net/ask/342.html
First, some background. I like Grails - a lot. It dispenses with a lot of the gunk that makes developing web apps in Java so tedious: XML configuration files, repetitive boilerplate code, tag library descriptors. Grails utilizes proven technologies like Spring, Hibernate, and SiteMesh for its core functionality rather than re-inventing the wheel. And it stole, um, borrowed, lots of great ideas from Ruby on Rails, including the use of a dynamic language, i.e. Groovy, to underpin the whole framework. Grails also has the best testing support of any framework I've used: automatic generation of test classes, an integrated test environment with an in-memory database, and with version 1.1, comprehensive mocking support for unit testing. The Grails team seems determined to eliminate every conceivable hindrance to testing, and that's an admirable goal. However, I don't love absolutely everything about Grails testing. In particular, Grails' mechanism for running unit tests is painfully slow. The Grails test-app command runs both unit and integration tests by default, but even with the -unit option, running just the unit tests takes about ten seconds on my machine! Ten seconds may not seem like a lot of time, but if you're trying to practice a "test a little, code a little" style of TDD, you want to run the tests constantly. However, when it takes 10 seconds to run a unit test, there's a temptation to skip the tests, stay in the flow, and keep coding. When I finally run the tests, there's always a failure of some kind, and fixing it usually necessitates a substantial reworking of the code. The longer it's been since I ran the tests, the more rework is required. My first attempt at a remedy was to run the tests from within IntelliJ IDEA via the Grails 'test-app' script. While running the unit tests from within IntelliJ was an improvement, it wasn't any faster than using the command line. Since Grails tests inherit from junit.framework.TestCase, my next tactic was to run the tests with IntelliJ's JUnit test runner:
The tests ran significantly faster - two or three seconds instead of ten - and the results appeared in IntelliJ's Run window, with a summary of failed tests and clickable links in the stack traces. Very nice! The only problem was that the JUnit test runner picked up both unit and integration tests, and without a Grails test environment, most of the integration tests would fail. I wanted an option that would run just the tests in the test/unit directory, but IntelliJ's built-in test runner couldn't be configured to do that.

After much fruitless searching, I finally found the solution on the Groovy website. Groovy has a utility class named AllTestSuite that finds all the tests that match a filename pattern in a given directory and aggregates them in a test suite. Normally, AllTestSuite gets the base directory and the filename pattern from the system properties, but it's easy enough to extend the class and specify the directory and pattern with static variables, like so:

import junit.framework.Test

public class AllUnitTestSuite extends AllTestSuite {
    private static final String BASEDIR = "./test/unit"
    private static final String PATTERN = "**/*Tests.groovy"

    public static Test suite() {
        return suite(BASEDIR, PATTERN)
    }
}

I put the above code in a file named AllUnitTestSuite.groovy in the project's test/unit directory, ran it with IntelliJ's JUnit test runner, and it worked perfectly. Fast, convenient, and exactly what I want: problem solved! A nice bonus here is that under the covers, AllTestSuite uses a Gant script to collect the tests, so if you're familiar with Ant FileSets, you can tweak the matching pattern to customize your test suites.

Helpful hint. Thanks!

You might want to try domainMock-ing to simulate db crud types of things in unit test mode. I finally figured out how to get that working with IntelliJ. see > regards - chris bedford lead lackey build lackey labs

Thanks, Chris.
Actually mockDomain and mockForConstraintsTests are the main reason I looked into using IntelliJ's test runner; I didn't understand why it was taking so long to run tests that didn't need an integration test environment. I haven't run into any problems using mockDomain with IntelliJ, but that might be because I don't use Maven. ;)

Thanks! Your posting helped to save me some time. I ran into the same issue. Also, running the tests like this will allow generating code coverage data. Ben
http://blog.marktye.com/2009/03/running-grails-unit-tests-in-intellij.html
public class Course {
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public int Tax1 { get; set; }
    public int Tax2 { get; set; }
    public int Tax3 { get; set; }
    public virtual Tax Tax { get; set; }   // ??????
}

public class Tax {
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Percent { get; set; }
}

My table of Taxes is as follows:

Id  Name         Percent
1   Federal Tax  5
2   Provincial   12
3   Local        5

In the example shown, User has a BillingAddress navigation property. An association is created between User.BillingAddress and Addresses.AddressId with the following statement:

Specifically, what I am having a problem with is linking the Tax1 field in Courses to the Tax object that corresponds with the record whose id is 2 in the Taxes table:

public virtual Tax1 Tax1 { get; set; }   // ??????

RBS

Relationships/Associations
Associations in EF Code First: Part 6 - Many-valued Associations

Specifically, how to implement/say:

// How to say Foreign Key Course->Tax1 maps to Id in Taxes?????
// How to say Foreign Key Course->Tax2 maps to Id in Taxes?????
// How to say Foreign Key Course->Tax3 maps to Id in Taxes?????

RBS
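One common way to express this in EF Code First is to give each foreign-key column its own navigation property and tie the pair together with the [ForeignKey] data annotation. This is a hedged sketch, not tested against the asker's exact model; the renamed Tax1Id/Tax2Id/Tax3Id columns are my own illustration:

```csharp
public class Course
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }

    // one int key column plus one navigation property per tax,
    // all three pointing at the same Taxes table
    public int Tax1Id { get; set; }
    [ForeignKey("Tax1Id")]
    public virtual Tax Tax1 { get; set; }

    public int Tax2Id { get; set; }
    [ForeignKey("Tax2Id")]
    public virtual Tax Tax2 { get; set; }

    public int Tax3Id { get; set; }
    [ForeignKey("Tax3Id")]
    public virtual Tax Tax3 { get; set; }
}
```

The same mapping can be written fluently in OnModelCreating with HasRequired(c => c.Tax1).WithMany().HasForeignKey(c => c.Tax1Id), repeated for each of the three properties.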
https://www.experts-exchange.com/questions/28247504/Model-3-Taxes-to-Item.html
In statistics, binning is the process of placing numerical values into bins. The most common form of binning is known as equal-width binning, in which we divide a dataset into k bins of equal width. A less commonly used form of binning is known as equal-frequency binning, in which we divide a dataset into k bins that all contain an equal number of observations. This tutorial explains how to perform equal frequency binning in python.

Equal Frequency Binning in Python

Suppose we have a dataset that contains 100 values:

import numpy as np
import matplotlib.pyplot as plt

#create data
np.random.seed(1)
data = np.random.randn(100)

#view first 5 values
data[:5]

array([ 1.62434536, -0.61175641, -0.52817175, -1.07296862,  0.86540763])

Equal-Width Binning:

If we create a histogram to display these values, Python will use equal-width binning by default:

#create histogram with equal-width bins
n, bins, patches = plt.hist(data, edgecolor='black')
plt.show()

#display bin boundaries and frequency per bin
bins, n

(array([-2.3015387 , -1.85282729, -1.40411588, -0.95540447, -0.50669306,
        -0.05798165,  0.39072977,  0.83944118,  1.28815259,  1.736864  ,
         2.18557541]),
 array([ 3.,  1.,  6., 17., 19., 20., 14., 12.,  5.,  3.]))

Each bin has an equal width of approximately .4487, but each bin doesn't contain an equal number of observations. For example:

- The first bin extends from -2.3015387 to -1.85282729 and contains 3 observations.
- The second bin extends from -1.85282729 to -1.40411588 and contains 1 observation.
- The third bin extends from -1.40411588 to -0.95540447 and contains 6 observations.

And so on.
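The equal-width edges above can be reproduced directly, since they are just the data range split into ten equal steps (a quick check of my own, using the same seed as above):

```python
import numpy as np

np.random.seed(1)
data = np.random.randn(100)

# 11 evenly spaced edges spanning the data range give 10 equal-width bins
edges = np.linspace(data.min(), data.max(), 11)
width = edges[1] - edges[0]

print(round(width, 4))  # approximately .4487, matching the histogram above
```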
Equal-Frequency Binning:

To create bins that contain an equal number of observations, we can use the following function:

#define function to calculate equal-frequency bins
def equalObs(x, nbin):
    nlen = len(x)
    return np.interp(np.linspace(0, nlen, nbin + 1),
                     np.arange(nlen),
                     np.sort(x))

#create histogram with equal-frequency bins
n, bins, patches = plt.hist(data, equalObs(data, 10), edgecolor='black')
plt.show()

#display bin boundaries and frequency per bin
bins, n

(array([-2.3015387 , -0.93576943, -0.67124613, -0.37528495, -0.20889423,
         0.07734007,  0.2344157 ,  0.51292982,  0.86540763,  1.19891788,
         2.18557541]),
 array([10., 10., 10., 10., 10., 10., 10., 10., 10., 10.]))

Each bin doesn't have an equal width, but each bin does contain an equal number of observations. For example:

- The first bin extends from -2.3015387 to -0.93576943 and contains 10 observations.
- The second bin extends from -0.93576943 to -0.67124613 and contains 10 observations.
- The third bin extends from -0.67124613 to -0.37528495 and contains 10 observations.

And so on.

We can see from the histogram that each bin is clearly not the same width, but each bin does contain the same number of observations, which is confirmed by the fact that each bin height is equal.
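A quick way to sanity-check the equal-frequency result without a histogram (my own check, not part of the original tutorial): sort the data and cut it into ten consecutive chunks; each chunk is one bin:

```python
import numpy as np

np.random.seed(1)
data = np.random.randn(100)

# sorting and splitting into 10 consecutive chunks yields equal-frequency bins
chunks = np.array_split(np.sort(data), 10)

print([len(c) for c in chunks])
```

pandas users can get the same bins directly with pd.qcut(data, q=10).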
https://www.statology.org/equal-frequency-binning-python/
[posted and mailed] (As I mention in the disclaimers just below, I prepared these answers before I had read a draft of the impending C9X standard, so a few of them are dated already.) * * * "Killer" C Interview Questions (And a few Easy Ones, too) Steve Summit Copyright 1997,1998 [DISCLAIMER: This is, on balance, a very difficult test. DO NOT use it in an actual interview situation; you would in all likelihood only embarrass yourself and insult your interviewee. In particular, you would obviously not want to ask a question which you yourself did not know the answer to, but the answers to many of these questions are not obvious or well-known. There are several trick questions here, as well as a number which are explicitly marked "poor". The poor questions are, alas, not uncommon in actual interviews, and are presented here only so that their faults and wrong answers can be presented. This test is intended for study purposes only. Most of the answers can be found in the comp.lang.c FAQ list. The answers here were prepared before I read a draft of the impending C9X standard. I would word several of them differently today in anticipation of that standard, and a few of them will become wrong when that standard is adopted. Acknowledgments: This test is an expanded version of one I prepared for a training class held at the request of Tony McNamara, at a now-defunct company called SCS/Compute (no relation).] 1.1: How can you print a literal % with printf? A: %% 1.2: Why doesn't \% print a literal % with printf? A: Backslash sequences are interpreted by the compiler (\n, \", \0, etc.), and \% is not one of the recognized backslash sequences. It's not clear what the compiler would do with a \% sequence -- it might delete it, or replace it with a single %, or perhaps pass it through as \ %. But it's printf's behavior we're trying to change, and printf's special character is %. 
So it's a %-sequence we should be looking for to print a literal %, and printf defines the one we want as %%. 1.3: Are the parentheses in a return statement mandatory? A: No. The formal syntax of a return statement is return expression ; But it's legal to put parentheses around any expression, of course, whether they're needed or not. 1.4: How can %f work for type double in printf if %lf is required in scanf? A: In variable-length argument lists such as printf's, the old "default argument promotions" apply, and type float is implicitly converted to double. So printf always receives doubles, and defines %f to be the sequence that works whether you had passed a float or a double. (Strictly speaking, %lf is *not* a valid printf format specifier, although most versions of printf quietly accept it.) scanf, on the other hand, always accepts pointers, and the types pointer-to-float and pointer-to-double are very different (especially when you're using them for storing values). No implicit promotions apply. 1.5: If a machine uses some nonzero internal bit pattern for null pointers, how should the NULL macro be defined? A: As 0 (or (char *)0), as usual. The *compiler* is responsible for translating null pointer constants into internal null pointer representations, not the preprocessor. 1.6: If p is a pointer, is the test if(p) valid? What if a machine uses some nonzero internal bit pattern for null pointers? A: The test is always valid. Since the definition of "true" in C is "not equal to 0," the test is equivalent to if(p != 0) and the compiler then translates the 0 into the appropriate internal representation of a null pointer. 1.7: What is the ANSI Standard definition of a null pointer constant? A: "An integral constant expression with the value 0, or such an expression cast to type (void *)". 1.8: What does the auto keyword mean? When is it needed? A: auto is a storage-class specifier, just like extern and static. 
But since automatic duration is the default for local variables (and meaningless, in fact illegal, for global variables), the keyword is never needed. (It's a relic from the dawn of C.) 1.9: What does *p++ increment? A: The pointer p. To increment what p points to, use (*p)++ or ++*p. 1.10: What's the value of the expression 5["abcdef"] ? A: 'f'. (The string literal "abcdef" is an array, and the expression is equivalent to "abcdef"[5]. Why is the inside-out expression equivalent? Because a[b] is equivalent to *(a + b) which is equivalent to *(b + a) which is equivalent to b[a]. 1.11: [POOR QUESTION] How can you swap two integer variables without using a temporary? A: The reason that this question is poor is that the answer ceased to be interesting when we came down out of the trees and stopped using assembly language. The "classic" solution, expressed in C, is a ^= b; b ^= a; a ^= b; Due to the marvels of the exclusive-OR operator, after these three operations, a's and b's values will be swapped. However, it is exactly as many lines, and (if we can spare one measly word on the stack) is likely to be more efficient, to write the obvious int t = a; a = b; b = t; No, this doesn't meet the stipulation of not using a temporary. But the whole reason we're using C and not assembly language (well, one reason, anyway) is that we're not interested in keeping track of how many registers we have. If the processor happens to have an EXCH instruction, the compiler is more likely to recognize the possibility of using it if we use the three-assignment idiom, rather than the three-XOR. By the way, the even more seductively concise rendition of the "classic" trick in C, namely a ^= b ^= a ^= b is, strictly speaking, undefined, because it modifies a twice between sequence points. 
Also, if an attempt is made to use the idiom (in any form) in a function which is supposed to swap the locations pointed to by two pointers, as in

	swap(int *p1, int *p2)
	{
	    *p1 ^= *p2;
	    *p2 ^= *p1;
	    *p1 ^= *p2;
	}

then the function will fail if it is ever asked to swap a value with itself, as in swap(&a, &a); or swap(&a[i], &a[j]); when i == j. (The latter case is not uncommon in sorting algorithms. The effect when p1 == p2 is that the pointed-to value is set to 0.)

1.12: What is sizeof('A') ?

A: The same as sizeof(int). Character constants have type int in C. (This is one area in which C++ differs.)

1.13: According to the ANSI Standard, how many bits are there in an int? A char? A short int? A long int? In other words, what is sizeof(int) ? sizeof(char) ? sizeof(short int) ? sizeof(long int) ?

A: ANSI guarantees that the range of a signed char is at least +-127, of a short int is at least +-32767, of an int at least +-32767, and a long int at least +-2147483647. So we can deduce that a char must be at least 8 bits, an int or a short int must be at least 16 bits, and a long int must be at least 32 bits. The only guarantees about sizeof are that

	1 = sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

1.14: If arr is an array, in an ordinary expression, what's the difference between arr and &arr ?

A: If the array is of type T, the expression arr yields a pointer of type pointer-to-T pointing to the array's first element. The expression &arr, on the other hand, yields a pointer of type pointer-to-array-of-T pointing to the entire array. (The two pointers will likely have the same "value," but the types are distinct. The difference would be visible if you assigned or incremented the resulting pointer.)

1.15: What's the difference between

	char *p = malloc(n);

and

	char *p = malloc(n * sizeof(char));

?

A: There is little or no difference, since sizeof(char) is by definition exactly 1.

1.16: What's the difference between these three declarations?
char *a = "abc"; char b[] = "abc"; char c[3] = "abc"; A: The first declares a pointer-to-char, initialized to point to a four-character array somewhere in (possibly read-only) memory containing the four characters a b c \0. The second declares an array (a writable array) of 4 characters, initially containing the characters a b c \0. The third declares an array of 3 characters, initially containing a b c. (The third array is therefore not an immediately valid string.) 1.17: The first line of a source file contains the line extern int f(struct x *); The compiler warns about "struct x declared inside parameter list". What is the compiler worried about? A: For two structures to be compatible, they must not only have the same tag name but be defined in the same scope. A function prototype, however, introduces a new, nested scope for its parameters. Therefore, the structure tag x is defined in this narrow scope, which almost immediately disappears. No other struct x pointer in this translation unit can therefore be compatible with f's first parameter, so it will be impossible to call f correctly (at least, without drawing more warnings). The warning alluded to in the question is trying to tell you that you shouldn't mention struct tags for the first time in function prototypes. (The warning message in the question is actually produced by gcc, and the message runs on for two more lines, explaining that the scope of the structure declared "is only this definition or declaration, which is probably not what you want.") 1.18: List several ways for a function to safely return a string. A: It can return a pointer to a static array, or it can return a pointer obtained from malloc, or it can fill in a buffer supplied by the caller. 1.19: [hard] How would you implement the va_arg() macro in <stdarg.h>? 
A: A straightforward implementation, assuming a conventional stack-based architecture, is #define va_arg(argp, type) (((type *)(argp += sizeof(type)))[-1]) This assumes that type va_list is char *. 1.20: Under what circumstances is the declaration typedef xxx int16; where xxx is replaced with an appropriate type for a particular machine, useful? A: It is potentially useful if the int16 typedef is used to declare variables or structures which will be read from or written to some external data file or stream in some fixed, "binary" format. (However, the typedef can at most ensure that the internal type is the same size as the external representation; it cannot correct for any byte order discrepancies.) Such a typedef may also be useful for allowing precompiled object files or libraries to be used with different compilers (compilers which define basic types such as int differently), without recompilation. 1.21: Suppose that you declare struct x *xp; without any definition of struct x. Is this legal? Under what circumstances would it be useful? A: It is perfectly legal to refer to a structure which has not been "fleshed out," as long as the compiler is never asked to compute the size of the structure or generate offsets to any members. Passing around pointers to otherwise undefined structures is quite acceptable, and is a good way of implementing "opaque" data types in C. 1.22: What's the difference between struct x1 { ... }; typedef struct { ... } x2; A: The first declaration declares a structure tag x1; the second declares a typedef name x2. The difference becomes clear when you declare actual variables of the two structure types: struct x1 a, b; but x2 a, b; (This distinction is insignificant in C++, where all structure and class tags automatically become full- fledged types, as if via typedef.) 1.23: What do these declarations mean? 
	int **a();
	int (*b)();
	int (*c[3])();
	int (*d)[10];

A: declare a as function returning pointer to pointer to int
   declare b as pointer to function returning int
   declare c as array of 3 pointers to functions returning int
   declare d as pointer to array of 10 ints

The way to read these is "inside out," remembering that [] and () bind more tightly than *, unless overridden by explicit parentheses.

1.24: State the declaration for a pointer to a function returning a pointer to char.

A: char *(*f)();

1.25: If sizeof(long int) is 4, why might sizeof report the size of the structure

	struct x {char c; long int i;};

as 8 instead of 5?

A: The compiler will typically allocate invisible padding between the two members of the structure, to keep i aligned on a longword boundary.

1.26: If sizeof(long int) is 4, why might sizeof report the size of the structure

	struct y {long int i; char c;};

as 8 instead of 5?

A: The compiler will typically allocate invisible padding at the end of the structure, so that if an array of these structures is allocated, the i's will all be aligned.

1.27: [POOR QUESTION] If i starts out as 1, what does the expression

	i++ + i++

evaluate to? What is i's final value?

A: This is a poor question because it has no answer. The expression attempts to modify i twice between sequence points (not to mention modifying and inspecting i's value, where the inspection is for purposes other than determining the value to be stored), so the expression is undefined. Different compilers can (and do) generate different results, and none of them is "wrong."

1.28: Consider these definitions:

	#define Push(val) (*stackp++ = (val))
	#define Pop() (*--stackp)

	int stack[100];
	int *stackp = stack;

Now consider the expression

	Push(Pop() + Pop())

1. What is the expression trying to do? In what sort of program might such an expression be found?

2. What are some deficiencies of this implementation? Under what circumstances might it fail?
A: The expression is apparently intended to pop two values from a stack, add them, and push the result. This code might be found in a calculator program, or in the evaluation loop of the engine for a stack-based language. The implementation has at least four problems, however. The Push macro does not check for stack overflow; if more than 100 values are pushed, the results will be unpredictable. Similarly, the the Pop macro does not check for stack underflow; an attempt to pop a value when the stack is empty will likewise result in undefined behavior. On a stylistic note, the stackp variable is global as far as the Push and Pop macros are concerned. If it is certain that, in a particular program, only one stack will be used, this assumption may be a reasonable one, as it allows considerably more succinct invocations. If multiple stacks are a possibility, however, it might be preferable to pass the stack pointer as an argument to the Push and Pop macros. Finally, the most serious problem is that the "add" operation as shown above is *not* guaranteed to work! After macro expansion, it becomes (*stackp++ = ((*--stackp) + (*--stackp))) This expression modifies a single object more than once between sequence points; specifically, it modifies stackp three times. It is not guaranteed to work; moreover, there are popular compilers (one is gcc) under which it *will* *not* work as expected. (The extra parentheses do nothing to affect the evaluation order; in particular, they do not make it any more defined.) 1.29: [POOR QUESTION] Write a small function to sort an array of integers. A: This is a poor question because no one writes small functions to sort arrays of integers any more, except as pedagogical exercises. If you have an array of integers that needs sorting, the thing to do is call your library sort routine -- in C, qsort(). 
So here is my "small function":

	static int intcmp(const void *, const void *);

	sortints(int a[], int n)
	{
	    qsort(a, n, sizeof(int), intcmp);
	}

	static int intcmp(const void *p1, const void *p2)
	{
	    int i1 = *(const int *)p1;
	    int i2 = *(const int *)p2;
	    if(i1 < i2) return -1;
	    else if(i1 > i2) return 1;
	    else return 0;
	}

(The reason for using two comparisons and three explicit return statements rather than the "more obvious"

	return i1 - i2;

is that i1 - i2 can overflow, with unpredictable results.)

1.30: State the ANSI rules for determining whether an expression is defined or undefined.

A: An expression is undefined if, between sequence points, it attempts to modify the same location twice, or if it attempts to both read from and write to the same location. It's permissible to read and write the same location only if the laws of causality (a higher authority even than X3.159) prove that the read must unfailingly precede the write, that is, if the write is of a value which was computed from the value which was read. This exception means that old standbys such as

	i = i + 1

are still legal.

Sequence points occur at the ends of full expressions (expression statements, and the expressions in if, while, for, do/while, switch, and return statements, and initializers), at the &&, ||, and comma operators, at the end of the first expression in a ?: expression, and just before the call of a function (after the arguments have all been evaluated).

(The actual language from the ANSI Standard is

	Between the previous and next sequence point an object shall
	have its stored value modified at most once by the evaluation
	of an expression. Furthermore, the prior value shall be
	accessed only to determine the value to be stored.

)

1.30a: What's the difference between these two declarations?

	extern char x[];
	extern char *x;

A: The first is an external declaration for an array of char named x, defined elsewhere.
The second is an external declaration for a pointer to char named x, also defined elsewhere. These declarations could not both appear in the same program, because they specify incompatible types for x.

1.31: What's the difference between these two declarations?

	int f1();
	extern int f2();

A: There is no difference; the extern keyword is essentially optional in external function declarations.

1.32: What's the difference between these two declarations?

	extern int f1();
	extern int f2(void);

A: The first is an old-style function declaration declaring f1 as a function taking an unspecified (but fixed) number of arguments; the second is a prototype declaration declaring f2 as a function taking precisely zero arguments.

1.33: What's the difference between these two definitions?

	int f1() { }
	int f2(void) { }

A: There is no difference, other than that the first uses the old definition style and the second uses the prototype style. Both functions take zero arguments.

1.34: How does operator precedence influence order of evaluation?

A: Only partially. Precedence affects the binding of operators to operands, but it does *not* control (or even influence) the order in which the operands themselves are evaluated. For example, in

	a() + b() * c()

we have no idea what order the three functions will be called in. (The compiler might choose to call a first, even though its result will be needed last.)

1.35: Will the expression in

	if(n != 0 && sum / n != 0)

ever divide by 0?

A: No. The "short circuiting" behavior of the && operator guarantees that sum / n will not be evaluated if n is 0 (because n != 0 is false).

1.36: Will the expression

	x = ((n == 0) ? 0 : sum / n)

ever divide by 0?

A: No. Only one of the pair of controlled expressions in a ?: expression is evaluated. In this example, if n is 0, the third expression will not be evaluated at all.

1.37: Explain these three fragments:

	if((p = malloc(10)) != NULL) ...
	if((fp = fopen(filename, "r")) == NULL) ...
	while((c = getc(fp)) != EOF) ...
A: The first calls malloc, assigns the result to p, and does something if the just-assigned result is not NULL. The second calls fopen, assigns the result to fp, and does something if the just-assigned result is NULL. The third repeatedly calls getc, assigns the results in turn to c, and does something as long as each just-assigned result is not EOF.

1.38: What's the difference between these two statements?

	++i;
	i++;

A: There is no difference. The only difference between the prefix and postfix forms of the autoincrement operator is the value passed on to the surrounding expression, but since the expressions in the question stand alone as expression statements, the value is discarded, and each expression merely serves to increment i.

1.39: Why might a compiler warn about conversions or assignments from char * to int * ?

A: In general, compilers complain about assignments between pointers of different types (and are required by the Standard to so complain) because such assignments do not make sense. A pointer to type T1 is supposed to point to objects of type T1, and presumably the only reason for assigning the pointer to a pointer of a different type, say pointer-to-T2, would be to try to access the pointed-to object as a value of type T2, but if the pointed-to object is of type T2, why were we pointing at it with a pointer-to-T1 in the first place?

In the particular example cited in the question, the warning also implies the possibility of unaligned access. For example, this code:

	int a[2] = {0, 1};
	char *p = &a;		/* suspicious */
	int *ip = p + 1;	/* even more suspicious */
	printf("%d\n", *ip);

is likely to crash (perhaps with a "Bus Error") because the programmer has contrived to make ip point to an odd, unaligned address. When it is desired to use pointers of the "wrong" type, explicit casts must generally be used.
One class of exceptions is exemplified by malloc: the memory it allocates, and hence the pointers it returns, are supposed to be usable as any type the programmer wishes, so malloc's return value will almost always be the "wrong" type. To avoid the need for so much explicit, dangerous casting, ANSI invented the void * type, which quietly interconverts (i.e. without warning) between other pointer types. Pointers of type void * are therefore used as containers to hold "generic" pointers which are known to be safely usable as pointers to other, more specific types. 1.40: When do ANSI function prototype declarations *not* provide argument type checking, or implicit conversions? A: In the variable-length part of variable-length argument lists, and (perhaps obviously, perhaps not) when no prototype is in scope at all. The point is that it is not safe to assume that since prototypes have been invented, programmers don't have to be careful about matching function-call arguments any more. Care must still be exercised in variable-length argument lists, and if prototypes are to take care of the rest, care must be exercised to use prototypes correctly. 1.41: State the rule(s) underlying the "equivalence" of arrays and pointers in C. A: Rule 1: When an array appears in an expression where its value is needed, the value generated is a pointer to the array's first element. Rule 2: Array-like subscripts (integer expressions in brackets) may be used to subscript pointers as well as arrays; the expression p[i] is by definition equivalent to *(p+i). (Actually, by rule 1, subscripts *always* find themselves applied to pointers, never arrays.) 1.42: What's the difference between these two declarations? extern int f2(char []); extern int f1(char *); A: There is no difference. 
The compiler always quietly rewrites function declarations so that any array parameters are actually declared as pointers, because (by the equivalence of arrays and pointers) a pointer is what the function will actually receive. 1.43: Rewrite the parameter declaration in f(int x[5][7]) { } to explicitly show the pointer type which the compiler will assume for x. A: f(int (*x)[7]) { } Note that the type int (*)[7] is *not* the same as int **. 1.44: A program uses a fixed-size array, and in response to user complaints you have been asked to replace it with a dynamically-allocated "array," obtained from malloc. Which parts of the program will need attention? What "gotchas" must you be careful of? A: Ideally, you will merely have to change the declaration of the array from an array to a pointer, and add one call to malloc (with a check for a null return, of course) to initialize the pointer to a dynamically-allocated "array." All of the code which accesses the array can remain unchanged, because expressions of the form x[i] are valid whether x is an array or a pointer. The only thing to be careful of is that if the existing code ever used the sizeof operator to determine the size of the array, that determination becomes grossly invalid, because after the change, sizeof will return only the size of the pointer. 1.45: A program which uses a dynamically allocated array is still running into problems because the initial allocation is not always big enough. Your task is now to use realloc to make the "array" bigger, if need be. What must you be careful of? A: The actual call to realloc is straightforward enough, to request that the base pointer now point at a larger block of memory. The problem is that the larger block of memory may be in a different place; the base pointer may move. 
Therefore, you must reassign not only the base pointer, but also any copies of the base pointer you may have made, and also any pointers which may have been set to point anywhere into the middle of the array. (For pointers into the array, you must in general convert them temporarily into offsets from the base pointer, then call realloc, then recompute new pointers based on the offsets and the new base pointer. See also question 2.10.) 1.46: How can you you use sizeof to determine the number of elements in an array? A: The standard idiom is sizeof(array) / sizeof(array[0]) (or, equivalently, sizeof(array) / sizeof(*array) ). 1.47: When sizeof doesn't work (when the array is declared extern, or is a parameter to a function), what are some strategies for determining the size of an array? A: Use a sentinel value as the last element of the array; pass the size around in a separate variable or as a separate function parameter; use a preprocessor macro to define the size. 1.48: Why might explicit casts on malloc's return value, as in int *ip = (int *)malloc(10 * sizeof(int)); be a bad idea? A: Although such casts used to be required (before the void * type, which converts quietly and implicitly, was invented), they can now be considered poor style, because they will probably muzzle the compiler's attempts to warn you on those occasions when you forget to #include <stdlib.h> or otherwise declare malloc, such that malloc will be incorrectly assumed to be a function returning int. 1.49: How are Boolean true/false values defined in C? What values can the == (and other logical and comparison operators) yield? A: The value 0 is considered "false," and any nonzero value is considered "true." The relational and logical operators all yield 0 for false, 1 for true. 1.50: x is an integer, having some value. What is the value of the expression 0 <= !x && !!x < 2 A: 1. 1.51: In your opinion, is it acceptable for a header file to contain #include directives for other header files? 
A: The argument in favor of "nested #include files" is that they allow each header to arrange to have any subsidiary definitions, upon which its own definitions depend, made automatically. (For example, a file containing a prototype for a function that accepts an argument of type FILE * could #include <stdio.h> to define FILE.) The alternative is to potentially require everyone who includes a particular header file to include one or several others first, or risk cryptic errors. The argument against is that nested headers can be confusing, can make definitions difficult to find, and can in some circumstances even make it difficult to determine which file(s) is/are being included. 1.52: How can a header file be protected against being included multiple times (perhaps due to nested #include directives)? A: The standard trick is to place lines like #ifndef headerfilename_H #define headerfilename_H at the beginning of the file, and an extra #endif at the end. 1.53: A source file contains as its first two lines: #include "a.h" int i; The compiler complains about an invalid declaration on line 2. What's probably happening? A: It's likely that the last declaration in a.h is missing its trailing semicolon, causing that declaration to merge into "int i", with meaningless results. (That is, the merged declaration is probably something along the lines of extern int f() int i; or struct x { int y; } int i; .) 1.54: What's the difference between a header file and a library? A: A header file typically contains declarations and definitions, but it never contains executable code. (A header file arguably shouldn't even contain any function bodies which would compile into executable code.) A library, on the other hand, contains only compiled, executable code and data. A third-party library is often delivered as a library and a header file. Both pieces are important. The header file is included during compilation, and the library is included during linking. 
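Putting 1.51, 1.52, and 1.54 together, a hypothetical header file list.h might look like the sketch below. The name and contents are purely illustrative, not taken from any particular library:

```c
/* list.h -- a hypothetical header: an include guard (1.52), a nested
 * #include to obtain FILE (1.51), and declarations only, with no
 * executable code (1.54). */

#ifndef LIST_H
#define LIST_H

#include <stdio.h>              /* for FILE */

struct list {
    int value;
    struct list *next;
};

/* prototype only; the executable code lives in a .c file or library */
extern int list_print(struct list *lp, FILE *fp);

#endif /* LIST_H */
```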
1.55: What are the acceptable declaration(s) for main()? A: The most common declarations, all legal, are: main() int main() int main(void) int main(int argc, char **argv) int main(int argc, char *argv[]) int main(argc, argv) int argc; char *argv[]; (Basically: the return type must be an implicit or explicit int; the parameter list must either be empty, or void, or one int plus one array of strings; and the function may be declared using either old-style or prototyped syntax. The actual names of the two parameters are arbitrary, although of course argc and argv are traditional.) 1.56: You wish to use ANSI function prototypes to guard against errors due to accidentally calling functions with incorrect arguments. Where should you place the prototype declarations? How can you ensure that the prototypes will be maximally effective? A: The prototype for a global function should be placed in a header file, and the header file should be included in all files where the function is called, *and* in the file where the function is defined. The prototype for a static function should be placed at the top of the file where the function is defined. Since following these rules is only slightly less hard than getting all function calls right by hand (i.e. without the aid of prototypes), the compiler should be configured to warn about functions called without prototypes in scope, *and* about functions defined without prototypes in scope. 1.57: Why must the variable used to hold getchar's return value be declared as int? A: Because getchar can return, besides all values of type char, the additional "out of band" value EOF, and there obviously isn't room in a variable of type char to hold one more than the number of values which can be unambiguously stored in a variable of type char. 1.58: You must write code to read and write "binary" data files. How do you proceed? How will you actually open, read, and write the files? 
A: When calling fopen, the files must be opened using the b modifier (e.g. "rb", "wb"). Binary data files are generally read and written a byte at a time using getc and putc, or a data structure at a time using fread and fwrite. 1.59: Write the function void error(const char *message, ...); which accepts a message string, possibly containing % sequences, along with optional extra arguments corresponding to the % sequences, and prints the string "error: ", followed by the message as printf would print it, followed by a newline, all to stderr. A:

	#include <stdio.h>
	#include <stdarg.h>

	void error(const char *fmt, ...)
	{
		va_list argp;
		fprintf(stderr, "error: ");
		va_start(argp, fmt);
		vfprintf(stderr, fmt, argp);
		va_end(argp);
		fprintf(stderr, "\n");
	}

1.60: Write the function char *vstrcat(char *, ...); which accepts a variable number of strings and concatenates them all together into a block of malloc'ed memory just big enough for the result. The end of the list of strings will be indicated with a null pointer. For example, the call char *p = vstrcat("Hello, ", "world!", (char *)NULL); should return the string "Hello, world!". A:

	#include <stdlib.h>
	#include <string.h>
	#include <stdarg.h>

	char *vstrcat(char *first, ...)
	{
		size_t len;
		char *retbuf;
		char *p;
		va_list argp;

		if(first == NULL)
			return NULL;

		/* first pass: measure the total length */
		len = strlen(first);
		va_start(argp, first);
		while((p = va_arg(argp, char *)) != NULL)
			len += strlen(p);
		va_end(argp);

		retbuf = malloc(len + 1);	/* +1 for trailing '\0' */
		if(retbuf == NULL)
			return NULL;		/* error */

		/* second pass: copy the strings */
		strcpy(retbuf, first);
		va_start(argp, first);
		while((p = va_arg(argp, char *)) != NULL)
			strcat(retbuf, p);
		va_end(argp);

		return retbuf;
	}

1.61: Write a stripped-down version of printf which accepts only the %c, %d, %o, %s, %x, and %% format specifiers. (Do not worry about width, precision, flags, or length modifiers.) A: [Although this question is obviously supposed to test one's familiarity with the va_ macros, a significant nuisance in composing a working answer is performing the sub-task of converting integers to digit strings. For some reason, back when I composed this test, I felt it appropriate to defer that task to an "itoa" function; perhaps I had just presented an implementation of itoa to the same class for whom I first prepared this test.]

	#include <stdio.h>
	#include <stdarg.h>

	extern char *itoa(int, char *, int);

	void miniprintf(const char *fmt, ...)
	{
		const char *p;
		int i;
		char *s;
		char convbuf[40];
		va_list argp;

		va_start(argp, fmt);

		for(p = fmt; *p != '\0'; p++) {
			if(*p != '%') {
				putchar(*p);
				continue;
			}

			p++;
			if(*p == '\0')
				break;		/* stray '%' at end of format */

			switch(*p) {
			case 'c':
				i = va_arg(argp, int);
				putchar(i);
				break;
			case 'd':
				i = va_arg(argp, int);
				fputs(itoa(i, convbuf, 10), stdout);
				break;
			case 'o':
				i = va_arg(argp, int);
				fputs(itoa(i, convbuf, 8), stdout);
				break;
			case 's':
				s = va_arg(argp, char *);
				fputs(s, stdout);
				break;
			case 'x':
				i = va_arg(argp, int);
				fputs(itoa(i, convbuf, 16), stdout);
				break;
			case '%':
				putchar('%');
				break;
			}
		}

		va_end(argp);
	}

1.62: You are to write a program which accepts single keystrokes from the user, without waiting for the RETURN key.
You are to restrict yourself only to features guaranteed by the ANSI/ISO C Standard. How do you proceed? A: You proceed by pondering the sorrow of your fate, and perhaps by complaining to your boss/professor/psychologist that you've been given an impossible task. There is no ANSI Standard function for reading one keystroke from the user without waiting for the RETURN key. You'll have to use facilities specific to your operating system; you won't be able to write the code strictly portably. 1.63: [POOR QUESTION] How do you convert an integer to binary or hexadecimal? A: The question is poor because an integer is a *number*; it doesn't make much sense to ask what base it's in. If I'm holding eleven apples, what base is that in? (Of course, internal to the computer, an integer is almost certainly represented in binary, although it's not at all unreasonable to think of it as being hexadecimal, or decimal for that matter.) The only time the base of a number matters is when it's being read from or written to the outside world as a string of digits. In those cases, and depending on just what you're doing, you can specify the base by picking the correct printf or scanf format specifier (%d, %o, or %x), or by picking the third argument to strtol. (There isn't a Standard function to convert an integer to a string using an arbitrary base. For that task, it's a straightforward exercise to write a function to do the conversion. Some versions of the nonstandard itoa function also accept a base or radix argument.) 1.64: You're trying to discover the sizes of the basic types under a certain compiler. You write the code printf("sizeof(char) = %d\n", sizeof(char)); printf("sizeof(short) = %d\n", sizeof(short)); printf("sizeof(int) = %d\n", sizeof(int)); printf("sizeof(long) = %d\n", sizeof(long)); However, all four values are printed as 0. What have you learned? 
A: You've learned that this compiler defines size_t, the type returned by sizeof, as an unsigned long int, and that the compiler also defines long integers as having a larger size than plain int. (Furthermore, you've learned that the machine probably uses big-endian byte order.) Finally, you may have learned that the code you should have written is along the lines of either printf("sizeof(int) = %u\n", (unsigned)sizeof(int)); or printf("sizeof(int) = %lu\n", (unsigned long)sizeof(int)); Section 2. What's wrong with...? 2.1: main(int argc, char *argv[]) { ... if(argv[i] == "-v") ... A: Applied to pointers, the == operator compares only whether the pointers are equal. To compare whether the strings are equal, you'll have to call strcmp. 2.2: a ^= b ^= a ^= b (What is the expression trying to do?) A: The expression is undefined because it modifies the variable a twice between sequence points. What it's trying to do is swap the variables a and b using a hoary old assembly programmer's trick. 2.3: char* p1, p2; A: p2 will be declared as type char, *not* pointer-to-char. 2.4: char c; while((c = getchar()) != EOF) ... A: The variable used to contain getchar's return value must be declared as int if EOF is to be reliably detected. 2.5: while(c = getchar() != EOF) ... A: Parentheses are missing; the code will call getchar, compare the result to EOF, assign the result *of the comparison* to c, and take another trip around the loop if the condition was true (i.e. if the character read was not EOF). (The net result will be that the input will be read as if it were a string of nothing but the character '\001'. The loop would still halt properly on EOF, however.) 2.6: int i, a[10]; for(i = 0; i <= 10; i++) a[i] = 0; A: The loop assigns to the nonexistent eleventh value of the array, a[10], because it uses a loop continuation condition of <= 10 instead of < 10 or <= 9. 2.7: #include <ctype.h> ... #define TRUE 1 #define FALSE 0 ... if(isalpha(c) == TRUE) ... 
A: Since *any* nonzero value is considered "true" in C, it's rarely if ever a good idea to compare explicitly against a single TRUE value (or FALSE, for that matter). In particular, the <ctype.h> macros, including isalpha(), tend to return nonzero values other than 1, so the test as written is likely to fail even for alphabetic characters. The correct test is simply if(isalpha(c)) 2.8: printf("%d\n", sizeof(int)); A: The sizeof operator returns type size_t, which is an unsigned integral type *not* necessarily the same size as an int. The correct code is either printf("sizeof(int) = %u\n", (unsigned int)sizeof(int)); or printf("sizeof(int) = %lu\n", (unsigned long int)sizeof(int)); 2.9: p = realloc(p, newsize); if(p == NULL) { fprintf(stderr, "out of memory\n"); return; } A: If realloc returns null, and assuming p used to point to some malloc'ed memory, the memory remains allocated, although having overwritten p, there may well be no way to use or free the memory. The code is a potential memory leak. 2.10: /* p points to a block of memory obtained from malloc; */ /* p2 points somewhere within that block */ newp = realloc(p, newsize); if(newp != NULL && newp != p) { int offset = newp - p; p2 += offset; } A: Pointer subtraction is well-defined only for pointers into the same block of memory. If realloc moves the block of memory while changing its size, which is the very case the code is trying to test for, then the subtraction newp - p is invalid, because p and newp do not point into the same block of memory. (The subtraction could overflow or otherwise produce nonsensical results, especially on segmented architectures. Strictly speaking, *any* use of the old value of p after realloc moves the block is invalid, even comparing it to newp.)
The correct way to relocate p2 within the possibly-moved block, correcting *all* the problems in the original (including a subtle one not mentioned), is: ptrdiff_t offset = p2 - p; newp = realloc(p, newsize); if(newp != NULL) { p = newp; p2 = p + offset; } 2.11: int a[10], b[10]; ... a = b; A: You can't assign arrays. 2.12: int i = 0; char *p = i; if(p == 0) ... A: Assigning an integer 0 to a pointer does not reliably result in a null pointer. (You must use a *constant* 0.)
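As a footnote to question 2.2: the well-defined way to swap two variables is simply to use a temporary, for example via a small helper function (the name swap_ints here is illustrative):

```c
/* Swap two ints the boring, well-defined way: no sequence-point
 * violations, unlike the a ^= b ^= a ^= b trick from question 2.2
 * (which would also zero both variables if a and b were the same
 * object). */
void swap_ints(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}
```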
Hey folks. I’ve recently got two 28BYJ-48 5V 4 Phase stepper motors with ULN2003 driver boards. The specs on the motor I’m a little confused about are as follows: Step angle: 5.625 x 1/64. Reduction ratio: 1/64. After looking around on the net I’ve come to the conclusion that it has 64 steps/revolution, which is geared down at a ratio of 1:64. So based on this, the motor should have 64 x 64 steps/revolution, which is 4096 steps/revolution. I wrote a very basic piece of code using the Stepper library just to get a feel for how this thing works:

#include <Stepper.h>

int motorSteps=4096;
Stepper test(motorSteps,8,9,10,11);

void setup() {
  // put your setup code here, to run once:
}

void loop() {
  // put your main code here, to run repeatedly:
  test.setSpeed(180);
  test.step(4096);
  delay(3000);
}

This code didn’t do anything. So, I changed the value of motorSteps from 4096 to 64 (to ignore the gear ratio), and the motor spun, but with the parameter in step set to 4096, it rotated two full revolutions. So I’m a little confused how the specs for this motor work with the Stepper library. Should I be using the ungeared motorStep value (it wouldn’t work the other way anyhow…), and why would test.step(4096) spin it around twice? Any light that could be shed on this would be appreciated. Thanks.
RFID Log

I am developing an RFID read/writer based on the one in the hardware section here that actuates our garage door. I have a functioning Ethernet gateway, several function nodes and am using Domoticz as a controller. What I would like to do is write to the Domoticz log when a card is scanned. I would like to keep a record of who comes and goes. I can't seem to find any documentation on how to do this. Any suggestions? I have thought about using a pi and write to the Domoticz log through json but I would rather use the MySensors network.

MyMessage TEXTMsg(CHILD_ID_TXT,V_TEXT);

// Send UID of RFID tag
sendSketchInfo("RFID UID", "0.0.9",false);
present(CHILD_ID_TXT, S_INFO, "RFID UID",false);

#ifdef MY_DEBUG
Serial.print("UID sent to Controller: ");
Serial.println(uid_rfid_str);
#endif

send(TEXTMsg.set(uid_rfid), false);

Hope this helps. I use it this way and it works like a charm. Just remember you need to convert the uid to string in order to send it as text to Domoticz.

Regards,
Martin

@martins Thanks so much, I'll give it a try and let you know how it works out. In my node I'm saving (FRAM) the UID along with the name of the person who holds the tag, so I'd like to send not only the UID but the person's name as well. Again thanks so much for pointing me in the right direction.

@martins It worked out well, I had to tweak it a bit for my situation, but all is well. Thanks for pointing me in the right direction.
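One way to do the UID-to-string conversion mentioned above is to format the raw UID bytes as a hex string. A minimal sketch in plain C follows; the function name and buffer sizes are illustrative, not part of the MySensors or RFID reader APIs:

```c
#include <stdio.h>

/* Format a raw RFID UID (an array of bytes) as an uppercase hex string,
 * e.g. {0xDE, 0xAD, 0xBE, 0xEF} -> "DEADBEEF", so it can be sent as a
 * V_TEXT payload. out must hold at least 2*len + 1 chars. */
void uid_to_hex(const unsigned char *uid, int len, char *out)
{
    int i;
    for (i = 0; i < len; i++)
        sprintf(out + 2 * i, "%02X", uid[i]);
    out[2 * len] = '\0';
}
```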
Sorry to hear that, Martin... but also happy to hear you have other good things moving forward! Our best to you!

Ron

________________________________
From: "changingsong-devel-request@..." <changingsong-devel-request@...>
To: changingsong-devel@...
Sent: Wed, March 30, 2011 6:04:05 AM
Subject: Changingsong-devel Digest, Vol 20, Issue 1

Send Changingsong-devel mailing list submissions to
changingsong-devel@...

To subscribe or unsubscribe via the World Wide Web, visit or, via email, send a message with subject or body 'help' to
changingsong-devel-request@...

You can reach the person managing the list at
changingsong-devel-owner@...

When replying, please edit your Subject line so it is more specific than "Re: Contents of Changingsong-devel digest..."

Today's Topics:

1. Closing Project Development (Martin Zibricky)

----------------------------------------------------------------------

Message: 1
Date: Wed, 30 Mar 2011 09:39:32 +0200
From: Martin Zibricky <mzibricky@...>
Subject: [cs-devel] Closing Project Development
To: changingsong-devel@...
Message-ID: <1301470772.30138.169.camel@...>
Content-Type: text/plain; charset="UTF-8"!

------------------------------

_______________________________________________
Changingsong-devel mailing list
Changingsong-devel@...

End of Changingsong-devel Digest, Vol 20, Issue 1
*************************************************

Hi all,

as plan for next release I would like to see:

- use shared OpenLyrics python library for working with xml files ()
- create UI form for song editing (not just editing of plain XML)
- improve presentation experience
- design format for set mode (play list)
- other minor improvements and refactoring

0.1 should be released in the mid of July.

God bless,
Martin

Hi all,

I would like to announce a new development release of ChangingSong.
Source code tarballs: Win32 binary:

Best regards
Martin Zibricky

Release Notes for ChangingSong
==============================

Release 0.0.3 18 May 2010

Development release.

User visible changes:

* add create/delete songs
* add searching in songs
* add export of presentation to PDF
* add lock (freeze) content of presentation
* add preview of presentation to control dialog (presentation mode)
* add context menu to Presentation window - fullscreen and moving to another screen
* add more song examples
* win32: ability to read songs with filenames containing non ASCII characters
* items in song list are alphabetically sorted (locale aware sorting)
* dialog with system information is better organized
* Keyboard shortcuts: Ctrl+F search in songs, Ctrl+L lock presentation

Development visible changes:

* newer library versions required:
  * Python 2.6
  * PyQt >= 4.6
  * Qt >= 4.5
* use of more pythonic PyQt API:
  * no need to do conversion from/to QString and QVariant types
* heavy refactoring:
  * folder structure simplified and reorganized
  * cslib.io.song and cslib.pr.view simplified
* update pyutilib to v3.2
* add Nose v0.11.3 (unit test execution) to svn
* add Whoosh v0.3.18 (search and indexing) library to svn
* OpenLyrics (used data format) development as a standalone project

Known defects:

* On Ubuntu 10.04 colors of video may not be properly handled.
* Video background will probably never work on Ubuntu 9.10.
* Saving of windows position and properties is not properly handled.

Hi all,

I would like to announce a new development release of ChangingSong.

Source code tarballs: Win32 binary:

Best regards
Martin Zibricky

note: there is a known issue with icons not displayed when running win32 binary on windows vista

Release Notes for ChangingSong
==============================

Release 0.0.2 08 Mar 2010

Development release.
User visible changes:

* add more icons and buttons (menus, toolbars)
* using docking windows
* add docking window with a list of songs from folder 'songs'
* control dialog incorporated into the main window
* add buttons to switch between song editing and control dialog
* remove list of recently opened files from file menu
* position and size of main window is saved at exit and loaded at startup
* presentation window is displayed immediately after startup
* when lyrics does not fit into presentation window, the font size is decreased
* Keyboard shortcuts: Ctrl+E song editing mode, F11 enable/disable presentation screen

Development visible changes:

* Adjustment to create standalone executables by PyInstaller (freezing application)
* Description of OpenLyrics data format syntax
* update 3rd party module configobj to v4.7.2
* widget objects are created mostly only once
* QtDesigner *.ui forms are compiled at runtime (no need to recompile them by hand)
* pyutilib plugin cslib.conf.config is used to read/write app configuration (replacing module cslib.globals)
* slightly reduced code complexity

Fixed defects:

* #85: [win] app does not work on Windows with paths containing non ascii chars
* #92: freezing on ubuntu 9.10 - closing control dialog

Known defects:

* video background doesn't work on Ubuntu 9.10

Jonathan Stafford wrote on Wed 20 Jan 2010 at 09:08 +0800:
> I thought could be added to the list of presentation tools on the
> ChangingSong website.

It is added:

It looks really cool. Especially using javascript for adjusting font-size for presentation. This application could be an inspiration for how to implement font adjusting.

Thanks for that link.

Martin

Hello,

I just came across an interesting piece of software called "songserver" that I thought could be added to the list of presentation tools on the ChangingSong website. It seems to use html/css for displaying the lyrics in a fullscreen web browser.
Cheers,
Jonathan Stafford

Hi all,

I tried to create a win32 standalone executable:

Icons used in the application might not be displayed.

Best regards
Martin

Hi all,

I would like to announce a new development release of ChangingSong. A lot of time has been spent working on the OpenLyrics format.

Source code tarballs:

Best regards
Martin Zibricky

Release Notes for ChangingSong
==============================

Release 0.0.1 05 Jan 2010

Development release.

User visible changes:

* Read and display OpenLyrics songs (own data format implemented).
* Added basic plain text editor that allows xml document editing.
* Toolbar for basic actions (load/save song, cut/copy/paste text, run presentation mode). Icons from the Tango project were used.
* Keyboard shortcuts: Ctrl+O open a song, Ctrl+S save a song, Ctrl+Q quit application, Ctrl+C copy text, Ctrl+V paste text, Ctrl+X cut text, Ctrl+P run presentation mode

Development visible changes:

* Update pyutilib to version 3.0.2
* Unit/functionality tests moved to one directory.
* Improved execution time of tests
* Add automatic tests for text editor.
* Saved songs (xml documents) are prettyprinted (xml is automatically formatted and beautified).
* Script to convert songs from Opensong to OpenLyrics format.

Known defects:

* 85: ChangingSong doesn't work when it is placed in path containing non ascii characters.

Hi all,

I would like to announce the release of OpenLyrics format version 0.6. Since this version the format should be stable for some time. I would recommend using this version as a base for implementation.

download (some song examples, xml schema for validation):
more details:

A comprehensive documentation needs to be written.

Thanks to all who have helped.

Martin Zibricky

Hi all,

I have incorporated most of the latest comments into the openlyrics format. You can find the latest (v0.5) version at:

Version 0.5 is meant as the last draft. If there are no major comments in two weeks, I'd like to proclaim v0.5 as "production ready" and tag it as v0.6.
The changes in this version are:
---------------------------------

* custom verse names: <verse name="custom_name_name">
* custom tempo: <tempo type="custom">steadily</tempo>
* stay with only one key (any text): <key>C#</key>
* allow any chord notation (any text)
* restrict ccli theme 'id' to range 1-999:
* theme value can't be empty
* namespace changed from: to:
* content of an optional element is mandatory, when the element present in xml.

-----

If there are any questions, I am willing to answer them.

Best regards
Martin Zibricky

(I've cc'ed the ChangingSong list since I see there's not much discussion happening there, and there's a fair amount happening on OpenLP's mailing list)

On Friday 04 December 2009 11:30:25 Martin Zibricky wrote:
> Do you agree that having some predefined values is good? If so, which
> then?

From what I've seen, the "values" vary. I am not aware of an official "list" of tempo names, though there could be one.

> When there will be also some predefined values and when not limiting to
> only predefined values, we could introduce a new tempo type:
>
> <tempo type="custom">steadily</tempo>

What's the point of the text then? I'd rather have a text type that allows custom values. At the end of the day, the musician is not bound to that value in any case.

> id == line number of CCLI theme in file with ccli themes
>
> That file could be found in svn, together with RelaxNG Schema.
>
> When using IDs you should be able to easily assign translation to a
> theme in the application.

Ah, I see. However, this does bring up a point I've been thinking about. I understand that CCLI is a good point to reference things from, but do we really want to be stuck to their data? In the case of the themes, I'd suggest we make the content mandatory, and make the id attribute optional. We don't want to force churches to use the CCLI's list of themes.

> When allowing custom verse names, should there be required min. length
> of a custom name? For example 4 letters?
No, I'd make it completely custom. Once again, we don't want to force a particular way of doing things on an application. I know this makes the import process of the application reading the OpenLyrics file difficult, but if we want a "universal" lyrics format, we need to try to be as open as possible.

Further thoughts are welcome!

--
Raoul Snyman, B.Tech IT (Software Engineering)
Saturn Laboratories
m: 082 550 3754
e: raoul.snyman@...
w: b: blog.saturnlaboratories.co.za

Hi all,

here are the last questions I was thinking about. If there aren't any objections, I'll make in a few days a final draft, based on answers to the following questions.

1) mandatory elements/attributes:
- <song xmlns="" version="" createdIn="" modifiedIn="" modifiedDate="">
- one title, lyrics -> one verse -> one line

2) in <tempo type="text"></tempo> there could be values: very fast, fast, moderate, slow, very slow

Are those values appropriate?
- I think it is enough. It is also possible to use beats per minute.

Should the content of tempo be restricted only to predefined text values or allow custom values?
- predefined text eases translation of those values in the app

3) <key>C#</key>

Should it be allowed to use more keys?
- I think we should stay with one key.
- content could be any text

4) <theme id="5"/>

Should the id be restricted to the count of the ccli theme list?
- it will be restricted to values 1-999

5) <chord name="D"/>

Do you think it is possible to create a regex to match every chord?
- only type 'text' is required
- trying to match all possible chord notations is hard and too complex

6) <verse name="v1">

Should custom one word values be allowed for verse name?
- still not sure with this
- at the moment I would rather keep only predefined values
- not sure how custom verse names are widespread in the OpenSong community

To just remind:

God bless
Martin

Onatawahtaw wrote on Tue 17 Nov 2009 at 07:33 -0800:
> Why do we want multiple translations of a song in the song file?
The purpose of multiple translations is to allow displaying lyrics in multiple languages. For example, in some churches in Germany many worship songs are sung in English. But older people don't understand English. In this situation this feature is handy. It allows singing songs in English while older people also understand the text. There could be similar scenarios where this feature could be handy.

In my opinion the data format is less complicated than using multiple files for song translations when we want the feature of displaying lyrics in multiple languages. And with one file I think it will be easier to implement.

The feature of defining a language of a song is optional. The aim is not that the user should put every translation into one song.

Is this reasonable for you? I appreciate your comments.

Best regards
Martin

Hi Martin,

Here are a couple of comments I have:

1. I do not think ccliNo should be inside of the authorInfo section. I would be more inclined to place it in the <song> section between the </titles> and <authorInfo> tags.

2. Also, this one is more of a question. Why do we want multiple translations of a song in the song file? I would be more inclined to save them as separate files (maybe in separate folders). I am looking more at the sharing aspect. If someone is sharing their song database with 5 or 6 song translations with someone else who only uses one language, it would be far easier to eliminate the entire language folders for languages not used, rather than to remove them from each song.

God bless,
-Kevin

--- On Tue, 11/17/09, Martin Zibricky <mzibricky@...> wrote:
> From: Martin Zibricky <mzibricky@...>
> Subject: [cs-devel] xml song format - next draft
> To: changingsong-devel@...
> Date: Tuesday, November 17, 2009, 3:26
>
>
> ------------------------------------------------------------------------------
> Let Crystal Reports handle the reporting - Free Crystal
> Reports 2008 30-Day
> trial.
> Simplify your report design, integration and
> deployment - and focus on
> what you do best, core application coding. Discover what's
> new with
> Crystal Reports now.
> _______________________________________________
> Changingsong-devel mailing list
> Changingsong-de

Hi,

finally it's there. I've uploaded the ChangingSong tarball to sourceforge in zip and zip formats:

Best regards
Martin Zibricky

Release Notes for ChangingSong
==============================

Release Prototype 08 Oct 2009

Initial development release. This release is not yet production ready. It is aimed at developers or people interested in ChangingSong development.

User visible changes:

* Read and display OpenSong songs (own format is not yet implemented).
* Video/Image/Color background.
* System information can be found in menu Help->System Information

Development visible changes:

* Debug mode is enabled by default.
* In debug mode a dialog is displayed with the created html used for rendering song text. In this dialog it is possible to tune html/css or try WebKit capabilities.
* QWebView (WebKit in Qt) is used for text rendering.
* Phonon handles video playback.
* XML parsing is done by lxml (wrapped libxml2). It seems to be the fastest and at the same time an easy to use library.
* Added some automatic tests.
* Communication between UI and rendering part is handled by a form of message passing (module cslib.pr.route.messages).
* For some parts the plugin framework PyUtilib is used experimentally.

There was reported an error on ubuntu with the xine phonon backend. Video was not played. I don't know why, but changingsong doesn't work at the moment with the xine backend. For changingsong to work, it is necessary to uninstall the package 'phonon-backend-xine' and install 'phonon-gstreamer-backend'.

Martin

Ron Manke wrote on Sat 3 Oct 2009 at 07:26 -0700:
> Is there any way to get a Mac package?
> I'm willing to help with UI design at some point, but I prefer to use
> a Mac package if possible.
> Ron

Hi Ron,

A mac package is not available. Since I don't have any Mac machine I'm not able to test changingsong on mac. But I've written some instructions which could help you to set up a mac environment for hacking on changingsong. I haven't tried anything from those instructions but it may work. If you are willing to try that, you could then update those instructions if necessary. (you should tell me your sf.net nick to give you access for wiki editing)

Martin

Is there any way to get a Mac package? I'm willing to help with UI design at some point, but I prefer to use a Mac package if possible.

Ron

Hi,

in a few days I would love to make a first release. It won't be targeted for users. It will only show that there are some concepts implemented and something is working. To make a release of a prototype, it should mean:

- clean source code
- write/update documentation

Released source shouldn't be much different than current svn trunk.

Regards
Martin

Hi all,

during the last weeks I've been working on implementing displaying not only text but also images and videos. There is only a basic implementation of displaying images and videos. So don't expect anything fancy. I've tested it on Windows7, Ubuntu 9.04 and Gentoo. If you would like to test it, check out svn trunk and you could follow instructions how to get all necessary tools and libraries:

Windows:
Ubuntu:

Regards
Martin

Hi all,

yesterday I've finished an example where text is rendered over video. You can find this example in:

/trunk/samples/qt4/videobackground

This week I'm gonna implement this example in the changingsong prototype. Video background is implemented by using just Qt classes, for instance: QGraphicsScene, QWebView, VideoWidget.

Regards
Martin

Hi all,

during the last week I've done a little comparison of a few python plugin frameworks. I would like to get some feedback if something is missing/wrong/right or if there are any suggestions on how a modular architecture could be achieved in ChangingSong.

Thanks
Martin
http://sourceforge.net/p/changingsong/mailman/changingsong-devel/
You can use API keys to restrict access to specific API methods or all methods in an API. This page describes how to restrict API access to those clients that have an API key and also shows how to create an API key.

The Extensible Service Proxy (ESP) uses the Service Control API to validate an API key and its association with a project's enabled API. If you set an API key requirement in your API, requests to the protected method, class, or API are rejected unless they have a key generated in your project or in other projects belonging to developers to whom you have granted access to enable your API.

The project that the API key was created in isn't logged and isn't added to the request header. You can, however, view the Google Cloud project that a client is associated with in Endpoints > Service, as described in Filter for a specific consumer project. For information on which Google Cloud project an API key should be created in, see Sharing APIs protected by API key.

By default in gRPC services, all API methods require an API key to access them. You can disable the API key requirement for the entire API or for specific methods. You do this by adding a usage section to your service configuration and configuring rules and selectors, as described in the following procedures.

Restricting or granting access to all API methods

To specify that an API key is not required to access your API:

Open your project's gRPC service configuration file in a text editor and find or add a usage section. In your usage section, specify an allow_unregistered_calls rule as follows. The wildcard "*" in the selector means that the rule applies to all methods in the API.

usage:
  rules:
    # All methods can be called without an API Key.
- selector: "*" allow_unregistered_calls: true Removing API key restriction for a method To turn off API key validation for a particular method even when you've restricted API access for the API: Open your project's gRPC service configuration file in a text editor and find or add a usagesection: In your usagesection, specify an allow_unregistered_callsrule as follows. The selectormeans that the rule applies just to the specified method - in this case, ListShelves. usage: rules: # ListShelves method can be called without an API Key. - selector: endpoints.examples.bookstore.Bookstore.ListShelves allow_unregistered_calls: true Calling an API using an API key Calling an API varies, depending on whether you call from a gRPC client or an HTTP client. gRPC clients If a method requires an API key, gRPC clients need to pass the key value as x-api-key metadata with their method call. Python def run(host, port, api_key, auth_token, timeout, use_tls, servername_override, ca_path): """Makes a basic ListShelves call against a gRPC Bookstore server.""" if use_tls: with open(ca_path, 'rb') as f: creds = grpc.ssl_channel_credentials(f.read()) channel_opts = () if servername_override: channel_opts += (( 'grpc.ssl_target_name_override', servername_override,),) channel = grpc.secure_channel('{}:{}'.format(host, port), creds, channel_opts) else:; } } Go func main() { flag.Parse() // Set up a connection to the server. 
	conn, err := grpc.Dial(*addr, grpc.WithInsecure())
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	c := pb.NewGreeterClient(conn)

	if *keyfile != "" {
		log.Printf("Authenticating using Google service account key in %s", *keyfile)
		keyBytes, err := ioutil.ReadFile(*keyfile)
		if err != nil {
			log.Fatalf("Unable to read service account key file %s: %v", *keyfile, err)
		}
		tokenSource, err := google.JWTAccessTokenSourceFromJSON(keyBytes, *audience)
		if err != nil {
			log.Fatalf("Error building JWT access token source: %v", err)
		}
		jwt, err := tokenSource.Token()
		if err != nil {
			log.Fatalf("Unable to generate JWT token: %v", err)
		}
		*token = jwt.AccessToken
		// NOTE: the generated JWT token has a 1h TTL.
		// Make sure to refresh the token before it expires by calling TokenSource.Token() for each outgoing request.
		// Calls to this particular implementation of TokenSource.Token() are cheap.
	}

	ctx := context.Background()
	if *key != "" {
		log.Printf("Using API key: %s", *key)
		ctx = metadata.AppendToOutgoingContext(ctx, "x-api-key", *key)
	}
	if *token != "" {
		log.Printf("Using authentication token: %s", *token)
		ctx = metadata.AppendToOutgoingContext(ctx, "Authorization", fmt.Sprintf("Bearer %s", *token))
	}

	// Contact the server and print out its response.
	name := defaultName
	if len(flag.Args()) > 0 {
		name = flag.Arg(0)
	}
	r, err := c.SayHello(ctx, &pb.HelloRequest{Name: name})
	if err != nil {
		log.Fatalf("could not greet: %v", err)
	}
	log.Printf("Greeting: %s", r.Message)
}

HTTP clients

If you are using the Cloud Endpoints for gRPC HTTP transcoding feature, HTTP clients can send the key as a query parameter in the same way they do for OpenAPI services.

Sharing APIs protected by API key

API keys are associated with the Google Cloud project in which they have been created.
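As an aside on the HTTP clients note above: for transcoded HTTP calls, the key is conventionally sent as the key query parameter. The following Python sketch shows one way to build such a URL; the host and key are made-up placeholders.

```python
from urllib.parse import urlencode, urlsplit

def with_api_key(base_url, api_key):
    """Append an API key as the `key` query parameter."""
    # Use '&' when the URL already carries a query string, '?' otherwise.
    sep = '&' if urlsplit(base_url).query else '?'
    return '{}{}{}'.format(base_url, sep, urlencode({'key': api_key}))

# Hypothetical transcoded endpoint and placeholder key:
print(with_api_key('https://bookstore.example.com/v1/shelves', 'AIzaSyExampleKey'))
```

The helper only assembles the URL; the actual request can then be made with any HTTP client.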
If you have decided to require an API key for your API, the Google Cloud project that the API key gets created in depends on the answers to the following questions:

- Do you need to distinguish between the callers of your API so that you can use Endpoints features such as quotas?
- Do all the callers of your API have their own Google Cloud projects?
- Do you need to set up different API key restrictions?

You can use the following decision tree as a guide for deciding which Google Cloud project to create the API key in.

Grant permission to enable the API

When you need to distinguish between callers of your API, and each caller has their own Google Cloud project, you can grant users who are members of the calling projects permission to enable the API in their own Google Cloud project. This way, users of your API can create their own API key for use with your API.

For example, suppose your team has created an API for internal use by various client programs in your company, and each client program has their own Google Cloud project. To distinguish between callers of your API, the API key for each caller must be created in a different Google Cloud project. You can grant your coworkers permission to enable the API in the Google Cloud project that the client program is associated with.

To let users create their own API key:

- In the Google Cloud project in which your API is configured, grant each user the permission to enable your API.
- Contact the users, and let them know that they can enable your API in their own Google Cloud project and create an API key.

Create a separate Google Cloud project for each caller

When you need to distinguish between callers of your API, and not all of the callers have Google Cloud projects, you can create a separate Google Cloud project and API key for each caller. Before creating the projects, give some thought to the project names so that you can easily identify the caller associated with the project.
For example, suppose you have external customers of your API, and you have no idea how the client programs that call your API were created. Perhaps some of the clients use Google Cloud services and have a Google Cloud project, and perhaps some don't. To distinguish between the callers, you must create a separate Google Cloud project and API key for each caller.

To create a separate Google Cloud project and API key for each caller:

- Create a separate project for each caller.
- In each project, enable your API and create an API key.
- Give the API key to each caller.

Create an API key for each caller

When you don't need to distinguish between callers of your API, but you want to add API key restrictions, you can create a separate API key for each caller in the same project.

To create an API key for each caller in the same project:

- In either the project that your API is configured in, or a project that your API is enabled in, create an API key for each customer that has the API key restrictions that you need.
- Give the API key to each caller.

Create one API key for all callers

When you don't need to distinguish between callers of your API, and you don't need to add API restrictions, but you still want to require an API key (to prevent anonymous access, for example), you can create one API key for all callers to use.

To create one API key for all callers:

- In either the project that your API is configured in, or a project that your API is enabled in, create an API key for all callers.
- Give the same API key to every caller.
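The decision tree described above can be condensed into a few lines. The following Python sketch is an informal paraphrase, not official guidance; the function and flag names are made up for illustration.

```python
def api_key_strategy(distinguish_callers, all_callers_have_projects,
                     need_per_caller_restrictions):
    """Informal paraphrase of the decision tree described above."""
    if distinguish_callers:
        if all_callers_have_projects:
            return "grant callers permission to enable the API in their own projects"
        return "create a separate Google Cloud project and API key per caller"
    if need_per_caller_restrictions:
        return "create one API key per caller in a single project"
    return "create one shared API key for all callers"

# Internal clients, each with their own Google Cloud project:
print(api_key_strategy(True, True, False))
```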
https://cloud.google.com/endpoints/docs/grpc/restricting-api-access-with-api-keys?hl=en
Lift is an advanced, next-generation framework for building highly interactive and intuitive web applications. Lift aims to give you a toolkit that scales with both your needs as a developer and the needs of your applications. Lift includes a range of features right out of the box that set it apart from other frameworks in the marketplace: namely security, statefulness, and performance. Lift also includes a range of high-level abstractions that make day-to-day development easy and powerful. In fact, one of the main driving forces during Lift's evolution has been to include only features that have an actual production use. You, as the developer, can be sure that the features you find in Lift are distilled from real production code.

Lift in Action is a step-by-step exploration of the Lift web framework, and it's split into two main parts: chapters 1 through 5 introduce Lift and walk you through building a small, sample application, and then chapters 6 through 15 take a deep dive into the various parts of Lift, providing you with a deep technical reference to help you get the best out of Lift.

Chapter 1 introduces Lift and sets the scene with regard to how it came into existence. It also covers the various modules of the framework to give you an appreciation for the bigger picture.

Chapter 2 shows you how to get up and running with the Scala build tool SBT and start making your first web application with Lift. This chapter focuses on small, incremental steps covering the concepts of development that you'll need in the rest of the book.

Chapters 3, 4, and 5 walk you through the construction of a real-time auction application to cover as many different parts of Lift as possible. This includes creating templates, connecting to a database, and implementing basic AJAX and Comet.
Chapter 6 takes a dive into the practical aspects of Lift WebKit, showing you how to work with the sophisticated templating system, snippets, and form building through LiftScreen and Wizard. Additionally, this chapter introduces Lift's own abstraction for handling application state in the form of RequestVar and SessionVar. This chapter concludes with an overview of some useful extension modules, known as widgets, that ship with the Lift distribution.

Chapter 7 focuses on Lift's SiteMap feature, which allows you to control access and security for particular resources.

Chapter 8 covers the internal workings of Lift's HTTP pipeline, detailing the various hooks that are available and demonstrating several techniques for implementing HTTP services.

Chapter 9 explores Lift's sophisticated AJAX and Comet support, demonstrating these technologies in practice by assembling a rock-paper-scissors game. This chapter also covers Lift's AJAX abstraction called wiring, which allows you to build chains of AJAX interaction with ease.

Chapters 10 and 11 cover Lift's persistence systems, Mapper and Record. Mapper is an active-record style object-relational mapper (ORM) for interacting with SQL data stores, whereas Record is store-agnostic and can be used with any backend system from MySQL to modern NoSQL stores such as MongoDB.

Chapter 12 demonstrates Lift's localization toolkit for building applications that can work seamlessly in any language. This includes the various ways in which you can hook in your ResourceBundles to store localized content.

Chapter 13 is all about the enterprise aspects often associated with web application development. Technologies such as JPA are prevalent within the enterprise space, and companies often want to reuse them, so this chapter shows you how to implement JPA with Lift. Additionally, this chapter covers messaging using the Akka framework.

Chapter 14 covers testing with Lift and shows you some different strategies for testing snippets.
More broadly, it demonstrates how to design code that has a higher degree of decoupling, so your general coding lends itself to testing.

Finally, chapter 15 consolidates all that you've read in the book and shows you how to take your application into production. This includes an overview of various servlet containers, a demonstration of implementing distributed state handling, and a guide to monitoring with Twitter Ostrich.

Primarily, this book is intended to demonstrate how to get things done using Lift. With this in mind, the book is largely slanted toward users who are new to Lift, but who have experience with other web development frameworks. Lift has its own unique way of doing things, so some of the concepts may seem foreign, but I make conceptual comparisons to things you may be familiar with from other popular frameworks or libraries to smooth the transition.

If you're coming to Lift with little or no knowledge of Scala, you should know that Lift makes use of many Scala language features. This book includes a Scala rough guide to get you up and running within the context of Lift as quickly as possible.

The book largely assumes that you have familiarity with XML and HTML. Lift's templating mechanism is 100 percent based on XML, and although it's straightforward to use, it's useful to have an understanding of structured XML that makes use of namespaces.

Finally, because Lift is primarily a web framework designed for browser-based experiences, JavaScript is inevitably part of the application toolchain. Lift includes a high-level Scala abstraction for building JavaScript expressions, but having an understanding of JavaScript and client-side scripting can greatly improve your understanding of the client-server interactions supplied by Lift.

This book includes a wide range of examples and code illustrations from Scala code and HTML templates, to plain text configurations for third-party products.
Source code in the listings and in the text is presented in a fixed width font to separate it from ordinary text. Additionally, Scala types, methods, keywords, and XML-based markup elements in text are also presented using fixed width font. Where applicable, the code examples explicitly include import statements to clarify which types and members originate from which packages. In addition, functions and methods have explicitly annotated types where the result type is not clear.

Although Scala code is typically quite concise, there are some listings that needed to be reformatted to fit in the available page space in the book.
http://my.safaribooksonline.com/book/-/9781935182801/about-this-book/pref03
CC-MAIN-2013-20
refinedweb
1,234
50.16
On Wed, 2004-11-17 at 08:09 -0700, Michaeljohn Clement wrote: > Maybe this is a subtle point, but I really /don't/ want to change > history. What I /do/ want to do is to rename a category, and then > re-use the original name. The fact that the archive namespace is > immutable is an aspect of arch which apparently makes it difficult or > impossible to get the result I want /without/ changing history. If I > have to change history to get the result I want, I may be willing to go > for it, but that's not my desire. Renaming a category is changing history. You're changing what revision $FOO was called when it was checked in; that's part of its history. Now, if you want to just do this for future revisions, that's a whooole different kettle of fish. Tag from foo--dev--0.1 to bar--dev--0.1, and create a new branch foo--new--0.1 (or foo--dev--0.2, or whatever). Now, you have the correct names for future work, but you haven't changed the past.
http://lists.gnu.org/archive/html/gnu-arch-users/2004-11/msg00482.html
CC-MAIN-2016-07
refinedweb
187
75.5
C++ articles, code snippets, musings, etc. from Andy RichIf this is your first time here, you may want to check out my blog introduction. Hot on the heels of my article on interior pointers, comes a much more insightful one by Stan Lippman on the same issue. That happens sometimes. I enjoyed the chat we had on the VC++ 2005 Beta, and I wanted to point that there are two other online chats coming up. One is on upgrading COM apps to .NET, and the other is on the library and runtime enhancements in VC++ 2005. They're lumped together with other chats that might be of interest on this page. On to pinning pointers. Sometimes, the interior pointer simply won't suffice. Say I have a function I really need to access in a native code module. For example, I have a native function that loads an array for me, and I want to use it to load a managed array. Sounds like a complicated task, and it can be. Conventional wisdom tells us that the native function can't get access to the managed array, because it's on the GC heap, and could move around all over the place. I can't use interior pointers because this is native code. So it seems the onlyh choice left is to make another native array, call into the native function, and then loop through the native array, copying the members over from it to the managed array. Clunky, to say the least. There's a better way? You bet. It's called the pinning pointer. In Managed Extensions, we exposed it using the keyword __pin. In the new syntax, we expose it through another smart-pointer-ish type, pin_ptr<>, located in the cli namespace (the namespace formerly known as stdcli::language). What the pinning pointer does is "pin" our managed object down on the GC heap, preventing the garbage collector from moving it around. In addition to this pinning, it gives us what we need; a conversion to a native pointer. Though pinning pointers are cool, they are sometimes not well understood. 
The best way to think of them is that an object is pinned so long as a pinning pointer points to it. That's important enough a concept that you ought to read that sentence again. I'll wait. What this means is that your pin_ptr object is only pinning something on the GC help while its in scope. When it goes out of scope, it stops pointing to that object, and that object can be moved at any time. That means you can't go saving a native pointer, and expect your pin_ptr to hold it forever. Dangers of pinning pointers. In fact, because of the dangers of misinterpretation, we severely restrict where and how you can use pinning pointers. They can't be involved anywhere temporaries are created, can't be the argument to or the return type from a function, can't be members of a type, and can't be involved in casts, to name a few. But this is C++, and what would C++ be without a way to shoot yourself in the foot. Long ago, I wrote an article that included some warnings about the pinning pointer. The examples are in Managed Extensions, but the concepts are pretty solid. Rather than repeat myself, I'll just link to it, and request that you read it. Especially that example of how to make a quick and easy GC hole. Enough talk, let's see an example. Right.");} Putting it all together can be complicated. Brandon helped me sort it out one day by drawing a helpful diagram, which I'll replicate here. Don't quit your day job. I know, I'm not much of an artist. Think of the arrows as "can convert to." Note that for orthogonality, native pointers can convert to pin and interior pointers. Hey, it's sometimes useful, you'll be glad it's there. That's it for pin pointers. In a future article, I might look at our upcoming for each syntax.
http://blogs.msdn.com/arich/archive/2004/08/27/221588.aspx
crawl-001
refinedweb
687
74.08
What's been happening to Python since J. Bauer's article in Linux Journal #35? Like most free software, Python is being continually developed and enhanced. At the time of the original article, Python was at version number 1.2, and betas of 1.3 were floating around. Since then, version 1.3 has been officially released, only to be replaced by 1.4 in late October. Versions 1.3 and 1.4 have both added new features to the language. The really significant new item in 1.3 was the addition of keyword arguments to functions, similar to Modula-3's. For example, if we have the function definition: def curse(subject="seven large chickens", verb="redecorate", object="rumpus room"): print "May", subject, verb, "your", object then the following calls are all legal: curse() curse('a spaniel', 'pour yogurt on', 'hamburger') curse(object='garage') curse('the silent majority', object='Honda')Arguments not preceded by a keyword are passed in the usual fashion; non-keyword and keyword arguments can be used in the same function call, as long as the non-keyword parameters precede the keyword parameters. By that rule, the following call is a syntax error: curse(object='psychoanalyst', 'a ancient philosopher')and the following call would cause an error at runtime, because an argument is being defined twice: curse('the silent majority', subject='Honda')As a pleasant side effect, adding keyword arguments required optimising function calls, reducing the overhead of a single function call by roughly 20%. Most of the changes in the 1.4 release made Python more useful for numeric tasks. Many of the changes were proposed by the members of the Matrix special interest group (or Matrix-SIG), which has defined a syntax and built a data type for manipulating matrices. (The Python SIGs are small groups of people tightly focused on one application of Python, such as numeric programming or database interfaces; see for more information about the existing SIGs.) 
One such enhancement is support for complex numbers. The imaginary component of a complex number is denoted by a suffix of “J” or “j”; thus, the square root of -1 is symbolized as 1j . The usual mathematical operations such as addition and multiplication can be performed on complex numbers, of course. >>> 1+2j*2 (1+4j) >>> (1+2j)*2 (2+4j) >>> (1+2j)/(2+1j) (0.8+0.6j) The presence of complex numbers also requires mathematical functions that can perform operations on them. Instead of updating the existing math module, a new module called cmath was added; old software might malfunction if an operation returns a complex value where an error was expected. So math.sqrt(-1) will always raise a ValueError exception, while cmath.sqrt(-1) will return a complex result of 1j. >>> import cmath >>> cmath.sqrt(-1) 1j >>> a=cmath.log(1+2j) >>> print a (0.804718956217+1.10714871779j) >>> cmath.exp(a) (1+2j)For the sake of users comfortable with Fortran's notation, the ** operator has been added for computing powers; it's simply a shorthand for Python's existing pow() function. For example, 10**2 is equivalent to pow(10,2) and returns 100. One minor new function has been requested by several people in comp.lang.python. Python has long had a tuple() function which converts a sequence type (like a string or a list) into a tuple; the usual idiom for converting sequence types to lists was map(None, L). (The function map(F,S) returns a list containing the result of function F, performed on each of the elements of the sequence S. If F is None, as in this case, then no operation is performed on the elements, beyond placing them in a list.) Many people found this asymmetry—tuple() existed, but not list()—annoying. In 1.4, the list() function was added, which is symmetric to tuple(). 
>>> tuple([1,2,3]) (1, 2, 3) >>> list( (1,2,3,4) ) [1, 2, 3, 4] An experimental feature was included in 1.4 and caused quite a bit of controversy: private data belonging to an instance of a class is a little more private. An example will help to explain the effect of the change. Consider the following class: class A: def __init__(self): self.__value=0 def get(self): return self.__value def set(self, newval): self.__value=newvalPython doesn't support private data in classes, except by convention. The usual convention is private variables have names that start with at least one underscore. However, users of a class can disregard this and access the private value anyway. For example: >>> instance=A() >>> dir(instance) # List all the attributes of the instance ['__value'] >>> instance.get() 0 >>> instance.__value=5 >>> instance.get() 5A more significant problem; let's say you know nothing about A's implementation and try to create a subclass of A which adds a new method that uses a private __value attribute of its own. The two uses of the name will collide. Things are slightly different in 1.4: >>> instance=A() >>> dir(instance) ['_A__value']Where did this new value come from? In 1.4, any attribute that begins with two underscores is changed to have _ and the class name prefixed to it. Let's say you have a class called File, and one method refers to a private variable called __mode the name will be changed to _File__mode. >>> instance.get() 0 >>> instance.__value=5 >>> instance.get() 0 >>> dir(instance) ['_A__value', '__value']Now, this still doesn't provide ironclad data hiding; callers can just refer explicitly to _A__value. However, subclasses of A can no longer accidentally stomp on private variables belonging to A. This feature is still controversial and caused much debate in comp.lang.python when it was introduced. Thus, its status is only experimental, and it might be removed in a future version of the language, so it would be unwise to rely on it. 
Both the 1.3 and 1.4 releases included some new modules as part of the Python library, and bug fixes and revisions to existing modules in the library. Most of these changes are only of interest to people who've written code for earlier versions of those modules; see the file Misc/NEWS in the Python source distribution for all the details. If you're just coming to the language, these changes aren't really of interest to you. The news isn't just limited to the software. The first two books on Python were published in October: Programming Python, by Mark Lutz, Internet Programming with Python, by Aaron Watters, Guido van Rossum, and James C. Ahlstrom. At least one more book is scheduled for release next year. Two Python workshops have taken place, one at the Lawrence Livermore National Labs in California last May, and another in Washington, D.C. in November. Speakers discussed all sorts of topics: distributed objects; interfacing C++ and Python, or Fortran and Python; and Web programming. See for more information about the workshops and the papers presented. In November 1996, the 5th Python Workshop was held, in association with the FedUnix '96 trade show. The two most common topics were numeric programming and Web-related programming. For numeric work, there's a lot of interest in using Python as a glue language to control mathematical function libraries written in Fortran or C++. Code can be developed quickly in Python, and once the program logic is correct it can be ported to a compiled language for speed's sake. There's also a benefit from using a general programming language like Python, instead of a specialized mathematical language; it's easier to make the numeric code accessible with a GUI written in Tk, or with a CGI interface. Another popular topic was Web-related programming. The Python Object Publisher was an especially interesting system, which enables accessing Python objects via HTTP. 
To take an example from the Object Publisher presentation, a URL like: causes a Python Car object named Pinto to be located, and its purchase() method will be called with 'Bob' as a parameter. Other presentations discussed generating HTML, writing tools for system administration, and collaborative document processing. Brief notes on the papers are at, with links to HTML or PostScript versions. As you read this, plans for the next workshop are probably in progress, though there's no news at the time of writing; see the Python Web site for the current status. In the past, the meetings have alternated between the Eastern and Western U.S., so workshop #6 will probably be on the West 30 min ago 2 hours 17 min ago 3 hours 50 min ago 5 hours 27 min ago 7 hours 25 min ago 7 hours 42 min ago 8 hours 12 min ago 8 hours 13 min ago 8 hours 13 min ago 11 hours 14 min ago
http://www.linuxjournal.com/article/2068?quicktabs_1=2
I have to create a form something like this and display results from the database for the values in the form. I have created and stored a database with a table named telephone_records consisting of selectcity, match, phone_no, name, address. How do I create a form that retrieves data from the database and displays phone_no, name, address in a table, based on the values selected in the form above?

Hi Prajwal,

I first cleaned up your schema with some migrations:

class CreateCities < ActiveRecord::Migration
  def up
    create_table :cities do |t|
      t.string :name
    end

    create_table :phone_records do |t|
      t.integer :city_id
      t.string :phone_no
      t.string :name
      t.string :address
    end
  end
end

class MapCities < ActiveRecord::Migration
  def up
    TelephoneRecord.all.each do |tel|
      phone_record = PhoneRecord.new({
        :name     => tel.name,
        :phone_no => tel.phone_no,
        :address  => tel.address
      })
      phone_record.city = City.find_or_create_by_name(tel.selectcity)
      phone_record.save
    end
    drop_table :telephone_records
  end
end

And added the relationships to the models. The filenames, table names and object names are all important with Rails. Tables must be plural, e.g. phone_records. Everything else about a model needs to be singular: city.rb and the model named City.

class City < ActiveRecord::Base
  has_many :phone_records
  attr_accessible :name
end

class PhoneRecord < ActiveRecord::Base
  belongs_to :city
  attr_accessible :name, :phone_no, :address
end

Search parameters: GET and POST are combined in the params hash, so you can access variables from forms.
class PhoneRecordsController < ApplicationController
  def index
    @phone_records = PhoneRecord.limit(100).all
  end

  def search
    @phone_records = PhoneRecord.where(:city_id => params[:city_id]).limit(100)
    if params[:search_by] == 'name'
      @phone_records = @phone_records.where("name LIKE ?", "#{params[:search]}%")
    elsif params[:search_by] == 'address'
      @phone_records = @phone_records.where("address LIKE ?", "#{params[:search]}%")
    elsif params[:search_by] == 'phone'
      @phone_records = @phone_records.where("phone_no LIKE ?", "#{params[:search]}%")
    end
    render :action => :index
  end
end

Use explicit paths in your routes:

Telephonedirectory::Application.routes.draw do
  resources :phone_records do
    collection do
      get 'search'
    end
  end

  root :to => 'phone_records#index', :as => 'listing'
end

<%= form_tag(search_phone_records_path, :method => "get") do %>
  <div>
    <%= label_tag(:city, "City:") %>
    <%= select_tag(:city_id, options_from_collection_for_select(City.all, :id, :name)) %>
  </div>
  <div>
    <label>Search By:</label>
    <%= radio_button_tag(:search_by, "name") %>
    <%= label_tag(:search_by_name, "Name") %>
    <%= radio_button_tag(:search_by, "address") %>
    <%= label_tag(:search_by_address, "Address") %>
    <%= radio_button_tag(:search_by, "phone") %>
    <%= label_tag(:search_by_phone, "PhoneNo") %>
  </div>
  <div>
    <label>Search:</label>
    <%= text_field_tag(:search, params[:search]) %>
  </div>
  <div>
    <%= label_tag(:match, "Match:") %>
    <%= select_tag(:match, options_for_select([['Starts with', 'starts_with']])) %>
  </div>
  <div>
    <%= submit_tag("Search") %>
  </div>
<% end %>

And the controller code. At the moment the "match" parameter isn't doing anything and all searches are "starts_with", but if you want to implement other types you can give it a go.

Hope it helps.

Thanks a ton for this. I've understood all the logic except the migrations - I'm not getting the idea clearly.
Migrations are simply scripts that perform changes to your database schema. Each file can have an up and a down method to migrate the db schema backwards or forwards. I created two new tables in your database, cities and phone_records. I only created the new phone_records table so that it had a proper id column and used the Rails conventions. The second migration just moves the rows from your previous table to the new one.

phone_record.city = City.find_or_create_by_name(tel.selectcity)

The find_or_create_by helper creates the city if it doesn't exist, or returns it. To perform migrations you use rake db:migrate to move forward and rake db:rollback to move back a step.

I am trying to design the form as well as the table displaying results using Twitter Bootstrap. The problem is that when I try to install the twitter-bootstrap-rails gem it says libv8 is building a native extension and then gives an error message. I am on Windows XP SP2 and found that it is not supported, so what is the alternate way to use Bootstrap itself? I tried the bootstrap-sass gem, but when I use it along with formtastic-bootstrap I get an error once again. Could you let me know if there are any other gems using Twitter Bootstrap that also deal with the form_tag command? Please let me know.

You can always download Bootstrap manually and add it to the assets directory. What's the error you are getting when installing formtastic-bootstrap?

Done Mark, installed it finally. It was an SSL connect error I was getting; that's sorted. But can this form_tag be used with Twitter Bootstrap? Everywhere I look I see form_for being used.

To apply the Bootstrap styles you just need to make the HTML the same as what it expects.

Load error: bootstrap formtastic. You said to download it manually into the assets folder; how do I link it with the given form and table? Then I'll know which commands to use to get things done manually. I downloaded it; how do I link it to the app?
Table design done as it was with HTML. Now Mark, tell me: should I change this form in HTML to use Bootstrap with form_tag? I am unable to get it at the moment.

If you can't get the bootstrap formtastic gem, don't worry about it. All it does is generate the HTML that Bootstrap needs; all you need to do is render the same HTML and classes that the forms need.

I used Bootstrap manually: extracted it into assets in the app, then into js and stylesheets, and finally was able to get a table design. At the moment I am trying to apply it to form_tag, but I am not getting where the classes in form_tag must be defined. Should I rebuild the entire form in HTML once again, or is it possible to use Bootstrap for it directly? I searched for a solution but everything is form_for with Bootstrap; I never found Bootstrap dealing with the form_tag command. Is there a way to use form_tag with Bootstrap? Got it done.

Even though this looks like a simple problem: I want to display the results after pressing the search button on the same page, and likewise for the various other search results. Here the records are displayed first, and then the table is changed according to the search results. Presently trying a solution for it. Any idea, Mark? Should I change something in the controller or the view? Bit of a trouble.
https://www.sitepoint.com/community/t/how-to-build-a-search-form-in-this-case-and-display-results/17129
Bubble sort is one of the first sorting algorithms presented in Computer Science. It's not a very practical sorting algorithm, but it is useful for presenting the concepts of sorting and teaching computer science students to think algorithmically. If you investigate bubble sort further, you will learn that there have been attempts to improve upon the efficiency of bubble sort. One such algorithm is comb sort. Comb sort attempts to improve bubble sort by eliminating turtles, which are small numbers at the end of the list you're sorting. Turtles cause bubble sort to run slowly, because they require many swaps as they slowly move 1 position at a time with each and every pass of bubble sort.

Comb Sort Gaps and Shrink Factor

Bubble sort always compares adjacent values while sorting, which causes turtles to only move 1 position at a time to the front. Comb sort attempts to improve the efficiency of bubble sort by comparing values farther apart. This allows smaller numbers at the end of the list to move several positions forward in a single pass, greatly reducing the overall number of swaps. Comb sort refers to these larger distances as gaps, and uses a shrink factor, typically 1.3, to calculate the optimum values for these gaps. As comb sort continues to sort the list, the gaps get smaller and smaller until the gap is 1. With a gap of 1, comb sort is essentially doing a bubble sort.

For example, if you have a list of 10 integers to sort using comb sort, the initial gap is the length of the list divided by the shrink factor.

gap = 10 / 1.3 = 7

Therefore for the first set of exchanges you will be comparing values 7 positions away from each other.

6 0 9 3 7 5 4 1 8 2

In the list above, you will first compare 6 with 1. Because 1 is less than 6, an exchange will happen and the number 1 (a turtle) will quickly move to the front of the list, eliminating many swaps that would have occurred with bubble sort.
1 0 9 3 7 5 4 6 8 2

You will continue to compare 0 with 8 and 9 with 2 in the list above. Since the number 2 is less than 9, it will quickly jump to the front of the list as well. Once you complete all comparisons using a gap of 7, you make the comparisons again using new gaps calculated in a similar manner.

gap = 7 / 1.3 = 5
gap = 5 / 1.3 = 3
gap = 3 / 1.3 = 2
gap = 2 / 1.3 = 1  <- bubble sort

Eventually comb sort uses a gap of 1, which is bubble sort.

Comb Sort in Python

Here is an implementation of comb sort in Python using a shrink factor of 1.3.

def comb_sort(lst):
    """
    Performs in-place sort using Comb Sort.

    :param lst: list of integers
    :return: None

    >>> lst = [6, 0, 9, 3, 7, 5, 4, 1, 8, 2]
    >>> comb_sort(lst)
    >>> assert(lst == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    """
    if lst is None or len(lst) < 2:
        return

    gap = len(lst)
    swap_occurred = True
    while gap > 1 or swap_occurred:
        gap = int(gap / 1.3)
        if gap < 1:
            gap = 1

        i = 0
        swap_occurred = False
        while i + gap < len(lst):
            if lst[i] > lst[i + gap]:
                lst[i], lst[i + gap] = lst[i + gap], lst[i]
                swap_occurred = True
            i += 1

As you can see, comb sort is very easy to implement in Python. Using a list of 10 unsorted integers as shown in the doctest, comb sort will perform comparisons using gaps of 7, 5, 3, 2, and 1. When the gap is 1, comb sort is performing a bubble sort. As with bubble sort, comb sort will continue to compare adjacent values until a pass has successfully completed with no swaps, which means the list is sorted.

Conclusion

Hopefully you appreciate bubble sort more than before as well as understand that turtles, small values at the end of the unsorted list, make it slow. Comb sort improves upon bubble sort by comparing values at farther distances to eliminate turtles. Comb sort isn't the only sorting algorithm that attempts to improve bubble sort by eliminating turtles. I presented another algorithm, called cocktail sort, that also tries to eliminate turtles to improve bubble sort.
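One quick way to gain confidence in an implementation like the one above is to check it against Python's built-in sorted() on random inputs. The snippet below restates the article's comb sort (in a slightly more compact form, so it runs standalone) and compares it against sorted() over many random lists:

```python
import random

def comb_sort(lst):
    # Same algorithm as in the article: shrink the gap by a factor of
    # 1.3 until it reaches 1, then keep doing bubble-sort style passes
    # until a full pass completes with no swaps.
    gap = len(lst)
    swap_occurred = True
    while gap > 1 or swap_occurred:
        gap = max(1, int(gap / 1.3))
        swap_occurred = False
        for i in range(len(lst) - gap):
            if lst[i] > lst[i + gap]:
                lst[i], lst[i + gap] = lst[i + gap], lst[i]
                swap_occurred = True

for trial in range(100):
    data = [random.randrange(50) for _ in range(random.randrange(20))]
    expected = sorted(data)
    comb_sort(data)
    assert data == expected, data

print("all trials passed")
```

Randomized checks like this are a good habit for any hand-written sort, since off-by-one errors in the gap handling tend to show up only on particular input shapes.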
https://www.koderdojo.com/blog/comb-sort-improves-bubble-sort-by-eliminating-turtles
/*
 * @(#)PipedReader.java	1.15 03/12/19
 *
 * Copyright 2004 Sun Microsystems, Inc. All rights reserved.
 * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

package java.io;

/**
 * Piped character-input streams.
 *
 * @version 1.15, 03/12/19
 * @author Mark Reinhold
 * @since JDK1.1
 */

public class PipedReader extends Reader {
    boolean closedByWriter = false;
    boolean closedByReader = false;
    boolean connected = false;

    /* REMIND: identification of the read and write sides needs to be
       more sophisticated. Either using thread groups (but what about
       pipes within a thread?) or using finalization (but it may be a
       long time until the next GC). */
    Thread readSide;
    Thread writeSide;

    /**
     * The size of the pipe's circular input buffer.
     */
    static final int PIPE_SIZE = 1024;

    /**
     * The circular buffer into which incoming data is placed.
     */
    char buffer[] = new char[PIPE_SIZE];

    /**
     * The index of the position in the circular buffer at which the
     * next character of data will be stored when received from the connected
     * piped writer. <code>in<0</code> implies the buffer is empty,
     * <code>in==out</code> implies the buffer is full
     */
    int in = -1;

    /**
     * The index of the position in the circular buffer at which the next
     * character of data will be read by this piped reader.
     */
    int out = 0;

    /**
     * Creates a <code>PipedReader</code> so
     * that it is connected to the piped writer
     * <code>src</code>. Data written to <code>src</code>
     * will then be available as input from this stream.
     *
     * @param src the stream to connect to.
     * @exception IOException if an I/O error occurs.
     */
    public PipedReader(PipedWriter src) throws IOException {
        connect(src);
    }

    /**
     * Creates a <code>PipedReader</code> so
     * that it is not yet connected. It must be
     * connected to a <code>PipedWriter</code>
     * before being used.
     *
     * @see java.io.PipedReader#connect(java.io.PipedWriter)
     * @see java.io.PipedWriter#connect(java.io.PipedReader)
     */
    public PipedReader() {
    }

    /**
     * Causes this piped reader to be connected
     * to the piped writer <code>src</code>.
     * If this object is already connected to some
     * other piped writer, an <code>IOException</code>
     * is thrown.
     * <p>
     * If <code>src</code> is an
     * unconnected piped writer and <code>snk</code>
     * is an unconnected piped reader, they
     * may be connected by either the call:
     * <p>
     * <pre><code>snk.connect(src)</code> </pre>
     * <p>
     * or the call:
     * <p>
     * <pre><code>src.connect(snk)</code> </pre>
     * <p>
     * The two
     * calls have the same effect.
     *
     * @param src The piped writer to connect to.
     * @exception IOException if an I/O error occurs.
     */
    public void connect(PipedWriter src) throws IOException {
        src.connect(this);
    }

    /**
     * Receives a char of data. This method will block if no input is
     * available.
     */
    synchronized void receive(int c) throws IOException {
        if (!connected) {
            throw new IOException("Pipe not connected");
        } else if (closedByWriter || closedByReader) {
            throw new IOException("Pipe closed");
        } else if (readSide != null && !readSide.isAlive()) {
            throw new IOException("Read end dead");
        }

        writeSide = Thread.currentThread();
        while (in == out) {
            if ((readSide != null) && !readSide.isAlive()) {
                throw new IOException("Pipe broken");
            }
            /* full: kick any waiting readers */
            notifyAll();
            try {
                wait(1000);
            } catch (InterruptedException ex) {
                throw new java.io.InterruptedIOException();
            }
        }
        if (in < 0) {
            in = 0;
            out = 0;
        }
        buffer[in++] = (char) c;
        if (in >= buffer.length) {
            in = 0;
        }
    }

    /**
     * Receives data into an array of characters. This method will
     * block until some input is available.
     */
    synchronized void receive(char c[], int off, int len) throws IOException {
        while (--len >= 0) {
            receive(c[off++]);
        }
    }

    /**
     * Notifies all waiting threads that the last character of data has been
     * received.
     */
    synchronized void receivedLast() {
        closedByWriter = true;
        notifyAll();
    }

    /**
     * Reads the next character of data from this piped stream.
     * If no character is available because the end of the stream
     * has been reached, the value <code>-1</code> is returned.
     * This method blocks until input data is available, the end of
     * the stream is detected, or an exception is thrown.
     *
     * If a thread was providing data characters
     * to the connected piped writer, but
     * the thread is no longer alive, then an
     * <code>IOException</code> is thrown.
     *
     * @return the next character of data, or <code>-1</code> if the end of the
     *         stream is reached.
     * @exception IOException if the pipe is broken.
     */
    public synchronized int read() throws IOException {
        if (!connected) {
            throw new IOException("Pipe not connected");
        } else if (closedByReader) {
            throw new IOException("Pipe closed");
        } else if (writeSide != null && !writeSide.isAlive()
                   && !closedByWriter && (in < 0)) {
            throw new IOException("Write end dead");
        }

        readSide = Thread.currentThread();
        int trials = 2;
        while (in < 0) {
            if (closedByWriter) {
                /* closed by writer, return EOF */
                return -1;
            }
            if ((writeSide != null) && (!writeSide.isAlive()) && (--trials < 0)) {
                throw new IOException("Pipe broken");
            }
            /* might be a writer waiting */
            notifyAll();
            try {
                wait(1000);
            } catch (InterruptedException ex) {
                throw new java.io.InterruptedIOException();
            }
        }
        int ret = buffer[out++];
        if (out >= buffer.length) {
            out = 0;
        }
        if (in == out) {
            /* now empty */
            in = -1;
        }
        return ret;
    }

    /**
     * Reads up to <code>len</code> characters of data from this piped
     * stream into an array of characters. Less than <code>len</code> characters
     * will be read if the end of the data stream is reached. This method
     * blocks until at least one character of input is available.
     * If a thread was providing data characters to the connected piped output,
     * but the thread is no longer alive, then an <code>IOException</code>
     * is thrown.
     *
     * @param cbuf the buffer into which the data is read.
     * @param off the start offset of the data.
     * @param len the maximum number of characters read.
     * @return the total number of characters read into the buffer, or
     *         <code>-1</code> if there is no more data because the end of
     *         the stream has been reached.
     * @exception IOException if an I/O error occurs.
     */
    public synchronized int read(char cbuf[], int off, int len) throws IOException {
        if (!connected) {
            throw new IOException("Pipe not connected");
        } else if (closedByReader) {
            throw new IOException("Pipe closed");
        } else if (writeSide != null && !writeSide.isAlive()
                   && !closedByWriter && (in < 0)) {
            throw new IOException("Write end dead");
        }

        if ((off < 0) || (off > cbuf.length) || (len < 0) ||
            ((off + len) > cbuf.length) || ((off + len) < 0)) {
            throw new IndexOutOfBoundsException();
        } else if (len == 0) {
            return 0;
        }

        /* possibly wait on the first character */
        int c = read();
        if (c < 0) {
            return -1;
        }
        cbuf[off] = (char)c;
        int rlen = 1;
        while ((in >= 0) && (--len > 0)) {
            cbuf[off + rlen] = buffer[out++];
            rlen++;
            if (out >= buffer.length) {
                out = 0;
            }
            if (in == out) {
                /* now empty */
                in = -1;
            }
        }
        return rlen;
    }

    /**
     * Tell whether this stream is ready to be read. A piped character
     * stream is ready if the circular buffer is not empty.
     *
     * @exception IOException If an I/O error occurs
     */
    public synchronized boolean ready() throws IOException {
        if (!connected) {
            throw new IOException("Pipe not connected");
        } else if (closedByReader) {
            throw new IOException("Pipe closed");
        } else if (writeSide != null && !writeSide.isAlive()
                   && !closedByWriter && (in < 0)) {
            throw new IOException("Write end dead");
        }
        if (in < 0) {
            return false;
        } else {
            return true;
        }
    }

    /**
     * Closes this piped stream and releases any system resources
     * associated with the stream.
     *
     * @exception IOException if an I/O error occurs.
     */
    public void close() throws IOException {
        in = -1;
        closedByReader = true;
    }
}
http://kickjava.com/src/java/io/PipedReader.java.htm
A tool to help keep track of tests, specially for you - developer.

Project description

Install

Setting up test assistant for your project is so easy. It's just a 3-step process.

Install django-tests-assistant from pypi:

    pip install django-tests-assistant

Give the path to the search indexes:

    TEST_ASSISTANT_WHOOSH_PATH = "/path/to/whoosh/index/directory"

Edit settings.py of your project and add the following at the end:

    from assistant.settings import *

Edit your urls.py and add a url mapping in your urlpatterns:

    url(r'tests/', include('assistant.urls')),

That is it, enjoy :) Oh, don't forget to do a python manage.py syncdb
https://pypi.org/project/django-tests-assistant/
Using Sphinx to document a module using Sage

Hi all,

I wrote a module using Sage (first line: from sage.all import *). My module contains quite a lot of docstrings with doctests that I want to export to HTML. Following the documentation of Sphinx, I can create the source directory, but make html does not work: "No module named sage.all". Well, I understand why, but I don't see how to get further. Sage's documentation says how to sphinxify one docstring, but what I want is an HTML file with all my docstrings, that is, as far as I understood, the result of:

.. automodule:: mymodule
   :members:

I'm sure that it is possible: the whole documentation of Sage itself is made like that :)

In one word, I want to do "docstrings of my module -> html". Any help is appreciated.

Thanks,
Laurent

I've wanted to do this several times too, and I think others have asked about it on the sage-devel list. I've seen answers there, but I can't ever find the threads later when I need them! Hopefully a good answer will show up here :)
https://ask.sagemath.org/question/7996/using-sphinx-to-document-a-module-using-sage/?answer=12187
One possible definition is: A closure is a method whose free variables have been given values by an enclosing lexical scope. With this definition, a Java programmer uses closures every day! Let us examine the following source code:

void test1(int x) {
    z = x*y*5;
}

The variables z and y are free variables and are usually given values by the surrounding class definition like this:

class Test1 {
    int z, y;

    void test1(int x) {
        z = x*y*5;
    }
}

Test1 recv = new Test1();

We say that the closure captures the values, i.e. it binds the free variables to actual storage locations. In this case, the closure is created, and the capture happens, at the time of the call:

recv.test1(17);

But when programmers discuss closures, this is not what they have in mind. They refer to the situation where the free variables are given values first by a stack frame and then later by the object instance. Using the proposed BGGA syntax, the method:

{ int x ==> z = x*y*5; }

is created and invoked within the source code of the method test2 below:

class Test2 {
    int z;

    void test2(int xx) {
        int y = 2;
        { int x ==> z = x*y*5; }.invoke(xx);
    }
}

Obviously, the free variable y is given its value by the local stack frame and the free variable z is given its value, as before, by the object instance. Remember that it is only when this capture occurs that the actual closure is created. Before the capture, they are simply methods with free variables, one with a name (test1) and one without a name. To differentiate between the two, I will call the former: instance-only-closure and the latter (as expected): closure.

To further analyze closures in relation to method handles I will make use of three important principles. The first two (duality and correspondence) are well known. The third (invokability) has not been described before, as far as I know.
The Object/Closure duality principle

If a closure is a first-class object then it conforms to the object/closure duality principle. As soon as a closure is a first-class object, it can be used as a method argument, return value and stored into global variables, i.e. it can escape from its lexical context. It turns out (and this is a known, even humorous fact) that such an escaped closure (captured by a stack frame) is equivalent to an object. If you introduce first-class closures that can be captured by a stack frame, then you will inevitably introduce a secondary way of constructing objects. A simple example using (almost) BGGA syntax could be:

Of course, the Java Virtual Machine offers no way of holding on to the stack frame after the function has finished executing. Therefore Java closures must be implemented using objects that contain the captured variables. Symmetrically, Scheme is a language with lexical closures but no explicit object support. Therefore objects are implemented using closures in Scheme.

* Squeak(Smalltalk) uses closures for almost everything and is an outstanding example of syntactical brevity. Though the latter does not necessarily follow from the former. For example the closure:

[ :x | z := x*y*5 ]

is created and invoked within the source code of the function test3 below:

Smalltalk closures are first-class objects, i.e. they conform to the duality principle. In the example on the left-hand side the creation and invocation take place without any intermediate variable. On the right-hand side, the closure is stored into a variable before it is invoked. Smalltalk has no control flow syntax at all. The code to be executed when a boolean is true is supplied as a closure to the boolean object using the message ifTrue. Theoretically the same ifTrue method could be added to the Java Boolean object.

* Ruby, on the other hand, has seven different ways of expressing closures. It is worth noting that the braces {} are not used for other control flow syntax.
When you see them, you know it's a Ruby closure of some sort. Ruby has closures that are first-class objects:

def test7(xx)
  y = 2
  c = proc { |x| @z = x*y*5 }
  c.call(xx)
end

But Ruby also has closures that are not first-class objects: when a closure is supplied to a Ruby procedure like the above example, the supplied closure is not an object. It can only be invoked using the yield keyword. As the Ruby collection classes show, many use cases for closures do not require the closure to be an object. This is an example in Ruby of how to create a new array of strings where all strings from the original array are appended with a given postfix:

postfix = "_end"
a = [ "a", "b" ]
b = a.collect {|x| x + postfix }

Ruby therefore does not always conform to the duality principle.

* JavaScript makes frequent use of closures. An example in JavaScript form would be:

z = 0;
function test9(xx) {
  y = 2;
  c = function(x){ z = x*y*5; };
  c(xx);
}

The JavaScript closures are always valid objects, and therefore JavaScript conforms to the object/closure duality principle. JavaScript closures are even mutable, as any other JavaScript object. This creates another twist on the duality within JavaScript, as can be seen here:

function Car(name) {
  this.name = name;
  this.showme = function() { alert("Car1 " + this.name); }
}

function newCar(name) {
  self = function() {};
  self.name = name;
  self.showme = function() { alert("Car2 " + self.name); }
  return self;
}

function test10() {
  a = new Car("Volvo");
  a.showme();
  b = newCar("Saab");
  b.showme();
}

The correspondence principle

Tennent's correspondence principle as applied to closures is best explained with a simple example in Squeak. In the program on the right-hand side, an arbitrary sequence of statements is wrapped within a closure that is immediately invoked. The two programs behave identically because Squeak(Smalltalk) conforms to the correspondence principle.
Thus the correspondence principle for closures states that if an expression or a proper sequence of statements is wrapped with a closure that is immediately invoked, then the program should still behave the same. To be proper, all parentheses, braces, brackets and other paired syntactical constructs within the sequence must balance.

To be correct, Tennent did not discuss closures when he formulated the correspondence principle. He used it to analyze the relation between variable definitions and procedure parameters. A greatly simplified example would be: if your new shiny language can define variables of type int and type long but procedure parameters can only be of type int, then you have a problem! Therefore the correspondence principle litmus test for closures should also verify that the following transforms work identically to the original source.

The correspondence principle seems rather straightforward, until the discourse focuses on whether other language constructs could exist in a free state and therefore be captured by a closure and given values by a lexical environment. Typically these constructs are return/break/continue. If the language supports first-class continuations, then one way to resolve it is to view return/break/continue as variables holding on to continuations. Then they can clearly be free, since they are variables. This is not at all unreasonable; for example, Parrot uses a continuation-based calling convention where return is indeed a continuation.

I said that Squeak(Smalltalk) has no syntax for control flow. This was a simplification; it has in fact ^ to force an early return. The two examples below behave identically. This was expected, since Squeak(Smalltalk) has no control flow syntax other than closures, and an early return must therefore reside inside a closure and force a return from the lexical environment. Squeak(Smalltalk) therefore conforms fully to the correspondence principle.
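The litmus test is easy to run mechanically. As an illustration outside the article's language set (a Python sketch of mine, not the author's), wrapping plain statements in an immediately invoked closure preserves behaviour, while a wrapped return does not, which is the same failure mode the article goes on to describe for Ruby and JavaScript:

```python
def original(x):
    y = x * 2
    return y + 1

def wrapped(x):
    y = 0
    # Wrap the statement `y = x * 2` in a closure that is immediately
    # invoked. For a plain statement the program still behaves the same.
    def block():
        nonlocal y
        y = x * 2
    block()
    return y + 1

assert original(5) == wrapped(5) == 11

def broken(x):
    # Move the `return` inside the closure: it now only terminates the
    # closure, it no longer returns from the enclosing function.
    def block():
        return x * 2 + 1
    block()          # the closure's result is silently discarded
    return -1

assert broken(5) == -1   # not 11: Python fails the stricter litmus test
```

So by this test Python, like JavaScript, conforms for plain statements but not once a free return is involved.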
For Ruby, the keyword return sometimes returns from the lexical scope and sometimes simply terminates the closure itself. Paul Cantrell's walk-through is excellent, and I will not repeat it here. It suffices to say that Ruby does not always conform to the correspondence principle. JavaScript does not fully conform to the correspondence principle either, since return within a closure simply terminates the closure.

The Big Closure Question Everyone Should Know The Answer To!

There are several good reasons for adding closures to Java. However, before we venture into the third design principle for closures we need to re-examine one of the reasons for closures that is common, but also, unfortunately, wrong. The fallacious argument is that if you want to improve the syntax for binding asynchronous user interface callbacks to action code, then you should add closures to your language. For example, anonymous inner classes, a limited form of closure, were added to Java 1.1 and have been used almost exclusively for this purpose ever since. Every closure proposal has therefore been evaluated on how it improves this syntax in regard to the ActionListener, but closures should not be used for this purpose.

Does Squeak(Smalltalk) (the language that has no other syntax than closures for control flow) use closures to bind asynchronous user interface callbacks to the action code?

Please read a bit from Mark Guzdial: Squeak Object Oriented Design with Multimedia Applications, Chapter 5, page 10. You will see the following code:

button := ClockButton
    make: 'Hours +'
    at: ((position x) @ ((position y)+100) extent: 100@50)
    for: aModel
    triggering: #addHour.
This ability to bind the view to the model directly is in the Squeak(Smalltalk) universe called pluggable views/widgets. In the OpenStep universe, this is called the Target-Action paradigm. Other implementations are available in QT and GTK as signals and slots, C# delegates, etc. They all boil down to: 1) storing a receiver and a message 2) at a later time send the message to the receiver. Why is it better to use two variables, a receiver and a stored message instead of a closure? Let us examine a very common use case. Your application creates a save button within its constructor and then wants to bind the button to the saveState method in your app. I.e. when the save button is pressed, then the saveState method is executed. To the left we do this using a closure and to the right using a stored message with the MethodHandle syntax proposed here. You might say that there is not much of a difference. But it is. The closure solution carries the weight of the lexical scope, both philosophically and practically. For example in Java there will always be an extra class for each closure. When the button is pressed, I really need access to the stack frame of the constructor! This is simply wrong since we almost never need access to the constructor stack frame at the time when the button is actually pressed! You simply want the button to execute saveState in the model with no further ado. Practically we can of course optimize closures that do not actually make use of the lexical surroundings into something more efficient. This is a perfectly valid solution as long as the language has no stored messages. But messaging is the essence of object oriented programming! This means that if you have the choice between an (optimized)closure and a receiver/stored message pair, you should choose the latter for UI callbacks since they do not carry the philosophical backpack of lexical surroundings. 
Here are some more examples of how messages are stored and later sent to receiving objects in different OO languages:

Squeak(Smalltalk): How to execute retval := aModel calculate using a stored message.

recv := aModel.
msg := #calculate.
retval := recv perform: msg.

Ruby: How to execute retval = aModel.calculate using a stored message.

recv = aModel
msg = :calculate
retval = recv.send msg

JavaScript: How to execute retval = aModel.calculate() using a stored message.

recv = aModel
msg = "calculate"
retval = recv[msg]()

In fact, the designers behind FCM and BGGA have already realized that it is convenient to have syntax to refer directly to a method without any apparent reference to the current lexical scope. Both BGGA and FCM use the word method reference for this construct. A method reference like:

MyApp#saveState()

will be transformed into the closure

{ MyApp m ==> m.saveState() }

which is identical to the stored message (aka MethodHandle) #MyApp.saveState. The term method reference can be confusing within an OO context. Generally a message is sent to an object that decides (based on the message) which method to execute. A method reference should therefore always be non-virtual/static/non-lazy.
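The per-language list above extends naturally to Python as well (an addition for comparison, not part of the original list), where getattr looks a stored message up on the receiver:

```python
class Model:
    def calculate(self):
        return 42

# How to execute: retval = a_model.calculate() using a stored message.
recv = Model()
msg = "calculate"
retval = getattr(recv, msg)()
assert retval == 42
```

As in the Smalltalk, Ruby and JavaScript versions, the receiver still decides (based on the message name) which method actually runs.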
Invokability is necessary in the ClockButton example above to allow the use of a closure wrapping the receiver for example to curry addHour with a value (aka the InsertArgument transform). For example like this: button := ClockButton make: 'Hours +' at: ((position x) @ ((position y)+100) extent: 100@50) for: [aModel addHour: 17] triggering: #value.But the ClockButton example is only a special case of arbitrary code execution where we register a receiver,message,parameters and pass on the return value to some other code. This can be used to generate new and efficient code at runtime, without resorting to bytecode generation. The closure-jsr-proposal says: When closures are integrated into an object-oriented type system with subtyping, the types used to represent them obey a relationship that enables them to be used quite flexibly. [...] This enables a flexibility available in untyped languages such as Scheme, Ruby, and Smalltalk and which is achieved only with extreme awkwardness in Java today. If a language conforms to the invokability principle, then you have achieved exactly this flexibility! Let us examine what it looks like in different OO languages. Squeak(Smalltalk): Turn a recv/stored message pair into a closure: closure := [ recva perform: msga ] Turn a closure into a recv/stored message pair: recvb := closure msgb := #value Ruby: Turn a recv/stored message pair into a closure: closure = proc { recva.send msga } Turn a closure into a recv/stored message pair: recvb = closure msgb = :call Javascript: Turn a recv/stored message pair into a closure: closure = function(){ recva[msga]() } Turn a closure into a recv/stored message pair: recvb = { invoke:closure }; <-- Note! msgb = "invoke" Since they are dynamic languages with no static typing visible in the source, therefore the variable types that hold on to the closures and the receivers/messages cannot reveal any type information. Clearly Squeak(Smalltalk) and Ruby conform to the invokability principle. 
But JavaScript does not! This can be seen in the JavaScript example above. We have to wrap the closure in a new object that has a suitable key "invoke" so that we can use that key as a message. Obviously the ability for a closure to be invoked resides outside of the normal object properties/messages.

We can now summarize the differences between the languages when analyzed using the three principles. Since all the languages are useful for programming, the fact that they do not all conform to all principles does not mean that they are broken. Instead we should see this as an indication that the OO features of Ruby and JavaScript were not consistently thought through when they were designed. (If you like, please comment on other OO languages and I will update this table.) Since we have a chance to affect how we add closures and stored messages to Java, we really want to make sure that we conform as much as possible to these principles.

How to apply these principles to Java?

Assuming that we have added MethodHandles to Java, how will the MethodHandle relate to what we have discussed so far?

1) A MethodHandle is a stored message:
MethodHandle msg = #Test1.test1; (or simply #test1 if within the scope of Test1)
and the message is sent to a receiver like this:
Test1 recv = new Test1();
msg.invoke<void>(recv);
(FCM and BGGA proposed: Test1#test1() to create a stored message.)

2) A MethodHandle is an instance-only-closure:
MethodHandle c = #recv.test1;
and the closure is invoked like this:
c.invoke<void>();
(FCM proposed: recv#test1() to create an instance-only-closure.)

3) A MethodHandle is a closure:
MethodHandle c = { ==> z=x+y; };

4) A MethodHandle is even a method reference:

One might quietly reflect on whether such a versatile object should have such a mundane name.

Duality

Using a MethodHandle to hold on to closures solves a violation of the duality principle in the current BGGA proposal.
Admittedly, the violation lies within the human perception and not in the technical implementation. The syntax for a closure type in the current BGGA proposal does not look like an object. In fact it looks like a piece of code. This is a serious problem for Java programmers since they expect object types to begin with an uppercase character followed by lowercase characters. Using a MethodHandle as the return value from closure creation will resolve this problem.

Every closure proposal expects the compiler to be able to track the signature of the closure at compile time and proposes its own syntax for doing so. If a MethodHandle is used to hold on to a closure, then we can for example append the proposed syntactical representation of a MethodType to the MethodHandle. The exact syntax is unimportant, this is merely a thought experiment.

MethodHandle<void>() c1 = { ==> z = y*2; };
if (!c1.type().equals(#<void>())) throw new Error();

MethodHandle<void>(int) c2 = { int x ==> z = x*y; };
if (!c2.type().equals(#<void>(int))) throw new Error();

MethodHandle<double>(int,float) c3 = { int x, float y ==> (double)(x*y) };
if (!c3.type().equals(#<double>(int,float))) throw new Error();

MethodHandle<Object>(String,int) c4 = { String s, int x ==> "("+s+x+")" };
if (!c4.type().equals(#<String>(String,int))) throw new Error();

MethodHandle<String throws IOException>(Reader) c5 = { Reader r ==> r.readLine() };
if (!c5.type().equals(#<String throws IOException>(Reader))) throw new Error();

MethodHandle c6 = (MethodHandle)c5;
MethodHandle<Object>(Object) c7 = (MethodHandle<Object>(Object))c6;
MethodHandle<double restricted>() c8 = { => x*1.2 };

Instead of having a MethodHandle without any suffix, we could use something similar to MethodHandle<?>(?...) to refer to a MethodHandle for which we have no compile time information. As long as this does not cause any (or at least not too much) cognitive dissonance with respect to normal generics.
To maintain compatibility with legacy library methods that use interfaces, it should be easy to cast a MethodHandle to any interface with a single compatible method. This can be done with only a single hidden wrapper class per interface, or even by modifying the behavior of the bytecode, where we could allow (for all compatible signatures) checkcast from a MethodHandle to an interface to succeed and invokeinterface on a MethodHandle to work.

boolean running = true;

// Thread executing closure...
Runnable r1 = (Runnable){ ==> while(running) { doSomething(); } };
// Thread executing instance-only-closure...
Runnable r2 = (Runnable)#this.getGoing;

Thread t1 = new Thread(r1);
t1.start();
Thread t2 = new Thread(r2);
t2.start();
...
running = false;

Correspondence

If MethodHandles are used for closures, then they must conform to Tennent's correspondence principle. Therefore it has to allow for co/contra variance in the return value/arguments. Without co/contra variance, the following transform will not even execute. Also a MethodHandle.invoke cannot be declared to always throw a checked Exception (as in the current proposal) since that would prevent exception transparency of the closure.

Invokability

The intriguing solution to put both closures and stored messages into the same object (the MethodHandle) exercises both the duality principle and the invokability principle at the same time. Invokability within a typed source code language requires that the type systems for closures and stored messages are compatible, similarly to how variables and parameters must be compatible according to the correspondence principle. Having a single object type for both closures and stored messages solves this problem automatically. However, to conform to the invokability principle we must also make sure that we can generate a closure (1), generate a message (2) and verify (3) where neither the closure nor the receiver/message pair reveal type information.
Java (proposed syntax):

How to execute: int retval = aModel.calculate() using a stored message.

Object recva = aModel;
MethodHandle<int>(Object) msga = #MyModel.calculate;
int retval = msga.invoke<int>(recva);

Turn a recv/stored message pair into a closure with no type information:
MethodHandle<Object>() closure = { ==> msga.invoke<int>(recva) }

Turn a closure into a recv/stored message pair with no type information:
Object recvb = closure;
MethodHandle<Object>(Object) msgb = #MethodHandle<Object>().invoke

Which is again put to use:
int retval = msgb.invoke<int>(recvb);

It is not possible to conform to the invokability principle without an implicit downcast from Object to the correct parameter type. The current BGGA prototype supports this. To return an untyped value that can be used further, we also need boxing and unboxing, both for parameters and the return value. The BGGA prototype does not currently support this. Thus:

Revisiting UI callbacks

We can now revisit the UI example above where a pair consisting of a receiver and a stored message was used to register a callback from a UI button. But such a pair is indeed an instance-only-closure (bound MethodHandle). It should therefore be possible to use a single MethodHandle as the parameter to onPressed. I.e. the declaration of onPressed is:

void onPressed(MethodHandle<void>() action) { ... }

By doing so, both the following uses of onPressed will be possible. We can use a stored message (the left hand side example), or if we need to curry the callback, we can use a closure (the right hand side example).

Closures as a resource for dynamic language runtime developers

If closures are MethodHandles, then it is possible to create the java.dyn transforms with more syntactical brevity, since we can use the restricted closure to our advantage! The optimizer can inline invocations on MethodHandles as soon as the MethodHandle variable is final and bound.
(It can inline non-final MethodHandles as well, but then it will need to profile the execution first.) Since all variables captured by a restricted closure are final, this will guarantee that the invoked target will be inlined quickly. For example, if you have the invokedynamic call site:

InvokeDynamic.dosomething<int>(47);

then you can, in the bootstrap, create an add-argument transform on the fly:

callsite.setTarget( { int x => target.invoke<int>(100, x) } );

The target variable will be final within the restricted closure and can therefore easily be inlined. If we revisit the different implementations of MethodHandle transforms (here an appendArgument transform), we can see that the implementation of the transforms can be even shorter with the help of closures.

Conclusion

MethodHandles are on their way into Java. If we make sure that they can be used both for closures and stored messages, then they will contribute significantly to the Java language itself.

Fredrik Öhrström

- Closures for the Java Programming Language (BGGA)
- Concise Instance Creation Expressions (CICE)
- Clear, Consistent, and Concise Syntax (C3S) for Java
- First Class Methods (FCM)
- Closures And Objects Are Equivalent
- Brian Goetz: Java theory and practice: The closures debate
- Klaus Kreft and Angelika Langer: Understanding the closure debate
- Alex Blewitt: When is a closure not a closure?
- Abelson & Sussman: Structure and Interpretation of Computer Programs
- Mark Guzdial: Squeak Object Oriented Design with Multimedia Applications, Chapter 5
- Steve Burbeck: Applications Programming in Smalltalk-80(TM)
- Alan Kay: Messaging is more important than Objects
- John Rose: Better Closures
- John Rose: Closures without Function Types
- Rémi Forax: Closures with Function Types
- Rémi Forax: Closure Literal and Method Reference
- Neal Gafter: Tennent's Correspondence Principle
- Neal Gafter: A Presentation of BGGA Closures
- Neal Gafter: Use cases for closures
- Zdeněk Troníček: Method references (version 2008-03-17)
- Stephen Colebourne: Comparison of BGGA, CICE, FCM
- Howard Lovatt: Comparing Inner Class/Closure Proposals (C3S, FCM, CICE, BGGA)
- Howard Lovatt: An Alternative to Closure Conversion and to Restricted Closures
- Elliotte Rusty Harold: Homework for Closures
- Douglas Crockford: Tennent's correspondence principle
- Thread discussing Tennent's correspondence principle
- ActionListeners are not a valid reason for closures

Though it is of course important to be backwards-compatible with ActionListeners.
I have a website that allows the user to upload a file multiple times for processing. At the moment I have a single file input, but I want to be able to remember the user's choice and show it on the screen. What I want to know is how, after a user selects a file, to remember their choice and redisplay the file input with the file pre-selected on reload of the page. All I need to know is how to remember and repopulate a file input. I am also open to approaches that don't use a file input (if that is possible). I am using jQuery.

Ok, you want to "Remember and Repopulate File Input", "remember their choice and redisplay the file input with the file pre-selected on reload of the page"... And in the comment to my previous answer you state that you're not really open to alternatives: "Sorry but no Flash and Applets, just javascript and/or file input, possibly drag and drop."

I noticed while browsing (quite some) duplicate questions (1, 2, 3, etc.) that virtually all other answers are along the lines of: "No you can't, that would be a security-issue", optionally followed by a simple conceptual or code example outlining the security-risk. However, someone stubborn as a mule (not necessarily a bad thing up to a certain level) might perceive those answers as: "No, because I said so", which is indeed something different than: "No, and here are the specs that dis-allow it".

So this is my third and last attempt to answer your question (I guided you to the watering-hole, I led you to the river, now I'm pushing you to the source, but I can't make you drink).

Edit 3: What you want to do was actually once described/'suggested' in RFC 1867, Section 3.4. And indeed, the HTML 4.01 spec, section 17.4.1, specifies that:

User agents may use the value of the value attribute as the initial file name.

(By 'User agents' they mean 'browsers'.)
Given the facts that javascript can both modify and submit a form (including a file-input) and one could use css to hide forms/form-elements (like the file-input), the above statements alone would make it possible to silently upload files from a user's computer without his intention/knowledge. It is clearly extremely important that this is not possible, and as such, (above) RFC 1867 states in section 8, Security Considerations:

It is important that a user agent not send any file that the user has not explicitly asked to be sent. Thus, HTML interpreting agents are expected to confirm any default file names that might be suggested with <INPUT TYPE=file>.

However, the only browser (I'm aware of) that ever implemented this feature was (some older versions of) Opera: it accepted a <input type="file" value="C:\foo\bar.txt"> or a value set by javascript (elm_input_file.value='c:\\foo\\bar.txt';). When this file-box was unchanged upon form-submit, Opera would pop up a security-window informing the user of what files were about to be uploaded to what location (url/webserver).

Now one might argue that all other browsers were in violation of the spec, but that would be wrong: since the spec stated "may" (it did not say "must") ".. use value attribute as the initial file name". And, if the browser doesn't accept setting the file-input value (aka, having that value just be 'read-only'), then the browser also would not need to pop up such a 'scary' and 'difficult' security-pop-up (that might not even serve its purpose if the user didn't understand it (and/or was 'conditioned' to always click 'OK')).

Let's fast-forward to HTML 5 then.. Here all this ambiguity is cleared up (yet it still takes some puzzling): Under 4.10.7.1.18 File Upload state we can read in the bookkeeping details:

- The value IDL attribute is in mode filename.
...
- The element's value attribute must be omitted.
So, a file-input's value attribute must be omitted, yet it also operates in some kind of 'mode' called 'filename', which is described in 4.10.7.4 Common input element APIs:

The value IDL attribute allows scripts to manipulate the value of an input element. The attribute is in one of the following modes, which define its behavior:

Skipping to this 'mode filename': on setting, if the new value is not the empty string, it must throw an InvalidStateError exception.

Let me repeat that: "it must throw an InvalidStateError exception" if one tries to set a file-input value to a string that is not empty!!! (But one can clear the input-field by setting its value to an empty string.)

Thus, currently and in the foreseeable HTML5 future (and in the past, except Opera), only the user can populate a file-input (via the browser or os-supplied 'file-chooser'). One can not (re-)populate the file-input to a file/directory with javascript or by setting the default value.

Now, suppose it was not impossible to (re-)populate a file-input with a default value; then obviously you'd need the full path: directory + filename (+ extension). In the past, some browsers like (most notably) IE6 (up to IE8) did reveal the full path+filename as value: just a simple alert( elm_input_file.value ); etc. in javascript, AND the browser also sent this full path+filename (+ extension) to the receiving server on form-submit. Note: some browsers also have a 'file or fileName' attribute (usually sent to the server), but obviously this would not include a path.

That is a realistic security/privacy risk: a malicious website (owner/exploiter) could obtain the path to a user's home-directory (where personal stuff, accounts, cookies, user-portion of registry, history, favorites, desktop etc. is located in known constant locations) when the typical non-tech windows-user will upload his files from: C:\Documents and Settings\[UserName]\My Documents\My Pictures\kinky_stuff\image.ext.
I did not even talk about the risks while transmitting the data (even 'encrypted' via https) or 'safe' storage of this data! As such, more and more alternative browsers were starting to follow one of the oldest proven security-measures: share information on a need-to-know basis. And the vast majority of websites do not need to know the file-path, so they only revealed the filename (+ extension).

By the time IE8 was released, MS decided to follow the competition and added a URLAction option, called "Include local directory path when uploading files", which was set to 'disabled' for the general internet-zone (and 'enabled' in the trusted zone) by default. This change created a small havoc (mostly in 'optimized for IE' environments) where all kinds of both custom code and proprietary 'controls' couldn't get the filename of files that were uploaded: they were hard-coded to expect a string containing a full path and extract the part after the last backslash (or forward slash if you were lucky...). 1, 2

Along came HTML5, and as you have read above, the 'mode filename' specifies:

On getting, it must return the string "C:\fakepath\" followed by the filename of the first file in the list of selected files, if any, or the empty string if the list is empty.

and they note that:

This "fakepath" requirement is a sad accident of history.

The following function extracts the filename in a suitably compatible manner:

function extractFilename(path) {
  if (path.substr(0, 12) == "C:\\fakepath\\")
    return path.substr(12); // modern browser
  var x;
  x = path.lastIndexOf('/');
  if (x >= 0) // Unix-based path
    return path.substr(x+1);
  x = path.lastIndexOf('\\');
  if (x >= 0) // Windows-based path
    return path.substr(x+1);
  return path; // just the filename
}

Note: I think this function is stupid: the whole point is to always have a fake windows-path to parse..
So the first 'if' is not only useless but even invites a bug: imagine a user with an older browser that uploads a file from: c:\fakepath\Some folder\file.ext (as it would return: Some folder\file.ext)...

I would simply use:

function extractFilename(s){
  // returns a string containing everything from the end of the string
  // that is not a back/forward slash, or an empty string on error,
  // so one can check if return_value===''
  return (typeof s==='string' && (s=s.match(/[^\\\/]+$/)) && s[0]) || '';
}

(as the HTML5 spec clearly intended).

Let's recap (getting the path/file name): modern browsers prepend the fake path c:\fakepath\ to the filename when getting the file-input's value. Thus, in the recent past, currently and in the foreseeable HTML5 future one will usually only get the file-name. That brings us to the last thing we need to examine: this 'list of selected files' / multiple-files, which leads us to the third part of the puzzle:

First of all: the 'File API' should not be confused with the 'File System API'. Here is the abstract of the File System API:

This specification defines an API to navigate file system hierarchies, and defines a means by which a user agent may expose sandboxed sections of a user's local filesystem to web applications. It builds on [FILE-WRITER-ED], which in turn built on [FILE-API-ED], each adding a different kind of functionality.

The 'sandboxed sections of a user's local filesystem' already clearly indicates that one can't use this to get a hold of user-files outside of the sandbox (so it is not relevant to the question, although one could copy the user-selected file to the persistent local storage and re-upload that copy using AJAX etc. Useful as a 'retry' on failed upload.. But it wouldn't be a pointer to the original file that might have changed in the mean-time).
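The regex-based extractFilename helper above can be sanity-checked outside the browser; here it is restated with a few invented test paths (node or any JS shell will do):

```javascript
// Restatement of the regex-based helper from the answer above.
function extractFilename(s) {
  // Returns everything after the last back/forward slash,
  // or '' when the input is not a string or has no filename part.
  return (typeof s === "string" && (s = s.match(/[^\\\/]+$/)) && s[0]) || "";
}

console.log(extractFilename("C:\\fakepath\\image.png"));            // "image.png"
console.log(extractFilename("c:\\fakepath\\Some folder\\file.ext")); // "file.ext"
console.log(extractFilename("/home/user/notes.txt"));               // "notes.txt"
console.log(extractFilename(42));                                   // ""
```

Note how the second case is exactly the one that tripped up the spec's own sample function: the regex version correctly returns file.ext instead of Some folder\file.ext.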
Even more important is the fact that only webkit (think older versions of chrome) implemented this feature, and the spec is most probably not going to survive as it is no longer actively maintained; the specification is abandoned for the moment as it didn't get any significant traction.

Let's continue with the 'File API'. Its abstract tells us:

- A FileList interface, which represents an array of individually selected files from the underlying system. The user interface for selection can be invoked via <input type="file">, i.e. when the input element is in the File Upload state [HTML].
- A Blob interface, which represents immutable raw binary data, and allows access to ranges of bytes within the Blob object as a separate Blob.
- A URL scheme for use with binary data such as files, so that they can be referenced within web applications.

So, a FileList can be populated by an input field in file-mode: <input type="file">. That means that all of the above about the value-attribute still applies! When an input field is in file-mode, it gets a read-only attribute files, which is an array-like FileList object that references the input-element's user-selected file(s) and is(/are) accessible through the FileList interface.

Did I mention that the files-attribute of the type FileList is read-only (File API section 5.2)?:

The HTMLInputElement interface [HTML] has a readonly attribute of type FileList...

Well, what about drag and drop? From the mdn-documentation - Selecting files using drag and drop:

The real magic happens in the drop() function:

function drop(e) {
  e.stopPropagation();
  e.preventDefault();

  var dt = e.dataTransfer;
  var files = dt.files;

  handleFiles(files);
}

Here, we retrieve the dataTransfer field from the event, then pull the file list out of it, passing that to handleFiles(). From this point on, handling the files is the same whether the user used the input element or drag and drop.

So, (just like the input-field type="file",) the event's dataTransfer attribute has an array-like attribute files, which is an array-like FileList object, and we have just learned (above) that the FileList is read-only..
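Whether the FileList comes from an input element or from event.dataTransfer, consuming code sees the same shape; outside a browser that access pattern can be sanity-checked with stand-in objects (everything below is a mock, not the real read-only FileList):

```javascript
// Mock objects mimicking the FileList shape described above.
const fakeFile = { name: "photo.jpg", size: 1024 };
const fakeInput = { files: [fakeFile] };

function describeSelection(source) {
  // Works for either an <input type="file"> element or
  // event.dataTransfer, since both expose a `files` attribute.
  const files = source.files;
  if (files.length === 0) return "nothing selected";
  return files[0].name + " (" + files[0].size + " bytes)";
}

console.log(describeSelection(fakeInput)); // "photo.jpg (1024 bytes)"
```

In a real page the same describeSelection could be handed the input element from a change handler or event.dataTransfer from a drop handler, which is the "handling is the same" point the quoted MDN text makes.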
The FileList contains references to the file(s) that a user selected (or dropped on a drop-target) and some attributes. From the File API, Section 7.2 File Attributes, we can read that a File carries a (read-only) name attribute and a lastModifiedDate attribute, and there is a size attribute:

F.size is the same as the size of the fileBits Blob argument, which must be the immutable raw data of F.

Again, no path, just the read-only filename. Thus:

- (elm_input||event.dataTransfer).files gives the FileList Object.
- (elm_input||event.dataTransfer).files.length gives the number of files.
- (elm_input||event.dataTransfer).files[0] is the first file selected.
- (elm_input||event.dataTransfer).files[0].name is the file-name of the first file selected (and this is the value that is returned from an input type="file").

What about this 'URL scheme for use with binary data such as files, so that they can be referenced within web applications'? Surely that can hold a private reference to a file that a user selected? From the File API - A URL for Blob and File reference we can learn that:

This specification defines a scheme with URLs of the sort: blob:550e8400-e29b-41d4-a716-446655440000#aboutABBA.

These are stored in a URL store (and browsers should even have their own mini HTTP-server aboard, so one can use these urls in css, img src and even XMLHttpRequest). One can create those Blob URLs with:

- var myBlobURL=window.URL.createFor(object); returns a Blob URL that is automatically revoked after its first use.
- var myBlobURL=window.URL.createObjectURL(object, flag_oneTimeOnly); returns a re-usable Blob URL (unless flag_oneTimeOnly evaluates to true) and can be revoked with window.URL.revokeObjectURL(myBlobURL).

Bingo, you might think... however... the URL Store is only maintained during a session (so it will survive a page-refresh, since it is still the same session) and is lost when the document is unloaded. From the MDN - Using object URLs:

The object URL is a string identifying the File object.
Each time you call window.URL.createObjectURL(), a unique object URL is created, even if you've created an object URL for that file already. Each of these must be released. While they are released automatically when the document is unloaded, if your page uses them dynamically, you should release them explicitly by calling window.URL.revokeObjectURL().

That means that even when you store the Blob URL string in a cookie or persistent local storage, that string would be useless in a new session!

That should bring us full circle to the final conclusion:

It is not possible to (re-)populate an input-field or user-selected file (that is not in the browser's sandboxed 'Local storage' area).

(Unless you force your users to use an outdated version of Opera, or force your users to use IE and some ActiveX coding/modules (implementing a custom file-picker), etc.)

Some further reading:

- JavaScript: The Definitive Guide - David Flanagan, chapter 22: The filesystem api
- How to save the window.URL.createObjectURL() result for future use?
- How long does a Blob persist?
- how to resolve the C:\fakepath?
Creating Timer Jobs in SharePoint 2010 that Target Specific Services

Summary: Learn how to create a Microsoft SharePoint 2010 timer job that is associated with a particular SharePoint service in SharePoint 2010.

Last modified: October 18, 2011

Applies to: Business Connectivity Services | Open XML | SharePoint Designer 2010 | SharePoint Foundation 2010 | SharePoint Online | SharePoint Server 2010 | Visual Studio

Author: Bryan Phillips
Editors: WROX Tech Editors for SharePoint 2010 Articles

In this article:
- Overview of Timer Jobs in SharePoint 2010
- Preparing to Create a SharePoint Timer Job
- Creating a SharePoint Timer Job
- Enabling Configuration for Your SharePoint Timer Job
- Deploying Your Timer Job
- Testing and Debugging Your SharePoint Timer Job
- Conclusion
- About the Author
- Additional Resources

Overview of Timer Jobs in SharePoint 2010

Microsoft SharePoint 2010 timer jobs perform much of the back-end work that is required to maintain your SharePoint farm. Timer jobs are executable tasks that run on one or more servers at a scheduled time. They can be configured to run exactly one time, or on a recurring schedule. They are similar to Microsoft SQL Server Agent jobs, which maintain a SQL Server installation by backing up databases, defragmenting database files, and updating database statistics. SharePoint uses timer jobs to maintain long-running workflows, to clean up old sites and logs, and to monitor the farm for problems. Depending on your edition of SharePoint and any installed third-party products, you can have many timer jobs in your farm, or just a few. Timer jobs have several advantages.
They can run periodically and independently of users who are accessing your SharePoint sites, they can offload long-running processes from your web front-end servers (which increases the performance and responsiveness of your pages), and they can run code under higher privileges than the code in your SharePoint site and application pages.

You can view the timer jobs in your farm by using the Job Definitions page in SharePoint 2010 Central Administration. To access the Job Definitions page, click All Programs, Microsoft SharePoint 2010 Products, SharePoint 2010 Central Administration. On the Central Administration site, click the Monitoring link. Finally, click the Review Job Definitions link in the Timer Jobs section of the Monitoring page. The list of timer job definitions in your farm is displayed, as shown in Figure 1.

Because SharePoint timer jobs run in the background, they perform their tasks behind the scenes, even if no users are accessing your SharePoint sites. The Windows SharePoint Services Timer service runs the timer jobs in your farm. The service must be enabled and running on each server in your farm. The service enables the various SharePoint timer jobs to configure and maintain the servers in the farm. If you stop the Windows SharePoint Services Timer service on a server, you also stop all SharePoint timer jobs running on that server; for example, jobs that index your SharePoint sites, import users from Active Directory, and perform many other processes that affect the performance and usability of SharePoint.

Preparing to Create a SharePoint Timer Job

Before you can create your SharePoint timer job, you must install Microsoft Visual Studio 2010 (Professional, Premium, or Ultimate edition) on Windows Vista, Windows 7, or Windows Server 2008. SharePoint 2010 must be installed on the development computer. After you install the required software, create a new SharePoint project in Visual Studio by selecting File, New, Project, which displays the New Project dialog box, shown in Figure 2.
In the dialog box, ensure that .NET Framework 3.5 is selected in the drop-down list at the top of the dialog box, and expand the list of project templates in the left pane of the dialog box until 2010 is displayed under SharePoint. Select 2010 to display a list of SharePoint 2010 project templates in the right pane of the dialog box. Select Empty SharePoint Project from the list of templates, specify the project information at the bottom of the dialog box, and then click OK. You use the new project to develop your new timer job, package it for deployment to SharePoint, and then debug it.

Immediately after you click OK in the New Project dialog box, the SharePoint Customization Wizard dialog box appears, as shown in Figure 3. Type the URL of your SharePoint site in the text box, select Deploy as a farm solution, and then click Finish. You must select the Deploy as a farm solution radio button because timer jobs require a higher level of trust to execute than sandboxed solutions.

After you click Finish in the SharePoint Customization Wizard dialog box, Visual Studio creates and opens the project, as shown in Figure 4. Now that the project is created, you can start adding the classes that are required to form the basis of your SharePoint timer job. The next section outlines how to create those classes.

Creating a SharePoint Timer Job

All timer jobs, including those installed with SharePoint, are created and executed by using the SPJobDefinition class. To create a new SharePoint timer job, you must first add a class to your project that inherits from the SPJobDefinition class.

To add the class to your project:
1. Right-click the project and then choose Add, Class from the context menu to open the Add New Item dialog box.
2. Specify a name for the class, and then click Add. Visual Studio opens the new class in the text editor.
3. Change the visibility of the class to public, and make the class inherit from the SPJobDefinition class.
The following code snippet shows an example of a class that inherits from the SPJobDefinition class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint;

namespace MonitoringJob
{
    public class MonitoringJob : SPJobDefinition
    {
        public MonitoringJob() : base()
        {
        }

        public MonitoringJob(string jobName, SPService service)
            : base(jobName, service, null, SPJobLockType.None)
        {
            this.Title = jobName;
        }

        public override void Execute(Guid targetInstanceId)
        {
            // Put your job's code here.
        }
    }
}

In the code snippet, the MonitoringJob class inherits from the SPJobDefinition class, defines one non-default constructor, and overrides the Execute method of the base class. The non-default constructor is required because the default constructor of the SPJobDefinition class is for internal use only. When you create your constructor, you must pass the values for the four parameters described in Table 1 to the base class constructor. In the preceding code snippet, null is passed as the value for server because this job is not associated with a specific server. SPJobLockType.None is passed as the value for lockType to enable SharePoint to run multiple instances of the job simultaneously. Table 2 lists the possible SPJobLockType values and their descriptions.

After you create the constructors, you must override the Execute method of the SPJobDefinition class and replace the code in that method with the code that your job requires. If your code can run without further configuration, you are finished creating this class. Otherwise, you must add configuration screens and classes to your project, as shown in the next section. The next section also discusses the actual code from the sample that belongs in the Execute method.

Enabling Configuration for Your SharePoint Timer Job

To store configuration data for the job, create classes to contain that configuration and store it in SharePoint.
First, the class must inherit from the SPPersistedObject class. Next, the fields in the class must be public, marked with the [Persisted] attribute, and have a data type that is either built-in (Guid, int, string, and so on), inherits from SPAutoSerializingObject, or is a collection type that contains one of the built-in types or a type inheriting from SPAutoSerializingObject. Only fields are saved, not properties, as you might be used to when you serialize objects to XML.

The following code snippet shows the class that is used to configure the MonitoringJob class.

    using System;
    using Microsoft.SharePoint.Administration;

    public class MonitoringJobSettings : SPPersistedObject
    {
        public static string SettingsName = "MonitoringJobSettings";

        public MonitoringJobSettings()
        {
        }

        public MonitoringJobSettings(SPPersistedObject parent, Guid id)
            : base(SettingsName, parent, id)
        {
        }

        [Persisted]
        public string EmailAddress;
    }

In the code snippet, the MonitoringJobSettings class is used to configure the MonitoringJob class. It inherits from SPPersistedObject and has a single field named EmailAddress that is marked with the [Persisted] attribute. The EmailAddress field is used for the recipient of emails that are sent.

One detail to note is that the MonitoringJobSettings class has two constructors defined: a default constructor, which is required for all serializable classes, and a second constructor that calls a constructor on its base class. This second constructor passes in the name of the SPPersistedObject, an instance of an SPPersistedObject that acts as the "parent" of the MonitoringJobSettings object, and a Guid that is used to assign it a unique identifier in SharePoint. Because MonitoringJob is associated with a specific SPService object, the same SPService object becomes the parent of the MonitoringJobSettings object when it is saved to SharePoint. There is more information about how this works later in this section.
After you create the classes for the job and its configuration, you can use the following code snippet to save the configuration of the job to SharePoint. Later, you add this code to an event handler that executes when the feature that contains your timer job is activated.

    // Get an instance of the SharePoint Farm.
    SPFarm farm = SPFarm.Local;

    // Get an instance of the service.
    var results = from s in farm.Services
                  where s.Name == "SPSearch4"
                  select s;
    SPService service = results.First();

    // Configure the job.
    MonitoringJobSettings jobSettings = new MonitoringJobSettings(service, Guid.NewGuid());
    jobSettings.EmailAddress = "myemail@demo.com";
    jobSettings.Update(true);

In the code snippet, a reference to the SPService object that represents the SharePoint Search service is obtained and passed to the constructor of the MonitoringJobSettings class together with a unique Guid. After that, the EmailAddress property of the class is configured, and the class's Update method is called to persist the object to SharePoint. Passing true to the Update method specifies that you want SharePoint to overwrite any existing saved configuration. Otherwise, an exception is thrown.

To retrieve your configuration, get an instance of the SPService, call its GetChild method, and then pass in the class type that you are retrieving, together with the name of the setting. The following code snippet shows part of the implementation of the Execute method of the SPJobDefinition-derived class.

    public override void Execute(Guid targetInstanceId)
    {
        // Retrieve the configuration from the job's parent SPService object.
        SPService service = this.Parent as SPService;
        MonitoringJobSettings jobSettings =
            service.GetChild<MonitoringJobSettings>(
                MonitoringJobSettings.SettingsName);
        if (jobSettings == null)
        {
            // Nothing was previously saved to SharePoint.
            return;
        }

        // The rest of the job's code can run here, using jobSettings.
    }

The MonitoringJobSettings class was retrieved from SharePoint by calling the GetChild method of the job's parent SPService object. If nothing was previously saved to SharePoint, the call to the GetChild method returns null. After you have an instance of your job's configuration, the rest of the code in your Execute method can run.

Now that you have created the job and configuration classes, you must add a SharePoint Feature to enable your SharePoint administrators to use the functionality in SharePoint.
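The persistence rule above (only public fields marked with [Persisted] are serialized, never properties) can be illustrated outside of SharePoint. The following is a minimal Python sketch of that idea only; PersistedSettings, its PERSISTED list, and the JSON storage are invented for illustration and are not part of the SharePoint object model.

```python
import json

class PersistedSettings:
    # Fields "marked" for persistence, standing in for the [Persisted] attribute.
    PERSISTED = ("email_address",)

    def __init__(self, email_address=""):
        self.email_address = email_address
        self.cached = "not saved"  # unmarked field: dropped on save

    def update(self):
        # Serialize only the marked fields, as SPPersistedObject does for
        # fields carrying the [Persisted] attribute.
        return json.dumps({f: getattr(self, f) for f in self.PERSISTED})

    @classmethod
    def get_child(cls, blob):
        # Rebuild the object from the stored blob, loosely like GetChild.
        obj = cls()
        for name, value in json.loads(blob).items():
            setattr(obj, name, value)
        return obj

saved = PersistedSettings("myemail@demo.com").update()
restored = PersistedSettings.get_child(saved)
print(restored.email_address)  # -> myemail@demo.com
print(restored.cached)         # unmarked field was not stored
```

The point of the sketch is only the split between marked and unmarked state: the unmarked `cached` field comes back with its default value, not the saved one.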
A SharePoint Feature is a set of provisioning instructions written in XML that tell SharePoint what to do when the feature is activated. You can see a list of example features by clicking the Site Features or Site Collection Features link on the Site Settings page of your site.

To add a feature to your project

1. Right-click the Features folder and choose Add Feature from the context menu.

Figure 5. New feature added

2. Type a user-friendly title and description in the top two text boxes. The values that you specify for the title and the description are displayed on the Web Application Features page in SharePoint Central Administration.
3. Because this feature registers the job with one of the web applications in your farm, set Scope to WebApplication.
4. Click the Save button in the toolbar.

Now that you have created and configured the feature, you must add an event receiver to the feature so that you can add the code that is required to register your job. An event receiver is a class that runs code when certain events occur in SharePoint. In this case, you add an event receiver to run code when the feature is activated or deactivated. To add an event receiver, right-click your feature and then choose Add Event Receiver from the context menu.

After you add the event receiver, you must uncomment the FeatureActivated and FeatureDeactivating methods. You can remove the other methods because they are not used. Next, you must add code in the FeatureActivated method to register the job with SharePoint. Finally, you add code in the FeatureDeactivating method to unregister the job. The following code example shows how to register and unregister the MonitoringJob object.

    public override void FeatureActivated(
        SPFeatureReceiverProperties properties)
    {
        // Get an instance of the SharePoint farm.
        SPFarm farm = SPFarm.Local;

        // Get an instance of the service.
        var results = from s in farm.Services
                      where s.Name == "SPSearch4"
                      select s;
        SPService service = results.First();

        // Remove job if it exists.
        DeleteJobAndSettings(service);

        // Create the job.
        MonitoringJob job = new MonitoringJob(
            MonitoringJob.JobName, service);

        // Create the schedule so that the job runs hourly, sometime
        // during the first quarter of the hour.
        SPHourlySchedule schedule = new SPHourlySchedule();
        schedule.BeginMinute = 0;
        schedule.EndMinute = 15;
        job.Schedule = schedule; // assign the schedule before saving the job
        job.Update();

        // Configure the job.
        MonitoringJobSettings jobSettings = new MonitoringJobSettings(
            service, Guid.NewGuid());
        jobSettings.EmailAddress = "myemail@demo.com";
        jobSettings.Update(true);
    }

    public override void FeatureDeactivating(
        SPFeatureReceiverProperties properties)
    {
        // Get an instance of the SharePoint farm.
        SPFarm farm = SPFarm.Local;

        // Get an instance of the service.
        var results = from s in farm.Services
                      where s.Name == "SPSearch4"
                      select s;
        SPService service = results.First();

        DeleteJobAndSettings(service);
    }

    private void DeleteJobAndSettings(SPService service)
    {
        // Find the job and delete it.
        foreach (SPJobDefinition job in service.JobDefinitions)
        {
            if (job.Name == MonitoringJob.JobName)
            {
                job.Delete();
                break;
            }
        }

        // Delete the job's settings.
        MonitoringJobSettings jobSettings =
            service.GetChild<MonitoringJobSettings>(
                MonitoringJobSettings.SettingsName);
        if (jobSettings != null)
        {
            jobSettings.Delete();
        }
    }

In the preceding code example, the FeatureActivated method gets an instance of the SPFarm object by accessing the local static property of the SPFarm class. After that, the SPFarm class's Services property is enumerated to obtain an SPService instance to represent the SharePoint Search service. Next the FeatureActivated method calls the DeleteJobAndSettings method to remove the job if it already exists. The job might already exist if the feature was previously deployed, but experienced a problem during deactivation.
Afterwards, an instance of the job definition is created by passing in the name of the job and an instance of the SPService class. After the job is created, you must set its Schedule property to an instance of one of the SPSchedule classes described in Table 3. In the preceding code example, the SPHourlySchedule class is used to run the job every hour. The properties that begin with Begin and End specify the earliest and latest time, respectively, that the job can start. The Timer service randomly selects a time during that interval to start the job. After you set the Schedule property of your job, call the Update method of the job to register it with SharePoint. The final step in the method is to create an instance of the MonitoringJobSettings class, set its properties, and then save it by calling its Update method.

In the FeatureDeactivating method, the DeleteJobAndSettings method is called to delete the previously registered job. The DeleteJobAndSettings method uses the JobDefinitions property to get a list of the jobs registered for that SPService object and deletes the job the feature previously created. The method also gets the job's configuration and deletes it.

At this point, you are ready to test your code. In the next section, you learn how to test and debug your SharePoint job.

When you debug SharePoint code, you typically set the build type of the project to debug, and then press F5 to debug the project. Visual Studio compiles your code, packages the resulting assembly and XML files into a SharePoint solution package (.wsp) file, and then deploys the solution package to SharePoint. After the solution package is deployed, Visual Studio activates the features that you created.

To debug the timer job

To debug a timer job in SharePoint, you must attach to the process that is behind the SharePoint Timer Service.

1. To attach to the SharePoint Timer Service, select Debug and then choose Attach To Process from the menu bar.
2. In the Attach To Process dialog box, make sure that the check boxes at the bottom of the dialog box are both checked, and then select OWSTIMER.EXE from the Available Processes list.

Figure 6. Attach To Process dialog box

3. Click Attach to finish attaching to the SharePoint Timer Service.

Now you can add breakpoints in your job code. To make the job run immediately, you can issue a Windows PowerShell command that causes the SharePoint Timer Service to run your job immediately. To open Windows PowerShell, click All Programs, click Microsoft SharePoint 2010 Products, and then choose SharePoint 2010 Management Shell. The Windows PowerShell console opens with the SharePoint namespaces already registered. Type the following command on one line, and then press Enter to schedule your job for immediate execution.

    Get-SPTimerJob "jobname" -WebApplication "url" | Start-SPTimerJob

In the command line, jobname is the name of your project and url is the URL of your web application. The Get-SPTimerJob command gets the job definition for your job, and the pipe (|) sends the job definition to the Start-SPTimerJob command, which in turn schedules it to run. The SharePoint Timer Service usually takes less than 30 seconds to finish running. If it does not execute, make sure that the debugger is not paused in Visual Studio.

Timer jobs give you the flexibility to offload your long-running or scheduled processes from your Internet Information Services (IIS) sites. Timer jobs are powerful because they can be configured to run on a specific schedule and can include any functionality that you must have to create world-class, SharePoint-based solutions for your enterprise.

Bryan Phillips is a senior partner at Composable Systems, LLC, and a Microsoft Most Valuable Professional in SharePoint Server. He is a co-author of Professional Microsoft Office SharePoint Designer 2007 and Beginning SharePoint Designer 2010 and maintains a SharePoint-related blog.
Bryan has worked with Microsoft technologies since 1997 and holds the Microsoft Certified Trainer (MCT), Microsoft Certified Solution Developer (MCSD), Microsoft Certified Database Administrator (MCDBA), and Microsoft Certified Systems Engineer (MCSE) certifications.

The following were tech editors on Microsoft SharePoint 2010 articles from Wrox:

Matt Ranlett is a SQL Server MVP who has been a fixture of the Atlanta .NET developer community for many years. A founding member of the Atlanta Dot Net Regular Guys, Matt has formed and leads several area user groups. Despite spending dozens of hours after work on local and national community activities, such as the SharePoint 1, 2, 3! series, organizing three Atlanta Code Camps, working on the INETA board of directors as the vice president of technology, and appearing in several podcasts such as .Net Rocks and the ASP.NET Podcast, Matt recently found the time to get married to a wonderful woman named Kim, whom he helps to raise three monstrous dogs. Matt currently works as a senior consultant with Intellinet and is part of the team committed to helping people succeed by delivering innovative solutions that create business value.

Jake Dan Attis. When it comes to patterns, practices, and governance with respect to SharePoint development, look no further than Jake Dan Attis. A transplant to the Atlanta area from Moncton, Canada, Dan has a degree in Applied Mathematics, but is a 100% hardcore SharePoint developer. You can usually find Dan attending, speaking at, and organizing community events in the Atlanta area, including code camps, SharePoint Saturday, and the Atlanta SharePoint User Group. When he's not working in Visual Studio, Dan enjoys spending time with his daughter Lily, watching hockey and football, and sampling beers of the world.

Kevin Dostalek has over 15 years of experience in the IT industry and over 10 years managing large IT projects and IT personnel.
He has led projects for companies of all sizes and has participated in various roles including Developer, Architect, Business Analyst, Technical Lead, Development Manager, Project Manager, Program Manager, and Mentor/Coach. In addition to these roles, Kevin also managed a Solution Delivery department as a Vice President for a mid-sized MS Gold Partner from 2005 through 2008 and later also served as a Vice President of Innovation and Education. In early 2010 Kevin formed Kick Studios as a company providing consulting, development, and training services in the specialized areas of SharePoint and Social Computing. Since then he has also appeared as a speaker at numerous user group, summit, and conference type events across the country. You can find out more about Kevin on his blog, The Kickboard.

Larry Riemann has over 17 years of experience architecting and creating business applications for some of the world's largest companies. Larry is an independent consultant who owns Indigo Integrations and does SharePoint consulting exclusively through SharePoint911. He is an author, writes articles for publication and occasionally speaks at conferences. For the last several years he has been focused on SharePoint, creating and extending functionality where SharePoint leaves off. In addition to working with SharePoint, Larry is an accomplished .Net Architect and has extensive expertise in systems integration, enterprise architecture and high availability solutions. You can find him on his blog.

Sundararajan Narasiman is a Technical Architect with the Content Management & Portals Group of Cognizant Technology Solutions, Chennai, with more than 10 years of industry experience. Sundararajan is primarily into architecture and technology consulting on the SharePoint 2010 Server stack and mainstream .NET 3.5 development. He has a passion for programming and also an interest in Extreme Programming and TDD.

For more information, see the following resources:
https://msdn.microsoft.com/en-us/library/hh528519.aspx
In file included from ../../include/ldb.h:51,
                 from ../../tests/test_ldb_qsort.c:26:
/usr/include/tevent.h:1440:8: error: unknown type name pid_t
 1440 |  pid_t *pid,
      |  ^~~~~
/usr/include/tevent.h:1519:8: error: unknown type name pid_t

-------------------------------------------------------------------
This is an unstable amd64 chroot image at a tinderbox (==build bot)
name: 17.0_musl-20200316-165821
-------------------------------------------------------------------

Please see the tracker bug for details.

gcc-config -l:
 [1] x86_64-gentoo-linux-musl-9.3.0 *

clang version 10.0.0
Target: x86_64-gentoo-linux-musl
Thread model: posix
InstalledDir: /usr/lib/llvm/10/bin
/usr/lib/llvm/10 10.0.0

Available Python interpreters, in order of preference:
  [1]   python3.8
  [2]   python3.7
  [3]   python3.6
  [4]   python2.7 (fallback)

Available Ruby profiles:
  [1]   ruby24 (with Rubygems)
  [2]   ruby25 (with Rubygems) *

Available Rust versions:
  [1]   rust-1.41.1 *
  [2]   rust-bin-1.42.0

timestamp of HEAD at this tinderbox image:
/var/db/repos/gentoo Wed Mar 25 05:38:36 UTC 2020
/var/db/repos/musl Sun Mar 22 15:02:57 UTC 2020

emerge -qpvO sys-libs/ldb
[ebuild N ] sys-libs/ldb-2.1.1 USE="ldap lmdb -doc -python -test" PYTHON_SINGLE_TARGET="python3_6 -python3_7 -python3_8"

Created attachment 625696 [details] emerge-info.txt
Created attachment 625698 [details] emerge-history.txt
Created attachment 625700 [details] environment
Created attachment 625702 [details] etc.portage.tbz2
Created attachment 625704 [details] logs.tbz2
Created attachment 625706 [details] sys-libs:ldb-2.1.1:20200325-064357.log
Created attachment 625708 [details] temp.tbz2

Created attachment 625880 [details, diff]
alpine disable a test but not seemingly this one:

(In reply to Fabio Scaccabarozzi from comment #8)
> Created attachment 625880 [details, diff]

No need to ifdef, you can just include the header.
diff --git a/tevent.h b/tevent.h
index 3c3e3cc..011e1ad 100644
--- a/tevent.h
+++ b/tevent.h
@@ -31,6 +31,7 @@
 #include <stdint.h>
 #include <talloc.h>
 #include <sys/time.h>
+#include <sys/types.h>
 #include <stdbool.h>
 struct tevent_context;

*** Bug 821691 has been marked as a duplicate of this bug. ***

Workaround here too:
https://bugs.gentoo.org/show_bug.cgi?id=714680
Presenter First is a pattern often used at Atomic. It allows you to drive your development from the business logic down. We recently tried this approach on our project in Backbone.js using CoffeeScript. With a few helpers, our presenter tests are simple to write and our models and views are nice and separated. We've also been pulling out our template expansion into a separate class. Instead of the standard Presenter First triplets we have quads: Presenters, Models, Views, and Expanders.

Here's a small example of how we use our Presenters. We use Backbone's built in events for wiring up the business logic.

namespace "MyWidget", (exports) ->
  class exports.Presenter
    constructor: (@view, @model) ->
      @view.bind "openClicked", =>
        @view.showLoading()
        @model.reload()
      @model.bind "reloaded", =>
        @view.render @model

In this example, we're just using a standard Backbone model. We often have view-models that are composed of Backbone models.

namespace "MyWidget", (exports) ->
  class exports.Model extends Backbone.Model
    reload: =>
      @fetch success: =>
        @trigger "reloaded"

We have a very thin view that is in charge of DOM manipulation and event binding.

namespace "MyWidget", (exports) ->
  class exports.View extends Backbone.View
    events:
      "click .open": "openClicked"
      "click .save": "saveClicked"

    openClicked: =>
      @trigger "openClicked"

    saveClicked: =>
      @trigger "saveClicked"

    render: (model) =>
      $(@el).html(MyWidget.Expander.expand(model))

    showLoading: =>
      # code for loading, turn on spinner, hide other stuff etc

The Expander is in charge of mapping data from our model to the jquery-expand directive.

namespace "MyWidget.Expander", (exports) ->
  template = JST["thingers/widget"]

  exports.expand = (model) ->
    template({
      '$.name': "#{model.get('first_name')} #{model.get('last_name')}"
    })

So there you have it: Presenter First in Backbone.js using CoffeeScript.
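The wiring above is not CoffeeScript-specific. Here is a minimal Python sketch of the same presenter-first event flow; the Emitter class and all names below are invented stand-ins for the Backbone objects, not part of any framework.

```python
class Emitter:
    """Tiny stand-in for Backbone's bind/trigger event mixin."""
    def __init__(self):
        self._handlers = {}

    def bind(self, event, fn):
        self._handlers.setdefault(event, []).append(fn)

    def trigger(self, event, *args):
        for fn in self._handlers.get(event, []):
            fn(*args)

class View(Emitter):
    def __init__(self):
        super().__init__()
        self.state = "idle"

    def show_loading(self):
        self.state = "loading"

    def render(self, model):
        self.state = "rendered:%s" % model.data

class Model(Emitter):
    def __init__(self):
        super().__init__()
        self.data = None

    def reload(self):
        self.data = "fresh"       # stand-in for a fetch
        self.trigger("reloaded")

class Presenter:
    # All business logic lives here; view and model never talk directly.
    def __init__(self, view, model):
        view.bind("openClicked", lambda: (view.show_loading(), model.reload()))
        model.bind("reloaded", lambda: view.render(model))

view, model = View(), Model()
Presenter(view, model)
view.trigger("openClicked")  # simulate the "click .open" DOM event
print(view.state)            # -> rendered:fresh
```

The presenter is pure wiring: given a fake view and a fake model, the whole open-click/reload/re-render chain can be asserted without a DOM, which is what makes the presenter tests "simple to write."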
https://spin.atomicobject.com/2012/01/03/presenter-first-in-backbone-js/
Hardware Setup

Simply connect a jumper wire from the J1 RPi-Connect header in the upper left corner of the chipKIT Pi. Here I've connected the third pin down on the bottom side of the header, which is GPIO4 as per the diagram below.

Software Setup

In this application example, when the GPIO4 line is driven HIGH (3.3V) by the Raspberry Pi, the chipKIT Pi will read this and turn on the motor. When the GPIO4 line is driven LOW (0V) the chipKIT Pi will turn off the motor.

Let's set up the chipKIT Pi sketch first. The sketch is set up as it was in the previous example, only adding Digital Pin 4 as an input that will receive the signal from the Raspberry Pi. In the loop(), the code first checks digital pin 4. If it reads a LOW/False/0V signal it calls the function allStop(), which simply applies the brake signal on pin 8. If it reads a HIGH/True/3.3V signal it will call the turn() function that disengages the brake for 5 seconds, then reapplies the brake. The full sketch is below:

//Include the SoftPWMServo Library
#include <SoftPWMServo.h>

int Pi_GPIO = 0;

void setup() {
  //set up channel B on the Arduino Motor Control Shield
  pinMode(13, OUTPUT); //Pin 13 controls direction
  pinMode(8, OUTPUT);  //Pin 8 controls the brake
  pinMode(4, INPUT);   //Input line from Raspberry Pi GPIO
}

void loop() {
  //Read the inputs from the Raspberry Pi
  Pi_GPIO = digitalRead(4);

  //Generate motor output based on inputs from the Pi
  if (Pi_GPIO == 0) {
    allStop();
  }
  else if (Pi_GPIO == 1) {
    turn();
  }
}

void turn() {
  //Set for directionA
  digitalWrite(13, HIGH);

  // First we disengage the brake for Channel B
  digitalWrite(8, LOW);

  //Let's run the motor for about 5 seconds
  delay(5000);

  //brake the motor
  digitalWrite(8, HIGH);

  //give the motor a chance to settle
  delay(500);
}

void allStop() {
  //brake the motor
  digitalWrite(8, HIGH);

  //give the motor a chance to settle
  delay(500);
}

Installing GPIO Package for Python

- Open a terminal window and download the latest version of the GPIO Package for Raspberry Pi.
At the time of this post, the latest version was RPi.GPIO-0.5.3a:

  $ sudo apt-get update
  $ sudo apt-get -y install python-rpi.gpio

- Next step, open python within the terminal window (note Python 3 is used here):

  $ sudo python3

- Instead of typing the full RPi.GPIO name every time, we will set it up so that using just GPIO does the same thing:

  >>> import RPi.GPIO as GPIO

- Next, we need to define the type of numbering we will use:

  GPIO.setmode(GPIO.BCM) — Broadcom GPIO numbering
  GPIO.setmode(GPIO.BOARD) — board pin numbering

  >>> GPIO.setmode(GPIO.BCM)

- Set GPIO4 as an output:

  >>> GPIO.setup(4, GPIO.OUT)

- Drive GPIO4 high:

  >>> GPIO.output(4, True)

  Note that if all went as planned the motor connected should now start turning.

- Drive GPIO4 low to stop the motor:

  >>> GPIO.output(4, False)

The complete terminal code is shown below:

  pi@raspberrypi ~ $ sudo python3
  Python 3.2.3 (default, Mar 1 2013, 11:53:50)
  [GCC 4.6.3] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import RPi.GPIO as GPIO
  >>> GPIO.setmode(GPIO.BCM)
  >>> GPIO.setup(4, GPIO.OUT)
  >>> GPIO.output(4, False)
  >>> GPIO.output(4, True)
  >>> GPIO.output(4, False)

DC Motor Control using Raspberry Pi, chipKIT Pi and the Arduino Motor Control Shield
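Since the steps above need real hardware, here is a hardware-free Python sketch of the same control sequence; FakeGPIO is an invented stand-in that records calls in place of the real RPi.GPIO module, and pin 4 mirrors the wiring in the text.

```python
import time

class FakeGPIO:
    """Records GPIO calls so the control logic can be checked without a Pi."""
    BCM, OUT = "BCM", "OUT"

    def __init__(self):
        self.log = []

    def setmode(self, mode):
        self.log.append(("setmode", mode))

    def setup(self, pin, direction):
        self.log.append(("setup", pin, direction))

    def output(self, pin, level):
        self.log.append(("output", pin, level))

def run_motor_once(GPIO, pin=4, run_seconds=0):
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, True)    # chipKIT Pi reads HIGH -> motor turns
    time.sleep(run_seconds)   # let the sketch run the motor
    GPIO.output(pin, False)   # chipKIT Pi reads LOW -> brake applied

gpio = FakeGPIO()
run_motor_once(gpio)
print(gpio.log[-2:])  # -> [('output', 4, True), ('output', 4, False)]
```

On a real Raspberry Pi you would pass the actual RPi.GPIO module as the GPIO argument; the stub only exists so the high/low handshake can be exercised on any machine.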
http://chipkit.net/motor-control-raspberry-pi-chipkit-pi-arduino-motor-control-shield/2/
I am having a string template containing $variables which need to be replaced.

String Template: "hi my name is $name.\nI am $age old. I am $sex"

The solution which I tried verifying does not work in the java program. Further, I referred to where I could not check if the pattern works for java. But, while going through one of the tutorials I found that "$ matches end of line". What's the best way to replace the tokens in the template with the variables?

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PatternCompiler {
    static String text = "hi my name is $name.\nI am $age old. I am $sex";
    static Map<String, String> replacements = new HashMap<String, String>();
    static Pattern pattern = Pattern.compile("\\$\\w+");
    static Matcher matcher = pattern.matcher(text);

    public static void main(String[] args) {
        replacements.put("name", "kumar");
        replacements.put("age", "26");
        replacements.put("sex", "male");

        StringBuffer buffer = new StringBuffer();
        while (matcher.find()) {
            String replacement = replacements.get(matcher.group(1));
            if (replacement != null) {
                // matcher.appendReplacement(buffer, replacement); // see comment
                matcher.appendReplacement(buffer, "");
                buffer.append(replacement);
            }
        }
        matcher.appendTail(buffer);
        System.out.println(buffer.toString());
    }
}

You are using matcher.group(1) but you didn't define any group in the regexp, so you can use only group() for the whole matched string, which is what you want.

Replace the line:

String replacement = replacements.get(matcher.group(1));

With:

String replacement = replacements.get(matcher.group().substring(1));

Notice the substring: your map contains only words, but the matcher will also match the $, so you need to search the map for "$age".substring(1) but do the replacement on the whole $age.
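For comparison, the same token-replacement technique maps naturally onto other regex engines. Below is a Python sketch of it (data mirrors the question); note that by putting a capture group around \w+, the lookup key never includes the leading $, which sidesteps the substring fix entirely.

```python
import re

replacements = {"name": "kumar", "age": "26", "sex": "male"}
text = "hi my name is $name.\nI am $age old. I am $sex"

def expand(match):
    # group(1) is the token without "$"; fall back to the original
    # token (group(0)) when no replacement is known.
    return replacements.get(match.group(1), match.group(0))

result = re.sub(r"\$(\w+)", expand, text)
print(result)  # -> hi my name is kumar.\nI am 26 old. I am male
```

Passing a function to re.sub plays the same role as the appendReplacement/appendTail loop in the Java code: each match is replaced by whatever the callback returns, and unmatched text is copied through untouched.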
https://codedump.io/share/kVguiHJw8V7C/1/how-to-replace-tokens-in-java-using-regex
We can read, write and add data to a file and perform some simple operations (format, rename, retrieve information, etc.)

Introducing the SPIFFS (SPI Flash File System)

SPIFFS (for Serial Peripheral Interface Flash File System) is a file system developed by Peter Andersson (project page on GitHub) that can run on any NOR flash or SPI flash. The library developed for ESP8266 modules includes most of the functionalities, with some additional limitations due to the limitations of microcontrollers:

- There is no file tree. The files are placed flat in the file area. Instead, it is possible to use the "/" character in the file name to create a pseudo tree.
- File name length is the second important limitation. The '\0' character is reserved and automatically added at the end of the file name for compatibility with C language character strings. Warning: the file extension generally consumes 4 of the 31 useful characters.
- In case of error, no error message will appear during compilation or at runtime if the limit of 32 characters is exceeded. If the program does not work as expected, be sure to check the file name.
Other useful limitations to know:

- Space(s) or accented character(s) must not be used in the file name
- There is no queue
- The writing time is variable from one file to another
- SPIFFS is for small flash memory devices; do not exceed 128MB of storage
- There is no bad block detection mechanism

Discovery of the SPIFFS.h library, API and available methods

The SPIFFS.h library is a port of the official library for Arduino which is installed at the same time as the ESP32 SDK. The proposed methods are almost identical to the FS.h library for ESP8266. The following methods are not available.

To access the file system, all you have to do is declare it at the start of the sketch:

  #include "SPIFFS.h"

How to format a file name (path)?

SPIFFS does not manage the tree. However, we can create a pseudo tree using the "/" character in the file name without exceeding the limit of 31 useful characters. The file path must always start with the character "/", for example /fichier.txt

The methods (API) of the SPIFFS.h library

begin()
This method mounts the SPIFFS file system and must be called before any other FS method is used. Returns true if the file system was mounted successfully. It is advisable to mount the file system in the setup:

  void setup() {
    // Launch SPIFFS file system
    if (!SPIFFS.begin()) {
      Serial.println("An Error has occurred while mounting SPIFFS");
    }
  }

format()
Format the file system. Returns true if formatting was successful. Attention: if files are present in the memory area, they will be irreversibly deleted.

  if (!SPIFFS.begin(true)) {
    Serial.println("An Error has occurred while mounting SPIFFS");
    return;
  }
  bool formatted = SPIFFS.format();
  if (formatted) {
    Serial.println("SPIFFS formatted successfully");
  } else {
    Serial.println("Error formatting");
  }

open()
Open a file. path must be an absolute path starting with a forward slash (eg /dir/file_name.txt). option is a string specifying the access mode. It can be:

- "r" read, read only
- "r+" read and write.
The pointer is positioned at the start of the file
- "w" write only. The existing content is deleted. The file is created if it does not exist
- "w+" opens the file for reading and writing. The file is created if it does not exist, otherwise it is truncated. The pointer is positioned at the start of the file
- "a" append, opens a file adding data. The file is created if it does not exist. The pointer is positioned at the end of the file if it already exists
- "a+" append, opens a file adding data. The file is created if it does not exist. The pointer is positioned at the start of the file for reading and at the end of the file for writing (appending)

Returns the File object. To check if the file was opened successfully, use the Boolean operator.

Once the file is open, here are the methods that allow you to manipulate it.

seek(offset, mode)
This function behaves like the fseek function of the C language. Depending on the value of mode, the pointer is positioned in the file like this:

- SeekSet: position is set to offset bytes from the start
- SeekCur: current position is moved by offset bytes
- SeekEnd: position is set to offset bytes from the end of the file

The function returns true if the position was set successfully.

position()
Returns the current position in the file in bytes.

size()
Returns the size of the file in bytes. Please note, it is not possible to know the size of a folder.

  File file = SPIFFS.open("/test.txt");
  if (!file) {
    Serial.println("Failed to open file for reading");
    return;
  }
  Serial.print("File size: ");
  Serial.println(file.size());
  file.close();

name()
Returns the name of the file as a const char * constant.

close()
Close the file.

Folder operations

There is no difference between file and folder. The isDirectory() method lets you know if the file is a folder. It is not possible to know the size of a folder.

openNextFile()
Open the following file in the folder.

exists(path)
Returns true if a file with a given path exists, false otherwise.

totalBytes()
Returns the total number of bytes of the SPIFFS file system.
usedBytes()
Returns the space used on the file system in bytes.

remove(path)
Deletes the file based on its absolute path. Returns true if the file was deleted successfully.

rename(pathFrom, pathTo)
Renames the file from pathFrom to pathTo. The paths must be absolute. Returns true if the file was renamed successfully.

end()
Unmounts the filesystem.

How to transfer files to the SPIFFS memory area?

It is possible to directly upload files to the SPIFFS file system using the ESP32 Sketch Data Upload plugin for the Arduino IDE. To do this, simply create a folder named data at the same level as the main Arduino project file. It is better to avoid creating subfolders. This is because the SPIFFS file system does not manage the file tree. During the transfer, the files will be "flat", ie the file will take the access path as name. To learn more, read this tutorial which explains everything in detail.

Retrieve information from the SPIFFS and list of files

Here is a small example of code which allows you to retrieve information from the memory area as well as the list of files found in the memory area.

#include "SPIFFS.h"

void listFilesInDir(File dir, int numTabs = 1);

void setup() {
  Serial.begin(112500);
  delay(500);

  Serial.println(F("Inizializing FS..."));
  if (SPIFFS.begin()) {
    Serial.println(F("SPIFFS mounted correctly."));
  } else {
    Serial.println(F("!An error occurred during SPIFFS mounting"));
  }

  // Get all information of SPIFFS
  unsigned int totalBytes = SPIFFS.totalBytes();
  unsigned int usedBytes = SPIFFS.usedBytes();

  Serial.println("===== File system info =====");
  Serial.print("Total space: ");
  Serial.print(totalBytes);
  Serial.println("byte");
  Serial.print("Total space used: ");
  Serial.print(usedBytes);
  Serial.println("byte");
  Serial.println();

  // Open dir folder
  File dir = SPIFFS.open("/");
  // List file at root
  listFilesInDir(dir);
}

void listFilesInDir(File dir, int numTabs) {
  while (true) {
    File entry = dir.openNextFile();
    if (!entry) {
      // no more files in the folder
      break;
    }
    for (uint8_t i = 0; i < numTabs; i++) {
      Serial.print('\t');
    }
    Serial.print(entry.name());
    if (entry.isDirectory()) {
      Serial.println("/");
      listFilesInDir(entry, numTabs + 1);
    } else {
      // display size for file, nothing for directory
      Serial.print("\t\t");
      Serial.println(entry.size(), DEC);
    }
    entry.close();
  }
}

void loop() {
}

Open the Serial Monitor to view the occupancy, the available space and the SPIFFS files stored on the flash memory:

  Inizializing FS...
  SPIFFS mounted correctly.
  File system info.
  Total space: 1374476byte
  Total space used: 502byte
  /test.txt    11

How to write to a file programmatically with SPIFFS.h

We saw how to create a file from a computer and then upload it from the Arduino IDE. The SPIFFS.h library provides several simple methods for accessing and handling files from an Arduino program. You can use any of the methods listed above. Add this code just after the file.close(); line:

  file = SPIFFS.open("/test.txt", "w");
  if (!file) {
    // File not found
    Serial.println("Failed to open test file");
    return;
  } else {
    file.println("Hello From ESP32 :-)");
    file.close();
  }

What does this code do?

This time, we open the file with the option "w" to indicate that we want to modify the file. Previous content will be erased.

To write to a file, you can use the print() or println() methods. The println() method adds a newline. We will use it to create a data table for example. Here, we update the previous content:

  file.println("Hello From ESP32 :-)");

Upload to see what's going on.

How to add data to a file programmatically?

To add data to a file, just open a file with the "a" (append) option to append data to the end of the file. If the file does not exist, it will be automatically created. Here is a small example that records a counter every second.
int counter = 0; // global counter (declaration added for completeness)

void loop(){
  File file = SPIFFS.open("/counter.txt", "a");
  if(!file){
    // File not found
    Serial.println("Failed to open counter file");
    return;
  } else {
    counter += 1;
    file.println(counter);
    file.close();
  }
  delay(1000);
}

Updates

02/09/2020 First publication of the post
https://diyprojects.io/esp32-get-started-spiff-library-read-write-modify-files/
Cheri -- A Builder Framework

Cheri is a framework for creating builder applications (those that create hierarchical, tree-like, structures). It includes a number of builders based on the framework, as well as a builder-builder tool for easily creating simple builders. Cheri also comes with a demo application, Cheri::JRuby::Explorer, that is built using two of the supplied builders (Cheri::Swing and Cheri::Html).

This version (0.0.7) is an early beta release. Some features are still not fully developed (though we're getting close). So do expect some bugs, especially in Cheri::JRuby::Explorer (CJX), which is very much a work in progress. I note some known problems in the CJX section below.

Documentation will be forthcoming over the coming days, so watch the Cheri pages at RubyForge for updates:

Quick Start

Cheri builders are mixin modules; to use one, you include it in a class. The builder's functionality is available to instances of that class, and any subclasses (unless the including class is Object; inclusion in Object / at the top level is supported, but discouraged, and inheritance is disabled in that case).

  require 'rubygems'
  require 'cheri/swing'
  ...
  include Cheri::Swing

All Cheri builders implement a cheri method (the proxy method), which plays two roles, depending on how it's called. When called with a block, the cheri method enables Cheri builder syntax within its scope for all included builders. When called without a block, it returns a CheriProxy object that can act as a receiver for builder methods, for any included builder.

  @frame = cheri.frame('Hello!') #=> JFrame (for Cheri::Swing)

  cheri {
    @frame = frame('Hello!')
  }

The cheri method is also used to set global Cheri options.
Currently only one global option, alias, is defined:

  cheri[:alias=>[:cbox,:check_box,:tnode,:default_mutable_tree_node]]

  cheri.cbox  #=> JCheckBox (Cheri::Swing)
  cheri.tnode #=> DefaultMutableTreeNode (Cheri::Swing)

Each built-in Cheri builder also supplies its own proxy method (in addition to the cheri method): swing for Cheri::Swing (and also awt, since Cheri::Swing includes Cheri::AWT), html for Cheri::Html, and xml for Cheri::Xml. These methods play the same dual scoping/proxy roles as the cheri method, but apply only to their respective builders. (Each also provides additional functionality; see the sections on individual builders for details.)

The builder-specific proxy methods also serve to disambiguate overloaded builder method names:

  swing.frame #=> javax.swing.JFrame
  awt.frame   #=> java.awt.Frame
  html.frame  #=> HTML frame (Cheri::Html::EmptyElem)

Cheri::Swing

To include:

  require 'rubygems'
  require 'cheri/swing'
  ...
  include Cheri::Swing

Note that inclusion at the top level is not recommended.

Options:

  swing[:auto]
  swing[:auto=>true] #=> Enables auto mode (no swing/cheri block required)

Cheri::Swing (which includes Cheri::AWT) includes methods (Ruby-cased class names) for all javax.swing, javax.swing.border and java.awt classes, plus many in javax.swing.table, javax.swing.tree, java.awt.image and java.awt.geom. You can extend Cheri::Swing with other classes/packages (including 3rd party, or your own!) using the Cheri builder-builder's build_package method.

Cheri::Swing (and any other builder based on Cheri::Java) also provides easy-to-use on_xxx methods to implement event listeners. Any event listener supported by a class (through an addXxxListener method) can be accessed from Cheri::Swing using an on_xxx method (where xxx is the Ruby-cased event-method name).
Because it is so widely used in Swing, the ActionListener#actionPerformed event method is aliased as on_click:

  @frame = swing.frame('Hello') {
    size 500,500
    flow_layout
    on_window_closing {|event| @frame.dispose}
    button('Hit me') {
      on_click { puts 'button clicked' }
    }
  }

The cherify and cheri_yield methods can be used to incorporate objects created outside the Cheri::Swing framework (cherify), or to re-introduce objects created earlier within the framework (cheri_yield):

  class MyButton < javax.swing.JButton
    ...
  end
  ...
  a_button = MyButton.new
  ...
  @frame = swing.frame('Hello') {
    size 500,500
    flow_layout
    cherify(a_button) {
      on_click { puts 'button clicked' }
    }
  }

  @frame = swing.frame('Hello') {
    menu_bar {
      @file_menu = menu('File') {
        menu_item('Exit') { on_click { @frame.dispose } }
      }
    }
  }

  # => add a new item later:
  cheri_yield(@file_menu) {
    menu_item('Open...') { on_click { ... } }
  }

The Cheri builder-builder can be used to extend Cheri::Swing in a couple of ways. Individual classes can be included using the build statement, while entire packages can be included using the build_package statement. Note that you may need to supply connection logic if the incorporated classes use methods other than add to connect child objects to parent objects; see file /lib/cheri/builder/swing/connecter.rb for many examples.

  // Java:
  package my.pkg;
  public class MyParent extends javax.swing.JComponent {
    ...
    public void addChild(MyChild child) { ... }
  }
  ...
  public class MyChild { ... }

  # JRuby:
  require 'cheri/swing'
  ...
  include Cheri::Swing
  ...
  # easy-to-reference names; could use include_package instead
  MyParent = Java::my.pkg.MyParent
  MyChild = Java::my.pkg.MyChild

  # example specifying each class; 'custom' names may be specified
  MyBuilder = Cheri::Builder.new_builder do
    extend_builder Cheri::Swing
    build MyParent,:pappy
    build MyChild,:kiddo
    type MyParent do
      connect MyChild,:addChild
    end
  end

  include MyBuilder

  @frame = swing.frame('My test') {
    ...
    panel {
      pappy {
        kiddo {
          ...
        }
      }
    }
  }

  # example specifying package; default naming
  MyBuilder = Cheri::Builder.new_builder do
    extend_builder Cheri::Swing
    build_package 'my.package'
    type MyParent do
      connect MyChild,:addChild
    end
  end

  include MyBuilder

  @frame = swing.frame('My test') {
    ...
    panel {
      my_parent {
        my_child {
          ...
        }
      }
    }
  }

You can also use the builder-builder just to add connection logic to Cheri::Swing, as not every possible connection type is defined. See the Cheri::JRuby::Explorer (CJX) code (under lib/cheri/jruby/explorer) for extensive examples of Cheri::Swing usage.

Cheri::Xml

To include:

  require 'rubygems'
  require 'cheri/xml'
  ...
  include Cheri::Xml

Note that inclusion at the top level is not recommended.

Options:

  xml[:any]
  xml[:any=>true] #=> Any tag name inside xml {} will be accepted
  xml[:accept=>[:aaa,:bbb,:nnn]] #=> only specified tag names accepted (see builder-builder example below for an alternative approach)
  xml[:format]
  xml[:format=>true] #=> output formatted with line-feeds only
  xml[:indent] #=> output indented by 2 spaces per level
  xml[:indent=>nnn] #=> output indented by nnn spaces per level
  xml[:margin=>nnn] #=> output indented by margin (in addition to :indent)
  xml[:esc]
  xml[:esc=>true] #=> output will be escaped (off by default for performance)
  xml[:ns=>:xxx] #=> declare xxx as a namespace prefix
  xml[:ns=>[:xxx,:yyy,:zzz...]] #=> declare xxx,yyy,zzz as namespace prefixes
  xml[:alias=>[:alias1,:name1,:alias2,:name2...]] #=> declare tag aliases
  xml[:attr=>[:alias1,:attr1...]] #=> declare attribute aliases

Options specified using xml apply to all threads for an instance. Options specified using xml(opts) apply only to the current thread/scope:

  # example
  xml[:any=>true,:indent=>3,:esc=>false]
  @out = xml { # nothing escaped at this level
    aaa{
      bbb {
        xml(:esc=>true) { # everything escaped in this scope
          ddd { ... }
          eee { ... }
  }}}}
The result object can be coerced to a String, directly by calling its #to_s method, or indirectly by using << to append it to a String or IO stream. The #to_s method also takes an optional String/stream parameter; for streams, this is the most efficient way to render the XML. # example xml[:any,:indent] @result = xml{ aaa(:an_attr='a value',:another=>'value 2') { bbb { ccc } } } puts @result #=> XML @result.to_s #=> XML a_string << @result #=> appends XML a_stream << @result #=> appends XML @result.to_s(a_string) #=> appends XML more efficiently @result.to_s(a_stream) #=> appends XML more efficiently # result: <?xml version="1.0" encoding="UTF-8"?> <aaa another="value 2" an_attr="a value"> <bbb> <ccc /> </bbb> </aaa> To omit the XML declaration, use xml as the receiver for the initial element: xml.aaa{bbb} # result <aaa> <bbb /> </aaa> Alias element names that are lengthy, or can't be used directly in Ruby: xml[:alias=>[:cls,:class]] xml.aaa{cls} # result <aaa> <class /> </aaa> Declare namespace prefixes, and apply them directly (using myns.tag or myns::tag), or apply them to all elements in a scope: xml[:alias=>[:env,:Envelope,:hdr,:Header,:body,:Body]] xml[:ns=>:soap] xml { soap { env(:xxx=>'yyy') { hdr body }}} # result <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope <soap:Header /> <soap:Body /> </soap:Envelope> Use no_ns to turn off a namespace, or specify a different namespace: xml[:alias=>[:env,:Envelope,:hdr,:Header,:body,:Body]] xml[:ns=>[:soap,:xx]] xml { aaa { soap { env { hdr body { no_ns { bbb xx::ccc ddd xx {eee; fff} }}}}}} # result <?xml version="1.0" encoding="UTF-8"?> <aaa> <soap:Envelope> <soap:Header /> <soap:Body> <bbb /> <xx:ccc /> <ddd /> <xx:eee /> <xx:fff /> </soap:Body> </soap:Envelope> </aaa> Use the Cheri builder-builder to define more explicit element relationships: require 'cheri/xml' my_content_elems = [:aaa,:bbb,:ccc] my_empty_elems = [:xxx,:yyy] MyBuilder = Cheri::Builder.new_builder do extend_builder Cheri::Xml build 
Cheri::Xml::Elem,my_content_elems build Cheri::Xml::EmptyElem,my_empty_elems symbol :aaa { connect :bbb,:ccc } symbol :bbb { connect :xxx } symbol :ccc { connect :yyy } # raise error to prevent non-connects from silently failing type Cheri::Xml::XmlElement do connect Cheri::Xml::XmlElement do |parent,child| raise TypeError,"can't add #{child.sym} to #{parent.sym}" end end end include Cheri::Xml include MyBuilder Cheri::Html Documentation TBD Options: html[:format] html[:format=>true] #=> output formatted with line-feeds only html[:indent] #=> output indented by 2 spaces per level html[:indent=>nnn] #=> output indented by nnn spaces per level html[:margin=>nnn] #=> output indented by margin (in addition to :indent) html[:esc] html[:esc=>true] #=> output will be escaped (off by default for performance) Cheri builder-builder Documentation TBD Cheri::JRuby::Explorer (CJX) CJX is a Swing application written entirely in (J)Ruby using the Cheri::Swing and Cheri::Html builders. It enables you to easily browse classes/modules, configuration/environment settings, and, if ObjectSpace is enabled, any objects in a JRuby instance. A small DRb server component can be installed in other JRuby instances, enabling you to browse them as well. (Note that I have been trying to get the DRb server component working in C/MRI Ruby as well, but have run up against threading/IO conflicts. Suggestions welcome!) The CJX client requires JRuby 1.0.0RC3 or later. To run it (after installing the Cheri gem): require 'rubygems' require 'cheri/jruby/explorer' Cheri::JRuby::Explorer.run Alternatively, you can load and run it in one step: require 'rubygems' require 'cheri/cjx' This will take several seconds to load and start -- performance will be one area of ongoing improvement. Once it loads, it should be fairly clear what to do. 
Some known issues:

- Browsing the class hierarchy is very slow right now -- this actually slowed down in the past couple of days when I switched from HTML to straight Swing layout, the opposite of what I expected to happen.
- There are lots of layout issues; neither HTML (JEditorPane) nor BoxLayout provide exactly what I'm looking for. Will probably have to bite the bullet and go to GridBagLayout. Ugh.
- Global variables are currently shown, um, globally, when many of them should be shown per thread. This will be fixed in a later version, which will include a Thread section with other goodies as well (thread-local vars, status, etc.).

To install the CJX DRb server component in an instance (assuming the Cheri gem is installed):

  require 'rubygems'
  require 'cheri/explorer'
  Cheri::Explorer.start nnnn #=> where nnnn is a port number

Note that for the server, you require 'cheri/explorer', not 'cheri/jruby/explorer'. Also note that the above actually does work in C/MRI Ruby, but requests to the server then hang in CJX, unless you join the thread:

  Cheri::Explorer.thread.join

After that, you can browse just fine in CJX, but you can't do anything more in the C-Ruby instance, so it's kind of pointless. Again, if anyone with some DRb experience (of which I have none) can offer any suggestions, I'd appreciate it.

The Rest

Please visit the Cheri site for more documentation, I'll be continually adding to it in the coming days.

Bill Dortch (cheri dot project aaat gmail dot com)
19 June 2007
https://www.rubydoc.info/gems/cheri/0.5.0/frames
3 - Data communication - ASP.NET Web Services (ASMX)

Petr Vozak — Feb 16, 2009

It is one of the easy ways. All you need to do is to implement a simple web method which will return the requested data. Please follow these steps to create and use the web service to get the requested data:

1) Right click on your web project folder and choose the Add New Item option. Select Web Service as the type of the new item and name it CMSWebService.asmx:

2) The following files will be automatically created by Visual Studio:

/CMSWebService.asmx
/App_Code/CMSWebService.cs

3) Define a method to find users according to the search expression; modify CMSWebService.cs as follows. Notice that it calls internally our helper method from the ServiceHelper class which was defined in the previous step:

4) Ensure your web service is enabled. Just type the full URL path to the CMSWebService.asmx file within your web application and then click on the GetUsers method; the following window will be displayed:

5) Ensure your web service is working. Just type some search expression to find the requested users. You will be presented with the results in XML format. Notice the names of the XML elements and attributes. It helps you to understand the definition of the helper methods in the ServiceHelper class:

6) Let Visual Studio create a proxy class for your web service - right click on the SilverlightDemo project and choose the Add Service Reference option:

7) The Add Service Reference dialog is opened. Locate your new CMSWebService and specify a namespace for its proxy class:

8) The web service was referenced and initialized:

- A reference to your web service was added to the SilverlightDemo project.
- The proxy class CMSWebService.CMSWebServiceSoapClient was created for easier access to the web service.
- The web service default configuration file (ServiceReferences.ClientConfig) was created; however, you can configure service settings in code behind as well.
9) Locate the Page.xaml.cs file in the SilverlightDemo project and add your web service as the first data communication method. Implementation is quite easy because of the generated proxy class.

This article is part of the Data communication in Silverlight 2.0 tutorial.

Petr Vozak is Technology Partnership Product Owner at Kentico. He works with technology partners to enrich the Kentico product offering and bring greater value to those using our products.
https://devnet.kentico.com/articles/3---data-communication---asp-net-web-services-(asmx)-1
Petscii

Petscii is a tiny PHP library which converts text to PETSCII (PET Standard Code of Information Interchange) format, a character set based on ASCII. It has been used in Commodore Business Machines' 8-bit home computers like the VIC-20, C64, C128, CBM-II, Commodore Plus/4, C16 and C116.

With this package, you can prepare your website to be fully compatible with web browsers available for a Commodore 64 or Commodore 128 connected to the internet via one of the existing ethernet cartridges: 64NIC+, The Final Ethernet, RR-Net or others. You can even surf the web using an emulator. This package is used on website.

Supported browsers

The Petscii package has been tested on a few Commodore 64 browsers. Please let me know if you're using other ones, or if you've encountered any issues. Also, please let me know if some of the links below are no longer available.

Contiki web browser
Online HTML web browser available with a web server and other tools. Set up disk configurator for version 2.5 available here. Contiki 3 is available here.

Singular browser
Online HTML web browser. About the browser here. Set up disk available here.

HyperLink
Online HTML web browser. Not tested yet, but detected by PETSCII. More info here.

FairlightML
Offline HTML viewer also known as 64'er htmlreader. Version 0.99 is available here. You can download files using WGET (available i.a. on the Contiki floppy disk).

Entering the World Wide Web with an emulator

If you don't have a physical ethernet card, you can try the Vice64 emulator for Windows (with WinPcap installed). You will find details in this Commodore Server blog entry.

Features

- All non-ASCII characters are converted to basic ASCII-96
- Can detect if a browser is running in PETSCII mode (HTTP user agent check)
- The Pound Sterling character will be converted to the corresponding PETSCII character CHR$(92) for the Contiki browser
- All variations of the break line tag (e.g. <br />) will be converted to <br> (the FairlightML browser cannot detect other variations)

Installation

To install via Composer, just:

composer require commocore/petscii

This package uses the Composer autoloader; a regular boot looks this way (for more details, see the Composer documentation):

require_once '../vendor/autoload.php';

And it can be imported this way:

use Commocore\Petscii\Petscii;

Usage

$content = 'Commodore 64<br />The Commodore 64 is an 8-bit home computer introduced in January 1982.<br />';
$petscii = new Petscii();
$content = $petscii->render($content);
echo $content;

As you see, you don't have to check if the browser supports PETSCII. The HTTP user agent is recognized automatically, and if a browser doesn't support PETSCII, text content will be displayed without any changes.

To trim break lines (or other characters), just provide them in a string as the second parameter (uses PHP's trim() mask):

$content = $petscii->render($content, '<br>');

Note: you don't have to define other variations of <br> in the trim mask (e.g. <br />), as all variations of break line will be converted to <br> by default.

To check if the browser supports PETSCII:

if ($petscii->isPetsciiBrowser()) {
    // I'm PETSCII!
}

To return the browser class detected by HTTP user agent:

$browserClass = $petscii->getDetectedBrowser();

Or, to get only the class name (without namespace):

$className = substr(strrchr(get_class($petscii->getDetectedBrowser()), "\\"), 1);

Testing

Docker (with Docker Compose) has been used for testing purposes.

Setup

- First, you need to build images by executing the docker/build.sh script.
- Then, you can execute make all in the main directory for the first time to install PHP dependencies and run containers, or use make composer then make start.

Executing tests

You will find all available make commands in the Makefile. To test the whole project, just run make test. For test coverage, if make phpunit-code-coverage-html is used, results will be saved to the coverage/ folder in HTML format.
Testing available characters

To see the characters available in a particular browser, you can create a test page which generates a list of all 256 characters this way:

echo $petscii->getTestPage();

Notes

Underscore character compatibility

As the Contiki browser does not use the underscore character (instead a left arrow, CHR$(95), is displayed), and there is no equivalent available for this browser, you can ignore it, or remove underscores from text beforehand. This character is fully supported e.g. in the FairlightML HTML viewer and the Singular browser.
https://bitbucket.org/Commocore/petscii/src
S/MIME, or Secure/Multipurpose Internet Mail Extensions, is the de facto standard for encrypting and signing mail. You can encrypt mail to keep prying eyes off of it. Signing, though, is much more common, as it addresses the issue of non-repudiation in many organizations, giving people a way to make sure that the email that they think you sent really came from you. It was also available in GPG plug-ins for mail, back in the day. But S/MIME used to really be for people who thought the government was out to get them, work for government agencies, just liked to be kinda' nerdy or actually had something to hide. But is email security overkill? After a bunch of people get their Google Apps accounts exposed from phishing attacks, I'd argue not. I use it for various situations but not all. That may just change in Lion, because while S/MIME has been built into OS X for some time in the form of the smime command, it will be much easier to use in OS X as of Lion and is now available in iOS 5.

First, get a certificate from one of these providers (my favorite is Verisign, but Comodo is free):

Once you have downloaded the certificate files from the sites you can easily install them by double-clicking them, which imports them into the login keychain. Many organizations are going to want to script this process. To import the certificates, use the security command. Here we'll import a Comodo p7 cert:

security import ~/Downloads/CollectCCC.p7s -f pkcs7

Once imported, the certs can be escrowed by control-clicking on the cert in Keychain Access and exporting as .pem files. For organizations that want users to import their certs off of a site, the certs can be curl'd down for user-specific entries and intermediaries and certificates imported:

curl -o /tmp/mycert.crt

Which brings up a final point. If you give certificates to users, rather than having them download and load up their own, you will have control over whether or not keys get escrowed and if so, how.
When just using signing, you may not care. But when messages are being encrypted, many organizations will have regulatory or eDiscovery situations that require the escrowing of keys to be able to unlock the contents of messages that are encrypted. For this reason, some will need to export the certificate that was imported. Of course, if you escrow private keys for certificates, then can the receiver ever know for certain you sent the message? I guess that comes down to process. If you require two people to turn a key at the same time when the sun shines through this one special crystal and makes the tomb glow red, then you may be able to keep people out. But then there are conspiracies and we're back to preparing our tin foil as head gear…

Anyway, Mail has supported S/MIME for some time, as can be seen in this O'Reilly article from 8 years ago. There's also an smime command line tool that goes pretty far back. Importing certificates into iOS is about as easy as importing them into OS X, but you can also distribute certificates using mobileconfig files, which I wrote an article on awhile ago. One can assume that the Profile Manager feature announced in OS X Server will allow you to deploy these over MDM, but then we might just have to wait until fall to see what that's all about…

Posted In: Mac OS X, Mac OS X Server, Mac Security
Tags: Command line, import certificate, iOS 5, Lion, MAC, os x, S/MIME

I know I've written up telling OS X to show you invisible files, but what if you don't want to make all invisible files show up, just make one file or folder go invisible, or for that matter, visible? Well, it's easier than you might think. Apple has bundled a nice little command called chflags into the OS. To use it to hide a file, simply type chflags followed by hidden and then the folder. For example, let's say you wanted to hide your ~/Library folder.
Just run the following to hide it:

chflags hidden ~/Library

And then let's say you wanted to unhide it 'cause you realized that it's one of those folders best left visible:

chflags nohidden ~/Library

You can also use the SetFile command (both are located in /usr/bin, although chflags is included by default whereas SetFile is installed with the OS X Developer Tools). SetFile has a -a option and can set the v or V attribute to make a file shown or hidden respectively. Run the following command to make this same folder invisible:

SetFile -a V ~/Library

Or the following to make it visible:

SetFile -a v ~/Library

Oh, you can always throw a dot in front of a filename to hide it, but that's not nearly as much fun…

Posted In: Mac OS X, Mac Security, Mass Deployment
Tags: Apple, hide folders, invisible, Mac OS X, os x, show folders, visible

It's summer! And at many schools that means that the kids are gone and it's time to start imaging again. And imaging means a lot of rebooting holding down the N key. But wait, you have ARD access into all those computers. And you have automated imaging tools. This means you can image the whole school from the comfort of your cabin out by the lake. Just use ARD and a little automation and you'll be fishing in no time!

If you haven't used the bless command to restart a client to a NetBoot server then you're missing out. The bless command is used to set the boot drive that a system will use. It comes with a nifty --netboot option. Define the --server and (assuming you have one nbi) you can reset the boot drive by sending a "Unix command" through ARD:

bless --netboot --server bsdp://192.168.210.9; restart

I added the restart for posterity. This is something everyone with an automated imaging environment really needs to put into their ARD command templates! Now, that all works fantastic in a vanilla environment. But in more complex environments you will need potentially more complex incantations of these commands.
Well, Mike Bombich wrote all this up awhile back and so I'll defer to his article on nvram and bless here to guide you through any custom settings you'll need. It's a quick read and really helpful. What else are you gonna' do while you're fishing anyway… BTW, if you have more than three beers, please put the MacBook down. And if you don't, at least close both terminal and ARD. And email. And iChat. Actually, just close the machine now…

Posted In: Mac OS X, Mac OS X Server, Mass Deployment, Network Infrastructure
Tags: bless, NetBoot, nvram, subnets

I find there are a lot of commands I run routinely. Some of which are pretty long strings that are thrown together in order to find what can, at times, be a small piece of information. Or, I might routinely log into a server and want to trim down the command required to do so. Let's take an example of this in using the open command to vnc into a server. The command to open a server in this fashion would be (assuming a server name of mail.mygroup.mycompany.com, a username of krypted and a password of mypass):

open vnc://krypted:mypass@mail.mygroup.mycompany.com

For this exercise we're going to be saving the above command into a file in clear text and so we are not going to actually embed the password. We're going to use the alias command to create an alias, which can then be called on as a normal command, called vncmail. This way, that's all we have to type in a terminal window to execute the string from the command above. Do this by using alias then the command you would like to have, followed by an equals sign (assuming bash here, btw) and then a quoted command:

alias vncmail='open vnc://mail.mygroup.mycompany.com'

Once you close your bash shell this alias will disappear. So let's make it permanent by placing it into the .bash_profile file in your home directory. First, if it's not there, we'll create the .bash_profile:

touch ~/.bash_profile

Then add the alias line from above into the ~/.bash_profile file.
Then make sure this file roams using a mobile home for your admin account. Then, whichever system you sit at, you can quickly VNC, SSH or even 'dscl . read /Users/localadmin' or whatever. Lots of stuff you can do with aliasing commands. One of my favorites is '/Applications/Utilities/Network Utility.app/Contents/Resources/stroke ODServer 389 389' to do a quick port scan of an LDAP server over port 389 (or 636 if you're using SSL). Anyway, hope this saves you as much time as it's saved me over the years!

Posted In: Mac OS X, Mac OS X Server, Ubuntu, Unix

Safari can subscribe to RSS feeds; so can Mail. Podcast Producer is an RSS or XML feed, as are the feeds created by the blog and wiki services in Mac OS X Server. And then of course, RSS and ATOM come pre-installed with practically every blogging and wiki tool on the market. Those doing mass deployment and scripting work can make use of automatically connecting users to and caching information found in these RSS feeds. If you have 40,000 students, or even 250 employees, it is easier to send a script to those computers than to open the Mail or Safari client on each and subscribe to an RSS feed. Additionally, pubsub offers what I like to call Yet Another Scripting Interface to RSS (my acronym here is meant to sound a bit like yessir). Pubsub caches the feeds both within the SQLite database and in the form of XML files. Because pubsub caches data onto the client it can be parsed more quickly than using other tools, allowing a single system to do much more than if a feed were being accessed over the Internet.

Using pubsub

We'll start by looking at some simple RSS management from the command line to aid in a quest at better understanding of the underpinnings of Mac OS X's built-in RSS functionalities. The PubSub framework stores feeds and associated content in a SQLite database. Interacting with the database directly can be a bit burdensome. The easiest way to manage RSS from Mac OS X is using a command called pubsub. First off, let's take a look at all of the RSS feeds that the current user is subscribed to by opening terminal and simply typing pubsub followed by the list verb:

pubsub list
First off, let’s take a look at all of the RSS feeds that the current user is subscribed to by opening terminal and simply typing pubsub followed by the list verb: pubsub list You should then see output of the title and url of each RSS feed that mail and safari are subscribed to. You’ll also see how long each article is kept in the expiry option and the interval with which the applications check for further updates in the refresh option. You can also see each application that can be managed with pubsub by running the same command with clients appended to the end of it (clients are how pubsub refers to applications whose subscriptions it can manage): pubsub list clients To then just look at only feeds in Safari: pubsub list client com.apple.safari And Mail: pubsub list client com.apple.mail Each of the above commands will provide a URL for the feed. This url can be used to show each entry, or article in the feed. Extract the URL and then you can use the list verb to see each feed entry, which Apple consistently calls episodes both within PubSub, in databases and on the Podcast Producer server side of things but yet somehow calls an entry here (consistency people). To see a list of entries for a given URL: pubsub list Episodes will be listed in 40 character hex keys, similar to other ID space mechanisms used by Apple. 
To then see each episode, or entry, use the list verb, followed by entry and then that key:

pubsub list entry 5fcef167d77c8c00d7ff041a869d45445cc4ae42

To subscribe to a feed, use the --client option to identify which application to subscribe in, along with the subscribe verb, followed by the URL of the feed:

pubsub --client com.apple.mail subscribe

To unsubscribe, simply use pubsub followed by the unsubscribe verb and then the URL of the feed:

pubsub unsubscribe

Offline Databases and Imaging

While these can be run against a typical running system, they cannot be run against a sqlite database that is sitting in all of your users' home folders, nor can they be run against a database in a user template home on a client. Therefore, to facilitate imaging, you can run sqlite3 commands against the database directly. The database is stored in ~/Library/PubSub/Database/Database.sqlite3. To see the clients (the equivalent of `pubsub list clients`):

sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'SELECT * FROM clients'

To see each feed:

sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'SELECT * FROM feeds'

To see each entry:

sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'SELECT * FROM entries'

To see the column headers for each:

sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(Clients)';
sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(Feeds)';
sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(Subscriptions)';
sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(Entries)';
sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(Enclosures)';
sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(Authors)';
sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(Contents)';
sqlite3 /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'PRAGMA TABLE_INFO(SyncInfo)';

To narrow a query down to a specific row within any of these tables, add a WHERE clause followed by the column within the table you'd like to search. For example, if we wanted to see only the article with the identifier of 5b84e609317fb3fb77011c2d26efd26a337d5d7d:

sqlite3 --line /Volumes/Image/Username/Library/PubSub/Database/Database.sqlite3 'SELECT * FROM entries WHERE identifier="5b84e609317fb3fb77011c2d26efd26a337d5d7d"'

Note: sqlite3 can use the --line option to show each field of a row on its own line.

Dumping pubsub to be Parsed By Other Tools

Pubsub can also be used as a tool to supply feeds and parse them. You can extract conversations only matching specific patterns and text, or email yourself that they occurred, without a lot of fanfare. You can also dump the entire feed's cached data by specifying the dump verb without the entry or identifier, but instead the URL:

pubsub dump

Once dumped, you can parse the XML into other tools easily. Or dump specific entries to XML for parsing by another tool, using syntax similar to the list entry syntax:

pubsub dump entry 5fcef167d77c8c00d7ff041a869d45445cc4ae42

Because these feeds have already been cached on the local client, and because some require authentication and other expensive (in terms of script run-time) processes to aggregate or search, looking at the files is an alternative way of doing so. Instant refreshes can also be performed using pubsub's refresh verb followed by a URL:

pubsub refresh

Also, feeds are cached to ~/Library/PubSub/Feeds, where they are nested within a folder with the name of the unique ID of the feed (row 2 represents the unique ID whereas row 1 represents the row). Each episode, or post, can then be read by entry ID. Those entries are basic XML files.
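The same offline queries can be scripted. Here is a minimal Python sketch using the standard-library sqlite3 module; the two-table schema below is a deliberately simplified stand-in for the real Feeds/Entries tables (whose actual columns you would inspect with the PRAGMA statements above), so treat the table and column names as assumptions for illustration only:

```python
import sqlite3

# Simplified stand-in schema for ~/Library/PubSub/Database/Database.sqlite3;
# the real tables carry many more columns (see the PRAGMA queries above).
SCHEMA = """
CREATE TABLE feeds   (id INTEGER PRIMARY KEY, url TEXT);
CREATE TABLE entries (id INTEGER PRIMARY KEY, feed_id INTEGER,
                      identifier TEXT, title TEXT);
"""

def entries_for_feed(con, feed_url):
    """Return (identifier, title) rows for one feed, newest row first."""
    cur = con.execute(
        "SELECT e.identifier, e.title FROM entries e "
        "JOIN feeds f ON e.feed_id = f.id "
        "WHERE f.url = ? ORDER BY e.id DESC", (feed_url,))
    return cur.fetchall()

if __name__ == '__main__':
    # Demo against an in-memory stand-in; in a real imaging workflow you
    # would point sqlite3.connect() at the mounted image's Database.sqlite3.
    con = sqlite3.connect(':memory:')
    con.executescript(SCHEMA)
    con.execute("INSERT INTO feeds VALUES (1, 'http://krypted.com/feed')")
    con.execute("INSERT INTO entries VALUES (1, 1, 'abc123', 'First post')")
    print(entries_for_feed(con, 'http://krypted.com/feed'))
```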
You can also still programmatically interface with RSS using curl. For example:

curl --silent "{server}.myschool.org/search/cpg?query=%22random+curse+word%22&catAbbreviation=cpg&addThree=&format=rss" | grep "item rdf:about=" | cut -c 18-100 | sed -e 's/"//g' | sed -e 's/>//g'

Posted In: Mac OS X, Mac OS X Server, Mass Deployment Tags: atom, feed, Mac OS X Server, manage subscriptions, mass deploy, Podcast Producer, pubsub, rss,

I can't stand it when I open Terminal and go to cd into a directory I know to exist, only to be confused by why using the tab doesn't autocomplete my command. For those that don't know: when you are using any modern command line interface and indicating a location in a file system, the tab key will autocomplete what you are typing. So let's say you're going to /System. I usually just type cd /Sys and then use the tab to autocomplete. In many cases, the first three letters, followed by a tab, will get you there, and you can therefore traverse deep into a filesystem in a few simple keystrokes. But then there's all this case weirdness with a lot of the more Apple-centric stuff in the file system. For example, when it's FileSystem vs. Filesystem vs. filesystem. This makes sense when using a partitioning scheme that allows for case-based namespace collisions, but not in HFS+ (Journaled), the default format used with Mac OS X. So I find myself frequently editing the .inputrc file. This file can be used to do a number of cool tricks in a terminal session, but the most useful for many is to take the case sensitivity away from tab auto-completes, effectively de-pony-tailing the sensitive pony-tail boy. To do so, create the hidden .inputrc file in your home folder:

touch ~/.inputrc

Then open it with your favorite text editor and add this line:

set completion-ignore-case on

Then open a new Terminal session and tab completion will ignore case.

Fast User Switching, when enabled, allows users to leave one session open and hop to another user account.
Great for training, testing and impressing friends (ok, so maybe it won't impress your friends, but the thumb trick is getting old). To enable Fast User Switching, open the Accounts System Preference pane and click on Login Options. Then check the box for Show fast user switching menu. By default you'll then see your user name in the menu bar. To do this from the command line:

defaults write /Library/Preferences/.GlobalPreferences MultipleSessionEnabled -bool 'YES'

To then disable it from the command line:

defaults write /Library/Preferences/.GlobalPreferences MultipleSessionEnabled -bool 'NO'

What's really cool, though, is that once enabled, you can switch users with a script as well, using the command line options available with CGSession, located in the User.menu item at /System/Library/CoreServices/Menu Extras/User.menu/Contents/Resources/CGSession (note that the path contains spaces, so quote it in shell scripts):

"/System/Library/CoreServices/Menu Extras/User.menu/Contents/Resources/CGSession" -switchToUserID 501

Or to simply go to a login screen:

"/System/Library/CoreServices/Menu Extras/User.menu/Contents/Resources/CGSession" -suspend

Posted In: Mac OS X, Mac OS X Server, Mac Security, Mass Deployment Tags: cgsession, defaults read, defaults write, disable fast user switching, Enable Fast User Switching with a script, enable user switching, switch users with a script, terminal

We'd like to share some exciting news with you about iCloud — Apple's.
30 September 2012 15:57, Nikodemus Siivola <nikodemus@...> wrote:
> to return (:SPECIAL :SPECIAL) looks a bit hairier.

Ok, it wasn't that hard after all. Committing after freeze, unless I find an issue with it.

Cheers,

-- Nikodemus

On 24 September 2012 14:13, Lars Brinkhoff <lars@...> wrote:
> This may be another instance of

Similar, but different. The issue is that the walker doesn't record lexical variables correctly -- or rather, it records them but then throws that information out so the symbol macro shines through. The fix to that is easy, but getting

(defmacro v (x &environment e)
  (sb-cltl2:variable-information x e))

(let ((form '(symbol-macrolet ((x :bad))
               (let* ((x :good)
                      (y (v x)))
                 (declare (special x))
                 y))))
  (list (eval form)
        (eval (sb-cltl2:macroexpand-all form))))

to return (:SPECIAL :SPECIAL) looks a bit hairier. No need to hold back 1.1 for this, though, since this is definitely not a regression, but a long-standing bug.

Cheers,

-- Nikodemus.
For those whom it may concern: I'm trying to determine the status of PM's Java counterpart, Javajunkies. I started a post in The Coffee Grinder in an attempt to see what level of interest exists out there.

Never been to the site. I do agree that javasoft.com's forums are mostly filled with "write my code for me" kind of stuff, so I don't read that. I'm glad to see PerlMonks source in use, but it's sad for the language that touts Servlets, jsp, etc, to be running a java forum using Perl. Java people dislike Perl (from what I've seen), far more so than Perl folks dislike Java. Why? Usually it's that bias towards "clean OO & readability" and not being able to see the forest for the trees.

When people have questions on perlmonks, I doubt that people spend more than 10 minutes writing something down on the norm.

Dead on. Perl/Tk questions can take 5 minutes to answer. Whole Perl/Tk apps can take 30 minutes to write. Swing? Those questions and programs can take hours and days! And this is coming from someone who was very good with Java in college. So, why would a java community have a problem existing? Java is readable, but it is readable like an encyclopedia is readable. That is, you must do a LOT of reading, and a lot of studying. I don't find that conducive to algorithmic thought, and frankly, I find shiny things like functional programming and anonymous data structures to be too elegant to avoid. And you can't really post snippets, since anything non-trivial will take a page or two of code. So you, guess what, just point to online docs. Idioms? No interesting java idioms exist. Another problem the java community has is the "100% pure java" mentality. Essentially, there is no CJAN (correct me if I'm wrong) full of awesome java jar files to do various tasks, so everyone ends up using the standard API. Innovation is shot, since the first thing you think is "Darn, no function to do that. I guess I'll give up".
In all, I guess I'm saying that the language is so monolithic, a community around it won't contribute to the knowledge of those involved as much. I know, if I really wanted, I could make some popular modules on CPAN and change the way people do work. In java? Unlikely. I don't think there is a place in java for a programmer to evolve; he is only a consumer of Sun API's. So, all hope of a java community is doomed because of the monolithic nature of the language. We have greater hope for communities around Perl (us!), Lisp, Python, Ruby, and the other more flexible languages that the Java zealots often frown upon. Why? Because they are more fun. I think you will find a lot of people saying they like java, but very few people being able to say why they like it (in a convincing way). They think Java was the first that invented a way of XYZ (example OOP, clean GUI programming, etc), when in reality, it is not the first, nor is it the best.

"Whole Perl/Tk apps can take 30 minutes to write. Swing?"

"I could make some popular modules on CPAN and change the way people do work. In java?"

"few people being able to say why they like it (in a convincing way)"

Java forces me to be organized in terms of encapsulation. I like the fact I have permissions protecting me from stupid coworkers doing stupid things. In perl, it's a case of don't do that, or you'll hurt yourself. When someone hurts themselves at work, we lose money. I'd rather not encounter someone wet behind the ears doing stupid things and then have to deal with it. You can't do odd things like declare things public, then make them private. If the interface is designed to have something public, it's public and that's it. Don't undermine the design by changing it. If you don't do strange JINI things, it's really cross compatible. I work on my mac and unix boxes at work and deploy to a windows desktop with minimal testing. There are bugs in the various JVM's, but they are rare. I like the fact it's also verbose.
I don't have to worry about someone not understanding ($#{$x[0]}; pm randomly inserted the striked out part.) what I've written, and vice versa.

Regarding "stupid coworkers", I prefer to either educate them or not work with them. I consider myself a professional; I expect the people who work with me to act like ones as well. "Swing is the same way". No, it's much less productive. Try a complex layout in Swing, and the same in Perl/Tk. The Perl/Tk code will be 5x shorter (or more) and you can write it in much less time. I have spent literally hours fighting with the horrible evils (and horrible syntax) of "GridBagLayout". I do agree that encapsulation is a good thing, but I don't like my hand forced. Sometimes, objects are not needed, and what you really have is code and a datastructure. Or a datastructure of objects. Objects everywhere is usually overkill and undermines simplicity, often making the code *more* complex. I am very disciplined as a coder, and I prefer to enforce my own disciplines. I do not need a language and its totalitarian conventions slapping my hand. Contrary to popular belief, good C code can be written. It just takes *good* coders. Java allows mediocre coders to appear to be talented because they are hiding behind OOP that allows huge interconnects between hundreds of files. Where a procedural coder can quickly be hit for not having a design, an OO java coder can usually say "of course I have design, look, UML!" and the design is often just a bunch of random objects. And that's PC. Anyhow, long story short, Java and encapsulation don't make good coders. Good coders write well in all languages. Java is just decent at keeping bad coders from really showing it, and I think that's more dangerous than broken code. You don't know who in your organization is good and who is bad.

Updated: Grammatical cleanups and Ant comment at end
Updated: Gramatical cleanup's and Ant comment at end This is awkward because I agree with much of what sporty and flyingmoose have said on the topic, but I think the dichotomy of the two languages is best seen at the enterprise level. Perl suffers from lack of enterprise level acceptance for pretty much the same reasons that Java is accepted. Never the twain shall meet? Possibly, though I was pleased to see that Java 1.5 has finally been released as I believe it will be much more developer friendly.. and besides the regex capability is now much more Perlish. But on a much more interesting note for PM'ers is the work the JBoss group did to PHP's Post-Nuke. Seems to me a marriage of Perl with J2ee would good advertisement for both languages While I've seen the Java zealotry that was mentioned in the post I believe it's just simply the "Java is my hammer" effect, as most good developers that I know weigh the tool to the job without predjudice.. ( ok just a little *g* ). The fact is in terms of maturity Java is still young in comparison to Perl or others. This youth explains the type/quality of existing documentation and support. In regards to ..there is no CJAN comment. The fact is that Java builds in the namespace inherently. It is only the limited number of quality contributions that limits it at this point. Concerning the time required to develop or present Java problems in a fashion more conducive to an algorithmic mind, much of this has to do with the confusion around and in the wasteland of Java IDE's. From what I've seen comming from the developers that have moved into the eclipse IDE I have seen the quality ease of dialoge improving at a fast pace, as much of what use to be just enviornmental issues are more coherently and easily relagated to 'plug-ins'. (Gotta say though that ant is a poor excuse for a job that is better done by Perl) -just another .02 Enterprise solutions = App servers? If so, I don't think we need those. Those are more of a marketing trend. 
500MB-1GB memory behemoths that are very slow, finicky, and notoriously painful to work on. Too many layers of middle-ware, IMHO... I am excited about the Java 1.5 features (and the trend to make it more functional/friendly -- so when I use it I don't get mad) but some platforms don't get good ports anymore (evil SCO, etc) and unfortunately where I work, we must continue to support those. It would be cool if these syntactic and grammatical features (which are not OS-specific) weren't written into the JVM, but as modules that could be supported under any JVM. Of course, this is a pipe dream. Anyhow, Perl is not my only hammer. C and C++ are also frequent hammers of mine. But honestly, Perl isn't just a hammer. It's a whole tool-case. Java is more like a drinking straw than a hammer, and well... there are few apps I have seen that are written *BEST* in java. Maybe I can explain it this way -- if a language is good for both high-level and low-level programming then I'll like it. Java is high-level. Assembler is low-level. This is why I stay away from Java and Assembler as much as possible. I want both, and I don't want a language that fights me.

Interesting.. I did the same thing here, creating a Perl/Ant build and deploy framework. Especially when it came down to manipulating the descriptors, it was the best tool for the job. It's been working like clockwork for the last couple of years. I was involved in the zvmOS/linux/WebSphere project testing here.. (Putting Linux/WebSphere on the mainframe). It's not ready quite yet, but I'm thinking if they'd have gone with a 2.6 linux kernel it would have been a winner. This is where utilization of an appserver starts to make sense. Popping Appservers on line (on demand) just by imaging a VM allows all the encapsulation to pay off. By virtue of just changing a few parameters you've scaled on (or off) your entire delivery tier.
I don't think you could just do that with mod-perl applications without having to dink with a ton of properties to allow intercommunication with the newly created Server.

Having said that, java puts off a lot of people. Java, while it can perform, is very verbose. You cannot do terse things like complex sorting in, say, 3 lines. In java, you create a comparator, and then you pass it to a sort function that does not accept anything but basic arrays. While creating the comparator allows for reuse and whatnot, it takes time to write. With perl and some other languages, you can express these things in fewer characters while losing only some of the readability -- but that's something one overcomes with experience. If I'm writing, say, something that deals with some complex form of matrix manipulations on various data, in perl, because it's not bound by data types like java, you can write something and debug quickly. Taking that solution and translating it to java could be argued to be faster than developing it in java alone.

When people have questions on perlmonks, I doubt that people spend more than 10 minutes writing something down on the norm.

Perl is like that. Doing a map on keys of a hash is really short and sweet. If you do something wrong, it's obvious to those better in perl, and is quickly fixable. In java, you have to compile and test. Lastly, because it's compiled, it takes time to deal with things. I would equate the tediousness of dealing with compiling a bunch of sources to setting up and deploying someone else's broken mod_perl. It takes time to deal with. It's not hard, but plain tedious. I think PM is a place where people can exchange ideas on algorithms in a convenient way. Javajunkies fails at this. The java questions I tend to have are not because I don't know how to do neat tricks in java. There are few to be had. It's because of bad documentation, ala Apache Axis and using HTTP auth using dynamic proxies.
It wound up being 1 line of java I didn't know and was documented nowhere. With perl, there are lots of tricks that can save time (writing or executing). Unfortunately, in perl, it's equally easy to trip up, i.e. $x=/(abc)/; if($1) {... } type errors. It doesn't make perl bad. It makes it more difficult, but that's because it's just that expressive. Javajunkies may not take off like perlmonks has (over 4 years going?). But I've been wrong before. :)

Is that why Java is the most popular language on the planet?

You cannot do terse things like complex sorting in say, 3 lines.

I sense that you have no intention in being funny, but such "terse things" is exactly what turns off many people to Perl. You need to do your homework.

Is that why Java is the most popular language on the planet?

Define "popular". I'd bet there are far more lines of COBOL out there. Probably Fortran, too, which is still actively used for heavy number-crunching in scientific applications (do you have any idea how fast a Fortran program running on MS-DOS 6 goes on a GHz system?). I sense that you have no intention in being funny, but such "terse things" is exactly what turns other people off to Java. Not just Perl programmers, either, but a lot of C++ diehards.

----I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident. -- Schemer

: () { :|:& };:

Note: All code is untested, unless otherwise stated

Is that why Java is the most popular language on the planet?

Having said that, java puts off a lot of people.
I sense that you have no intention in being funny, but such "terse things" is exactly what turns off many people to Perl. You need to do your homework.

You cannot do terse things like complex sorting in say, 3 lines.

Please, don't read more into my posts than what I've literally said. If I wanted to say all, or 40%, or some specific number, I would have. This, to me, is like saying that contractions are more efficient, but that's precisely why people don't like them. Computer languages are always going to be more terse than the exact descriptions of what the code would do. What's wrong with maximising what you can do with X lines of code, if that code's going to require understanding anyway? I realize you may be talking about this in the hypothetical, that this doesn't turn you off to Perl -- but you say it as though you sympathize. Can you explain this to me?

-----------------------
You are what you think.

I think that the idea behind Javajunkies is commendable, though I fear that the forums from Sun and JavaRanch may be leaving it with little energy. I felt that Java was an abomination for the longest time, but today I will limit myself to regarding EJBs as an abomination, checked exceptions as the result of some sort of psychotic episode, and the rest of the Java platform as tolerable to good. It seemed that Java was trying hard to do everything, and thus ended up being the wrong tool for every job. Since about version 1.3, though, it has improved very much, and I like the open-source nature of many major Java projects, particularly Eclipse, JUnit, and Ant. Java's performance has gotten much better over the years, too. I have less and less bad to say about it as time goes by, and I'll take Java over VB any day of the week. Besides, I suspect that Java must have something going for it, since the humble Xah Lee seems to dislike it almost as much as he dislikes Perl. Perl still enjoys Most Favored Language Status with me, of course.
Java still seems quite poopy and artless by comparison. ;)

A compiled Java file (.class) is just bytecode to be executed by the JVM. And Perl, with its complex syntax, when parsed also produces bytecode, which is then optimized and executed. But we can't forget that Perl is 10 times faster to run than Java, whether or not you count the parsing time of a Perl source, and whether or not you count the compilation time of a Java file. So, Java's approach of compiling its code before executing it is, let's say, in theory good for making things faster. But in practice we can easily see that the "fast" point was forgotten in Java. Java is much more about OO than anything else, and Perl is much more about getting things done. Graciliano M. P. "Creativity is the expression of the liberty."

And I agree with your point, Java is more about large scale systems. Well, Java has its good points, and Perl too; we just need to know what is best for what, since everything has its good and bad side.

But how would I feel upon discovering the Cabal conspiracy to hide the brutal truth: perlmonks just looks like it uses Everything; really it's written in a much-enhanced version of the HQ9+ language.

Well, if I'm looking for Java support I'd like to know that the people behind the machinery can at least manage a web site utilizing the language. It's a matter of credibility.

I don't see how that matters. If you were looking for Fortran help, would you look for a website written in Fortran? There are plenty more participants capable of providing the help you need beyond those that wrote the website.
otTcpEndpoint Struct Reference

API > IPv6 Networking > TCP > TCP

This structure represents a TCP endpoint.

#include <include/openthread/tcp.h>

This structure represents a TCP endpoint. A TCP endpoint acts as an endpoint of a TCP connection. It can be used to initiate TCP connections, and, once a TCP connection is established, send data to and receive data from the connection peer. The application should not inspect the fields of this structure directly; it should only interact with it via the TCP API functions whose signatures are provided in this file.

The documentation for this struct was generated from the following file:

- include/openthread/tcp.h
how to start working in netbeans?

Aliasing means that more than one reference is tied to the same object, as in the preceding example. The problem with aliasing occurs when someone writes to that object. If the owners of the other references aren't expecting that object to change, they'll

mehr pateesa gud keep it up & thanks for sharing

Attention Students: You don't need to go to any other site for this assignment/GDB/Online Quiz solution, because all discussed data of our members in this discussion are going from here to other sites (other sites' admins posted with fake IDs at their sites :p ). You can judge this at other sites yourself. So don't waste your precious time with different links. Thanks

Mehr Pateesa for sharing, copy and paste from...

Please discuss here about this assignment. Thanks. Our main purpose here is discussion, not just the solution. We are here with you hand in hand to facilitate your learning, and we do not appreciate the idea of copying or replicating solutions.

kindly upload on mediafire.. can't download from this site.

// CS508
import java.lang.Thread;

public class CS508
{
   public static void main( String[] args )
   {
      System.out.println();

      // create each thread with a new targeted runnable
      Thread thread1 = new Thread( new SimpleThread( "pc130200019" ) );
      Thread thread2 = new Thread( new SimpleThread( "H. M. Imran" ) );
      Thread thread3 = new Thread( new SimpleThread( "Student1" ) );
      Thread thread4 = new Thread( new SimpleThread( "Student2" ) );
      Thread thread5 = new Thread( new SimpleThread( "Student3" ) );
      Thread thread6 = new Thread( new SimpleThread( "Student4" ) );
      Thread thread7 = new Thread( new SimpleThread( "Student5" ) );

      System.out.println();

      // start threads and place in runnable state
      thread1.start(); // invokes run method
      thread2.start(); // invokes run method
      thread3.start(); // invokes run method
      thread4.start(); // invokes run method
      thread5.start(); // invokes run method
      thread6.start(); // invokes run method
      thread7.start(); // invokes run method

      System.out.println();
   } // end main
} // end class CS508

______________________________________

// SimpleThread class sleeps for a random time from 0 to 5 seconds
import java.util.Random;

public class SimpleThread implements Runnable
{
   private final int sleepTime; // random sleep time for thread
   private final String taskName; // name of task
   private final static Random generator = new Random();

   public SimpleThread( String name )
   {
      taskName = name; // set task name

      // pick random sleep time between 0 and 5 seconds
      sleepTime = generator.nextInt( 5000 ); // milliseconds
   } // end SimpleThread constructor

   // method run contains the code that a thread will execute
   public void run()
   {
      try // put thread to sleep for sleepTime amount of time
      {
         System.out.printf( "%s will sleep for %d milliseconds.\n",
            taskName, sleepTime );
         Thread.sleep( sleepTime ); // put thread to sleep
      } // end try
      catch ( InterruptedException exception )
      {
         System.out.printf( "%s %s\n", taskName,
            "terminated prematurely due to interruption" );
      } // end catch

      // print task name
      System.out.printf( "%s thread has finished\n", taskName );
   } // end method run
} // end class SimpleThread

not compiling. saying CS508 should be a public class, while it already is. what can i do?

which software will be used to run the program??

net beans.
but u can run java files from cmd too.. just set ur JDK environment variable.
Some purpose of hosting phishing sites. We reached out to Nic.at several times regarding these issues, but Nic.at refused to take action against the malicious domains. In accordance with Spamhaus' SBL Listing Policy, we then issued a listing of Nic.at IP space for providing Spam Support Services. That same day we received a statement from Nic.at telling us that the reported domain names had been suspended. Finally there was some good news for the internet, and the phishers moved away from ccTLD .at. They tried other TLDs, but they were quickly shut down, and not long after leaving the .at ccTLD the Rock Phish gang faded away completely.

Since that time, it had been fairly quiet in the .at zone in terms of abuse, at least until the end of 2014, when we started to see miscreants register new domain names within the .at namespace again. The story is almost the same as in 2007: miscreants registering domain names for exclusively malicious purposes. This time, instead of hosting phishing content, these domains are being used solely to provide DNS resolution for botnets. We call this "malware DNS hosting." To this end, they are hijacking modems and routers around the world and installing their own DNS servers that are then configured to resolve and service these botnet domains. Typically this is for botnets such as Zemot (a click fraud bot) or ebanking trojans such as KINS and Gozi. Below are some sample domains that are actively being used for malware DNS hosting at the time of writing. Note that the A-records point to end-user IP address space, meaning that these are all hijacked routers/modems or otherwise compromised devices.

$ dig +norec +noqu jeteligold.at @d.ns.at NS

;; AUTHORITY SECTION:
jeteligold.at.          10800   IN      NS      bb.jeteligold.at.
jeteligold.at.          10800   IN      NS      dd.jeteligold.at.
jeteligold.at.          10800   IN      NS      cc.jeteligold.at.
jeteligold.at.          10800   IN      NS      aa.jeteligold.at.

;; ADDITIONAL SECTION:
aa.jeteligold.at.       10800   IN      A       197.45.142.102
bb.jeteligold.at.       10800   IN      A       188.136.148.242
cc.jeteligold.at.       10800   IN      A       41.216.211.170
dd.jeteligold.at.       10800   IN      A       78.158.162.235

$ dig +norec +noqu uhilod.at @d.ns.at NS

;; AUTHORITY SECTION:
uhilod.at.              10800   IN      NS      aa.uhilod.at.
uhilod.at.              10800   IN      NS      dd.uhilod.at.
uhilod.at.              10800   IN      NS      cc.uhilod.at.
uhilod.at.              10800   IN      NS      bb.uhilod.at.

;; ADDITIONAL SECTION:
aa.uhilod.at.           10800   IN      A       77.245.0.47
bb.uhilod.at.           10800   IN      A       168.187.148.108
cc.uhilod.at.           10800   IN      A       180.92.156.171
dd.uhilod.at.           10800   IN      A       185.47.49.121

$ dig +norec +noqu hjll.at @d.ns.at NS

;; AUTHORITY SECTION:
hjll.at.                10800   IN      NS      cc.hjll.at.
hjll.at.                10800   IN      NS      dd.hjll.at.
hjll.at.                10800   IN      NS      aa.hjll.at.
hjll.at.                10800   IN      NS      bb.hjll.at.

;; ADDITIONAL SECTION:
aa.hjll.at.             10800   IN      A       77.245.0.47
bb.hjll.at.             10800   IN      A       168.187.148.108
cc.hjll.at.             10800   IN      A       180.92.156.171
dd.hjll.at.             10800   IN      A       185.47.49.121

But this is merely the tip of the iceberg. Since January of this year we have seen many such .at domains, all being used for the same purpose:

2015-08-20 12:50:10  rikklacrt.at     Malware DNS
2015-08-16 09:02:54  serverweb.at     Malware DNS
2015-08-14 09:52:37  zartrusrokl.at   Malware DNS
2015-08-02 09:17:49  jeteligold.at    Malware DNS
2015-07-25 08:38:48  zyzaeloft.at     Malware DNS
2015-07-14 06:45:32  dfuktilor.at     Malware DNS
2015-07-04 08:15:16  dirrolkh.at      Malware DNS
2015-07-01 09:46:48  metanet.at       Malware DNS
2015-06-25 05:54:48  gilkolt.at       Malware DNS
2015-06-20 10:33:32  deorzae99.at     Malware DNS
2015-06-07 07:52:44  kilofrogs.at     Malware DNS
2015-06-04 06:47:33  hjll.at          Malware DNS
2015-05-30 20:05:53  dudiklor.at      Malware DNS
2015-05-30 09:07:15  rekmilk.at       Malware DNS
2015-05-28 06:10:19  fgfj.at          Malware DNS
2015-05-22 10:37:46  uhilod.at        Malware DNS
2015-05-18 07:51:33  rhzq.at          Malware DNS
2015-05-15 11:07:59  wxj.at           Malware DNS
2015-05-15 08:41:01  lukpin.at        Malware DNS
2015-05-14 07:28:50  zorjbneon.at     Malware DNS
2015-05-13 12:40:01  wzq.at           Malware DNS
2015-05-06 17:25:57  geokcha.at       Malware DNS
2015-05-02 15:40:04  flurilk.at       Malware DNS
2015-05-01 14:38:37  deosli.at        Malware DNS
2015-04-25 07:02:13  mizare.at        Malware DNS
2015-04-21 12:53:25  qlj.at           Malware DNS
2015-04-20 07:34:25  cormk.at         Malware DNS
2015-04-18 08:06:56  xwz.at           Malware DNS
2015-04-04 10:49:02  qwj.at           Malware DNS
2015-04-04 10:36:20  uwpi.at          Malware DNS
2015-03-30 12:17:02  zjw.at           Malware DNS
2015-03-26 09:23:58  techost.at       Malware DNS
2015-03-24 12:00:33  qjq.at           Malware DNS
2015-03-13 14:11:41  maxdns.at        Malware DNS
2015-03-11 09:37:01  gogot.at         Malware DNS
2015-03-09 11:28:02  qxq.at           Malware DNS
2015-03-08 09:14:20  xqk.at           Malware DNS
2015-03-05 08:19:36  pabla.at         Malware DNS
2015-01-09 14:00:40  uberhosting.at   Malware DNS
2014-12-26 09:46:42  webcore.at       Malware DNS
2014-12-26 09:46:36  wqj.at           Malware DNS
2014-11-12 12:17:13  keyhost.at       Malware DNS

Nic.at is one of the very few ccTLDs/TLDs that does not reveal the name of the registrar of a domain name, hence it is not possible to report malicious domain names directly to the registrar. Fortunately, Nic.at introduced an API shortly after our initial dispute that allows a reporter to reach out to the registrar. One still has no visibility as to which .at domain name has been registered through which registrar, but at least you have a possibility to reach them through Nic.at's API. Unfortunately this works very poorly, as there is no guarantee that anyone at that domain's registrar takes care of, or even reads, reports that are sent - and there is no way to follow up with them. Contacting Nic.at about this with an appeal to fix these abuse problems will get one nowhere:

Dear Mr. X,

I am referring to your e-mails below. [...] Regarding the legal situation of nic.at: Being located in Salzburg, nic.at is subject to Austrian law. The Austrian Supreme Court clearly stated in various decisions that nic.at as the registry cannot be held liable for the content of a website. Only the domain itself is the subject of the contract between nic.at and the domain holder.
    Therefore it is impossible for us to withdraw or lock a domain just on the request of
    a private company only referring to the content of a website and without a court
    order applicable in Austria, as nic.at would face full liability against the domain
    holder. Reasons for nic.at to withdraw a domain according to the terms and conditions
    are wrong holder data, non-payment, non-working name servers, a court judgement or
    the violation of third parties' rights through the domain itself, but not through the
    content.

    Below we forwarded you the contact data of the responsible registrar for the named
    domain and ask you kindly to contact this company for further activities.

    Best regards
    X X
    General Counsel
    nic.at

We know that most of the malicious domain names shown above are registered through a German-based registrar called Key-Systems. We have contacted them and outlined the problem. While some of the reported domain names have been suspended by Key-Systems, the registrar seems to have recommended that their customer move the domain names to a different registrar / reseller.

What we are now seeing within ccTLD .at is ridiculous: several registrars, mostly German-based, are moving malicious domain names around between each other. Once you report a malicious domain name to one of these registrars, they will just transfer it to a different registrar. Of course you won't notice that, because Nic.at does not reveal the registrar's name in their whois system. So the only thing you see is that the domain name is still active even many weeks after your abuse report. If you report the domain name again through Nic.at's API, the abuse report will go to the new registrar and the miscreants will move the domain to a different registrar again. It is a cat-and-mouse game, and Nic.at seems to be unable or unwilling to take effective action against the abuse of their domain name space.
By "their domain name space" we really mean the domain name space belonging to the Austrian nation and its people and companies. We at Spamhaus are sad to see that more than eight years after dealing with the Rock Phish gang at Nic.at, the situation hasn't changed. Nic.at has not made essential changes to their policies in order to fight cybercrime. While the rules allow revoking delegations in the case of an instruction from a competent authority, to our knowledge no competent authority capable of instructing Nic.at to revoke the delegations of domains obviously registered for exclusively malicious purposes has been established.

In a 2007 document, Nic.at suggests that the "solution" is: "If we receive a proof of wrong domain holders data, we could withdraw domain according to our T&C." But this cannot work in practice. As discussed in more detail below, registration data of malicious domains are either invalid - but proving that could well be a lengthy and labor-intensive proposition (who would do it?) that can exceed the domain lifetime expected by the miscreant - or they refer to real, innocent persons whose credentials were stolen. In contrast, the malicious nature of a domain is typically assessed by security researchers within minutes of its first appearance on the Internet, thanks to a multitude of technical indicators.

Therefore, as a matter of fact, today Nic.at continues to refuse to suspend malicious domain names. At the same time, Nic.at does not give the domain registrars the authority and permission to suspend malicious domain names, nor does it provide identification of those registrars. The result is that miscreants have "bulletproof" domains to control their botnets, provided by Nic.at.

It gets worse: Nic.at is not the only registry that is suffering from these abuse problems.
DENIC, which is the provider of the German ccTLD .de, also has a weak registrar agreement in place and provides insufficient information on their Whois gateway - again, not revealing the sponsoring domain name registrar - and is hence being heavily abused by spammers and phishers recently. Below is a list of recent spamming, phishing and botnet domains that have been registered in DENIC's ccTLD .de space:

    2015-07-30 20:33:53  moncler-online-shop.de                 Fake product domains
    2015-07-28 15:44:55  radio-def.de                           Malware C&C
    2015-07-15 06:38:54  ssl-pp-authentifizierungsverfahren.de  Phishing domain
    2015-07-06 06:25:04  diazepamrezeptfrei.de                  Spammer domain (pillz gang)
    2015-07-05 21:38:29  viagrakaufenonline.de                  Spammer domain (pillz gang)
    2015-06-25 19:11:16  verifizierung-kundendienst.de          Phishing domain
    2015-06-25 19:10:47  paypal-datenabgleich.de                Phishing domain
    2015-06-16 09:29:43  postbank-zentrale.de                   Phishing domain
    2015-06-14 14:04:07  kontoschutz-ssl-verfahrensabgleich.de  Phishing domain
    2015-05-22 12:57:01  archimagazine.de                       Italian spammer gang
    2015-05-21 14:21:37  paypal-kundenverifizierung.de          Phishing domain
    2015-05-21 14:21:25  paypal-sicherer.de                     Phishing domain
    2015-05-21 14:21:18  paypal-sicherheitsservice.de           Phishing domain
    2015-05-21 14:20:24  paypal-verifizieren.de                 Phishing domain
    2015-05-21 14:20:21  paypal-authentifizierung.de            Phishing domain
    2015-05-09 08:16:52  pantozol40mg.de                        Spammer domain (pillz gang)
    2015-05-09 08:16:52  bisoprolol5mg.de                       Spammer domain (pillz gang)
    2015-05-09 08:16:52  doxycyclin100.de                       Spammer domain (pillz gang)
    2015-05-09 08:16:52  torasemid10mg.de                       Spammer domain (pillz gang)
    2015-05-09 08:16:52  mirtazapin15mg.de                      Spammer domain (pillz gang)
    2015-05-09 08:16:52  azithromycin500.de                     Spammer domain (pillz gang)
    2015-05-09 08:16:52  prednisolon20mg.de                     Spammer domain (pillz gang)
    2015-05-09 08:16:52  tadalafil-kaufen.de                    Spammer domain (pillz gang)
    2015-05-09 08:16:51  tabmd.de                               Spammer domain (pillz gang)
    2015-05-09 08:16:51  apotheketop.de                         Spammer domain (pillz gang)
    2015-05-09 08:16:51  ramilich5mg.de                         Spammer domain (pillz gang)
    2015-05-09 08:16:51  amlodipin5mg.de                        Spammer domain (pillz gang)
    2015-05-09 08:16:51  gesundeliebe.de                        Spammer domain (pillz gang)
    2015-05-09 08:16:51  finasterid1mg.de                       Spammer domain (pillz gang)
    2015-05-09 08:16:51  omeprazol40mg.de                       Spammer domain (pillz gang)
    2015-05-09 08:16:51  prednisolon20.de                       Spammer domain (pillz gang)
    2015-05-09 08:16:51  kaufen-viagra69.de                     Spammer domain (pillz gang)
    2015-05-09 08:16:51  kaufentadalafil.de                     Spammer domain (pillz gang)
    2015-05-09 08:16:51  pantoprazol40mg.de                     Spammer domain (pillz gang)
    2015-05-08 12:37:38  potenzmittelapotheke24.de              Spammer domain (pillz gang)
    2015-05-03 08:34:31  flirtfair.de                           Spammer domain
    2015-05-03 08:34:31  treffpunkt69.de                        Spammer domain
    2015-05-03 08:34:31  sexpartnerclub.de                      Spammer domain
    2015-05-03 08:34:31  images-flirtfair.de                    Spammer domain
    2015-05-03 08:34:31  static-flirtfair.de                    Spammer domain
    2015-04-28 08:52:32  hochzeit-im-garten.de                  Snowshoe spam
    2015-04-28 08:52:32  it-loesungen-lange.de                  Snowshoe spam
    2015-03-28 07:44:25  meine-db-aktualisierungskonto.de       Phishing domain
    2015-02-27 08:51:01  bekanntgabe-service.de                 Phishing domain
    2015-02-27 08:50:46  kundeninformation-service.de           Phishing domain
    2015-02-27 08:50:27  consumerinformation.de                 Phishing domain
    2015-02-23 13:43:09  sicherheit-veriifizierung.de           Phishing domain
    2015-02-10 12:26:54  kundendienst-commerzbanking.de         Phishing domain
    2015-02-02 08:23:21  abcnyx98cz.de                          Neurevt C&C

Looking at the phishing domains that have been registered within ccTLD .de in the first half of 2015, it is interesting to see that the phishers are not only abusing ccTLD .de to target PayPal customers but also to target customers of certain German banks, such as Postbank and Commerzbank. So, phishers are weaponizing Germany's own internet infrastructure (in this case the ccTLD .de) to harm German citizens - yet DENIC refuses to take any action against offensive domain names.
We can imagine how frustrating this is both for the German citizens who are victims of these phishing attacks and for the affected financial institutions in Germany. The financial losses from these phishing fraud domains are all too real. We have contacted DENIC several times regarding these abuse problems. Unfortunately, their response was nearly exactly the same as the one we got from Nic.at:

    Hello,

    DENIC is only responsible for the registration of domains directly under the Top
    Level Domain (TLD) .de. It is the domain holders who are responsible for their
    individual domains as well as the contents and services that are available through
    them or processed by them. It is thus never possible for DENIC to be able to find
    out directly who is the source of spam mails or hacker attacks. DENIC is not able
    to block them, nor is it able to take any further steps. For further information,
    please visit our website at

    Mit freundlichen Grüßen / Kind regards
    --
    Business Services

    DENIC eG
    Kaiserstraße 75-77
    60329 Frankfurt am Main
    GERMANY
    Fon: +49 69 XXX

Taking a look at DENIC's Domain Terms and Conditions, the statement made by DENIC appears in a somewhat strange light:

    § 3 Duties of the Domain Holder
    (1) In submitting the application for registration of a domain, the Domain Holder
    gives an explicit assurance. [...]

    § 7 Termination
    [...]
    (2) DENIC is only permitted to terminate the contract on substantial grounds. These
    grounds include, in particular, any case in which:
    [...]
    d) the registration of the domain for the Domain Holder manifestly infringes the
    rights of others or is otherwise illegal, regardless of the specific use made of
    it; or
    [...]
    f) the data supplied to DENIC regarding the Domain Holder or the Administrative
    Contact is incorrect; or
    [...]
The Terms and Conditions allow DENIC to terminate a domain name if it "manifestly infringes the rights of others or is otherwise illegal" (§7, 2d) or if "the data supplied to DENIC regarding the Domain Holder or the Administrative Contact is incorrect" (§7, 2f). These two provisions (especially the term "otherwise illegal", which is pretty generic and hence gives DENIC a big scope of interpretation) appear to be quite solid. But having a look at recent fraudulent registrations within the ccTLD .de, the situation looks a bit different, e.g. ssl-pp-authentifizierungsverfahren.de - which was actually a phishing domain targeting PayPal.

Checking the registrant's email address (hans-bader@dermails.net) reveals that the domain name (dermails.net) doesn't have an MX record, so the registrant is not able to receive any email. But it actually gets worse: the domain name (dermails.net) is not even registered:

    No match for "DERMAILS.NET".
    >>> Last update of whois database: Wed, 22 Jul 2015 10:58:01 GMT <<<

The situation we have here appears to be pretty clear: DENIC is apparently doing no validation and verification (automated or otherwise) of the data provided by the registrant. This opens big doors for spammers, malware coders and botnet operators to abuse the German domain name space. Besides the fact that the data provided by the registrant is incorrect and hence violates the Terms and Conditions of DENIC, the domain name can also be treated under §7, 2d ("or is otherwise illegal"), unless identity theft is legal in Germany (which we really doubt).

Besides the fact that many fraudulent domain name registrations we see are committed using incorrect registrant data, we also see a trend, especially in the ccTLD .de zone, of stealing the identity of someone else. Cybergangsters are registering malicious domain names using a stolen identity, impersonating someone else.
For example potenzmittelapotheke24.de (a pill domain that was being heavily spammed out by the Slenfbot botnet recently). Whois can be found here:

A short search on Google reveals that the person who has registered this domain name is a painter in Schwerin (Germany). We doubt that a painter is able to run such a large spam botnet operation and, especially, that he would use his real name for this purpose.

We wonder, and are concerned, as to how this innocent individual, who is being victimized by the cybercriminal botnet drug-spamming operation, would ever be able to remove his personal information from the domain registration. It does not seem that any report to DENIC would have any effect. According to DENIC, victims whose identities have been abused for registration of domain names by third parties can request deletion of the domain names by filing a written statement to DENIC. However, this requires that the victim a) is aware that his identity has been abused, and b) is familiar with the domain registration system, so that he can address DENIC and explain the situation to them. In our experience, this isn't the case with most domain names registered using stolen identities. Therefore, DENIC should also take action and look into potentially fraudulent domain registrations when notified by third parties and provided with appropriate evidence.

Nic.at and DENIC try to excuse their inaction on abuse reports with claims that they are not responsible for the content or use of a domain name. These claims seem to have the short-sighted and narrow goal of keeping responsibility away from themselves. Their procedures and regulations seem to be based on the idea of protecting the rights of domain owners and their freedom to publish content on their web sites without being shut down. While that is commendable in legitimate cases, the issue here is not legitimate domains.
Cybercrime domains should not benefit from this kind of protection, as keeping them connected brings immense damage to the Internet at large and is of benefit only to the cybercrime gangs that registered them. It is clear that the procedures and regulations need to be modified in order to take into account the existence of purely malicious domains, identified by security researchers, and to stop the abuse quickly and effectively.

In fact, a registry may act to stop malicious domain names in several ways. The most important mechanism is having a strong registrar agreement / registrant agreement in place, an "Acceptable Use Policy." Many registries create their own registrar agreement, so they can write a comprehensive agreement as long as it aligns with their local legislation and ICANN's policy. It should be noted that though neither Nic.at nor DENIC is directly governed by ICANN policy, both work with and are involved in ICANN and have funded ICANN's operations. Some registries are bound to a registrar / registry agreement that has been set up by the local regulator, for example under telecommunication statutes. For both cases, there are two very good examples of how to deal with abusive customers.

1. The registry of ccTLD .ru (responsible for the domain name space .ru) introduced new terms and conditions in 2011 to battle cybercrime. These allow registrars to suspend a malicious domain name upon receipt of a substantiated petition from an organization indicated by the Coordinator as competent to determine violations on the Internet:

    5.7. The Registrar may terminate the domain name delegation upon the receipt of a
    substantiated petition from an organization indicated by the Coordinator as a
    competent one to determine violations in the Internet, should the petition contain
    information about the domain's information addressing system being used for:

    1. receipt from third parties (users of the system) of confidential information by
    misleading these persons regarding its origin (authenticity) due to similarity of
    the domain names, design or content of the information (phishing);

    2. unauthorized access to third parties' (users, visitors) information systems or
    for infecting these systems with malware or taking control of such software
    (botnet control);
    [...]

2. The Swiss regulator BAKOM (Federal Office of Communications), which is the owner of ccTLD .ch, goes down the same path. BAKOM has recently updated its regulation, which now allows the registry to suspend malicious domain names upon receipt of a request from an organisation recognized by the regulator to deal with cybercrime:

    Art. 15 Blocking of a domain name on suspicion of abuse (translated from the German
    original)

    1 The registry must block a domain name technically and administratively if the
    following conditions are met:

    a. There is reasonable suspicion that the domain name is being used:
    1. to obtain sensitive data by unlawful methods; or
    2. to distribute harmful software.

    b. A body recognized by BAKOM for combating cybercrime has requested the blocking.
https://www.spamhaus.org/news/article/724/ongoing-abuse-problems-at-nic.at-and-denic
Accessing Session Map in the Domain or Service Layer

The Session Map is available in Grails in the Views, TagLibs and the Controllers. That is, it can be directly accessed by the name "session". If the Session Map is required to be accessed in the Service Layer or the Domain layer, such a straightforward approach will not work. In this case, a class which is part of the Spring Framework can be used, which gives the current context, the request attributes and the session. This class, along with HttpSession, has to be imported by issuing the following statements:

    import org.springframework.web.context.request.RequestContextHolder
    import javax.servlet.http.HttpSession

Now, the session variable can be defined in the Service class or Domain method as:

    def session = RequestContextHolder.currentRequestAttributes().getSession()

The session attributes can now be accessed as session.attribute

Hope this helps.

Vivek Krishna
vivek@IntelliGrape.com

Comments:

- Thank you, it was very helpful!

- Hi Ashish, glad to know that the post helped. This is just one of the workarounds. It is not ideal to be accessing session information inside the service, as it binds the service to the Web layer very tightly, which means the service is "servicing" HttpRequests only. If that constraint is not a problem for the application, then there is no harm in using this technique.

- @Shashi: Glad to know that this helped.

- Thanks a lot for posting this wonderful article. Wonder why such critical information is not readily available on the official site or from the authors/creators etc. One has to struggle to unearth such critical details. Thanks again, Vivek.

- Ashish, thanks for the nice article. It saved me a lot of work of passing the whole session object to service classes from controllers.
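For readers curious why RequestContextHolder can find the session without it being passed in: Spring keeps the current request's attributes in a thread-local slot that the web layer populates before invoking service code on the same thread. The mechanism (not Grails itself) can be sketched in Python; all names below are illustrative, not part of any real framework:

```python
import threading

# Stand-in for Spring's RequestContextHolder: a thread-local slot that
# the web layer fills in before handing control to service-layer code.
_holder = threading.local()

def bind_request(session):
    """Called by the 'web layer' once per request, on the request thread."""
    _holder.session = session

def current_session():
    """Called from 'service layer' code running on the same thread."""
    return _holder.session

# The web layer binds the session for this request/thread...
bind_request({"user": "vivek"})

# ...and service-layer code can now reach it without parameter passing.
def service_method():
    return current_session()["user"]

print(service_method())  # -> vivek
```

This also makes the trade-off from the comments visible: the lookup only works on a thread the web layer has populated, which is exactly why the technique ties the service to the web tier.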
http://www.tothenew.com/blog/accessing-session-map-in-the-domain-or-service-layer/comment-page-1/
iCelHPath Struct Reference

Hierarchical path between two points.

    #include <tools/celhpf.h>

Inheritance diagram for iCelHPath.

Detailed Description

Hierarchical path between two points.

Definition at line 45 of file celhpf.h.

Member Function Documentation

- Get current node.
- Render path.
- Distance to goal.
- Get first node.
- Get last node.
- Check if the path can be traversed forward from the current position.
- Check if the path can be traversed backward from the current position.
- Invert path.
- Get next node.
- Get previous node.
- Restart path.

The documentation for this struct was generated from the following file: celhpf.h

Generated for CEL: Crystal Entity Layer 2.1 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api/structiCelHPath.html
Functional Jobs: Software Engineer (Haskell/Clojure) at Capital Match (Full-time)

Overview

CAPITAL MATCH is a leading marketplace lending and invoice financing platform in Singapore. Our in-house platform, mostly developed in Haskell, has in the last year seen more than USD 15 million in business loans processed, with strong monthly growth (current rate of USD 1.5-2.5 million monthly). We are also eyeing expansion into new geographies and product categories. Very exciting times! We have just secured another funding round to build world-class technology as the key business differentiator. The key components include a credit risk engine, seamless banking integration and end-to-end product automation from loan origination to debt collection.

Responsibilities

We are looking to hire a software engineer with a minimum of 2-3 years of coding experience. The candidate should have been involved in the development of multiple web-based products from scratch. He should be interested in all aspects of the creation, growth and operations of a secure web-based platform: front-to-back feature development, distributed deployment and automation in the cloud, build and test automation, etc. Background in fintech, and especially the lending / invoice financing space, would be a great advantage.

Requirements

Our platform is primarily developed in Haskell with an Om/ClojureScript frontend. We are expecting our candidate to have experience working with a functional programming language, e.g. Haskell/Scala/OCaml/F#/Clojure/Lisp/Erlang. Deployment and production are managed with Docker containers using standard cloud infrastructure, so familiarity with Linux systems, command-line environments and cloud-based deployment is mandatory.

Compensation depends on the experience and skills of the candidate.

Get information on how to apply for this position.
Gabriel Gonzalez: Electoral vote distributions are Monoids

I'm a political junkie and I spend way too much time following the polling results on FiveThirtyEight's election forecast. A couple of days ago I was surprised that FiveThirtyEight gave Trump a 13.7% chance of winning, which seemed too high to be consistent with the state-by-state breakdowns. After reading their methodology I learned that this was due to them not assuming that state outcomes were independent. In other words, if one swing state breaks for Trump this might increase the likelihood that other swing states also break for Trump.

However, I still wanted to do the exercise to ask: what would be Hillary's chance of winning if each state's probability of winning was truly independent of one another? Let's write a program to find out!

Raw data

A couple of days ago (2016-10-24) I collected the state-by-state data from FiveThirtyEight's website (by hand) and recorded:

- the name of the state
- the chance that Hillary Clinton would win the state
- the number of electoral college votes for that state

I recorded this data as a list of 3-tuples. Note that some states (like Maine) apportion electoral votes in a weird way:

    probabilities :: [(String, Double, Int)]
    probabilities =
        ...
        , ("Maine"     , 0.852, 2)
        , ("Maine - 1" , 0.944, 1)
        , ("Maine - 2" , 0.517, 1)
        ...
        ]

Maine apportions two of its electoral votes based on a state-wide vote (i.e. "Maine" in the above list) and then two further electoral votes are apportioned based on two districts (i.e. "Maine - 1" and "Maine - 2"). FiveThirtyEight computes the probabilities for each subset of electoral votes, so we just record them separately.

Combinatorial explosion

So how might we compute Hillary's chances of winning assuming the independence of each state's outcome? One naïve approach would be to loop through all possible electoral outcomes and compute the probability and electoral vote for each outcome.
Unfortunately, that's not very efficient since the number of possible outcomes doubles with each additional entry in the list:

    >>> 2 ^ length probabilities
    72057594037927936

... or approximately 7.2 * 10^16 outcomes. Even if I only spent a single CPU cycle to compute each outcome (which is unrealistic) on a 2.5 GHz processor that would take almost a year to compute them all. The election is only a couple of weeks away so I don't have that kind of time or computing power!

Distributions

Fortunately, we can do much better than that! We can efficiently solve this using a simple "divide-and-conquer" approach where we subdivide the large problem into smaller problems until the solution is trivial.

The central data structure we'll use is a probability distribution, which we'll represent as a Vector of Doubles:

    import Data.Vector.Unboxed (Vector)

    newtype Distribution = Distribution (Vector Double)
        deriving (Show)

This Vector will always have 539 elements, one element per possible final electoral vote count that Hillary might get. Each element is a Double representing the probability of that corresponding electoral vote count. We will maintain an invariant that all the probabilities (i.e. elements of the Vector) must sum to 1.

For example, if the Distribution were:

    [1, 0, 0, 0, 0 ... ]

... that would represent a 100% chance of Hillary getting 0 electoral votes and a 0% chance of any other outcome. Similarly, if the Distribution were:

    [0, 0.5, 0, 0.5, 0, 0, 0 ... ]

... then that would represent a 50% chance of Hillary getting 1 electoral vote and a 50% chance of Hillary getting 3 electoral votes.

In order to simplify the problem we need to subdivide the problem into smaller problems.
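Nothing about this representation is Haskell-specific; as a quick cross-check, here is the same 539-slot vector and its sums-to-1 invariant sketched in Python (a plain list stands in for the unboxed Vector):

```python
# A distribution is a length-539 list: index i holds the probability of
# Clinton finishing with exactly i electoral votes (possible totals 0..538).
SIZE = 539

def point_mass(votes):
    """Distribution that is certain of one exact vote count."""
    dist = [0.0] * SIZE
    dist[votes] = 1.0
    return dist

certain_zero = point_mass(0)   # first example above: [1, 0, 0, ...]

half_and_half = [0.0] * SIZE   # second example: 50% of 1 vote, 50% of 3
half_and_half[1] = 0.5
half_and_half[3] = 0.5

# The invariant: every distribution's entries sum to 1.
for dist in (certain_zero, half_and_half):
    assert abs(sum(dist) - 1.0) < 1e-12

print(certain_zero[0], half_and_half[1], half_and_half[3])  # -> 1.0 0.5 0.5
```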
For example, if I want to compute the final electoral vote probability distribution for all 50 states perhaps we can break that down into two smaller problems:

- Split the 50 states into two sub-groups of 25 states each
- Compute an electoral vote probability distribution for each sub-group of 25 states
- Combine the probability distributions for each sub-group into the final distribution

In order to do that, I need to define a function that combines two smaller distributions into a larger distribution:

    import qualified Data.Vector.Unboxed

    combine :: Distribution -> Distribution -> Distribution
    combine (Distribution xs) (Distribution ys) = Distribution zs
      where
        zs = Data.Vector.Unboxed.generate 539 totalProbability

        totalProbability i =
            Data.Vector.Unboxed.sum
                (Data.Vector.Unboxed.generate (i + 1) probabilityOfEachOutcome)
          where
            probabilityOfEachOutcome j =
                    Data.Vector.Unboxed.unsafeIndex xs j
                *   Data.Vector.Unboxed.unsafeIndex ys (i - j)

The combine function takes two input distributions named xs and ys and generates a new distribution named zs. To compute the probability of getting i electoral votes in our composite distribution, we just add up all the different ways we can get i electoral votes from the two sub-distributions.

For example, to compute the probability of getting 4 electoral votes for the entire group, we add up the probabilities for the following 5 outcomes:

- We get 0 votes from our 1st group and 4 votes from our 2nd group
- We get 1 vote from our 1st group and 3 votes from our 2nd group
- We get 2 votes from our 1st group and 2 votes from our 2nd group
- We get 3 votes from our 1st group and 1 vote from our 2nd group
- We get 4 votes from our 1st group and 0 votes from our 2nd group

The probabilityOfEachOutcome function computes the probability of each one of these outcomes and then the totalProbability function sums them all up to compute the total probability of getting i electoral votes.

We can also define an empty distribution representing the probability distribution of electoral votes given zero states:

    empty :: Distribution
    empty = Distribution (Data.Vector.Unboxed.generate 539 makeElement)
      where
        makeElement 0 = 1
        makeElement _ = 0

This distribution says that given zero states you have a 100% chance of getting zero electoral college votes and 0% chance of any other outcome. This
This empty distribution will come in handy later on.Divide and conquer There's no limit to how many times we can subdivide the problem. In the extreme case we can sub-divide the problem down to individual states (or districts for weird states like Maine and Nebraska): - subdivide our problem into 56 sub-groups (one group per state or district) - compute the probability distribution for each sub-group, which is trivial - combine all the probability distributions to retrieve the final result In fact, this extreme solution is surprisingly efficient! All we're missing is a function that converts each entry in our original probabilities list into a Distribution:toDistribution :: (String, Double, Int) -> Distribution toDistribution (_, probability, votes) = Distribution (Data.Vector.Unboxed.generate 539 makeElement) where makeElement 0 = 1 - probability makeElement i | i == votes = probability makeElement _ = 0 This says that if our probability distribution for a single state should have two possible outcomes: - Hillary clinton has probability x of winning n votes for this state - Hillary clinton has probability 1 - x of winning 0 votes for this state - Hillary clinton has 0% probability of any other outcome for this state Let's test this out on a couple of states:>>> toDistribution ("Alaska" , 0.300, 3) Distribution [0.7,0.0,0.0,0.3,0.0,0.0,... >>> toDistribution ("North Dakota", 0.070, 3) Distribution [0.9299999999999999,0.0,0.0,7.0e-2,0.0... This says that: - Alaska has a 30% chance of giving Clinton 3 votes and 70% chance of 0 votes - North Dakota has a 7% chance of giving Clinton 3 votes and a 93% chance of 0 votes We can also verify that combine works correctly by combining the electoral vote distributions of both states. 
We expect the new distribution to be:

- 2.1% chance of 6 votes (the probability of winning both states)
- 65.1% chance of 0 votes (the probability of losing both states)
- 32.8% chance of 3 votes (the probability of winning just one of the two states)

... and this is in fact what we get:

    >>> let alaska = toDistribution ("Alaska" , 0.300, 3)
    >>> let northDakota = toDistribution ("North Dakota", 0.070, 3)
    >>> combine alaska northDakota
    Distribution [0.6509999999999999,0.0,0.0,0.32799999999999996,0.0,0.0,2.1e-2,0.0,...

Final result

To compute the total probability of winning, we just transform each element of the list to the corresponding distribution:

    distributions :: [Distribution]
    distributions = map toDistribution probabilities

... then we reduce the list to a single value, repeatedly applying the combine function, falling back on the empty distribution if the entire list is empty:

    import qualified Data.List

    distribution :: Distribution
    distribution = Data.List.foldl' combine empty distributions

... and if we want to get Clinton's chances of winning, we just add up the probabilities for all outcomes greater than or equal to 270 electoral college votes:

    chanceOfClintonVictory :: Double
    chanceOfClintonVictory =
        Data.Vector.Unboxed.sum (Data.Vector.Unboxed.drop 270 xs)
      where
        Distribution xs = distribution

    main :: IO ()
    main = print chanceOfClintonVictory

If we compile and run this program we get the final result:

    $ stack --resolver=lts-7.4 build vector
    $ stack --resolver=lts-7.4 ghc -- -O2 result.hs
    $ ./result
    0.9929417642334847

In other words, Clinton has a 99.3% chance of winning if each state's outcome is independent of every other outcome. This is significantly higher than the probability estimated by FiveThirtyEight at that time: 86.3%. These results differ for the same reason I noted above: FiveThirtyEight assumes that state outcomes are not necessarily independent and that a Trump win in one state could correlate with Trump wins in other states.
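The same fold of pairwise combines can be sketched outside Haskell as well. The combine step is an ordinary discrete convolution, and a short Python rendering (using only the Alaska and North Dakota probabilities quoted in the post, since the full 56-entry list is elided here) reproduces the numbers from the two-state check:

```python
from functools import reduce

SIZE = 539  # possible Clinton vote totals: 0..538

def to_distribution(probability, votes):
    # A state yields `votes` electoral votes with the given probability,
    # and 0 votes otherwise.
    dist = [0.0] * SIZE
    dist[0] = 1.0 - probability
    dist[votes] += probability
    return dist

def combine(xs, ys):
    # Discrete convolution: the chance of i total votes sums over every
    # split j + (i - j) between the two sub-distributions.
    zs = [0.0] * SIZE
    for i in range(SIZE):
        zs[i] = sum(xs[j] * ys[i - j] for j in range(i + 1))
    return zs

empty = to_distribution(0.0, 0)  # certain 0 votes: the fold's seed

states = [(0.300, 3), (0.070, 3)]  # Alaska, North Dakota
final = reduce(combine, (to_distribution(p, v) for p, v in states), empty)

print(round(final[0], 3), round(final[3], 3), round(final[6], 3))
# -> 0.651 0.328 0.021, matching the combined distribution above
```

With the full probabilities list in place of the two sample states, summing `final[270:]` would give the overall victory probability, mirroring chanceOfClintonVictory.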
This possibility of correlated victories favors the person who is behind in the race. As a sanity check, we can also verify that the final probability distribution has probabilities that add up to approximately 1:>>> let Distribution xs = distribution >>> Data.Vector.Unboxed.sum xs 0.9999999999999994 Exercise: Expand on this program to plot the probability distributionEfficiency Our program is also efficient, running in 30 milliseconds:$ bench ./result benchmarking ./result time 30.33 ms (29.42 ms .. 31.16 ms) 0.998 R² (0.997 R² .. 1.000 R²) mean 29.43 ms (29.13 ms .. 29.81 ms) std dev 710.6 μs (506.7 μs .. 992.6 μs) This is a significant improvement over a year's worth of running time. We could even speed this up further using parallelism. Thanks to our divide and conquer approach we can subdivide this problem among up to 53 CPUs to accelerate the solution. However, after a certain point the overhead of splitting up the work might outweigh the benefits of parallelism.Monoids People more familiar with Haskell will recognize that this solution fits cleanly into a standard Haskell interface known as the Monoid type class. In fact, many divide-and-conquer solutions tend to be Monoids of some sort. The Monoid typeclass is defined as:class Monoid m where mappend :: m -> m -> m mempty :: m -- An infix operator that is a synonym for `mappend` (<>) :: Monoid m => m -> m -> m x <> y = mappend x y ... and the Monoid class has three rules that every implementation must obey, which are known as the "Monoid laws". The first rule is that mappend (or the equivalent (<>) operator) must be associative:x <> (y <> z) = (x <> y) <> z The second and third rules are that mempty must be the "identity" of mappend, meaning that mempty does nothing when combined with other values:mempty <> x = x x <> mempty = x A simple example of a Monoid is integers under addition, which we can implement like this:instance Monoid Integer where mappend = (+) mempty = 0 ... 
and this implementation satisfies the Monoid laws thanks to the laws of addition:(x + y) + z = x + (y + z) 0 + x = x x + 0 = x However, Distributions are Monoids, too! Our combine and empty definitions both have the correct types to implement the mappend and mempty methods of the Monoid typeclass, respectively:instance Monoid Distribution where mappend = combine mempty = empty Both mappend and mempty for Distributions satisfy the Monoid laws: - mappend is associative (Proof omitted) - mempty is the identity of mappend We can prove the identity law using the following rules for how Vectors behave:-- These rules assume that all vectors involved have 539 elements -- If you generate a vector by just indexing into another vector, you just get back -- the other vector Data.Vector.Unboxed.generate 539 (Data.Vector.Unboxed.unsafeIndex xs) = xs -- If you index into a vector generated by a function, that's equivalent to calling -- that function Data.Vector.unsafeIndex (DataVector.generate 539 f) i = f i Equipped with those rules, we can then prove that mappend xs mempty = xsmapppend (Distribution xs) mempty -- mappend = combine = combine (Distribution xs) mempty -- Definition of `mempty` = combine (Distribution xs) (Distribution ys) where ys = Data.Vector.Unboxed.generate 539 makeElement where makeElement 0 = 1 makeElement _ = 0 -- Definition of `combine` =) ys = Data.Vector.Unboxed.generate 539 makeElement where makeElement 0 = 1 makeElement _ = 0 -- Data.Vector.unsafeIndex (DataVector.generate 539 f) i = f i = * makeElement (i - j) makeElement 0 = 1 makeElement _ = 0 -- Case analysis on `j` = * 1 -- makeElement (i - j) = makeElement 0 = 1 | otherwise = Data.Vector.Unboxed.unsafeIndex xs j * 0 -- makeElement (i - j) = 0 -- x * 1 = x -- y * 0 = 0 = | otherwise = 0 -- Informally: "Sum of a vector with one non-zero element is just that element" = Distribution zs where zs = Data.Vector.Unboxed.generate 539 totalProbability totalProbability i = Data.Vector.Unboxed.unsafeIndex xs 
i -- Data.Vector.Unboxed.generate 539 (Data.Vector.Unboxed.unsafeIndex xs) = xs = Distribution xs Exercise: Prove the associativity law for combineConclusion I hope people find this an interesting example of how you can apply mathematical design principles (in this case: Monoids) in service of simplifying and speeding up programming problems. If you would like to test this program out yourself the complete program is provided below:import Data.Vector.Unboxed (Vector) import qualified Data.List import qualified Data.Vector.Unboxed) ] newtype Distribution = Distribution { getDistribution :: Vector Double } deriving (Show)) empty :: Distribution empty = Distribution (Data.Vector.Unboxed.generate 539 makeElement) where makeElement 0 = 1 makeElement _ = 0 instance Monoid Distribution where mappend = combine mempty = empty toDistribution :: (String, Double, Int) -> Distribution toDistribution (_, probability, votes) = Distribution (Data.Vector.Unboxed.generate 539 makeElement) where makeElement 0 = 1 - probability makeElement i | i == votes = probability makeElement _ = 0 distributions :: [Distribution] distributions = map toDistribution probabilities distribution :: Distribution distribution = mconcat distributions chanceOfClintonVictory :: Double chanceOfClintonVictory = Data.Vector.Unboxed.sum (Data.Vector.Unboxed.drop 270 xs) where Distribution xs = distribution main :: IO () main = print chanceOfClintonVictory Functional Jobs: Clojure Engineer at ROKT (Full-time) ROKT is hiring thoughtful, talented functional programmers, at all levels, to expand our Clojure team in Sydney, Australia. (We're looking for people who already have the right to work in Australia, please.) ROKT is a successful startup with a transaction marketing platform used by some of the world's largest ecommerce sites. Our Sydney-based engineering team supports a business that is growing rapidly around the world. 
Our Clojure engineers are responsible for ROKT's "Data Platform", a web interface for our sales teams, our operations team, and our customers to extract and upload the data that drives our customers' businesses and our own. We write Clojure on the server-side, and a ClojureScript single-page application on the frontend. We don't have a Hadoop-based neural net diligently organising our customer data into the world's most efficiently balanced red-black tree (good news: we won't ask you to write one in an interview) — instead, we try to spend our time carefully building the simplest thing that'll do what the business needs done. We're looking for programmers who can help us build simple, robust systems — and we think that means writing in a very functional style — whether that involves hooking some CV-enhancing buzzword technology on the side or not.

If you have professional Clojure experience, that's excellent, we'd like to hear about it. But we don't have a big matrix of exacting checkboxes to measure you against, so if your Clojure isn't fluent yet, we'll be happy to hear how you've been writing functional code in whatever language you're most comfortable in, whether it be Haskell or JavaScript, Common Lisp or Scala. We have the luxury of building out a solid team of thoughtful developers — no "get me a resource with exactly X years of experience in technology Y, stat!"

Get information on how to apply for this position.

Joachim Breitner: Imports

yesterday.↩

I like how in this alignment of <*> and <* the > point out where the arguments are that are being passed to the function on the left.↩

Brent Yorgey: Adventures in enumerating balanced brackets

Since I've been coaching my school's ACM ICPC programming team, I've been spending a bit of time solving programming contest problems, partly to stay sharp and be able to coach them better, but also just for fun.
I recently solved a problem (using Haskell) that ended up being tougher than I thought, but I learned a lot along the way. Rather than just presenting a solution, I’d like to take you through my thought process, crazy detours and all. Of course, I should preface this with a big spoiler alert: if you want to try solving the problem yourself, you should stop reading now!> {-# LANGUAGE GADTs #-} > {-# LANGUAGE DeriveFunctor #-} > > module Brackets where > > import Data.List (sort, genericLength) > import Data.MemoTrie (memo, memo2) > import Prelude hiding ((++)) The problem There’s a lot of extra verbiage at the official problem description, but what it boils down to is this: Find the th element of the lexicographically ordered sequence of all balanced bracketings of length . There is a longer description at the problem page, but hopefully a few examples will suffice. A balanced bracketing is a string consisting solely of parentheses, in which opening and closing parens can be matched up in a one-to-one, properly nested way. For example, there are five balanced bracketings of length : ((())), (()()), (())(), ()(()), ()()() By lexicographically ordered we just mean that the bracketings should be in “dictionary order” where ( comes before ), that is, bracketing comes before bracketing if and only if in the first position where they differ, has ( and has ). As you can verify, the list of length- bracketings above is, in fact, lexicographically ordered.A first try Oh, this is easy, I thought, especially if we consider the well-known isomorphism between balanced bracketings and binary trees. In particular, the empty string corresponds to a leaf, and (L)R (where L and R are themselves balanced bracketings) corresponds to a node with subtrees L and R. So the five balanced bracketings of length correspond to the five binary trees with three nodes: We can easily generate all the binary trees of a given size with a simple recursive algorithm. 
If , generate a Leaf; otherwise, decide how many nodes to put on the left and how many on the right, and for each such distribution recursively generate all possible trees on the left and right.> data Tree where > Leaf :: Tree > Node :: Tree -> Tree -> Tree > deriving (Show, Eq, Ord) > > allTrees :: Int -> [Tree] > allTrees 0 = [Leaf] > allTrees n = > [ Node l r > | k <- [0 .. n-1] > , l <- allTrees ((n-1) - k) > , r <- allTrees k > ] We generate the trees in “left-biased” order, where we first choose to put all nodes on the left, then on the left and on the right, and so on. Since a subtree on the left will result in another opening paren, but a subtree on the right will result in a closing paren followed by an open paren, it makes intuitive sense that this corresponds to generating bracketings in sorted order. You can see that the size- trees above, generated in left-biased order, indeed have their bracketings sorted. Writing allTrees is easy enough, but it’s definitely not going to cut it: the problem states that we could have up to . The number of trees with nodes has 598 digits (!!), so we can’t possibly generate the entire list and then index into it. Instead we need a function that can more efficiently generate the tree with a given index, without having to generate all the other trees before it. So I immediately launched into writing such a function, but it’s tricky to get right. It involves computing Catalan numbers, and cumulative sums of products of Catalan numbers, and divMod, and… I never did get that function working properly.The first epiphany But I never should have written that function in the first place! What I should have done first was to do some simple tests just to confirm my intuition that left-biased tree order corresponds to sorted bracketing order. 
Because if I had, I would have found this:

> brackets :: Tree -> String
> brackets Leaf = ""
> brackets (Node l r) = mconcat ["(", brackets l, ")", brackets r]
>
> sorted :: Ord a => [a] -> Bool
> sorted xs = xs == sort xs

ghci> sorted (map brackets (allTrees 3))
True
ghci> sorted (map brackets (allTrees 4))
False

As you can see, my intuition actually led me astray! n = 3 is a small enough case that left-biased order just happens to be the same as sorted bracketing order, but for n = 4 this breaks down. Let's see what goes wrong: In the top row are the size-4 trees in "left-biased" order, i.e. the order generated by allTrees. You can see it is nice and symmetric: reflecting the list across a vertical line leaves it unchanged. On the bottom row are the same trees, but sorted lexicographically by their bracketings. You can see that the lists are almost the same except the red tree is in a different place.

The issue is the length of the left spine: the red tree has a left spine of three nodes, which means its bracketing will begin with (((, so it should come before any trees with a left spine of length 2, even if they have all their nodes in the left subtree (whereas the red tree has one of its nodes in the right subtree). My next idea was to try to somehow enumerate trees in order by the length of their left spine. But since I hadn't even gotten indexing into the original left-biased order to work, it seemed hopeless to get this to work by implementing it directly. I needed some bigger guns.

Building enumerations

At this point I had the good idea to introduce some abstraction. I defined a type of enumerations (a la FEAT or data/enumerate):

> data Enumeration a = Enumeration
>   { fromNat :: Integer -> a
>   , size :: Integer
>   }
>   deriving Functor
>
> enumerate :: Enumeration a -> [a]
> enumerate (Enumeration f n) = map f [0..n-1]

An Enumeration consists of a size along with a function Integer -> a, which we think of as being defined on [0 .. size-1].
That is, an Enumeration is isomorphic to a finite list of a given length, where instead of explicitly storing the elements, we have a function which can compute the element at a given index on demand. If the enumeration has some nice combinatorial structure, then we expect that this on-demand indexing can be done much more efficiently than simply listing all the elements. The enumerate function simply turns an Enumeration into the corresponding finite list, by mapping the indexing function over all possible indices. Note that Enumeration has a natural Functor instance, which GHC can automatically derive for us. Namely, if e is an Enumeration, then fmap f e is the Enumeration which first computes the element of e for a given index, and then applies f to it before returning. Now, let’s define some combinators for building Enumerations. We expect them to have all the nice algebraic flavor of finite lists, aka free monoids. First, we can create empty or singleton enumerations, or convert any finite list into an enumeration:> empty :: Enumeration a > empty = Enumeration (const undefined) 0 > > singleton :: a -> Enumeration a > singleton a = Enumeration (\_ -> a) 1 > > list :: [a] -> Enumeration a > list as = Enumeration (\n -> as !! fromIntegral n) (genericLength as) ghci> enumerate (empty :: Enumeration Int) [] ghci> enumerate (singleton 3) [3] ghci> enumerate (list [4,6,7]) [4,6,7] We can form the concatenation of two enumerations. The indexing function compares the given index against the size of the first enumeration, and then indexes into the first or second enumeration appropriately. 
For convenience we can also define union, which is just an iterated version of (++).> (++) :: Enumeration a -> Enumeration a -> Enumeration a > e1 ++ e2 = Enumeration > (\n -> if n < size e1 then fromNat e1 n else fromNat e2 (n - size e1)) > (size e1 + size e2) > > union :: [Enumeration a] -> Enumeration a > union = foldr (++) empty ghci> enumerate (list [3, 5, 6] ++ empty ++ singleton 8) [3,5,6,8] Finally, we can form a Cartesian product: e1 >< e2 is the enumeration of all possible pairs of elements from e1 and e2, ordered so that all the pairs formed from the first element of e1 come first, followed by all the pairs with the second element of e1, and so on. The indexing function divides the given index by the size of e2, and uses the quotient to index into e1, and the remainder to index into e2.> (><) :: Enumeration a -> Enumeration b -> Enumeration (a,b) > e1 >< e2 = Enumeration > (\n -> let (l,r) = n `divMod` size e2 in (fromNat e1 l, fromNat e2 r)) > (size e1 * size e2) ghci> enumerate (list [1,2,3] >< list [10,20]) [(1,10),(1,20),(2,10),(2,20),(3,10),(3,20)] ghci> let big = list [0..999] >< list [0..999] >< list [0..999] >< list [0..999] ghci> fromNat big 2973428654 (((2,973),428),654) Notice in particular how the fourfold product of list [0..999] has elements, but indexing into it with fromNat is basically instantaneous. Since Enumerations are isomorphic to finite lists, we expect them to have Applicative and Monad instances, too. First, the Applicative instance is fairly straightforward:> instance Applicative Enumeration where > pure = singleton > f <*> x = uncurry ($) <$> (f >< x) ghci> enumerate $ (*) <$> list [1,2,3] <*> list [10, 100] [10,100,20,200,30,300] pure creates a singleton enumeration, and applying an enumeration of functions to an enumeration of arguments works by taking a Cartesian product and then applying each pair. 
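For readers who want to experiment outside the literate fragments, here is a compressed, self-contained restatement of the pieces so far (the same definitions as above, written as a standalone module sketch — the Functor instance is spelled out instead of derived):

```haskell
-- A finite list represented by its length plus an indexing function.
data Enumeration a = Enumeration
  { fromNat :: Integer -> a
  , size    :: Integer
  }

instance Functor Enumeration where
  fmap g (Enumeration f n) = Enumeration (g . f) n

list :: [a] -> Enumeration a
list as = Enumeration (\n -> as !! fromIntegral n) (fromIntegral (length as))

-- Cartesian product: divMod splits one index into a (left, right) pair.
(><) :: Enumeration a -> Enumeration b -> Enumeration (a, b)
e1 >< e2 = Enumeration
  (\n -> let (l, r) = n `divMod` size e2 in (fromNat e1 l, fromNat e2 r))
  (size e1 * size e2)

instance Applicative Enumeration where
  pure a  = Enumeration (const a) 1
  f <*> x = uncurry ($) <$> (f >< x)

enumerate :: Enumeration a -> [a]
enumerate (Enumeration f n) = map f [0 .. n - 1]
```

One consequence worth noting: size (f <*> x) = size f * size x by construction, which is why indexing into products stays cheap.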
The Monad instance works by substitution: in e >>= k, the continuation k is applied to each element of the enumeration e, and the resulting enumerations are unioned together in order.> instance Monad Enumeration where > return = pure > e >>= f = union (map f (enumerate e)) ghci> enumerate $ list [1,2,3] >>= \i -> list (replicate i i) [1,2,2,3,3,3] Having to actually enumerate the elements of e is a bit unsatisfying, but there is really no way around it: we otherwise have no way to know how big the resulting enumerations are going to be. Now, that function I tried (and failed) to write before that generates the tree at a particular index in left-biased order? Using these enumeration combinators, it’s a piece of cake. Basically, since we built up combinators that mirror those available for lists, it’s just as easy to write this indexing version as it is to write the original allTrees function (which I’ve copied below for comparison):allTrees :: Int -> [Tree] allTrees 0 = [Leaf] allTrees n = [ Node l r | k <- [0 .. n-1] , l <- allTrees ((n-1) - k) , r <- allTrees k ] > enumTrees :: Int -> Enumeration Tree > enumTrees 0 = singleton Leaf > enumTrees n = union > [ Node <$> enumTrees (n-k-1) <*> enumTrees k > | k <- [0 .. n-1] > ] (enumTrees and allTrees look a bit different, but actually allTrees can be rewritten in a very similar style:allTrees :: Int -> [Tree] allTrees 0 = [Leaf] allTrees n = concat [ Node <$> allTrees ((n-1) - k) <*> r <- allTrees k | k <- [0 .. n-1] ] Doing as much as possible using the Applicative interface gives us added “parallelism”, which in this case means the ability to index directly into a product with divMod, rather than scanning through the results of calling a function on enumerate until we have accumulated the right size. See the paper on the GHC ApplicativeDo extension.) 
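The reason enumTrees agrees in size with allTrees is that both follow the Catalan recurrence: splitting the n - 1 remaining nodes between left and right subtrees gives |T_n| = sum over k of |T_(n-1-k)| * |T_k| = C_n. A quick self-contained check of those counts (the name catalan is mine, not from the post):

```haskell
-- Catalan numbers via the same left/right split that enumTrees uses:
--   C_0 = 1,  C_(m+1) = sum over k of C_k * C_(m-k)
catalan :: Int -> Integer
catalan n = cats !! n
  where
    cats = 1 : map next [0 ..]
    next m = sum [ cats !! k * cats !! (m - k) | k <- [0 .. m] ]
```

For instance, catalan 3 is 5 (the five bracketings we started with) and catalan 10 is 16796, matching the size output quoted below.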
Let’s try it out:ghci> enumerate (enumTrees 3) [Node (Node (Node Leaf Leaf) Leaf) Leaf,Node (Node Leaf (Node Leaf Leaf)) Leaf,Node (Node Leaf Leaf) (Node Leaf Leaf),Node Leaf (Node (Node Leaf Leaf) Leaf),Node Leaf (Node Leaf (Node Leaf Leaf))] ghci> enumerate (enumTrees 3) == allTrees 3 True ghci> enumerate (enumTrees 7) == allTrees 7 True ghci> brackets $ fromNat (enumTrees 7) 43 "((((()())))())" It seems to work! Though actually, if we try larger values of , enumTrees just seems to hang. The problem is that it ends up making many redundant recursive calls. Well… nothing a bit of memoization can’t fix! (Here I’m using Conal Elliott’s nice MemoTrie package.)> enumTreesMemo :: Int -> Enumeration Tree > enumTreesMemo = memo enumTreesMemo' > where > enumTreesMemo' 0 = singleton Leaf > enumTreesMemo' n = union > [ Node <$> enumTreesMemo (n-k-1) <*> enumTreesMemo k > | k <- [0 .. n-1] > ] ghci> size (enumTreesMemo 10) 16796 ghci> size (enumTreesMemo 100) 896519947090131496687170070074100632420837521538745909320 ghci> size (enumTreesMemo 1000) 2046105521468021692642519982997827217179245642339057975844538099572176010191891863964968026156453752449015750569428595097318163634370154637380666882886375203359653243390929717431080443509007504772912973142253209352126946839844796747697638537600100637918819326569730982083021538057087711176285777909275869648636874856805956580057673173655666887003493944650164153396910927037406301799052584663611016897272893305532116292143271037140718751625839812072682464343153792956281748582435751481498598087586998603921577523657477775758899987954012641033870640665444651660246024318184109046864244732001962029120 ghci> brackets $ fromNat (enumTreesMemo 1000) 8234587623904872309875907638475639485792863458726398487590287348957628934765 
"((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((()(((()((((()))())(()()()))()(())(())((()((()))(((())()(((((()(((()()))(((()((((()()(())()())(((()))))(((()()()(()()))))(((()((()))(((()())())))())(()()(())(())()(()())))()))((()()))()))()))()(((()))(()))))))())()()()))((())((()))((((())(())))((())))))()))()(()))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
)" That’s better!A second try At this point, I thought that I needed to enumerate trees in order by the length of their left spine. Given a tree with a left spine of length , we enumerate all the ways to partition the remaining elements among the right children of the spine nodes, preferring to first put elements as far to the left as possible. As you’ll see, this turns out to be wrong, but it’s fun to see how easy it is to write this using the enumeration framework. First, we need an enumeration of the partitions of a given into exactly parts, in lexicographic order.> kPartitions :: Int -> Int -> Enumeration [Int] There is exactly one way to partition into zero parts.> kPartitions 0 0 = singleton [] We can’t partition anything other than into zero parts.> kPartitions _ 0 = empty Otherwise, pick a number from down to to go in the first spot, and then recursively enumerate partitions of into exactly parts.> kPartitions n k = do > i <- list [n, n-1 .. 0] > (i:) <$> kPartitions (n-i) (k-1) Let’s try it:ghci> let p43 = enumerate $ kPartitions 4 3 ghci> p43 [[4,0,0],[3,1,0],[3,0,1],[2,2,0],[2,1,1],[2,0,2],[1,3,0],[1,2,1],[1,1,2],[1,0,3],[0,4,0],[0,3,1],[0,2,2],[0,1,3],[0,0,4]] ghci> all ((==3) . length) p43 True ghci> all ((==4) . sum) p43 True ghci> sorted (reverse p43) True Now we can use kPartitions to build our enumeration of trees:> spinyTrees :: Int -> Enumeration Tree > spinyTrees = memo spinyTrees' > where > spinyTrees' 0 = singleton Leaf > spinyTrees' n = do > > -- Pick the length of the left spine > spineLen <- list [n, n-1 .. 
1] > > -- Partition the remaining elements among the spine nodes > bushSizes <- kPartitions (n - spineLen) spineLen > bushes <- traverse spinyTrees bushSizes > return $ buildSpine (reverse bushes) > > buildSpine :: [Tree] -> Tree > buildSpine [] = Leaf > buildSpine (b:bs) = Node (buildSpine bs) b This appears to give us something reasonable:ghci> size (spinyTrees 7) == size (enumTreesMemo 7) True But it’s pretty slow—which is to be expected with all those monadic operations required. And there’s more:ghci> sorted . map brackets . enumerate $ spinyTrees 3 True ghci> sorted . map brackets . enumerate $ spinyTrees 4 True ghci> sorted . map brackets . enumerate $ spinyTrees 5 False Foiled again! All we did was stave off failure a bit, until . I won’t draw all the trees of size for you, but the failure mode is pretty similar: picking subtrees for the spine based just on how many elements they have doesn’t work, because there are cases where we want to first shift some elements to a later subtree, keeping the left spine of a subtree, before moving the elements back and having a shorter left spine.The solution: just forget about trees, already It finally occurred to me that there was nothing in the problem statement that said anything about trees. That was just something my overexcited combinatorial brain imposed on it: obviously, since there is a bijection between balanced bracketings and binary trees, we should think about binary trees, right? …well, there is also a bijection between balanced bracketings and permutations avoiding (231), and lattice paths that stay above the main diagonal, and hundreds of other things, so… not necessarily. In this case, I think trees just end up making things harder. Let’s think instead about enumerating balanced bracket sequences directly. To do it recursively, we need to know how to enumerate possible endings to the start of any balanced bracket sequence. 
That is, we need to enumerate sequences containing n opening brackets and c extra closing brackets (so n + c closing brackets in total), which can be appended to a sequence of brackets with c more opening brackets than closing brackets. Given this idea, the code is fairly straightforward:

> enumBrackets :: Int -> Enumeration String
> enumBrackets n = enumBracketsTail n 0
>
> enumBracketsTail :: Int -> Int -> Enumeration String
> enumBracketsTail = memo2 enumBracketsTail'
>   where

To enumerate a sequence with no opening brackets, just generate c closing brackets.

>     enumBracketsTail' 0 c = singleton (replicate c ')')

To enumerate balanced sequences with n opening brackets and an exactly matching number of closing brackets, start by generating an opening bracket and then continue by generating sequences with n - 1 opening brackets and one extra closing bracket to match the opening bracket we started with.

>     enumBracketsTail' n 0 = ('(':) <$> enumBracketsTail (n-1) 1

In general, a sequence with n opening and c extra closing brackets is either an opening bracket followed by an (n-1, c+1)-sequence, or a closing bracket followed by an (n, c-1)-sequence.

>     enumBracketsTail' n c =
>         (('(':) <$> enumBracketsTail (n-1) (c+1))
>         ++
>         ((')':) <$> enumBracketsTail n (c-1))

This is quite fast, and as a quick check, it does indeed seem to give us the same size enumerations as the other tree enumerations:

ghci> fromNat (enumBrackets 40) 16221270422764920820
"((((((((()((())()(()()()())(()))((()()()()(()((()())))((()())))))))()))()())()))"
ghci> size (enumBrackets 100) == size (enumTreesMemo 100)
True

But, are they sorted? It would seem so!

ghci> all sorted (map (enumerate . enumBrackets) [1..10])
True

At this point, you might notice that this can be easily de-abstracted into a fairly simple dynamic programming solution, using a 2D array to keep track of the size of the enumeration for each (n,c) pair. I'll leave the details to interested readers.

Douglas M. Auclair (geophf):.
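To make that closing remark about the bracket enumeration concrete, here is one hypothetical shape the dynamic-programming table could take (names, bounds, and the 0-for-unreachable-states convention are my own; the post leaves this as an exercise). The three cases mirror the three clauses of enumBracketsTail':

```haskell
import Data.Array

-- counts m ! (n, c) = number of sequences with n opening brackets and
-- n + c closing brackets, i.e. the size of enumBracketsTail n c,
-- tabulated for all states reachable from (m, 0).
counts :: Int -> Array (Int, Int) Integer
counts m = table
  where
    bnds  = ((0, 0), (m, m))
    table = listArray bnds [ go n c | (n, c) <- range bnds ]
    go n c
      | n + c > m = 0                    -- unreachable from (m, 0); padding only
      | n == 0    = 1                    -- only choice: emit c closing brackets
      | c == 0    = table ! (n - 1, 1)   -- must open
      | otherwise = table ! (n - 1, c + 1) + table ! (n, c - 1)
```

In particular counts m ! (m, 0) recovers the Catalan numbers, agreeing with the enumeration sizes quoted earlier (16796 for ten pairs of brackets).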
Edwin Brady: State Machines All The Way Down

Roman Cheplyaka: Mean-variance ceiling

Today I was playing with the count data from a small RNA-Seq experiment performed in Arabidopsis thaliana. At some point, I decided to look at the mean-variance relationship for the fragment counts. As I said, the dataset is small; there are only 3 replicates per condition from which to estimate the variance. Moreover, each sample is from a different batch. I wasn't expecting to see much. But there was a pattern in the mean-variance plot that was impossible to miss.

<figure> <figcaption>Mean-variance plot of counts per million, log-log scale</figcaption> </figure>

It is a nice straight line that many points lie on, but none dare to cross. A ceiling. The ceiling looked mysterious at first, but then I found a simple explanation. The sample variance of \(n\) numbers \(a_1,\ldots,a_n\) can be written as \[\sigma^2=\frac{n}{n-1}\left(\frac1n\sum_{i=1}^n a_i^2-\mu^2\right),\] where \(\mu\) is the sample mean. Thus, \[\frac{\sigma^2}{\mu^2}=\frac{\sum a_i^2}{(n-1)\mu^2}-\frac{n}{n-1}.\] For non-negative numbers, \(n^2\mu^2=(\sum a_i)^2\geq \sum a_i^2\), and \[\frac{\sigma^2}{\mu^2}\leq\frac{n^2}{n-1}-\frac{n}{n-1}=n.\] This means that on a log-log plot, all points \((\mu,\sigma^2)\) lie on or below the line \(y=2x+\log n\). Moreover, the points that lie exactly on the line correspond to the samples where all \(a_i\) but one are zero. In other words, those are gene-condition combinations where the gene's transcripts were registered in a single replicate for that condition.

Roman Cheplyaka: The rule of 17 in volleyball

<figure> <figcaption>Dmitriy Muserskiy is about to score the gold medal point</figcaption> </figure>

Philip Wadler:!

Ken T Takusagawa: [uitadwod] Stackage Stack?

Tweag I/O: A new ecosystem for Haskell: the JVM.
A Swing GUI application in Haskell

How it works

Calling into the JVM, the hard way

Using Haskell types for safer JVM calls

There are two downsides to the raw JNI calls we saw above:

- performance: getting class and method handles is expensive. Ideally, we'd only ever lookup classes and methods by name at most once throughout the lifetime of the program, assuming that loaded classes exist for all time and are never redefined.
- stringly typing: we pass signatures explicitly, but these are literally strings, typos and all. If you mistype the signature, no compiler will call that out. Ideally ill-formed signatures would be caught at compile-time, rather than at runtime when it's far too late and your program will simply crash.

- the type of Swing option panes, J ('Class "javax.swing.JOptionPane")
- the type of boxed Java integers, J ('Class "java.lang.Integer"),
- the type of primitive integer arrays, J ('Array ('Prim "int")),
- etc.

JVM calls the Java way…

The road ahead:

- box-free foreign calls: because we infer precise JVM types from Haskell types, arguments passed to JVM methods are boxed only if they need to be. Small values of primitive type can be passed to/from the JVM with no allocation at all on the heap.
- marshalling-free argument passing: Java objects can be manipulated as easily from Haskell as from Java. This means that you can stick to representing all your data as Java objects if you find yourself calling into Java very frequently, hence avoiding any marshalling costs when transferring control to/from the JVM.
- type safe Java calls: when calls are made in Java syntax, this syntax is supplied to an embedded instance of javac.

Copyright 2015-2016 Tweag I/O.

Dan Piponi (sigfpe): Expectation

where we sum over all possible values of . The MLE approach says we now need to maximise One of the things that is a challenge here is that the components of might be mixed up among the terms in the sum.
If, instead, each term only referred to its own unique block of , then the maximisation would be easier as we could maximise each term independently of the others. Here's how we might move in that direction. Consider instead the log-likelihood Now imagine that by magic we could commute the logarithm with the sum. We'd need to maximise One reason this would be to our advantage is that often takes the form where is a simple function to optimise. In addition, may break up as a sum of terms, each with its own block of 's. Moving the logarithm inside the sum would give us something we could easily maximise term by term. What's more, the for each is often a standard probability distribution whose likelihood we already know how to maximise. But, of course, we can't just move that logarithm in. Now suppose there are also some variables that we didn't get to observe. We assume a density . We nowWe can try optimising with respect to within a neighbourhood of . If we pick a small circular neighbourhood then the optimal value will be in the direction of steepest descent. (Note that picking a circular neighbourhood is itself a somewhat arbitrary step, but that's another story.) For gradient descent we're choosing because it matches both the value and derivatives of at . We could go further and optimise a proxy that shares second derivatives too, and that leads to methods based on Newton-Raphson iteration.. The are constants we'll determine. We want to match the derivatives on either side of the at : On the other hand we have Write Our desired proxy function is: To achieve equality we want to make these expressions match. We chooseyou can iterate, at each step computing where is the previous iteration. If the take a convenient form then this may turn out to be much easier. Note This was originally written as a PDF using LaTeX. It'll be available here for a while. Some fidelity was lost when converting it to HTML. 
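The iteration sketched above is the standard Expectation-Maximisation scheme; a compact textbook statement of it (supplied here as a reference point, not recovered verbatim from the post) is: given observed data \(x\), latent variables \(z\), and the current parameter estimate \(\theta_t\), each step maximises the expected complete-data log-likelihood under the current posterior,

```latex
\begin{align*}
Q(\theta \mid \theta_t)
  &= \mathbb{E}_{z \sim p(z \mid x, \theta_t)}
     \bigl[ \log p(x, z \mid \theta) \bigr]
   = \sum_z p(z \mid x, \theta_t)\, \log p(x, z \mid \theta), \\
\theta_{t+1}
  &= \operatorname*{arg\,max}_{\theta}\; Q(\theta \mid \theta_t),
\end{align*}
```

and Jensen's inequality guarantees the true log-likelihood never decreases along the iterates: \(\log p(x \mid \theta_{t+1}) \ge \log p(x \mid \theta_t)\).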
Michael Snoyman: New Conduit Tutorial

A few weeks back I proposed a reskin of conduit to make it easier to learn and use. The proposal overall got broad support, and therefore I went ahead with it. I then spent some time (quite a bit more than expected) updating the conduit tutorial to use this new reskin. If you're interested in conduit or streaming data in Haskell, please take a look at the new version.
- haskell-lang version
- Raw Github version (if you want to send a PR)

Thanks to all who provided feedback. Also, if you want to provide some more feedback, there's one more Github issue open: RFC: Stop using the type synonyms in library type signatures. Please feel free to share your opinions/add a reaction/start a flame war. And yes, the flame war comment is a joke. Please don't take that one literally.

Philip Wadler: Lambdaman (and Lambdawoman) supporting Bootstrap - Last Three Days!

Edward Z. Yang: Try Backpack: ghc --backpack

Backpack, a new system for mix-in packages in Haskell, has landed in GHC HEAD. This means that it has become a lot easier to try Backpack out: you just need a nightly build of GHC. Here is a step-by-step guide to get you started.

Download a GHC nightly

Get a nightly build of GHC. If you run Ubuntu, this step is very easy: add Herbert V. Riedel's PPA to your system and install ghc-head:

sudo add-apt-repository ppa:hvr/ghc
sudo apt-get update
sudo aptitude install ghc-head

This will place a Backpack-ready GHC in /opt/ghc/head/bin/ghc. My recommendation is you create a symlink named ghc-head to this binary from a directory that is in your PATH. If you are not running Ubuntu, you'll have to download a nightly or build GHC yourself.

Hello World

GHC supports a new file format, bkp files, which let you easily define multiple modules and packages in a single source file, making it easy to experiment with Backpack. This format is not suitable for large scale programming, but we will use it for our tutorial.
Here is a simple "Hello World" program:

unit main where
    module Main where
        main = putStrLn "Hello world!"

We define a unit (think package) with the special name main, and in it define a Main module (also specially named) which contains our main function. Place this in a file named hello.bkp, and then run ghc --backpack hello.bkp (using your GHC nightly). This will produce an executable at main/Main which you can run; you can also explicitly specify the desired output filename using -o filename. Note that by default, ghc --backpack creates a directory with the same name as every unit, so -o main won't work (it'll give you a linker error; use a different name!)

A Play on Regular Expressions

Let's write some nontrivial code that actually uses Backpack. For this tutorial, we will write a simple matcher for regular expressions as described in A Play on Regular Expressions (Sebastian Fischer, Frank Huch, Thomas Wilke). The matcher itself is inefficient (it checks for a match by testing all exponentially many decompositions of a string), but it will be sufficient to illustrate many key concepts of Backpack. To start things off, let's go ahead and write a traditional implementation of the matcher by copy-pasting the code from this Functional Pearl into a Regex module in the Backpack file and writing a little test program to run it:

unit regex where
    module Regex where
        -- | A type of regular expressions.
        data Reg = Eps
                 | Sym Char
                 | Alt Reg Reg
                 | Seq Reg Reg
                 | Rep Reg

        -- | Check if a regular expression 'Reg' matches a 'String'
        accept :: Reg -> String -> Bool
        accept Eps u       = null u
        accept (Sym c) u   = u == [c]
        accept (Alt p q) u = accept p u || accept q u
        accept (Seq p q) u = or [accept p u1 && accept q u2 | (u1, u2) <- splits u]
        accept (Rep r) u   = or [and [accept r ui | ui <- ps] | ps <- parts u]

        -- | Given a string, compute all splits of the string.
        -- E.g., splits "ab" == [("","ab"), ("a","b"), ("ab","")]
        splits :: String -> [(String, String)]
        splits []     = [([], [])]
        splits (c:cs) = ([], c:cs):[(c:s1,s2) | (s1,s2) <- splits cs]

        -- | Given a string, compute all possible partitions of
        -- the string (where all partitions are non-empty).
        -- E.g., partitions "ab" == [["ab"],["a","b"]]
        parts :: String -> [[String]]
        parts []     = [[]]
        parts [c]    = [[[c]]]
        parts (c:cs) = concat [[(c:p):ps, [c]:p:ps] | p:ps <- parts cs]

unit main where
    dependency regex
    module Main where
        import Regex
        nocs = Rep (Alt (Sym 'a') (Sym 'b'))
        onec = Seq nocs (Sym 'c')
        -- | The regular expression which tests for an even number of cs
        evencs = Seq (Rep (Seq onec onec)) nocs
        main = print (accept evencs "acc")

If you put this in regex.bkp, you can once again compile it using ghc --backpack regex.bkp and invoke the resulting executable at main/Main. It should print True.

Functorizing the matcher

The previously shown code isn't great because it hardcodes String as the type to do regular expression matching over. A reasonable generalization (which you can see in the original paper) is to match over arbitrary lists of symbols; however, we might also reasonably want to match over non-list types like ByteString. To support all of these cases, we will instead use Backpack to "functorize" (in ML parlance) our matcher. We'll do this by creating a new unit, regex-indef, and writing a signature which provides a string type (we've decided to call it Str, to avoid confusion with String) and all of the operations which need to be supported on it. Here are the steps I took:

First, I copy-pasted the old Regex implementation into the new unit. I replaced all occurrences of String with Str, and deleted splits and parts: we will require these to be implemented in our signature.
Next, we create a new Str signature, which is imported by Regex, and defines our type and operations (splits and parts) which it needs to support:

signature Str where
    data Str
    splits :: Str -> [(Str, Str)]
    parts :: Str -> [[Str]]

At this point, I ran ghc --backpack to typecheck the new unit. But I got two errors!

regex.bkp:90:35: error:
    • Couldn't match expected type ‘t0 a0’ with actual type ‘Str’
    • In the first argument of ‘null’, namely ‘u’
      In the expression: null u
      In an equation for ‘accept’: accept Eps u = null u

regex.bkp:91:35: error:
    • Couldn't match expected type ‘Str’ with actual type ‘[Char]’
    • In the second argument of ‘(==)’, namely ‘[c]’
      In the expression: u == [c]
      In an equation for ‘accept’: accept (Sym c) u = u == [c]

Traversable null nonsense aside, the errors are quite clear: Str is a completely abstract data type: we cannot assume that it is a list, nor do we know what instances it has. To solve these type errors, I introduced the combinators null and singleton, an instance Eq Str, and rewrote Regex to use these combinators (a very modest change.) (Notice we can't write instance Traversable Str; it's a kind mismatch.) Here is our final indefinite version of the regex unit:

unit regex-indef where
    signature Str where
        data Str
        instance Eq Str
        null :: Str -> Bool
        singleton :: Char -> Str
        splits :: Str -> [(Str, Str)]
        parts :: Str -> [[Str]]
    module Regex where
        import Prelude hiding (null)
        import Str
        data Reg = Eps
                 | Sym Char
                 | Alt Reg Reg
                 | Seq Reg Reg
                 | Rep Reg

(To keep things simple for now, I haven't parametrized Char.)

Instantiating the functor (String)

This is all very nice but we can't actually run this code, since there is no implementation of Str.
Let's write a new unit which provides a module which implements all of these types and functions with String, copy-pasting in the old implementations of splits and parts.

One quirk when writing Backpack implementations for functions is that Backpack does no subtype matching on polymorphic functions, so you can't implement Str -> Bool with a polymorphic function Traversable t => t a -> Bool (adding this would be an interesting extension, and not altogether trivial). So we have to write a little impedance matching binding which monomorphizes null to the expected type.

To instantiate regex-indef with str-string:Str, we modify the dependency in main:

-- dependency regex -- old
dependency regex-indef[Str=str-string:Str]

Backpack files require instantiations to be explicitly specified (this is as opposed to Cabal files, which do mix-in linking to determine instantiations). In this case, the instantiation specifies that regex-indef's signature named Str should be filled with the Str module from str-string. After making these changes, give ghc --backpack a run; you should get out an identical looking result.

Instantiating the functor (ByteString)

The whole point of parametrizing regex was to enable us to have a second implementation of Str. So let's go ahead and write a bytestring implementation. After a little bit of work, you might end up with this. There are two things to note about this implementation:
- Unlike str-string, which explicitly defined every needed method in its module body, str-bytestring provides null and singleton simply by reexporting all of the entities from Data.ByteString.Char8 (which are appropriately monomorphic). We've cleverly picked our names to abide by the existing naming conventions of existing string packages!
- Our implementations of splits and parts are substantially more optimized than if we had done a straight up transcription of the consing and unconsing from the original String implementation.
I often hear people say that String and ByteString have very different performance characteristics, and thus you shouldn't mix them up in the same implementation. I think this example shows that as long as you have sufficiently high-level operations on your strings, these performance changes smooth out in the end; and there is still a decent chunk of code that can be reused across implementations.

To instantiate regex-indef with str-bytestring:Str, we once again modify the dependency in main:

-- dependency regex -- oldest
-- dependency regex-indef[Str=str-string:Str] -- old
dependency regex-indef[Str=str-bytestring:Str]

We also need to stick an {-# LANGUAGE OverloadedStrings #-} pragma so that "acc" gets interpreted as a ByteString (unfortunately, the bkp file format only supports language pragmas that get applied to all modules defined; so put this pragma at the top of the file). But otherwise, everything works as it should!

Using both instantiations at once

There is nothing stopping us from using both instantiations of regex-indef at the same time, simply by uncommenting both dependency declarations, except that the module names provided by each dependency conflict with each other and are thus ambiguous. Backpack files thus provide a renaming syntax for modules which lets you give each exported module a different name:

dependency regex-indef[Str=str-string:Str] (Regex as Regex.String)
dependency regex-indef[Str=str-bytestring:Str] (Regex as Regex.ByteString)

How should we modify Main to run our regex on both a String and a ByteString? But is Regex.String.Reg the same as Regex.ByteString.Reg? A quick query to the compiler will reveal that they are not the same. The reason for this is Backpack's type identity rule: the identity of all types defined in a unit depends on how all signatures are instantiated, even if the type doesn't actually depend on any types from the signature.
If we want there to be only one Reg type, we will have to extract it from regex-indef and give it its own unit, with no signatures. After the refactoring, here is the full final program:

{-# LANGUAGE OverloadedStrings #-}

unit regex-types where
    module Regex.Types where
        data Reg = Eps
                 | Sym Char
                 | Alt Reg Reg
                 | Seq Reg Reg
                 | Rep Reg

unit regex-indef where
    dependency regex-types
    signature Str where
        data Str
        instance Eq Str
        null :: Str -> Bool
        singleton :: Char -> Str
        splits :: Str -> [(Str, Str)]
        parts :: Str -> [[Str]]
    module Regex where
        import Prelude hiding (null)
        import Str
        import Regex.Types

unit main where
    dependency regex-types
    dependency regex-indef[Str=str-string:Str] (Regex as Regex.String)
    dependency regex-indef[Str=str-bytestring:Str] (Regex as Regex.ByteString)
    module Main where
        import Regex.Types
        import qualified Regex.String
        import qualified Regex.ByteString
        nocs = Rep (Alt (Sym 'a') (Sym 'b'))
        onec = Seq nocs (Sym 'c')
        evencs = Seq (Rep (Seq onec onec)) nocs
        main = print (Regex.String.accept evencs "acc") >>
               print (Regex.ByteString.accept evencs "acc")

And beyond! Next time, I will tell you how to take this prototype in a bkp file, and scale it up into a set of Cabal packages. Stay tuned!

Postscript. If you are feeling adventurous, try further parametrizing regex-types so that it no longer hard-codes Char as the element type, but some arbitrary element type Elem. It may be useful to know that you can instantiate multiple signatures using the syntax dependency regex-indef[Str=str-string:Str,Elem=str-string:Elem], and that if you depend on a package with a signature, you must thread the signature through using the syntax dependency regex-types[Elem=<Elem>]. If this sounds user-unfriendly, it is! That is why in the Cabal package universe, instantiation is done implicitly, using mix-in linking.

Joachim Breitner: Setting FONTFACE="Terminus" FONTSIZE="12x24" in /etc/default/console-setup yielded good results.
For the few GTK-2 applications that I am still running, I set gtkPx to 1.5 in about:config. GVim has set guifont=Monospace\ 16 in ~/.vimrc. The toolbar is tiny, but I hardly use it anyways. Setting the font of Xmonad prompts requires the syntax

font = "xft:Sans:size=16"

Speaking about Xmonad prompts: check out the XMonad.Prompt.Unicode module before writing your own.

FP Complete: Static compilation with Stack

In our last blog post we showed you the new docker init executable pid1. What if we wanted to use our shiny new pid1 binary on a CentOS Docker image but we compiled it on Ubuntu? The answer is that it wouldn't likely work. All Linux flavors package things up a little differently and with different versions and flags. If we were to compile pid1 completely static it could be portable (within a given range of Linux kernel versions). Let's explore different ways to compile a GHC executable with Stack. Maybe we can come up with a way to create portable binaries.

Base Image for Experiments

First let's create a base image since we are going to be trying many different compilation scenarios. Here's a Dockerfile for Alpine Linux & GHC 8.0 with Stack.

# USE ALPINE LINUX
FROM alpine
RUN apk update

# INSTALL BASIC DEV TOOLS, GHC, GMP & ZLIB
RUN echo "" >> /etc/apk/repositories
ADD /etc/apk/keys/mitch.tishmack@gmail.com-55881c97.rsa.pub
RUN apk update
RUN apk add alpine-sdk git ca-certificates ghc gmp-dev zlib-dev

# GRAB A RECENT BINARY OF STACK
ADD /usr/local/bin/stack
RUN chmod 755 /usr/local/bin/stack

Let's build it and give it a tag.

docker build --no-cache=true --tag fpco/pid1:0.1.0-base .

Default GHC Compilation

Next let's compile pid1 with default Stack & GHC settings.
Here's our minimalist stack.yaml file.

resolver: lts-7.1

Here's our project Dockerfile that extends our test base image above.

FROM fpco/pid1:0.1.0-base

# COMPILE PID1
ADD ./ /usr/src/pid1
WORKDIR /usr/src/pid1
RUN stack --local-bin-path /sbin install --test

# SHOW INFORMATION ABOUT PID1
RUN ldd /sbin/pid1 || true
RUN du -hs /sbin/pid1

Let's compile this default configuration using Docker and give it a label.

docker build --no-cache=true --tag fpco/pid1:0.1.0-default .

A snippet from the Docker build showing the results.

Step 6 : RUN ldd /sbin/pid1 || true
 ---> Running in fcc138c199d0
        /lib/ld-musl-x86_64.so.1 (0x559fe5aaf000)
        libgmp.so.10 => /usr/lib/libgmp.so.10 (0x7faff710b000)
        libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x559fe5aaf000)
 ---> 70836a2538e2
Removing intermediate container fcc138c199d0
Step 7 : RUN du -hs /sbin/pid1
 ---> Running in 699876efeb1b
956.0K  /sbin/pid1

You can see that this build results in a semi-static binary with a link to MUSL (libc) and GMP. This is not extremely portable. We will always have to be concerned about the dynamic linkage happening at run-time. This binary would probably not run on Ubuntu as is.

100% Static

Let's try compiling our binary as a 100% static Linux ELF binary without any link to another dynamic library. Note that our open source license must be compatible with MUSL and GMP in order to do this.

Let's try a first run with static linkage. Here's another Dockerfile that shows a new ghc-option to link statically.

FROM fpco/pid1:0.1.0-base

# TRY TO COMPILE
ADD ./ /usr/src/pid1
WORKDIR /usr/src/pid1
RUN stack --local-bin-path /sbin install --test --ghc-options '-optl-static'

Let's give it a go.

docker build --no-cache=true --tag fpco/pid1:0.1.0-static .

Oh no. It didn't work. Looks like there's some problem with linking. :|

PIC flag

OK, that last error said we should recompile with -fPIC. Let's try that.
Once again, here's a Dockerfile with the static linkage flag & the new -fPIC flag.

FROM fpco/pid1:0.1.0-base

# TRY TO COMPILE
ADD ./ /usr/src/pid1
WORKDIR /usr/src/pid1
RUN stack --local-bin-path /sbin install --test --ghc-options '-optl-static -fPIC'

Let's give it a try.

docker build --no-cache=true --tag fpco/pid1:0.1.0-static-fpic .

But we still get the error again.

crtbeginT swap

Searching around for this crtbeginT linkage problem, we find that if we provide a hack it'll work correctly. Here's the Dockerfile with the hack.

# SHOW INFORMATION ABOUT PID1
RUN ldd /sbin/pid1 || true
RUN du -hs /sbin/pid1

When we try it again

docker build --no-cache=true --tag fpco/pid1:0.1.0-static-fpic-crtbegint .

It works this time!

Step 8 : RUN ldd /sbin/pid1 || true
 ---> Running in 8b3c737c2a8d
ldd: /sbin/pid1: Not a valid dynamic program
 ---> 899f06885c71
Removing intermediate container 8b3c737c2a8d
Step 9 : RUN du -hs /sbin/pid1
 ---> Running in d641697cb2a8
1.1M    /sbin/pid1
 ---> aa17945f5bc4

Nice. 1.1M isn't too bad for a binary that's portable. Let's see if we can make it smaller though. On larger executables, especially with other linked external libraries, this static output can be 50MB(!)

Optimal Size

GCC Optimization

It says on the GCC manpage that if we use -Os it will optimize for size. Let's try it.

# SPECIFY -optc-Os TO OPTIMIZE FOR SIZE
RUN stack --local-bin-path /sbin install --test --ghc-options '-optl-static -fPIC -optc-Os'

# SHOW INFORMATION ABOUT PID1
RUN ldd /sbin/pid1 || true
RUN du -hs /sbin/pid1

docker build --no-cache=true --tag fpco/pid1:0.1.0-static-fpic-crtbegint-optcos .

You may want to try it on a little larger or more complex executable to see if it makes a difference for you.

Split Objects

GHC allows us to "split objects" when we compile Haskell code. That means each Haskell module is broken up into its own native library. In this scenario, when we import a module, our final executable is linked against smaller split modules instead of to the entire package. This helps reduce the size of the executable.
The trade-off is that it takes more time for GHC to compile.

resolver: lts-7.1
build: { split-objs: true }

docker build --no-cache=true --tag fpco/pid1:0.1.0-static-fpic-crtbegint-optcos-split .

It didn't make much of a difference in this case. On some executables this really makes a big difference. Try it yourself.

UPX Compression

Let's try compressing our static executable with UPX. Here's a Dockerfile.

# COMPILE AS BEFORE
RUN stack --local-bin-path /sbin install --test --ghc-options '-optl-static -fPIC -optc-Os'

# COMPRESS WITH UPX
ADD /usr/local/bin/upx
RUN chmod 755 /usr/local/bin/upx
RUN upx --best --ultra-brute /sbin/pid1

# SHOW INFORMATION ABOUT PID1
RUN ldd /sbin/pid1 || true
RUN du -hs /sbin/pid1

Build an image that includes UPX compression.

docker build --no-cache=true --tag fpco/pid1:0.1.0-static-fpic-crtbegint-optcos-split-upx .

And, wow, that's some magic.

Step 11 : RUN ldd /sbin/pid1 || true
 ---> Running in 69f86bd03d01
ldd: /sbin/pid1: Not a valid dynamic program
 ---> c01d54dca5ac
Removing intermediate container 69f86bd03d01
Step 12 : RUN du -hs /sbin/pid1
 ---> Running in 01bbed565de0
364.0K  /sbin/pid1
 ---> b94c11bafd95

This makes a huge difference, with the resulting executable 1/3 the original size. There is a small price to pay in extracting the executable on execution, but for a pid1 that just runs for the lifetime of the container, this is not noticeable.

Slackware Support

Here's a Slackware example running pid1 that was compiled on Alpine Linux.

FROM vbatts/slackware
ADD /sbin/pid1
RUN chmod 755 /sbin/pid1
ENTRYPOINT [ "/sbin/pid1" ]
CMD bash -c 'while(true); do sleep 1; echo alive; done'

Build and run the image.

docker build -t fpco/pid1:0.1.0-example-slackware .
docker run --rm -i -t fpco/pid1:0.1.0-example-slackware

It works!

alive
alive
alive
^C

Brent Yorgey: IC was born the same week. So this was my first time in Japan, or anywhere in Asia, for that matter. (Of course, this time I missed my son's fifth birthday…) I've been to Europe multiple times, and although it is definitely foreign, the culture is similar enough that I feel like I basically know how to behave.
I did not feel that way in Japan. I’m pretty sure I was constantly being offensive without realizing it, but most of the time people were polite and accommodating. …EXCEPT for that one time I was sitting in a chair chatting with folks during a break between sessions, with my feet up on a (low, plain) table, and an old Japanese guy WHACKED his walking stick on the table and shouted angrily at me in Japanese. That sure got my adrenaline going. Apparently putting your feet on the table is a big no-no, lesson learned. The food was amazing even though I didn’t know what half of it was. I was grateful that I (a) am not vegetarian, (b) know how to use chopsticks decently well, and (c) am an adventurous eater. If any one of those were otherwise, things might have been more difficult! On my last day in Japan I had the whole morning before I needed to head to the airport, so Ryan Yates and I wandered around Nara and saw a bunch of temples, climbed the hill, and such. It’s a stunningly beautiful place with a rich history.The People As usual, it’s all about the people. I enjoyed meeting some new people, including (but not limited to): - Pablo Buiras and Marco Vassena were my hotel breakfast buddies, it was fun getting to know them a bit. - I finally met Dominic Orchard, though I feel like I’ve known his name and known about some of his work for a while. - I don’t think I had met Max New before but we had a nice chat about the Scheme enumerations library he helped develop and combinatorial species. I hope to be able to follow up that line of inquiry. - As promised, I met everyone who commented on my blog post, including Jürgen Peters (unfortunately we did not get a chance to play go), Andrey Mokhov (who nerd-sniped me with a cool semiring-ish thing with some extra structure — perhaps that will be another blog post), and Jay McCarthy (whom I had actually met before, but we had some nice chats, including one in the airport while waiting for our flight to LAX). 
- I don’t think I had met José Manuel Calderón Trilla before; we had a great conversation over a meal together (along with Ryan Yates) in the Osaka airport while waiting for our flights. - I met Diogenes Nunez, who went to my alma mater Williams College. When I taught at Williams a couple years ago I’m pretty sure I heard Diogenes mentioned by the other faculty, so it was fun to get to meet him. - Last but certainly not least, I met my coauthor, Piyush Kurur. We collaborated on a paper through the magic of the Internet (Github in particular), and I actually met him in person for the first time just hours before he presented our paper! My student Ollie Kwizera came for PLMW—it was fun having him there. I only crossed paths with him three or four times, but I think that was all for the best, since he made his own friends and had his own experiences. Other people who I enjoyed seeing and remember having interesting conversations with include (but I am probably forgetting someone!) Michael Adams, Daniel Bergey, Jan Bracker, Joachim Breitner, David Christiansen, David Darais, Stephen Dolan, Richard Eisenberg, Kenny Foner, Marco Gaboardi, Jeremy Gibbons, John Hughes, David Janin, Neel Krishnaswami, Dan Licata, Andres Löh, Simon Marlow, Tom Murphy, Peter-Michael Osera, Jennifer Paykin, Simon Peyton Jones, Ryan Scott, Mary Sheeran, Mike Sperber, Luite Stegeman, Wouter Swierstra, David Terei, Ryan Trinkle, Tarmo Uustalu, Stephanie Weirich, Nick Wu, Edward Yang, and Ryan Yates. My apologies if I forgot you, just remind me and I’ll add you to the list! I’m amazed and grateful I get to know all these cool people.The Content Here are just a few of my favorite talks: I’m a sucker for anything involving geometry and/or random testing and/or pretty pictures, and Ilya Sergey’s talk Growing and Shrinking Polygons for Random testing of Computational Geometry had them all. 
In my experience, doing effective random testing in any domain beyond basic functions usually requires some interesting domain-specific insights, and Ilya had some cool insights about ways to generate and shrink polygons in ways that were much more likely to generate small counterexamples for computational geometry algorithms. Idris gets more impressive by the day, and I always enjoy David Christiansen’s talks. Sandra Dylus gave a fun talk, All Sorts of Permutations, with the cute observation that a sorting algorithm equipped with a nondeterministic comparison operator generates permutations (though it goes deeper than that). During the question period someone asked whether there is a way to generate all partitions, and someone sitting next to me suggested using the group function—and indeed, I think this works. I wonder what other sorts of combinatorial objects can be enumerated by this method. In particular I wonder if quicksort with nondeterministic comparisons can be adapted to generate not just all permutations, but all binary trees. I greatly enjoyed TyDe, especially Jeremy Gibbons’ talk on APLicative Programming with Naperian Functors (I don’t think the video is online yet, if there is one). I’ll be serving as co-chair of the TyDe program committee next year, so start thinking about what you would like to submit! There were also some fun talks at FARM, for example, Jay McCarthy’s talk on Bithoven. But I don’t think the FARM videos are uploaded yet. Speaking of FARM, the performance evening was incredible. It will be hard to live up to next year. FP Complete: Docker demons: PID-1, orphans, zombies, and signals There are a number of corner cases to consider when dealing with Docker, multiple processes, and signals. Probably the most famous post on this matter is from the Phusion blog. Here, we'll see some examples of how to see these problems first hand, and one way to work around it: fpco/pid1. The Phusion blog post recommends using their baseimage-docker. 
This image provides a my_init entrypoint which handles the problems described here, as well as introducing some extra OS features, such as syslog handling. Unfortunately, we ran into problems with Phusion's usage of syslog-ng, in particular with it creating unkillable processes pegged at 100% CPU usage. We're still investigating the root cause, but in practice we have found that the syslog usage is a far less motivating case than simply a good init process, which is why we've created the pid1 Haskell package together with a simple fpco/pid1 Docker image.

This blog post is intended to be interactive: you'll get the most bang for your buck by opening up your terminal and running commands along with reading the text. It will be far more motivating to see your Ctrl-C completely fail to kill a process.

NOTE The primary reason we wrote our own implementation in Haskell was to be able to embed it within the Stack build tool. There are other lightweight init processes already available, such as dumb-init. I've also blogged about using dumb-init. While this post uses pid1, there's nothing specific to it versus other init processes.

Playing with entrypoints

Docker has a concept of entrypoints, which provides a default wrapping command for commands you provide to docker run. For example, consider this interaction with Docker:

$ docker run --entrypoint /usr/bin/env ubuntu:16.04 FOO=BAR bash -c 'echo $FOO'
BAR

This works because the above is equivalent to:

$ docker run ubuntu:16.04 /usr/bin/env FOO=BAR bash -c 'echo $FOO'

Entrypoints can be overridden on the command line (as we just did), but can also be specified in the Dockerfile (which we'll do later). The default entrypoint for the ubuntu Docker image is a null entrypoint, meaning that the provided command will be run directly without any wrapping. We're going to simulate that experience by using /usr/bin/env as an entrypoint, since switching the entrypoint back to null isn't yet supported in released Docker.
When you run /usr/bin/env foo bar baz, the env process will exec the foo command, making foo the new PID 1, which for our purposes gives it the same behavior as a null entrypoint. Both the fpco/pid1 and snoyberg/docker-testing images we'll use below set /sbin/pid1 as the default entrypoint. In the example commands, we're explicitly including --entrypoint /sbin/pid1. This is just to be clear on which entrypoint is being used; if you exclude that option, the same behavior will persist.

Sending TERM signal to process

We'll start with our sigterm.hs program, which runs ps (we'll see why soon), then sends itself a SIGTERM and then loops forever. On a Unix system, the default process behavior when receiving a SIGTERM is to exit. Therefore, we'd expect that our process will just exit when run. Let's see:

$ docker run --rm --entrypoint /usr/bin/env snoyberg/docker-testing sigterm
  PID TTY          TIME CMD
    1 ?        00:00:00 sigterm
    9 ?        00:00:00 ps
Still alive!
Still alive!
Still alive!
^C
$

The process ignored the SIGTERM and kept running, until I hit Ctrl-C (we'll see what that does later). Another feature in the sigterm code base, though, is that if you give it the command line argument install-handler, it will explicitly install a SIGTERM handler which will kill the process. Perhaps surprisingly, this has a significant impact on our application:

$ docker run --rm --entrypoint /usr/bin/env snoyberg/docker-testing sigterm install-handler
  PID TTY          TIME CMD
    1 ?        00:00:00 sigterm
    8 ?        00:00:00 ps
Still alive!
$

The reason for this is some Linux kernel magic: the kernel treats a process with PID 1 specially, and does not, by default, kill the process when receiving the SIGTERM or SIGINT signals. This can be very surprising behavior.
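The install-handler variant described above boils down to registering an explicit SIGTERM handler. Here is a minimal sketch of that idea, using the unix package's System.Posix.Signals (hypothetical code; the post's real sigterm.hs lives in the snoyberg/docker-testing repository and differs in detail):

```haskell
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import System.Posix.Signals (Handler (Catch), installHandler, raiseSignal, sigTERM)

-- Install a SIGTERM handler, send ourselves a SIGTERM, and report whether
-- the handler ran. Without an explicit handler, a process running as PID 1
-- inside a container never gets the default kill-on-TERM behavior.
selfTermCaught :: IO Bool
selfTermCaught = do
  caught <- newEmptyMVar
  _ <- installHandler sigTERM (Catch (putMVar caught True)) Nothing
  raiseSignal sigTERM
  takeMVar caught

main :: IO ()
main = do
  ok <- selfTermCaught
  putStrLn (if ok then "caught SIGTERM" else "missed SIGTERM")
```

Run as an ordinary process this prints caught SIGTERM; the point of the post is that as PID 1 in a container, installing such a handler is the only way SIGTERM has any effect at all.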
For a simpler example, try running the following commands in two different terminals:

$ docker run --rm --name sleeper ubuntu:16.04 sleep 100
$ docker kill -s TERM sleeper

Notice how the docker run command does not exit, and if you check your ps aux output, you'll see that the process is still running. That's because the sleep process was not designed to be PID 1, and does not install a special signal handler. To work around this problem, you've got two choices:
- Ensure every command you run from docker run has explicit handling of SIGTERM.
- Make sure the command you run isn't PID 1, but instead use a process that is designed to handle SIGTERM correctly.

Let's see how the sigterm program works with our /sbin/pid1 entrypoint:

$ docker run --rm --entrypoint /sbin/pid1 snoyberg/docker-testing sigterm
  PID TTY          TIME CMD
    1 ?        00:00:00 pid1
    8 ?        00:00:00 sigterm
   12 ?        00:00:00 ps

The program exits immediately, as we'd like. But look at the ps output: our first process is now pid1 instead of sigterm. Since sigterm is being launched as a different PID (8 in this case), the special casing from the Linux kernel does not come into play, and default SIGTERM handling is active. To step through exactly what happens in our case:
- Our container is created, and the command /usr/sbin/pid1 sigterm is run inside of it.
- pid1 starts as PID 1, does its business, and then fork/execs the sigterm executable.
- sigterm raises the SIGTERM signal to itself, causing it to die.
- pid1 sees that its child died from SIGTERM (== signal 15) and exits with exit code 143 (== 128 + 15).
- Since our PID 1 is dead, our container dies too.

This isn't just some magic with sigterm, you can do the same thing with sleep:

$ docker run --rm --name sleeper fpco/pid1 sleep 100
$ docker kill -s TERM sleeper

Unlike with the ubuntu image, this will kill the container immediately, due to the /sbin/pid1 entrypoint used by fpco/pid1.
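Conceptually, the signal half of an init process is simple: catch TERM and INT, forward them to the child, and wait for the child to exit. The sketch below is hypothetical code illustrating that idea, not the actual fpco/pid1 source (it also omits orphan reaping); it simulates docker kill -s TERM by raising SIGTERM against itself:

```haskell
import Control.Concurrent (threadDelay)
import System.Posix.Process (ProcessStatus (Terminated), executeFile, forkProcess, getProcessStatus)
import System.Posix.Signals (Handler (Catch), installHandler, raiseSignal, sigINT, sigTERM, signalProcess)

main :: IO ()
main = do
  -- fork/exec a child, as an init process does with the command it wraps
  child <- forkProcess (executeFile "sleep" True ["30"] Nothing)
  -- forward TERM and INT to the child instead of acting on them ourselves
  let forward sig = installHandler sig (Catch (signalProcess sig child)) Nothing
  mapM_ forward [sigTERM, sigINT]
  raiseSignal sigTERM      -- simulate: docker kill -s TERM <container>
  threadDelay 100000       -- give the forwarding handler a chance to run
  st <- getProcessStatus True True child  -- block until the child exits
  case st of
    Just (Terminated sig _) -> putStrLn ("child terminated by signal " ++ show sig)
    other                   -> putStrLn ("unexpected child status: " ++ show other)
```

On Linux this prints child terminated by signal 15: the parent survives the TERM it received, while the child dies from the forwarded signal, which is exactly the division of labor the pid1 entrypoint provides.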
NOTE: In the case of sigterm, which sends the TERM signal to itself, it turns out you don't need a special PID 1 process with signal handling; anything will do. For example, try docker run --rm --entrypoint /usr/bin/env snoyberg/docker-testing /bin/bash -c "sigterm;echo bye". But playing with sleep will demonstrate the need for a real signal-aware PID 1 process.

Ctrl-C: sigterm vs sleep

There's a slight difference between sigterm and sleep when it comes to the behavior of hitting Ctrl-C. When you use Ctrl-C, it sends a SIGINT to the docker run process, which proxies that signal to the process inside the container. sleep will ignore it, just as it ignores SIGTERM, due to the default signal handling for PID 1 in the Linux kernel. However, the sigterm executable is written in Haskell, and the Haskell runtime itself installs a signal handler that converts SIGINT into a user interrupt exception, overriding the PID 1 default behavior. For more on signal proxying, see the docker attach documentation.

Reaping orphans

Suppose you have process A, which fork/execs process B. When process B dies, process A must call waitpid to get its exit status from the kernel; until it does so, process B will be dead but still have an entry in the system process table. This is known as being a zombie. But what happens if process B outlives process A? In this case, process B is known as an orphan, and needs to be adopted by the init process, aka PID 1. It is the init process's job to reap orphans so they do not remain as zombies.

The orphans.hs program will:

- Spawn a child process, and then loop forever calling ps
- In the child process: run the echo command a few times, without calling waitpid, and then exit

As you can see, none of the processes involved will reap the zombie echo processes. The output from the process confirms that we have, in fact, created zombies:

$ docker run --rm --entrypoint /usr/bin/env snoyberg/docker-testing orphans
1
2
3
4
Still alive!
  PID TTY          TIME CMD
    1 ?        00:00:00 orphans
    8 ?        00:00:00 orphans
   13 ?        00:00:00 echo <defunct>
   14 ?        00:00:00 echo <defunct>
   15 ?        00:00:00 echo <defunct>
   16 ?        00:00:00 echo <defunct>
   17 ?        00:00:00 ps
Still alive!
  PID TTY          TIME CMD
    1 ?        00:00:00 orphans
   13 ?        00:00:00 echo <defunct>
   14 ?        00:00:00 echo <defunct>
   15 ?        00:00:00 echo <defunct>
   16 ?        00:00:00 echo <defunct>
   18 ?        00:00:00 ps
Still alive!

And so on, until we kill the container. That <defunct> indicates a zombie process. The issue is that our PID 1, orphans, doesn't do reaping. As you probably guessed, we can solve this by just using the /sbin/pid1 entrypoint:

$ docker run --rm --entrypoint /sbin/pid1 snoyberg/docker-testing orphans
1
2
3
4
Still alive!
  PID TTY          TIME CMD
    1 ?        00:00:00 pid1
   10 ?        00:00:00 orphans
   14 ?        00:00:00 orphans
   19 ?        00:00:00 echo <defunct>
   20 ?        00:00:00 echo <defunct>
   21 ?        00:00:00 echo <defunct>
   22 ?        00:00:00 echo <defunct>
   23 ?        00:00:00 ps
Still alive!
  PID TTY          TIME CMD
    1 ?        00:00:00 pid1
   10 ?        00:00:00 orphans
   24 ?        00:00:00 ps
Still alive!

pid1 now adopts the echo processes when the child orphans process dies, and reaps them accordingly.

Surviving children

Let's try something else: process A is the primary command for the Docker container, and it spawns process B. Before process B exits, process A exits, causing the Docker container to exit. In this case, the running process B will be forcibly closed by the kernel (see this Stack Overflow question for details). We can see this with our surviving.hs program:

$ docker run --rm --entrypoint /usr/bin/env snoyberg/docker-testing surviving
Parent sleeping
Child: 1
Child: 2
Child: 4
Child: 3
Child: 1
Child: 2
Child: 3
Child: 4
Parent exiting

Unfortunately, this doesn't give our child processes a chance to do any cleanup. Instead, we would rather send them a SIGTERM, and after a grace period, send them a SIGKILL.
This is exactly what pid1 does:

$ docker run --rm --entrypoint /sbin/pid1 snoyberg/docker-testing surviving
Parent sleeping
Child: 2
Child: 3
Child: 1
Child: 4
Child: 2
Child: 1
Child: 4
Child: 3
Parent exiting
Got a TERM
Got a TERM
Got a TERM
Got a TERM

Signaling docker run vs PID1

When you run sleep 60 and then hit Ctrl-C, the sleep process itself receives a SIGINT. When you instead run docker run --rm fpco/pid1 sleep 60 and hit Ctrl-C, you may think that the same thing is happening. In reality, it's not the same at all. Your docker run call creates a docker run process, which sends a command to the Docker daemon on your machine, and that daemon creates the actual sleep process (inside a container). When you hit Ctrl-C in your terminal, you're sending SIGINT to docker run, which is in fact sending a command to the Docker daemon, which in turn sends a SIGINT to your sleep process. Want proof? Try out the following:

$ docker run --rm fpco/pid1 sleep 60&
[1] 417
$ kill -KILL $!
$ docker ps
CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS              PORTS               NAMES
69fbc70e95e2        fpco/pid1           "/sbin/pid1 sleep 60"   11 seconds ago      Up 11 seconds                           hopeful_mayer
[1]+  Killed                  docker run --rm fpco/pid1 sleep 60

In this case, we sent a SIGKILL to the docker run command. Unlike SIGINT or SIGTERM, SIGKILL cannot be handled, and therefore docker run is unable to delegate signal handling to a different process. As a result, the docker run command itself dies, but the sleep process (and its container) continues running. Some takeaways from this:

- Make sure you use something like pid1 so that your SIGINT or SIGTERM to the docker run process actually gets your container to reliably shut down
- If you must send a SIGKILL to your process, use the docker kill command instead

We've used --entrypoint /sbin/pid1 a lot here. In fact, each usage of that has been superfluous, since the fpco/pid1 and snoyberg/docker-testing images both use /sbin/pid1 as their default entrypoint anyway.
I included it for explicitness. To prove it to you:

$ docker run --rm fpco/pid1 sleep 60
^C$

But if you don't want to muck with entrypoints, you can always just include /sbin/pid1 at the beginning of your command, e.g.:

$ docker run --rm --entrypoint /usr/bin/env fpco/pid1 /sbin/pid1 sleep 60
^C$

And if you have your own Docker image and you'd just like to include the pid1 executable, you can download it from the GitHub releases page.

Dockerfiles, command vs exec form

You may be tempted to put something like ENTRYPOINT /sbin/pid1 in your Dockerfile. Let's see why that won't work:

$
 ---> Using cache
 ---> f875b43a9e40
Successfully built f875b43a9e40
$ docker run --rm test ps
pid1: No arguments provided

The issue here is that we specified /sbin/pid1 in what Docker calls command form. This is just a raw string which is interpreted by the shell. It is unable to be passed an additional command (like ps), and therefore pid1 itself complains that it hasn't been told what to run. The correct way to specify your entrypoint is ENTRYPOINT ["/sbin/pid1"], e.g.:

$
 ---> Running in ba0fa8c5bd41
 ---> 4835dec4aae6
Removing intermediate container ba0fa8c5bd41
Successfully built 4835dec4aae6
$ docker run --rm test ps
  PID TTY          TIME CMD
    1 ?        00:00:00 pid1
    8 ?        00:00:00 ps

Generally speaking, you should stick with exec form in your Dockerfiles at all times. It is explicit about whitespace handling, and avoids the need to use a shell as an interpreter.

Takeaways

The main takeaway here is: unless you have a good reason to do otherwise, you should use a minimal init process like pid1. The Phusion/my_init approach works, but may be too heavyweight for some. If you don't need syslog and the other add-on features of Phusion, you're probably best off with a minimal init instead.
As a separate but somewhat related comment: we're going to have a follow-up post on this blog in the coming days explaining how we compiled the pid1 executable as a static executable to make it compatible with all the various Linux flavors, and how you can do the same for your Haskell executables. Stay tuned!
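Postscript: you don't need Docker to watch a zombie appear and get reaped. The following C++ snippet is a small, Linux-only experiment (it inspects /proc, and the helper names proc_state and zombie_demo are invented for this illustration): it forks a child that exits immediately, observes the child's <defunct> state, reaps it with waitpid, and then observes that the process-table entry is gone.

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <fstream>
#include <string>
#include <utility>

// Read the one-letter state field from /proc/<pid>/stat.
// 'Z' means zombie; '?' means the entry no longer exists.
char proc_state(pid_t pid) {
    std::ifstream f("/proc/" + std::to_string(pid) + "/stat");
    long id; std::string comm; char state = '?';
    if (f >> id >> comm >> state) return state;
    return '?';
}

// Fork a child that dies at once; report its state before and
// after the parent reaps it with waitpid.
std::pair<char, char> zombie_demo() {
    pid_t pid = fork();
    if (pid == 0) _exit(0);            // child: die immediately
    usleep(100 * 1000);                // give it time to actually die
    char before = proc_state(pid);     // 'Z': dead but not yet reaped
    waitpid(pid, nullptr, 0);          // parent reaps it
    char after = proc_state(pid);      // '?': process-table entry gone
    return {before, after};
}
```

Immediately after the child dies, its state reads 'Z' (zombie); once waitpid runs, the /proc entry disappears. This is exactly the bookkeeping an init process performs for every orphan it adopts.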
Find Maximum & Minimum Element in an Array Using C++

Hello Learners, in this article, we'll be learning how to find the maximum and minimum elements of an array in C++. The array can either be user-defined or given by default in the program. Loops are going to be used in this program, but we can also do it with another method, which is recursion.

Let us see an example to understand properly. Suppose an array is given like:

a[] = {100, 20, 30, 40, 50}

Here, the maximum element is 100 and the minimum element is 20. We will write a program to print this on the output screen.

C++ Program to find Maximum & Minimum Element of an Array

Let us consider an array named arr[n]. Here, n is the size of the array, which should be an integer. We will consider only 5 elements here, but you can choose your array size as per your choice. We will use a loop to solve this problem; this is also called the iterative approach.

Firstly, we will put the first element of the array in two variables named max and min. Then we will compare every element of the array with these variables, and whenever we find a larger or smaller element, we store it in max or min respectively. Lastly, we will print the values of max and min as the maximum and minimum elements.

Now, let us see the source code for a better understanding:

#include<iostream>
using namespace std;

int main()
{
    int arr[5], max, min, i;
    cout << "Enter elements of array: ";
    for(i = 0; i < 5; i++)
        cin >> arr[i];
    cout << "Your array is: ";
    for(i = 0; i < 5; i++)
        cout << arr[i] << " ";
    max = arr[0];
    min = arr[0];
    for(i = 0; i < 5; i++)
    {
        if(max < arr[i])
            max = arr[i];
        else if(min > arr[i])
            min = arr[i];
    }
    cout << "\nMaximum element of Array: " << max;
    cout << "\nMinimum element of Array: " << min;
    return 0;
}

Firstly, we put the first element of the array in both max and min. The variable i controls the loop in which every element is compared with max and min.
Let us see the output now:

Enter elements of array: 100 30 45 2 78
Your array is: 100 30 45 2 78
Maximum element of Array: 100
Minimum element of Array: 2

In this way, you can find the maximum and minimum elements of an array. I hope it was easy enough to understand. If you have any doubt, feel free to ask in the comment section.

THANK YOU!

Regards,
Isha Rani Prasad
Codespeedy Tech Pvt. Ltd.
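The introduction mentioned that the same problem can also be solved with recursion. Here is one way that could look (a sketch; the function names find_max and find_min are our own): compare the last element against the result for the first n-1 elements.

```cpp
#include <algorithm>

// Maximum of the first n elements: compare arr[n-1] with the
// maximum of the first n-1 elements.
int find_max(const int arr[], int n) {
    if (n == 1) return arr[0];    // base case: a single element
    return std::max(arr[n - 1], find_max(arr, n - 1));
}

// Minimum of the first n elements, by the same recursion.
int find_min(const int arr[], int n) {
    if (n == 1) return arr[0];
    return std::min(arr[n - 1], find_min(arr, n - 1));
}
```

With the sample array {100, 30, 45, 2, 78}, find_max(arr, 5) returns 100 and find_min(arr, 5) returns 2. As a side note, the standard library can also find both in a single pass with std::minmax_element(arr, arr + 5).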
11 March 2005 16:56 [Source: ICIS news]

Some of these companies are now taking advantage of the situation by making acquisitions, buying back stock and boosting dividends.

Indeed, a number of US specialty chemical companies with clean balance sheets are stepping up to make cash acquisitions. Most recently, Engelhard offered to buy Coletica, a publicly traded producer of skin care compounds and related technologies for the cosmetic and personal care industries, for $88.7m (€65.9m).

However, companies can sometimes simply use stock to make acquisitions. For example, Crompton is merging with Great Lakes Chemical in a $1.8bn stock deal.

Other specialty chemical deals over the past year include Cytec Industries/UCB Surface Specialties ($1.8bn), Sigma-Aldrich/JRH Biosciences ($370m), Henkel/Sovereign Specialty Chemicals ($575m), Arch Chemicals/Avecia's biocides business ($215m), Albemarle/Akzo Nobel's catalysts unit ($841m) and Lubrizol/Noveon ($1.84bn).

"Those with the cleanest balance sheets seem to be favouring a more aggressive approach toward acquisitions, while the rest are mixed, with some favouring share repurchases and others focusing more on dividend increases," McNulty said. "The companies with the most firepower include Engelhard, Ecolab and Valspar, all of which we expect to make acquisitions."

The specialty chemical group has significantly slashed debt over the past four years. From 2000 to 2004, Engelhard cut its debt/capital ratio from around 45% to just over 20%, facilitating deals.

Cytec's aggressive debt paydown over the years has allowed it to make a substantial acquisition. Prior to buying UCB Surface Specialties, Cytec had just $142m in net debt and a debt/capital ratio of 9%, versus nearly 60% in 2000. Lubrizol had an A+ rated balance sheet before acquiring Noveon, while Arch cut net debt by 23% in 2003 before picking up Avecia's biocides business in 2004.
To determine how much excess capital companies have to make acquisitions, buy back stock or hike dividends, McNulty recommends looking at current debt/capital relative to past peak levels, "as it shows how much debt they are willing to take on for the right opportunity, which is usually an acquisition." Based on this analysis, most companies have substantial capital available to make acquisitions.

McNulty then looks at available capital as a percentage of equity market capitalization to determine what that leverage means to each company with regard to its size. On this basis, companies with the greatest buying power include Ferro, Valspar, Engelhard, Praxair and Air Products.

Engelhard has around $2bn in potential excess capital, according to McNulty. "We expect acquisitions to remain a top priority for Engelhard," he said. Taking on $2bn in debt would push Engelhard's debt/capital from 22% to roughly 60%, taking into account the 2005 free cash flow estimate, McNulty said. "Even if Engelhard was not willing to drive its debt to historic peak levels and just pushed debt towards its target level of 40%, the company could take on over $600m of debt."

Engelhard has also used cash for share repurchases, having bought back $110m in stock each year for the past three years, reducing shares outstanding by 7m shares, or about 5%.

Valspar has slashed debt/capital from a high of 77% after the Lilly Industries acquisition in 2001 to around 40% today, McNulty said. "After spending a number of years reducing its debt, Valspar is back in a position to do what it does best - use its capital to make acquisitions," he said. "Management has made no secret as to what it wants to do with its excess capital, having constantly called for consolidation in the coatings industry."

McNulty estimates Valspar has roughly $200m of excess capital if it pushed debt/capital to its target of 45%. However, taking debt/capital back to 77% would yield as much as $2.5bn in excess capital.
"Aside from acquisitions, we expect Valspar to put cash and excess capital toward dividend increases and minimizing share creep," the analyst said. "However, we expect the company to keep most of its powder dry until it finds the right acquisitions." Specialty chemicals giant Rohm and Haas is also deleveraging. The company cut net debt from $2.39bn at the end of 2003 to $2.02bn by the end of 2004 for a debt/capital level of 35%. In late February, the company retired $400m in debt. Rohm and Haas continues to buy back stock, having authorized a $1bn share repurchase program last December. The company hiked its quarterly dividend by 14% to 25 cents
§Body parsers

§What is a body parser?

An HTTP PUT or POST request contains a body. This body can use any format, specified in the Content-Type request header. In Play, a body parser transforms this request body into a Scala value.

However, the request body for an HTTP request can be very large and a body parser can't just wait and load the whole data set into memory before parsing it. A BodyParser[A] is basically an Iteratee[Array[Byte],A], meaning that it receives chunks of bytes (as long as the web browser uploads some data) and computes a value of type A as result.

Let's consider some examples.

- A text body parser could accumulate chunks of bytes into a String, and give the computed String as result (Iteratee[Array[Byte],String]).
- A file body parser could store each chunk of bytes into a local file, and give a reference to the java.io.File as result (Iteratee[Array[Byte],File]).
- An S3 body parser could push each chunk of bytes to Amazon S3 and give the S3 object id as result (Iteratee[Array[Byte],S3ObjectId]).

Additionally, a body parser has access to the HTTP request headers before it starts parsing the request body, and has the opportunity to run some precondition checks. For example, a body parser can check that some HTTP headers are properly set, or that the user trying to upload a large file has the permission to do so.

Note: That's why a body parser is not really an Iteratee[Array[Byte],A] but more precisely an Iteratee[Array[Byte],Either[Result,A]], meaning that it has the opportunity to directly send an HTTP result itself (typically 400 BAD_REQUEST, 412 PRECONDITION_FAILED or 413 REQUEST_ENTITY_TOO_LARGE) if it decides that it is not able to compute a correct value for the request body.

Once the body parser finishes its job and gives back a value of type A, the corresponding Action function is executed and the computed body value is passed into the request.

§Default body parser: AnyContent

In our previous examples we never specified a body parser.
So how can it work? If you don't specify your own body parser, Play will use the default, which processes the body as an instance of play.api.mvc.AnyContent. This body parser checks the Content-Type header and decides what kind of body to process:

- text/plain: String
- application/json: JsValue
- application/xml, text/xml or application/XXX+xml: NodeSeq
- application/form-url-encoded: Map[String, Seq[String]]
- multipart/form-data: MultipartFormData[TemporaryFile]
- any other content type: RawBuffer

For example:

def save = Action { request =>
  val body: AnyContent = request.body
  val textBody: Option[String] = body.asText

  // Expecting text body
  textBody.map { text =>
    Ok("Got: " + text)
  }.getOrElse {
    BadRequest("Expecting text/plain request body")
  }
}

§Specifying a body parser

The body parsers available in Play are defined in play.api.mvc.BodyParsers.parse. So for example, to define an action expecting a text body (as in the previous example):

def save = Action(parse.text) { request =>
  Ok("Got: " + request.body)
}

Do you see how the code is simpler? This is because the parse.text body parser already sent a 400 BAD_REQUEST response if something went wrong. We don't have to check again in our action code, and we can safely assume that request.body contains the valid String body. Alternatively we can use:

def save = Action(parse.tolerantText) { request =>
  Ok("Got: " + request.body)
}

This one doesn't check the Content-Type header and always loads the request body as a String.

Tip: There is a tolerant fashion provided for all body parsers included in Play.

Here is another example, which will store the request body in a file:

def save = Action(parse.file(to = new File("/tmp/upload"))) { request =>
  Ok("Saved the request content to " + request.body)
}

§Max content length

Text-based body parsers (such as text, json, xml or formUrlEncoded) use a maximum content length because they have to load all of the content into memory. There is a default maximum content length (the default is 100KB), but you can also specify it inline:

// Accept only 10KB of data.
def save = Action(parse.text(maxLength = 1024 * 10)) { request =>
  Ok("Got: " + request.body)
}

Tip: The default content size can be defined in application.conf:

parsers.text.maxLength=128K

Unit sizes are defined in the "Size in bytes format" section of the Configuration page.

You can also wrap any body parser with maxLength:

// Accept only 10KB of data.
def save = Action(parse.maxLength(1024 * 10, storeInUserFile)) { request =>
  Ok("Saved the request content to " + request.body)
}

Next: Action composition
The ILOM command-line interface (CLI) enables you to use keyboard commands to configure and manage many ILOM features and functions. Any task that you can perform using the ILOM web interface has an equivalent ILOM CLI command. This chapter includes the following sections:

The ILOM command-line interface (CLI) is based on the Distributed Management Task Force specification, Server Management Command-Line Protocol Specification, version 11.0a.8 Draft (DMTF CLP). You can view the entire specification at the following site:

The DMTF CLP provides a standard management interface. The following table lists the various hierarchy methods you can use with the ILOM CLI, depending on the particular Sun server platform that you are using.

Service processors can access two namespaces: the /SP namespace and, for SPARC-based systems, the overall system namespace /SYS or /HOST. In the /SP namespace, you can manage and configure the service processor. In the /SYS or /HOST namespace, you can access other information for managed system hardware.

FIGURE 3-1 Example of the ILOM CLI /SP Target Tree

For information about user privilege levels, see Roles for ILOM User Accounts.

When using the ILOM CLI, information is entered in the following order:

Command syntax: <command> <options> <target> <properties>

The following sections include more information about each part of the syntax.

The ILOM CLI supports the DMTF CLP commands listed in the following table. CLI commands are case-sensitive.

The ILOM CLI supports the following options, but note that not every command supports every option. The help option can be used with any command.

Every object in your namespace is a target. Properties are the configurable attributes specific to each object.

To execute most commands, specify the location of the target and then enter the command. You can perform these actions individually, or you can combine them on the same command line.

1. Navigate to the namespace using the cd command. For example:

   cd /SP/services/http

2. Enter the command, target, and value. For example:

   set port=80

   or

   set prop1=x
   set prop2=y

Using the syntax <command> <target>=value, enter the command on a single command line. For example:

   set /SP/services/http port=80

   or

   set /SP/services/http prop1=x prop2=y

The following table provides an example and description of the individual and combined command execution methods.

This section describes how to log in to and log out of ILOM. You should first refer to Assign IP Addresses to the Sun Server Platform SP Interfaces to configure ILOM before logging in to the ILOM CLI. ILOM supports from 5 to 10 active sessions depending on your platform, including serial, SSH, and web interface sessions. Telnet connections to ILOM are not supported.

You can access the ILOM CLI remotely through a Secure Shell (SSH) or serial connection. Secure Shell connections are enabled by default. The following procedure shows an example using an SSH client on a UNIX system. Use an appropriate SSH client for your operating system. The default user name is root and the default password is changeme.

Follow these steps to log in to ILOM using the default enabled SSH connection:

1. Type this command to log in to ILOM:

   $ ssh root@ipaddress

   where ipaddress is the IP address of the server SP.

2. Type this password when prompted:

   Password: changeme

After you log in to ILOM using the default user name and password, you should change the ILOM root account password (changeme). For information about changing the root account password, see Change ILOM Root Account Password Using the CLI.

Follow this step to log out of ILOM:

Type this command to log out of ILOM:

   -> exit
Should programming be a required curriculum in public schools?

Wouldn't work (Score:5, Insightful)

I remember reading something a while back about how certain people's brains are just more geared towards programming and that other people simply won't really "get" it no matter how much you try to force it into their head. There should definitely be more programming classes available to those who want them, but if you're going to force the less tech-oriented students to take a tech class, it should be something that either teaches them general computer usage or helps them use computers in education. That's another issue with how computers are used in education: you have to strike the perfect balance between "using teaching time and resources to teach technology" and "using technology to educate". Getting the wrong mix ends up screwing both aspects over, since you end up with technology being forced into places it doesn't belong, or you end up with students who only know how to use a computer to do their homework.

Re:Wouldn't work (Score:5, Funny)

I had to take four semesters of foreign languages during high school and two in middle school. I failed all six. I'm an excellent computer programmer with over 25 years in the industry. If I had to fail foreign languages, everyone else can fail a couple of semesters of computer programming.

Re:Wouldn't work (Score:5, Interesting)

Pretty much the same agreement here. My teenage brain wasn't wired for going home and learning things I spent all day learning. So I didn't, and I failed a bunch of classes because I just rarely did homework. No biggie, because I did spend time reading programming books and other resources that just weren't available through traditional education. I'm sure there are loads of kids that are like I was. They get crappy to marginal grades in subjects "their brains aren't wired for." The classic underachiever. I guess the biggest problem is getting people to teach programming. I've taught a few students when I worked at a day treatment school, but would I go back to something like that (for that pay) when I could be building cool stuff instead (for that pay)? You'd probably just be best off leaving the kids to run their own show, as the adults will do nothing but hinder and control it.

Re:Wouldn't work (Score:5, Insightful)

"My teenage brain wasn't wired for going home and learning things I spent all day learning."

No teenage brain is. That is why parents also have to instill a reasonable work ethic and show them algorithms for reaching goals. If we all just passed stuff off that we weren't naturally good at as not being worth it, we would be inadequately prepared for life. I was not and am not naturally inclined towards math, but in my adulthood I went back to school, spent hours a day at it, and had tutors, and I finally got through basic calculus and even a little linear algebra. It was hard. And it was worth it.

Re: (Score:3, Interesting)

No teenage brain is. That is why ... people should pull their children out of these shitty one-size-fits-all schools.

Re: (Score:3)

Why is my parent modded flaimbait? He is perfectly right! Our school systems (not looking either at USA nor germany nor any other for that matter) fail in three ways: they can not teach a genious (bored bored bored), they can not teach the mediocre ... and most of all they can not socialize them to live/learn together. When I was young we kids considered school a children prison .. no one realy understood why we should spend 6 - 8 hours a day in school when you could read/learn the stuff in 1 hour. No one told/
Re: (Score:2) When I was in high school I took two years of IBM basic {electively, and I know 1980s was a long time ago} the computer courses offered at that school, now that my kid goes there, are Microsoft Office. I think even VB.Net would be better than what they have although I would prefer C or C++. Re: (Score:2) Somehow I got past the "typing and MS Office" class. I don't know what my life would've been if I was forced into those instead of being able to try out the cool stuff instead. Re: (Score:3) I'm completely disappointed with my kid's classes most of the things I loved about school aren't there anymore aside from the IBM Basic they don't do any lab experiments in their science classes either due to cost and insurance issues. I used to spend my last morning class {history} impatiently watching the clock because after lunch it was Computers and Science. If I were to take those courses as they are offered now I would probably fall asleep. Re: (Score:2) Posting to undo accidental down-mod. Re: (Score:2) Please no more required subjects (Score:2) I'm not convinced you can learn a language in school. I had to study 4 languages in high school (one of the many down sides to growing up a native speaker of a dying minority language). For years, never mind semesters. I always had decent grades for them too. Two of them are now missing. If you don't get to use a language every day, it just goes away, no matter how hard you studied it in school. I would have been better off taking crash course a month before, if I ever end up needing one of those languages Re: (Score:3, Insightful) Anyone who can learn how to read and write, and is capable of following a recipe for baking a cake, is capable of learning how to understand and write a simple program. Have you ever tried it? Have you ever tried to teach people programming? I do it for ten years and according to my observastions there are some people (about 75% of population) who will never be able to program. 
I'm convinced you'll observe the same if you try to teach a class Chinese, yet in China, everyone manages to learn it just fine. Start early and practice every day works for almost any skill. Re: (Score:3) Anyone who can pass home e Re: (Score:2) Re: (Score:2) Re: (Score:2) I've yet to see a compiler's output as pleasant as my mother's apple pie. When can I expect gcc to make me pie? Re: (Score:2) When can I expect gcc to make me pie? #include "mpi.h" #include #include int main( int argc, char *argv[] ) { int n, myid, numprocs, i; double PI25DT = 3.141592653589793238462643; double mypi, pi, h, sum, x; MPI_Init(&argc,&argv); MPI_Comm_size(MPI_COMM_WORLD,&numprocs); MPI_Comm_rank(MPI_COMM_WORLD,&myid); while (1) { Re: (Score:2) Oops - It didn't like the brackets around stdio.h or math.h. Oh, well - You get it. Re: (Score:2) Re: (Score:2) ...if you can pass home ec, you can learn programming... I have to disagree there. If you can follow a recipe, you can make cookies exactly as well as the person at the next station. If you can copy and paste, you can run a program exactly as well as the person at the next computer. But running an identical program is NOT programming, making identical cookies IS baking. If you can find me a job where they want the same program written every day for years, sign me up. I'll return the favor by finding you a place where they want the same burger cooked 1,000 t Re: (Score:2) It's a shame so many programmers insist bugs must exist. They spend more time coding in bugs than working on processes that eliminate them. Rather than Open Source programs, we should have Open Sou Re: (Score:2) Re: (Score:3) Re: (Score:2) Re: (Score:2) Re: (Score:2) I'm not talking about sanitizing inputs from a web form headed to an SQL database. If you're referring to the cheapest possible bidder that pays pennies for shit code, then obviously you'll get exactly what you pay for. You seem to think that that's the only option out there. 
It's not, and that's my point. Like I said, generic components that fit everyone's requirements are the hardest components to get right. I did not state, nor imply that they do not exist. You can infer, however, that there aren't that Re: (Score:3) And I believe everyone should take some form of home economics, and that it should not be a freshman class, but a senior one. One needs to be able to cook, clean one's living space, take care of one's clothes, handle one's personal finances, pay one's taxes, and do all sorts of other perso Re: (Score:2) And I believe everyone should take some form of home economics, and that it should not be a freshman class, but a senior one. Why not freshman? Why not even earlier? I learned how to do cleaning and cooking by the age of 8. Long before Home Ec was offered in my middle school (I took wood shop, instead, but I had already learned a lot of that before, too). (Unfortunately, liability issues have driven most "hands on" activities out of schools (and out of what parents teach their children).) Basic computer usage skills do make sense, but those developing the curriculum will have to be very responsive to industry changes, which makes it difficult for such an education to be terribly practical That depends on how it is taught. Unfortunately, at least in the US, these classes often end up being "how to MS Office", which was easy for the t Re: (Score:2) Because normally students only get one semester or one year of Home Ec, and since after that year it's most needed for real, it makes sense to teach it then put it into immediate practice. If one doesn't put it into practice then one tends to forget, and as freshmen, sophomores Re: (Score:2) Re: (Score:2) The real barrier to programming is the OCD disorder that compels the programmers to try the for loop 1000 times to see what happens. Yes, there are that many things you can try there that Re: (Score:2) This is true of all learned subjects. Temper your nihilism. 
Re: (Score:2) Re: (Score:3) There should definitely be more programming classes available to those who want them [my emphasis] You know I hate to be boring and complain about missing poll options ... Re: (Score:3) Agree 100%. Should a plethora of programming courses be required? No. Should they be available? Hell yes. Re: (Score:2) Incorporated option (Score:2) What about treating it like other "literacy" types? Many subjects' projects include the option or expectation of writing, speaking, mathematically analyzing, and/or graphically illustrating topics. Why shouldn't dedicated education in this modality be supplemented by incorporating it in the other classes? Mathematical and computer modelling is a huge educational and research tool. It'd be nice to see a bit more of that in our classrooms. Problem Solving (Score:5, Insightful) Re:Problem Solving (Score:5, Insightful) I think problem solving and logic are much more important than coding itself. Lots of kids have no interest in coding and it will just be another class they struggle in. Teach them how to solve a problem logically and they can apply it to lots of things in life. Coding can be used as examples to show how the logic flows through the process from beginning to end, but to try and force a bunch of kids to learn how to code variables, If Then statements, recursive loops, I'd be banging my head against a wall as a teacher. "Ah, but teaching them coding is a vehicle for teaching them logical problem solving," said the ex-math teacher in rebuttal. I used to use the same rationale for explaining to students why they needed the logical problem solving abilities they were learning in my math classes as much (if not more) than they needed the actual math techniques. I don't know how you would teach "logical problem solving" without some vehicle like math, programming, etc. Cheers, Dave Re: (Score:2) Flowcharts no longer exist? When did that happen? Re: (Score:2) Flowcharts no longer exist?
When did that happen? Sometimes I actually find it amusing when people criticize some unmentioned, tangential aspect of an assertion and then don't bother to connect what they've said or offer an alternative. In your case it's just sad. Cheers, Dave Re: (Score:2) Re: (Score:2) Flowcharts no longer exist? When did that happen? News to me Actually, flow charts are a form of programming. A good place to start. Something like a simple variant of Simulink or LabVIEW would even allow the computer to run the logic depicted in the diagram. No, but make it available as an option everywhere (Score:4, Insightful) Re: (Score:2) It should be the opposite - make calculus a mandatory course. Calculus is/was Required (Score:2) Just like calculus... don't make it required Obvious missing option -- ELECTIVES! (Score:5, Insightful) [x] No, it should be elective. Remember electives? Typing was elective when I went to HS. I took it specifically because I figured it would be useful for computing. Home ec, auto shop, languages, etc. All elective. IIRC, you were required to take at least one elective to broaden your horizons; but the choice was yours. I'm pretty sure my HS computer class (with 8-bit NECs!) was elective because there just weren't enough resources to make it required. That made for a teacher and students who were both motivated. The teacher was a consultant who only taught for that one hour. I wonder what he's up to these days. Re: (Score:2) Re: (Score:2) [x] No, it should be elective. Seconded. Just because computers are something "they'll use every day" doesn't mean programming should be a required course, any more than automotive repair should be. Or food prep. In the case of many public school attendees, Point-of-Sale and industrial deep fryer operation would be far more relevant to their future careers. Re:Obvious missing option -- ELECTIVES! (Score:4, Insightful) I would argue that food prep should be a required class.
When I was in school, everyone was required to take cooking, sewing, and wood and metal shop. Cooking is, or ought to be, a daily activity for most people; they certainly shouldn't count on a lifestyle that has somebody else preparing all their meals. A cooking class with a good emphasis on nutrition could do a lot to reduce widespread obesity. Re: (Score:3) I went to school in a third world country in 1982 and we had it then (BASIC on Commodore PETs) That rocks. BTW, my HS experience was in 1984 in Fairfax County, VA, USA. That was and still is considered one of the best public school systems in the US. At the time, most of the school was not air conditioned. One wing had AC and that's where the computers were. We had one classroom that seated about 30 students. There were perhaps a dozen of these NEC computers in there. The only other computers I was Re: (Score:2) Are CS courses really available in all districts? It's easier to assume that now when everybody has 3 computers falling out of their pockets. It definitely wasn't true when I was in school. There may still be some districts where the courses aren't available or there aren't enough slots for all who want to take them. When I was in school the tech was a bottleneck. Today the bottleneck is probably a shortage of qualified teachers. So. I don't think it's available as an elective to everybody. Certain Might be useful for screening.. (Score:3) Seriously, there are those who do have the aptitude, but they'd be going like 90 while their classmates are struggling with if .. then .. else I don't think high school is a place for screening courses, save that for the first year of college. "yes, your code works, but you put documentation in it, that will not do." Tough question... (Score:2) Better question, before or after algebra (Score:2) Algebra introduces the use of variables. Should students first see variables in a programming class or a math class? Re: (Score:2) I disagree.
Many people understand the concept of variables long before their introduction to Algebra. I remember Algebra being a course about factoring and moving variables within the equation. I am not going to deny that the skills gained in these exercises are useful; but, they should not be a barrier. I remember being in High School and not being permitted to take a computer class because I was not in Calculus. This didn't motivate me to study Calculus; further, computer classes should not be used as a ca Long term will spell doom (Score:5, Funny) Eventually some of those high school students will make their way through an MBA course which will lead to disaster. Let me put it this way - how much worse could your boss/team lead/manager screw things up if they thought they could code? Re: (Score:2) [X] No - it'll do more harm than good I'll have to agree with this. Programming should only be taught to people who have already managed to learn the basics by themselves, using whatever methods available to them. They are the ones that will benefit the most from being taught, having already proved that both motivation towards the subject and the required reasoning capability exist. Nowadays, there are plenty of self-learning resources available on the internet, both the tools and documentation are available mostly for free. The remaining obstacles for Re: (Score:2) Using Google? Seriously! I have come across many of the "using Google" type who cannot write a single line of code without using Google. Yes, seriously. I spend a surprising amount of time googling stuff for others (I don't magically know everything, either, even though they seem to think that I do). If they knew how to do it themselves, they would save (a) their time (because I usually can't respond immediately), (b) my time. Re: (Score:2) how much worse could your boss/team lead/manager screw things up if they thought they could code?
Maybe they would be better at their jobs if they knew a little more about software, and that it is not black magic, but an art that takes time and care to get good results. Re: (Score:3) Tough to say. It might be offset from the GOOD that would come if people would quit making SharePoint lists with text fields to hold numeric data. ... 1 10 11 12 2 3 kill me. Re: (Score:2) You're right, better leave them doing the same job with a convoluted Excel spreadsheet. Yes, as part of computer skills (Score:3) Should all kids be required to take a computer class? Yes. Several? Yes. This stuff is going to be a big part of their lifestyle for the next 70 years. It'd be idiotic not to give them more of a clue than they'll get browsing Facebook. What should those classes teach? - OS principles (files, folders, networks, security) - Basic hardware and architecture, printers, etc - Troubleshooting, debugging skills - Basics of software, which includes an intro to programming principles and practices (processes, procedures, loops, bugs, etc) Computing and data will be such a fundamental omnipresent part of ALL modern life that no one can call themselves educated without gaining some rudimentary understanding of how they work. That includes software, how it works, and how it's made. I Don't think it should be an elective (Score:2) Speaking as someone who had to do a "computer" subject at school back in the mid 80s, I will say that I learned a lot from that time that still applies today. We basically only learned BASIC, but I can't remember the PCs. I think they were Casio. Anyway, this subject was compulsory, even though it was not a core subject and was not part of final year exams. Most of us loved it, even though quite a few struggled with it. Like some others have already mentioned, programming does teach you logic. In my opinion and New Math (Score:2) Some of you lot here might be old enough to remember or have been victimized by "new math" [wikipedia.org].
Among other things, new math attempted to teach school kids various things that would be needed in order to be more computer savvy. I learned, among other things, basic set theory and the idea that numbers could be represented in other than base 10, including binary representation. But after I had learned programming I found at a rummage sale somewhere an older "new math" basic algebra book that contained various Start simple and young, slowly build up from there (Score:2) Logo - turtle graphics (Score:4, Informative) It should be taught as a tool (Score:2) Re: (Score:2) Why? (Score:2) Why on earth should kids learn to program? If you want to teach them programming, are you going to teach them every other profession too? Will they get flying lessons, in "pilot class"? Will they be taught architecture, so they can build their own house? I think it's nonsense. Re: (Score:2) Because no matter what profession they choose, they will work with computers. That's the way the world is going. Teaching them how to use those computers effectively will be a huge advantage to them. For example, most office workers spend their days doing the same repetitive tasks over and over again. Knowing how to write scripts to automate those tasks would be very useful. Teaching the basics of programming isn't the equivalent of training them to be developers, it's just showing them how to use the tools the Missing Option: There should be no public schools (Score:2) It's not the government's job to educate children, even less so when the funds used to do so are extorted essentially at gunpoint from people who don't even have children or whose children are not using the public schools. However, in the current terrible and violent public school system, it's beneficial to the kids to have programming courses available, at least those so inclined may get something useful in life out of it. The overall public school experience is detrimental and harmful though, to kids and YES!
(sort of) (Score:2) One or two good computer classes that would teach a bit of code but mainly a) how computers work (hint: NOT MAGIC, and NOT SENTIENT) and b) logic would be great. Re: (Score:2) One or two good computer classes that would teach a bit of code but mainly a) how computers work (hint: NOT MAGIC, and NOT SENTIENT) and b) logic would be great. Because people use computers every day, and thus should have at least a basic understanding of how they work, right? So... why isn't a basic Automotive Maintenance class mandated as well? Oh, and we should also mandate a basic electrical class, since we all use electrical devices every day. Oh, and a basic food prep class, and a basic parenting class, and a basic household/finance management class... You know, I started off going for sarcasm, but the more "joke" mandatory classes I think of, the more I think, Re: (Score:2) Many of those subjects *are* required in American high schools. Electricity and the principles of combustion engines/mechanical systems are covered in physics and chemistry. The math used to manage finances and in food recipes is covered by traditional math classes, even in elementary school. Biology covers the mechanisms behind producing a child, if not the parenting. There is value in understanding the basic concepts of computing, just as there is value in the basic concepts of chemistry, physics, biolo Require nothing (Score:2) Alternate Question (Score:2) Should wood shop be a required class? Welding? Auto shop? POS system operation? Schools are supposed to be places that teach kids to be well-rounded, fully functional adults, not how to be a good employee. Let the businesses pay for their own damn training*. * Not to say it shouldn't be offered as an elective, like wood shop or media production. Critical Thinking would be better (Score:2) No.
We've been lied to (Score:2) thro Re: (Score:2) I think that you can find crappy work environments in any field of employment, and Software is no exception. I've been coding professionally since the late 80s and have nearly always enjoyed my projects and work environment. Like art or science (Score:2) Most students won't be scientists, but science is required, in part to help students understand the basics of how science works. Most students won't be artists, nor can many of them succeed at being good artists, but many schools require at least some art or music, in part to help students have a basic understanding of this important part of our lives. Most students won't become programmers, but they should at least understand the basics of how you tell computers to do things. This understanding will help th Spooky (Score:2) It already is? (Score:2) Before high school? Really? (Score:3) The purpose of grades 1 to 8 is to teach kids how things are made and work, not how to design / engineer them. That's largely the purpose of college and university. So much to say on this topic (Score:4, Insightful) What are mandatory STEM subjects that nearly everyone else thinks are useless? math, chemistry & physics Now that we are adults (many of us are now parents also) we can see that all of it was important and we should have applied ourselves better in high school. We can say "all kids need to learn computing theory, programming etc. because it is important" all we want; but, look how it was when we were young. Ultimately, the lessons I take away as far as educating our kids: 1. We need to pay & respect teachers better to get better results in public education 2. For key STEM subjects, you must find a way to make it relevant to the student TODAY. 3. Parents must work every day to keep their kids motivated to learn. 4. Bottom line: kids need discipline and fun in their lives in equal measure. Don't beat it into them, motivate them & let them WANT to do it.
Are you out of your minds? (Score:3) No, no, a thousand times no. Would you require plumbing classes? Electrician classes? Carpentry? Auto repair? Accounting? The list goes on. You can't offer classes for every job specialty that there is out there. The logistics alone are unfeasible. Where will you hold these classes? Where will the equipment come from and who will maintain it? Who are you going to get to teach programming? These are fine things to offer as electives, but to require programming is idiotic. Re: (Score:2) Show me one person who wishes to do such tutoring - and I'll show you someone who can't competently explain any of it. My experience with programming classes in high school (yes, in the late '80s) differs from yours. Your statement is wrong. I've lived through the exception. Yes, most of the learning was self-taught, but those who didn't understand had it competently explained until they did. Re: (Score:2) In the 80s, lots of adults were fascinated... this enthused them to study alongside students. That happened. Today, adults are either specialists already or are disinterested. Specialists are, in my experience, today, unlikely to find a career teaching a mandated subject (likely to a bunch of less than enthusiastic high school students) a compelling occupation. Re: (Score:2) The specific teacher I'm thinking of had a PhD in education, so I'm not sure had he come later that he would have had the interest crushed from him. He enjoyed sparking the curiosity of stud Re: (Score:2) That description of measure theory re-affirms why I wanted to know more in the first place. I tried reading one book - and, while the subject material was interesting, the language used to explain the concepts felt rather opaque and this represented an uphill struggle - for me - as a non-specialist. I cheekily mentioned it - erm - because I'd love to find someone who has a deep understanding who wants to de-mystify the subject... 
My interest was piqued by the possibility that measure-theory might illumin Re: (Score:2) Isn't a function basically a representation of an algebraic formula? And I was always under the impression that exponents were de-facto elements of algebra... Re: (Score:2) Knowing how to program doesn't mean that you will be a good programmer, but it will be easier for you to spot a good programmer. Re: (Score:2) In any university-track school curriculum, calculus is a requirement, not an elective. Yea, but the problems are the following: 1) except for engineering and math students, very few students will *take* higher math classes in college 2) the first two weeks of college calculus covered more than the entire year of high-school calculus. It's slowed down so people who want to go to college, but don't want to take math, can keep up. 3) learning abstract logic through programming in an expressive language like Python or Ruby (or perhaps some domain-specific language) is much more applicable to day-to- Re: (Score:2) We already have many, you silly fox... Let's give your kind your own country, Re: (Score:2) I took a comp sci class and two programming classes in High School. There was talk of adding a third but it never happened while I was there. We learned Pascal in the final class and I suppose there might have been copying going on but I sure didn't know about it. Both of the teachers I had though were interested in the curriculum and had experience programming themselves. Actually now that I think about it there wouldn't have been a good way to copy paste any of the code as we were working on 8086's and 808
Distribution.TestSuite

Description

This module defines the detailed test suite interface which makes it possible to expose individual tests to Cabal or other test agents.

Synopsis

- newtype Options = Options [(String, String)]
- lookupOption :: Read r => String -> Options -> r
- class TestOptions t where
- data Test
- pure :: PureTestable p => p -> Test
- impure :: ImpureTestable i => i -> Test
- data Result
- class TestOptions t => ImpureTestable t where
- class TestOptions t => PureTestable t where

Example

The following terms are used carefully throughout this file:

- test interface - The interface provided by this module.
- test agent - A program used by package users to coordinate the running of tests and the reporting of their results.
- test framework - A package used by software authors to specify tests, such as QuickCheck or HUnit.

Test frameworks are obligated to supply, at least, instances of the TestOptions and ImpureTestable classes. It is preferred that test frameworks implement PureTestable whenever possible, so that test agents have an assurance that tests can be safely run in parallel.

Test agents that allow the user to specify options should avoid setting options not listed by the options method. Test agents should use check before running tests with non-default options. Test frameworks must implement a check function that attempts to parse the given options safely.

The packages cabal-test-hunit, cabal-test-quickcheck1, and cabal-test-quickcheck2 provide simple interfaces to these popular test frameworks. An example from cabal-test-quickcheck2 is shown below. A better implementation would eliminate the console output from QuickCheck's built-in runner and provide an instance of PureTestable instead of ImpureTestable.
import Control.Monad (liftM)
import Data.Maybe (catMaybes, fromJust, maybe)
import Data.Typeable (Typeable(..))
import qualified Distribution.TestSuite as Cabal
import System.Random (newStdGen, next, StdGen)
import qualified Test.QuickCheck as QC

data QCTest = forall prop. QC.Testable prop => QCTest String prop

test :: QC.Testable prop => String -> prop -> Cabal.Test
test n p = Cabal.impure $ QCTest n p

instance Cabal.TestOptions QCTest where
    name (QCTest n _) = n

    options _ =
        [ ("std-gen", typeOf (undefined :: String))
        , ("max-success", typeOf (undefined :: Int))
        , ("max-discard", typeOf (undefined :: Int))
        , ("size", typeOf (undefined :: Int))
        ]

    defaultOptions _ = do
        rng <- newStdGen
        return $ Cabal.Options $
            [ ("std-gen", show rng)
            , ("max-success", show $ QC.maxSuccess QC.stdArgs)
            , ("max-discard", show $ QC.maxDiscard QC.stdArgs)
            , ("size", show $ QC.maxSize QC.stdArgs)
            ]

    check t (Cabal.Options opts) = catMaybes
        [ maybeNothing "max-success" ([] :: [(Int, String)])
        , maybeNothing "max-discard" ([] :: [(Int, String)])
        , maybeNothing "size" ([] :: [(Int, String)])
        ]
        -- There is no need to check the parsability of "std-gen"
        -- because the Read instance for StdGen always succeeds.
      where
        maybeNothing n x =
            maybe Nothing
                  (\str -> if reads str == x then Just n else Nothing)
                  (lookup n opts)

instance Cabal.ImpureTestable QCTest where
    runM (QCTest _ prop) o = catch go (return . Cabal.Error . show)
      where
        go = do
            result <- QC.quickCheckWithResult args prop
            return $ case result of
                QC.Success {} -> Cabal.Pass
                QC.GaveUp {} -> Cabal.Fail $
                    "gave up after " ++ show (QC.numTests result) ++ " tests"
                QC.Failure {} -> Cabal.Fail $ QC.reason result
                QC.NoExpectedFailure {} -> Cabal.Fail "passed (expected failure)"
        args = QC.Args
            { QC.replay = Just ( Cabal.lookupOption "std-gen" o
                               , Cabal.lookupOption "size" o
                               )
            , QC.maxSuccess = Cabal.lookupOption "max-success" o
            , QC.maxDiscard = Cabal.lookupOption "max-discard" o
            , QC.maxSize = Cabal.lookupOption "size" o
            }

newtype Options

Options are provided to pass options to test runners, making tests reproducible. Each option is a (String, String) pair of the form (Name, Value). Use mappend to combine sets of Options; if the same option is given different values, the value from the left argument of mappend will be used.

Constructors

Instances

lookupOption :: Read r => String -> Options -> r

Read an option from the specified set of Options. It is an error to lookup an option that has not been specified. For this reason, test agents should mappend any Options against the defaultOptions for a test, so the default value specified by the test framework will be used for any otherwise-unspecified options.

class TestOptions t where

Methods

name :: t -> String

The name of the test.

options :: t -> [(String, TypeRep)]

A list of the options a test recognizes. The name and TypeRep are provided so that test agents can ensure that user-specified options are correctly typed.

defaultOptions :: t -> IO Options

The default options for a test. Test frameworks should provide a new random seed, if appropriate.

check :: t -> Options -> [String]

Try to parse the provided options. Return the names of unparsable options. This allows test agents to detect bad user-specified options.

Instances

Tests

data Test

Test is a wrapper for pure and impure tests so that lists containing arbitrary test types can be constructed.
Instances

pure :: PureTestable p => p -> Test

impure :: ImpureTestable i => i -> Test

class TestOptions t => ImpureTestable t where

Class abstracting impure tests. Test frameworks should implement this class only as a last resort for test types which actually require IO. In particular, tests that simply require pseudo-random number generation can be implemented as pure tests.

Methods

runM :: t -> Options -> IO Result

Instances

class TestOptions t => PureTestable t where

Class abstracting pure tests. Test frameworks should prefer to implement this class over ImpureTestable. A default instance exists so that any pure test can be lifted into an impure test; when lifted, any exceptions are automatically caught. Test agents that lift pure tests themselves must handle exceptions.
Writing Custom Laravel Artisan Commands

I've written console commands in many different languages, including Node.js, Golang, PHP, and straight up bash. In my experience, the Symfony console component is one of the best-built console libraries in existence—in any language. Laravel's artisan command line interface (CLI) extends Symfony's Console component, with some added conveniences and shortcuts. Follow along if you want to learn how to create some kick-butt custom commands for your Laravel applications.

Overview

Laravel ships with a bunch of commands that aim to make your life as a developer easier, from generating models, controllers, middleware, and test cases, to many other types of files for the framework.

The base Laravel framework Command extends the Symfony Command class. Without Laravel's console features, creating a Symfony console project is pretty straightforward:

#!/usr/bin/env php
<?php

// application.php

require __DIR__.'/vendor/autoload.php';

use Symfony\Component\Console\Application;

$application = new Application();

// ... register commands
$application->add(new GenerateAdminCommand());

$application->run();

You would benefit from going through the Symfony console component documentation, specifically creating a command. The Symfony console component handles all the pain of defining your CLI arguments, options, output, questions, prompts, and helpful information. Laravel is getting base functionality from the console component, and extends a beautiful abstraction layer that makes building consoles even more convenient.

Combine the Symfony console with the ability to create a shippable phar archive—like composer does—and you have a powerful command line tool at your disposal.

Setup

Now that you have a quick intro and background of the console in Laravel, let's walk through creating a custom command for Laravel. We'll build a console command that runs a health check against your Laravel application every minute to verify uptime.
I am not suggesting you ditch your uptime services, but I am suggesting that artisan makes it super easy to build a quick-and-dirty health monitor straight out of the box that we can use as a concrete example of a custom command.

An uptime checker is just one example of what you can do with your consoles. You can build developer-specific consoles that help developers be more productive in your application and production-ready commands that perform repetitive and automated jobs.

Alright, let's create a new Laravel project with the composer CLI. You can use the Laravel installer as well, but we'll use composer.

composer create-project laravel/laravel:~5.4 cli-demo
cd cli-demo/

# only link if you are using Laravel valet
valet link

composer require fabpot/goutte

Do you want to know what the beauty of that composer command was? You just used a project that relies on the Symfony console. I also required the Goutte HTTP client that we will use to verify uptime.

Registering the Command

Now that you have a new project, we will create a custom command and register it with the console. You can do so through a closure in the routes/console.php file, or by registering the command in the app/Console/Kernel.php file's protected $commands property. Think of the former as a Closure-based route and the latter as a controller. We will create a custom command class and register it with the Console's Kernel class.

Artisan has a built-in command to create a console class called make:command:

php artisan make:command HealthcheckCommand

This command creates a class in the app/Console/Commands/HealthcheckCommand.php file. If you open the file, you will see the $signature and the $description properties, and a handle() method that is the body of your console command. Adjust the file to have the following name and description:

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class HealthcheckCommand extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'healthcheck {url : The URL to check} {status=200 : The expected status code}';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'Runs an HTTP healthcheck to verify the endpoint is available';

    /**
     * Create a new command instance.
     *
     * @return void
     */
    public function __construct()
    {
        parent::__construct();
    }

    /**
     * Execute the console command.
     *
     * @return mixed
     */
    public function handle()
    {
        //
    }
}

Register the command in the app/Console/Kernel.php file:

protected $commands = [
    Commands\HealthcheckCommand::class,
];

If you run php artisan help healthcheck you should see something like the following:

Setting up the HTTP Client Service

You should aim to make your console commands "light" and defer to application services to accomplish your tasks. The artisan CLI has access to the service container to inject services, which will allow us to inject an HTTP client in the constructor of our command from a service.

In the app/Providers/AppServiceProvider.php file, add the following to the register method to create an HTTP service:

// app/Providers/AppServiceProvider.php
public function register()
{
    $this->app->singleton(\Goutte\Client::class, function ($app) {
        $client = new \Goutte\Client();
        $client->setClient(new \GuzzleHttp\Client([
            'timeout' => 20,
            'allow_redirects' => false,
        ]));

        return $client;
    });
}

We set up the Goutte HTTP crawler and set the underlying Guzzle client with a few options. We set a timeout (that you could make configurable) and we don't want to allow the client to follow redirects. We want to know the real status of an HTTP endpoint.

Next, update the HealthcheckCommand::__construct() method with the service you just defined. When Laravel constructs the console command, the dependency will be resolved out of the service container automatically:

use Goutte\Client;

// ...

/**
 * Create a new command instance.
 *
 * @return void
 */
public function __construct(Client $client)
{
    parent::__construct();

    $this->client = $client;
}

The Health Check Command Body

The last method in the HealthcheckCommand class is the handle() method, which is the body of the command. We will get the {url} argument and the status option to check that the URL returns the expected HTTP status code. Let's flesh out a simple command to verify a healthcheck:

/**
 * Execute the console command.
 *
 * @return mixed
 */
public function handle()
{
    try {
        $url = $this->getUrl();
        $expected = (int) $this->option('status');
        $crawler = $this->client->request('GET', $url);
        $status = $this->client->getResponse()->getStatus();
    } catch (\Exception $e) {
        $this->error("Healthcheck failed for $url with an exception");
        $this->error($e->getMessage());

        return 2;
    }

    if ($status !== $expected) {
        $this->error("Healthcheck failed for $url with a status of '$status' (expected '$expected')");

        return 1;
    }

    $this->info("Healthcheck passed for $url!");

    return 0;
}

private function getUrl()
{
    $url = $this->argument('url');

    if (! filter_var($url, FILTER_VALIDATE_URL)) {
        throw new \Exception("Invalid URL '$url'");
    }

    return $url;
}

First, we validate the URL argument and throw an exception if the URL isn't valid. Next, we make an HTTP request to the URL and compare the expected status code to the actual response. You could get even fancier with the HTTP client and crawl the page to verify status by checking for an HTML element, but we just check for an HTTP status code in this example. Feel free to play around with it on your own and expand on the healthcheck.

If an exception happens, we return a different status code for exceptions coming from the HTTP client. Finally, we return a 1 exit code if the HTTP status isn't valid.

Let's test out our command. If you recall, I linked my project with valet link:

$ php artisan healthcheck
Healthcheck passed!
$ php artisan healthcheck
Healthcheck failed with a status of '404' (expected '200')
$ echo $?
1

The healthcheck is working as expected. Note that the second command, which fails, returns an exit code of 1. In the next section, we'll learn how to run our command on a schedule, and we will force a failure by shutting down Valet.

Running Custom Commands on a Schedule

Now that we have a basic command, we are going to hook it up to the scheduler to monitor the status of an endpoint every minute. If you are new to Laravel, the Console Kernel allows you to run Artisan commands on a schedule with a nice fluent API. The scheduler runs every minute and checks to see if any commands need to run. Let's set up this command to run every minute:

protected function schedule(Schedule $schedule)
{
    $schedule->command(
        sprintf('healthcheck %s', url('/'))
    )
    ->everyMinute()
    ->appendOutputTo(storage_path('logs/healthcheck.log'));
}

In the schedule method, we are running the command every minute and sending the output to a storage/logs/healthcheck.log file so we can visually see the results of our commands. Take note that the scheduler has both an appendOutputTo() method and a sendOutputTo() method. The latter will overwrite the output every time the command runs, and the former will continue to append new items.

Before we run this, we need to adjust the URL. By default, the url('/') function will probably not return your Valet URL unless you've updated the .env file already. Let's do so now so we can fully test out the healthcheck against our app:

# .env file
APP_URL=http://cli-demo.dev

Running the Scheduler Manually

We are going to simulate running the scheduler on a cron that runs every minute with bash. Open a new tab so you can keep it in the foreground and run the following infinite while loop:

while true; do php artisan schedule:run; sleep 60; done

If you are watching the healthcheck.log file, you will start to see output like this every sixty seconds:

tail -f storage/logs/healthcheck.log

Healthcheck passed for http://cli-demo.dev!
Healthcheck passed for http://cli-demo.dev!
Healthcheck passed for http://cli-demo.dev!
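On a real server you would not leave a bash loop running in a terminal; the usual approach is to register the scheduler with cron so it fires every minute. A typical crontab entry looks like the following (the project path is a placeholder for your own deployment path):

```shell
# crontab -e
# Run the Laravel scheduler every minute; the scheduler itself decides
# which commands (like our healthcheck) are actually due to run.
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
```

The bash loop above is just a convenient stand-in for this cron entry during local development.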
If you are following along with Valet, let's shut it down so the scheduler fails. Shutting down the web server simulates an application being unreachable:

valet stop
Valet services have been stopped.

# from the healthcheck.log
Healthcheck failed for http://cli-demo.dev with an exception
cURL error 7: Failed to connect to cli-demo.dev port 80: Connection refused (see)

Next, let's bring our server back and remove the route so we can simulate an invalid status code:

valet start
Valet services have been started.

Next, comment out the route in routes/web.php:

// Route::get('/', function () {
//     return view('welcome');
// });

If you aren't running the scheduler, start it back up, and you should see an error message when the scheduler tries to check the status code:

Healthcheck failed for http://cli-demo.dev with a status of '404' (expected '200')

Don't forget to shut down the infinite scheduler tab with Ctrl + C!

Further Reading

Our command simply outputs the result of the healthcheck, but you could expand upon it by broadcasting a failure to Slack or logging it to the database. On your own, try to set up a notification when the healthcheck fails. Perhaps you can even provide logic that only alerts if three subsequent fails happen. Get creative!

We covered the basics of running your custom command, but the documentation has additional information we didn't cover. You can easily do things like prompt users with questions, render tables, and show a progress bar, to name a few. I also recommend that you experiment with the Symfony console component directly. It's easy to set up your own CLI project with minimal composer dependencies. The documentation provides knowledge that will also apply to your artisan commands, for example, when you need to customize things like hiding a command from the command list.

Conclusion

When you need custom console commands, Laravel's artisan CLI provides nice features that make it a breeze to write your own CLI.
You have access to the service container and can create command-line versions of your existing services. I've built CLI tools for things like helping me debug 3rd party APIs, providing formatted details about a record in the database, and performing cache busting on a CDN.

Filed in: Laravel Tutorials
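The "only alert after three consecutive failures" idea suggested in the Further Reading section can also be done outside the application, since the command already reports its result through exit codes. Here is one possible sketch as a small POSIX shell function — the state-file location and the threshold of three are arbitrary choices for illustration:

```shell
#!/bin/sh
# Sketch: run a healthcheck command and print an alert only after
# three consecutive failures. A success resets the counter.
# STATE_FILE holds the current consecutive-failure count.
STATE_FILE="${STATE_FILE:-/tmp/healthcheck_failures}"

check_and_alert() {
    # "$@" is the healthcheck command, e.g.:
    #   check_and_alert php artisan healthcheck http://example.test
    if "$@"; then
        echo 0 > "$STATE_FILE"              # success resets the counter
    else
        prev="$(cat "$STATE_FILE" 2>/dev/null)"
        n=$(( ${prev:-0} + 1 ))
        echo "$n" > "$STATE_FILE"
        if [ "$n" -ge 3 ]; then
            # replace this echo with a Slack webhook, email, etc.
            echo "ALERT: healthcheck failed $n times in a row"
        fi
    fi
}
```

You would then point cron (or the Laravel scheduler's exec) at the wrapper instead of the bare artisan command.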
https://laravel-news.com/custom-artisan-commands
CC-MAIN-2019-13
refinedweb
1,990
53.81
C++ Sqaure Class
[code]#include <iostream> #include <cmath> #include "Square.cc" class Square { private: int len...

C++ Sqaure Class
bump

C++ Sqaure Class
Hi, I'm having trouble with this square class. I want to write a C++ class Square whose objects are ...

Prime Numbers (Two dimensional array using pointers)
I have started from scratch as my first code was very hard to follow. This is the new code: [code]#i...

Prime Numbers (Two dimensional array using pointers)
Yes the assigment requires a program that "reads from the keyboard the number of rows and columns of...
http://www.cplusplus.com/user/theoneeyedsnake/
CC-MAIN-2014-15
refinedweb
101
76.42
This weekend the qualification round of the Facebook Hacker Cup 2013 took place. Below you'll find the first problem and my solution.

Sample Input

5
ABbCcc
Good luck in the Facebook Hacker Cup this year!
Ignore punctuation, please 🙂
Sometimes test cases are hard to make up.
So I just go consult Professor Dalves

Sample Output
https://www.programminglogic.com/facebook-hacker-cup-2013-beautiful-strings-solution/
CC-MAIN-2020-16
refinedweb
229
63.02
Availability: Tk. The turtle module provides turtle graphics primitives, in both an object-oriented and procedure-oriented ways. Because it uses Tkinter for the underlying graphics, it needs a version of python installed with Tk support. The procedural interface uses a pen and a canvas which are automagically created when any of the functions are called. The turtle module defines the following functions: fill(1)before drawing a path you want to fill, and call fill(0)when you finish to draw the path. If extent is not a full circle, one endpoint of the arc is the current pen position. The arc is drawn in a counter clockwise direction if radius is positive, otherwise in a clockwise direction. In the process, the direction of the turtle is changed by the amount of the extent. This module also does from math import *, so see the documentation for the math module for additional constants and functions useful for turtle graphics. For examples, see the code of the demo() function. This module defines the following classes:
http://docs.python.org/release/2.4.4/lib/module-turtle.html
crawl-003
refinedweb
174
54.42
I just installed Visual Studio 2015 Preview on a Windows 10 tech. preview, the first thing I wanted to try out was C# 6.0. The are a lot of videos and posts on internet about the new features of C# 6.0. The goal of this post is to test them by myself and share my impressions. I will use this post in the future as a personal reference for C# 6.0 new features. C# 6.0 new feautures This is the list of new features I have collected so far: - Getter only auto properties - Auto-Initialization of properties - Calculated properties with body expressions - Null conditional operators ?. - Operator nameof() - Strings projections - Exceptions filters - Await in catch and finally - Index initializers I wrote a small sample class that I will use as an example to test the new features. This is the C# 5 sample class: using System; namespace ConsoleApplication1 { public class Person { public Person(string name, int age) { if (name == null) { throw new NullReferenceException("name"); } Name = name; Age = age; Id = GetNewId(); } public string Name { get; set; } public int Id { get; private set; } public int Age { get; set; } public event EventHandler Call; protected virtual void OnCall() { EventHandler handler = Call; if (handler != null) handler(this, EventArgs.Empty); } public int GetYearOfBirth() { return DateTime.Now.Year - Age; } public JObject ToJSon() { var personAsJson = new JObject(); personAsJson["id"] = this.Id; personAsJson["name"] = this.Name; personAsJson["age"] = this.Age; return personAsJson; } #region Get new person id private static int lastId; private static int GetNewId() { lastId++; Logger.WriteInfo(string.Format("New Id = {0}", lastId)); return lastId; } #endregion } } Applying C# 6.0 new features to a C# 5.0 class: Now lets go throw the list of new features and apply them to our sample class: Getter only auto properties We can remove “private set” from read-only properties, properties with only a “setter” can be initialized only from the constructor or with 
auto-initialization. public int Id { get; } Auto-Initialization of properties Using auto initialization we can auto initialize the Id property calling a method. It is also possible to set a default value to editable properties: public int Id { get; } = GetNewId(); public int Age { get; set; } = 18; Calculated properties with body expressions It is common to have a lot of single line calculated properties or methods on our code. In lambda expressions it was already possible to write only the value to return. Now this is also possible in normal methods: Our “GetYearOfBirth” method can be re-write like this: public int GetYearOfBirth => DateTime.Now.Year - Age; Note that if the method has not parameters we can also avoid the parentheses.
https://softwarejuancarlos.com/2014/11/
CC-MAIN-2020-05
refinedweb
436
53.51
Does anyone know the differance and what should be used when? If someone has a 'rule of thumb' I would love to hear it. James Does anyone know the differance and what should be used when? If someone has a 'rule of thumb' I would love to hear it. James I don't know - but it may have something to do with the way they're compiled, and .h files may be compiled in C syntax and not C++. I use hpp to be on the safe side. You should know that in C++ you must declare functions before you use them. You must 'declare' all functions before the main() fxn otherwise you will get an error. ////////////////////////////////////////////////////////////////// int myfxn(int k, int y); //This is considered a 'declaration' since it doesn't have any '{}' blocks and thus does nothing yet. void main() { //Here's our main fxn that starts our program ...; } int myfxn(int k, int y) //Down here is where we 'define' our function and tell it what to do { ...; } ////////////////////////////////////////////////////////////////////////// The Header files (.h) simply hold the declarations, that way you don't have to put them in your .CPP file. Instead you just #include "name.h" them into your .cpp file. The header files hold the declarations for the functions you use in the .cpp file. However, their real value becomes evident when you get into Classes and other advanced C++ topics. For now, the answer to your question is the header files hold the 'declarations' of the functions or Classes you use so you could put our declaration of myfxn(int k, int y); into a file and give it a .h extension and that is our header file, then #include it above the main() fxn. .cpp files hold the Definitions of our functions and Classes. 
//////////////////////////////////////// Here's our new code////////// #include "somename.h" void main() { //Here's our main fxn that starts our program ...; } int myfxn(int k, int y) //Down here is where we 'define' our function and tell it what to do { ...; } My Avatar says: "Stay in School" Rocco is the Boy! "SHUT YOUR LIPS..." At the moment I am putting everything into header files, but after reading what you've written I think I might change a few around and see what happens Just put the class declarations in the header files. Than define the methods of these classes in separate .cpp files. In the main program include the header files but not the .cpp files because the cpp's are compiled along with main as long as you are using a project workspace. I have no idea what to do in Linux, I think you need a make file or some god aweful thing. Now to ask a question that some people may find is a bit simple: Because I have been using headers and only a main.cpp I have forgotten how to use a second .cpp file. Can someone please remind me of what I need to do in the .cpp? James You include it in the same way as a header file. In the main.cpp file add #include "yourfile.cpp" . In that .cpp file usually goes the implementation of whatever you've declared in the header, like e.g. the definitions of member functions. Seron I did programming work this summer (and over the holidays) on a major software project being designed in C++, and this was their convention: All class declarations (including declarations of member variables and functions) and global functions were to be located in header files, along with the documentation. Each class in most cases was to have its own individual header file. Each header file then had its own .cc file of the same name, which contained the implementations of all the functions declared in the header file. Only .h files were #included by other files. 
The .cc files were compiled individually by the person who created the code and stored as makefiles, which were automatically added to the final executable when included by a .cc file with a main() function. However, different compilers and environments deal with multiple .cc files in different ways. Thanks for all the comments people, I'm rearranging my program as we all talk
https://cboard.cprogramming.com/cplusplus-programming/9010-differance-between-headers-cpp-files.html
CC-MAIN-2017-47
refinedweb
705
82.65