The KerasLMU team is happy to announce the release of KerasLMU 0.3.0.
What is KerasLMU?
KerasLMU is a Keras-based implementation of Legendre Memory Units, a novel memory cell for recurrent neural networks that dynamically maintains information across long windows of time using relatively few resources. It has been shown to perform better than standard LSTM or other RNN-based models in a variety of tasks, generally with fewer internal parameters (see the paper for more details).
How do I use it?
KerasLMU is built using the standard Keras RNN API. If you have a model containing an RNN layer, such as tf.keras.layers.LSTM(...) or tf.keras.layers.RNN(tf.keras.layers.LSTMCell(...)), it can be swapped with keras_lmu.LMU(...) or tf.keras.layers.RNN(keras_lmu.LMUCell(...)).
More information on the available parameters and configuration options can be found in the documentation.
What’s new?
Previous users will note that the name of the package has changed from “NengoLMU” to “KerasLMU” (as this better reflects the Keras-based implementation of this package, and we have plans to make a separate Nengo-based implementation in the future). Along with this name change the package/module name has changed: instead of doing pip install lmu and import lmu, you now do pip install keras-lmu and import keras_lmu.
In addition, we have significantly reworked the LMU API, and introduced a number of new features. Some of the most significant changes are:
- Removed a bunch of elements that we haven’t really found to be that useful in our experimentation with LMUs, like individual trainable/initializer arguments for each connection within the LMU, or the LMUCellODE class (although we have plans to implement an improved method for optimizing the memory component in the future).
- Added the ability to selectively enable/disable connections within the LMU between the hidden and memory components. Note that these default to disabled, so the new default LMU will be a trimmed-down version of the old defaults.
- Added support for multi-dimensional memories.
- Added support for arbitrary hidden components (anything that implements the Keras RNNCell API can be used, e.g.
tf.keras.layers.LSTMCell).
How do I get it?
To install KerasLMU, we recommend using
pip:
pip install keras-lmu
More detailed installation instructions can be found here.
Where can I learn more?
Where can I get help?
You’re already there! If you have an issue upgrading or have any other questions, please post them in this forum.
Source: https://forum.nengo.ai/t/keraslmu-0-3-0-released/1452
Description
Guidelines
This is an individual lab assignment. You must do the vast majority of the work on your own. It is permissible to consult with classmates to ask general questions about the assignment, to help discover and fix specific bugs, and to talk about high level approaches in general terms. It is not permissible to give or receive answers or solution details from fellow students.
You may research online for additional resources; however, you may not use code that was written specifically to solve the problem you have been given, and you may not have anyone else help you write the code or solve the problem. You may use code snippets found online, provided that they are appropriately and clearly cited within your submitted code.
*By submitting this assignment, you agree that you have followed the above guidelines regarding collaboration and research.*
### Policies
Commits
Getting Help and Extensions
Please follow the debugging guidelines outlined [here](). We will try to answer questions and provide help within 24 hours of your request. If you do not receive a response in 24 hours, please send the request again.
Although we will answer questions, provide clarification, and give general help where possible up until the deadline, we will not help you debug specific code within 24 hours of the deadline. We will not provide any help after the deadline.
Extensions
If an extension is requested more than 24 hours before the deadline, it may be granted, depending on the reason for the requested extension. No extension will be granted, regardless of the reason, within 24 hours of the deadline.
***
The goal of Lab 6 is to build a Binary Search Tree, work with templates a bit more, and develop tree-based algorithms such as traversal and balancing.
In the last lab we got a list of habitable planets in our universe. The list can be pretty large, so we want to be able to find specific planets quickly. We are going to build a templated binary search tree so we can store and search for specific Planets. We’ll start by testing our BST with integers to make things a bit easier.
(Part A must be completed in lab)
## Part A: Creating a BST Class
Write your own simple templated Binary Search Tree (BST) C++ class that includes the insert(), find(), and remove() methods, as well as traversal and utility methods. We can start with the following public interface:
* `BSTree()`
* `bool empty()`
* `true` if the tree is empty
* `false` if it is not
* `bool insert(T val)`
* Returns `true` if the value was inserted
* `false` if the value was already in the tree
* INSERT ALGORITHM: To insert a new node, start at the root and move down left or right, following the appropriate pointers, until you get to the appropriate place at the bottom. Then create a new BSTree<T>::Node and set the appropriate pointer to the new node.
* `bool find(T val)`
* `true` if the value is in the tree
* `false` if the value is not in the tree
* FIND ALGORITHM: Find works the same as insert, except you do not create a new node. Instead you return a boolean value when you find the value or a null branch.
* Use one class for the entire BST, which contains a single pointer to a Node, which is the root of the tree.
You’ll also have a nested helper C++ class, Node, which contains a templated data member to store the data item, and three pointers to other Nodes, one for the left subtree, one for the right subtree, and one for the parent.
* You are going to create the Node class as an internal class to the Tree. This MUST be the first thing in your BSTree class. You declare it just like a normal class, except it is kept as a private data member:
```c++
template <class T>
class BSTree{
private:
    class Node{
    public:
        …
```
* If `Node` is private, it can only be accessed from within your `BSTree` class.
* More info about [nested classes]()
* For the templated nested class, `Node`, you will need to qualify the type with the keyword, `typename`. This is due to the ambiguity of whether you are referring to a class or a namespace. Read the following for more [information]().
* You must complete your templated implementations of your `insert` and `find` in lab. Make sure everything compiles before moving on.
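If it helps to see the walk concretely, here is a minimal sketch of the insert/find algorithms described above. The member names and layout here are illustrative assumptions, not the required solution — adapt them to your own class design.

```c++
// Illustrative sketch only -- not the required solution layout.
template <class T>
class BSTree {
private:
    class Node {
    public:
        T data;
        Node *left, *right, *parent;
        Node(T val) : data(val), left(nullptr), right(nullptr), parent(nullptr) {}
    };
    Node *root;
public:
    BSTree() : root(nullptr) {}
    bool empty() { return root == nullptr; }

    bool insert(T val) {
        Node **cur = &root;      // pointer to the link we may rewrite
        Node *par = nullptr;
        while (*cur) {           // walk down until we hit a null branch
            if (val == (*cur)->data) return false;  // duplicate: reject
            par = *cur;
            cur = (val < (*cur)->data) ? &(*cur)->left : &(*cur)->right;
        }
        *cur = new Node(val);    // attach the new node at the null branch
        (*cur)->parent = par;
        return true;
    }

    bool find(T val) {
        Node *cur = root;        // same walk as insert, but no node is created
        while (cur) {
            if (val == cur->data) return true;
            cur = (val < cur->data) ? cur->left : cur->right;
        }
        return false;
    }
};
```

Note how `find` is the same descent loop as `insert`, minus the node creation.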
__Show your TA your code.__
__–END OF IN LAB REQUIRED WORK–__
_You may continue to work on the remainder of the lab on your own time or in lab_
## Part B: Implementing Remove and Traversal
Now we are going to extend the templated BST with remove and traversal. Your BST must have the following public interface:
* `BSTree()`
* `BSTree(const BSTree &old_tree)`
* Performs a deep copy of a `BSTree` object using preorder traversal
* `~BSTree()`
* Removes all allocated memory in a `BSTree` using postorder traversal
* `bool empty()`
* `true` if the tree is empty
* `false` if it is not
* `bool insert(T val)`
* Returns `true` if the value was inserted
* `false` if the value was already in the tree
* `bool find(T val)`
* `true` if the value is in the tree
* `false` if the value is not in the tree
* `void sortedArray(vector<T> &list)`
* Takes a `vector` reference, and fills the `vector` with the tree values in sorted order
* `bool remove(T val)`
* Takes a value and removes that value from the tree if found
* Returns `true` if the value was removed, `false` if the value is not in the tree.
You will need to implement the following algorithms. I recommend implementing them as separate private methods that you call in each of the associated public methods.
* InOrder Traversal for sortedArray
* PostOrder Traversal for tree deletion
* PreOrder Traversal for tree deep copy
## Part C – EXTRA CREDIT: Balancing Your Tree
There are many different ways to balance a binary tree. The simplest method (but not necessarily the most efficient) is to read the tree out into a sorted list, delete all values from the tree, and then remake the tree by recursively dividing the array in half and inserting the center value of each half back into the tree. Here is the basic algorithm:
1) Get the middle of the array and make it the root.
2) Recursively do the same for the left half and right half:
   * Get the middle of the left half and make it the left child of the root created in step 1.
   * Get the middle of the right half and make it the right child of the root created in step 1.
   * Continue logically dividing the array in half until it is empty.
* :bulb: You do not need to actually break the array into two arrays. Just keep track of the start and end with indices.
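The recursive middle-insertion step above can be sketched as a standalone helper. The names here are assumptions (your `balance()` would first call `sortedArray`, clear the tree, then run something like this):

```c++
#include <vector>

// Insert the middle element of list[start..end] into the tree, then recurse
// on each half; the middle of each slice becomes the root of its subtree.
template <class Tree, class T>
void insertBalanced(Tree &tree, const std::vector<T> &list, int start, int end) {
    if (start > end) return;                     // empty slice: done
    int mid = (start + end) / 2;
    tree.insert(list[mid]);                      // middle value goes in first
    insertBalanced(tree, list, start, mid - 1);  // left half
    insertBalanced(tree, list, mid + 1, end);    // right half
}
```

Because each slice's middle is inserted before either half, every subtree root lands above its children, giving a height of roughly log2(n).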
For part C you must have a method that balances your tree object. You may use another algorithm if you choose. The only requirement for your balance method is that it must have this interface:
```c++
void balance();
```
In order to test your tree, you will need to have an additional method, height, which returns the height of your tree. Add the following method to your `BSTree` class:
```c++
template <class T>
int BSTree<T>::height(){
    return findHeight(this->root);
}

template <class T>
int BSTree<T>::findHeight(Node* node){
    // base case: tree is empty
    if(node == NULL)
        return 0;
    // otherwise height = 1 + the larger of the left and right subtree heights
    int lh = findHeight(node->left);
    int rh = findHeight(node->right);
    return 1 + ((lh >= rh) ? lh : rh);
}
```
You may have to make some modifications to work with your tree, but you should not change the basic algorithm.
__You must add a blank file to your repo called EXTRA in order for the TAs to give you credit for the extra credit__
__Use the command `touch EXTRA` to create this file__
## Part D: Code Organization and Submission
* Required code organization:
* lab6.cpp (driver code – You must include this file in your submission)
* BSTree.h
* makefile
* executable should be called: lab6
* You must have the following targets in your makefile:
* `all` – only compiles your source code using separate compilation for each .cpp file
* `clean` – removes all object files and binary executables
* `run` – compiles if necessary and runs your executable
* `memcheck` – compiles your source if necessary, then runs your executable with valgrind
* EXTRA (optional)
Below is just a reminder of the commands you should use to submit your code. If you cannot remember the exact process, please review lab 1.
*These commands all presume that your current working directory is within the directory tracked by `git`.*
You will need to do the following when your submission is ready for grading.
```shell
git commit -am "final commit"
git push
```
To complete your submission, you must copy and paste the commit hash into MyCourses. Go to MyCourses, select CS240, and then assignments. Select Lab 6, and where it says text submission, paste your commit hash. You can get your latest commit hash with the following command:
```shell
git rev-parse HEAD
```
:warning: Remember, you __MUST__ make a submission on MyCourses before the deadline to be considered on time.
Source: https://edulissy.com/shop/solved/lab-6-binary-search-tree-solution-2/
form_field_opts(3X) form_field_opts(3X)
set_field_opts, field_opts_on, field_opts_off, field_opts - set and get field options
#include <form.h>
int set_field_opts(FIELD *field, Field_Options opts);
int field_opts_on(FIELD *field, Field_Options opts);
int field_opts_off(FIELD *field, Field_Options opts);

Field standard options are defined (all are on by default):

O_ACTIVE    The field is visited during processing. If this option is off, the field will not be reachable by navigation keys. Please notice that an invisible field appears to be inactive also.
O_AUTOSKIP  Skip to the next field when this one fills.
O_BLANK     The field is cleared whenever a character is entered at the first position.
O_EDIT      The field can be edited.
O_NULLOK    Allow a blank field.
O_PASSOK    Validate field only if modified by user.
O_PUBLIC    The field contents are displayed as data is entered.
O_STATIC    Field buffers are fixed to field's original size. Turn this option off to create a dynamic field.
O_VISIBLE   The field is displayed. If this option is off, display of the field is suppressed.

Except for field_opts, each routine returns one of the following:

E_OK             The routine succeeded.
E_BAD_ARGUMENT   Routine detected an incorrect or out-of-range argument.
E_CURRENT        The field is the current field.
E_SYSTEM_ERROR   System error occurred (see errno).
curses(3X), form(3X). form_field_just_opts(3X)
Source: http://man7.org/linux/man-pages/man3/form_field_opts.3x.html
Wfmesh
(Latest revision as of 11:06, 15 January 2012)
DESCRIPTION
This script will create an object for any Wavefront (.OBJ) mesh file. This is a way to extend the number of objects you can use. Also, you have more control over the coloring, transformations, etc. than with the CGOs. Although there are a number of these .obj files on the web, you can also easily create them with open source tools (OpenFX, Crossroads3D). It takes literally 2 minutes to get an object created and then loaded into PyMOL. Simply open OpenFX Designer, click File->Insert->Model, then choose any of the models (or create your own of course!), then export it as a .3ds file. Then open the .3ds file from Crossroads3D and export as Wavefront OBJ.
- createWFMesh - create a mesh object from a Wavefront (*.obj) formatted file
IMAGES
SETUP
Simply "import wfmesh"
NOTES / STATUS
- Tested on PyMOL v0.97, Windows platform; should work on Linux as well.
- Coloring is fixed for grey and sections of mesh are stored, but not used.
- Simple opengl calls; not optimized (display lists, etc) or anything.
- Vertex Normal code is broken, so normals are per polygon right now.
- Post problems in the discussion page, on 'my talk' page or just email me : dwkulp@mail.med.upenn.edu
EXAMPLES
import wfmesh
cd /home/tlinnet/Software/pymol/Pymol-script-repo/files_for_examples
wfmesh.createWFObj('torus.obj','Torus',flip=1)  # flip=1 if OBJ created by the OpenFX/Crossroads3D combination
wfmesh.createWFObj("torus.obj","Torus2",translate=[5,5,0],flip=1)
wfmesh.createWFObj("ship.obj","Ship")
Source: https://pymolwiki.org/index.php?title=Wfmesh&diff=10426&oldid=10104
Are you a data science and machine learning enthusiast? Then you may know NumPy, the scientific computing tool for N-dimensional arrays that gives Python processing speed comparable to FORTRAN and C. It can do various things, like converting an array to an image or an array to a list. Similarly, the numpy.gradient() method is a more advanced tool, used even at the level of neural networks.
What is a Gradient in Layman Language?
In simple mathematics, the gradient is the slope of a graph: the tangent of the angle of the line connecting two points in 2D, or of a plane in 3D. In scientific terms, the gradient of a function points in the direction of its greatest increase or decrease, and is calculated from the partial derivatives at each point of the function.
In NumPy-based machine learning we typically use the gradient for gradient descent, shifting a function's parameters in the direction of the negative gradient in order to minimize the function.
What Numpy Gradient is?
As per numpy.org, numpy.gradient computes the gradient using second-order accurate central differences in the interior points and either first- or second-order accurate one-sided (forward or backward) differences at the boundaries. Informally: it estimates, at every point, how fast the values are changing along each axis.
For example, consider going from the peak of a hill to its foothill. But with a condition that you are blindfolded and only can know things like your present height and distance traveled. It will take various steps to come down and by checking the inputs given by equipment and in the end binding the best way of descent to the bottom. Similarly is the working of gradient descent in NumPy
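The blindfolded-descent loop in that analogy can be sketched in a few lines of Python. This is an illustrative toy, not part of numpy.gradient itself; the function, learning rate, and step count are made up for the example:

```python
import numpy as np

def descend(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient, like walking downhill."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)   # negative gradient = direction of descent
    return x

# Toy example: f(x) = (x - 3)^2 has gradient 2*(x - 3) and its minimum at x = 3.
x_min = descend(lambda x: 2 * (x - 3), x0=0.0)
```

Each step here shrinks the distance to the minimum by a constant factor, so `x_min` ends up very close to 3.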
Syntax to be used
numpy.gradient(f, *varargs, axis=None, edge_order=1)
This takes several parameters, but you do not have to pass them all: you can simply write numpy.gradient(f), where f is a single array (or you can pass several arrays).
Going for the Parameters :
Array: f
The array (or arrays) of numbers whose gradient is to be computed.
varargs (variable arguments)
Spacing between the array values. The default is unitary spacing for all dimensions. Spacing can be specified using a single scalar for all dimensions, one scalar per dimension, one array of coordinates per dimension, or a combination of scalars and arrays.
If axis is given, the number of varargs must equal the number of axes. Default: 1.
axis
It can be None, an int, or a tuple of ints. It sets the direction along which the gradient is calculated: 0 for row-wise and 1 for column-wise. None (the default) calculates the gradient along all axes. The axis may be negative, in which case it counts from the last to the first axis.
edge_order
It can be 1 or 2, and controls how the boundaries are handled: the gradient is calculated using N-th order accurate differences at the boundaries. Default: 1.
Return Value
It returns an N-dimensional array or a list of N-dimensional arrays: one ndarray per dimension of the input, corresponding to the derivatives of the array with respect to each axis. Each derivative has the same shape as the input array.
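As a quick illustration of that return shape (the array here is made up just for checking):

```python
import numpy as np

a = np.arange(12, dtype=float).reshape(3, 4)  # values 0..11 in a 3x4 grid
grads = np.gradient(a)   # one gradient array per axis, each shaped (3, 4)
# Values step by 4 down the rows and by 1 across the columns,
# so both gradient arrays are constant.
```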
Examples to understand the use
Example:
import numpy as np
f = np.array([2,4,5,6,7,8], dtype=float)
np.gradient(f)
# array([2. , 1.5, 1. , 1. , 1. , 1. ])
np.gradient(f, 2)
# array([1.  , 0.75, 0.5 , 0.5 , 0.5 , 0.5 ])
The second call uses a spacing of 2, hence the halved result.
Similarly we can use it for multiple arrays
array1 = np.array([1,2,4,5,7], dtype=float)
array2 = np.array([2,3,4,7,8], dtype=float)
np.gradient(array1, array2)
# array([1.        , 1.5       , 1.58333333, 1.58333333, 2.        ])
And N-dimensional array as well. Consequently, this returns the same number of arrays as the dimensions with the same dimensions
array_2d = np.array([[11,22,33],[14,15,16]], dtype=float)
np.gradient(array_2d)
# [array([[  3.,  -7., -17.],
#         [  3.,  -7., -17.]]),
#  array([[ 11.,  11.,  11.],
#         [  1.,   1.,   1.]])]
The spacing can be made uniform using fixed coordinate values:
x = [1,2,3,4,5,6]
f
# array([2., 4., 5., 6., 7., 8.])
np.gradient(f, x)
# array([2. , 1.5, 1. , 1. , 1. , 1. ])
Or non uniform:
y = np.array([1.3,2.2,3.4,4.2,5.1,6.2], dtype=float)
np.gradient(f, y)
# array([2.22222222, 1.62698413, 1.08333333, 1.18464052, 1.02020202, 0.90909091])
We can fix the axis in which the gradient is calculated
np.gradient(np.array([[11,23,34,45],[22,33,44,55]], dtype=float), axis=0)
# array([[11., 10., 10., 10.],
#        [11., 10., 10., 10.]])
np.gradient(np.array([[11,23,34,45],[22,33,44,55]], dtype=float), axis=1)
# array([[12. , 11.5, 11. , 11. ],
#        [11. , 11. , 11. , 11. ]])
We can also control the boundary accuracy of the gradient:
a = np.array([24,34,45,56], dtype=float)
np.gradient(a, edge_order=1)
# array([10. , 10.5, 11. , 11. ])
np.gradient(a, edge_order=2)
# array([ 9.5, 10.5, 11. , 11. ])
A short function using numpy Gradient :
def elevation_gradient(elevation):
    """Calculate the two-dimensional gradient vector for an elevation raster.

    :param elevation: a raster giving linear scale unit heights.

    Return a raster with 2 planes giving, respectively, the dz/dx and dz/dy
    values measured in metre rise per horizontal metre travelled.
    """
    dx, dy = np.gradient(elevation.data)
    # Convert from metre rise / pixel run to metre rise / metre run.
    dx *= 1.0 / (elevation.pixel_linear_shape[1])
    dy *= 1.0 / (elevation.pixel_linear_shape[0])
    return similar_raster(np.dstack((dx, dy)), elevation)
Numpy Diff vs Gradient
There is another function of numpy similar to gradient but different in use i.e diff
As per numpy.org, it is used to calculate the n-th discrete difference along a given axis:
numpy.diff(a,n=1,axis=-1,prepend=<no value>,append=<no value>)
While diff simply gives the difference between adjacent elements (so its output is shorter than the input along the chosen axis), gradient produces a set of central-difference estimates along each of its dimensions while preserving the input's shape.
b = np.array([2,3,4,7,8], dtype=float)
np.diff(b)
# array([1., 1., 3., 1.])
np.gradient(b)
# array([1., 1., 2., 2., 1.])
What’s Next?
NumPy is very powerful and incredibly essential for data science in Python. That being true, if you are interested in data science in Python, you really ought to find out more about NumPy.
You might like our following tutorials on numpy.
- Mean: Implementation and Importance
- Using Random Function to Create Random Data
- Reshape: Reshaping Arrays With Ease
- In-depth Explanation of np.power() With Examples
- Clip Function
Numpy Gradient in Neural Network
Neural networks are a prime user of the numpy gradient, through the gradient descent algorithm, which is used to minimize the loss function by adjusting the model's parameters during training. Mathematically, the gradient is a vector that gives the direction in which the loss function increases fastest, so to minimize the loss we move in the opposite direction.
Returning to the mountain-descent example: as your position changes, the height and slope change too. The goal is to find the path with the minimum slope and the minimum change of direction required.
Source: https://www.pythonpool.com/numpy-gradient/
Creating an Elegant Plot
Creating visualizations for your data is essential. In another post I did I take an in-depth look at EDA according to the National Institute of Standards and Technology, which can be found here.
After talking about the importance of EDA, it becomes a syntactical issue. In this article I plan on walking through different techniques and tricks for customizing plots in Python.
As you read, keep referring back to this table I created for myself in an intro to stats class. Actually my professor created it as a list, but I made a table out of it and keep it posted on the bulletin board in my office. It has proved to be invaluable. The only caveat is that the further I get in my data journey, the more I hear and read people say “Oh god, never use a pie chart”. I have to agree on this tiny little detail. I have rarely seen a pie chart used in a professional study. After creating a ton of my own visualizations, I personally dislike using them as well. They just aren’t the most effective way to convey the story you want to tell.
Basic Charts and Graphs
Matplotlib and Seaborn are the two quintessential libraries used by a data scientist or analyst in Python.
Matplotlib
Matplotlib is the gold standard for creating any sort of statistical visualization, modeled on the famous MATLAB. An extension of NumPy, its simple object-oriented programming can be embedded easily into applications and GUIs. It does have a rudimentary feel to it, making matplotlib alone appear less refined than adding additional libraries. Below is a simple bar plot in matplotlib just showing class imbalance, more likely used for technical analysis than for showing a stakeholder.[1]
Seaborn
Seaborn is a statistical graphics library that builds on top of matplotlib and pandas data frames. Even basic seaborn plots can add a little more sophistication to a visualization. It is important to note when working with seaborn that the sns call can be used in tandem with the plt call. Below is a basic seaborn barplot call with plt.show() for cleaning up the output. Also note the ability to call color palettes as opposed to basic colors.
Seaborn Palettes
Seaborn is famous for its wide array of predetermined palettes, which tend to look seamless and pleasing. Coming in a wide variety of color spectrums, this is a feature that makes seaborn highly customizable. If you take a look at the seaborn documentation, there are arguments that you can add to the palette call that will customize your graph even further.
A Few Odd Graphs
Histogram
When first learning statistics I could not figure out the difference between a histogram and a bar chart. I think the simplest way to put it is that a histogram measures the frequency of a continuous variable and a bar chart measures the frequency of a categorical variable. The graph below was used to check for a normal distribution in housing prices.
Bubble Plot
A bubble plot is used to describe three quantitative or continuous variables. I find that they are particularly useful when it is difficult to understand the visualization from looking at just two variables alone. Adding a tertiary variable always helps. Look at the bottom of the graph and the color and size of the smaller pink bubbles. They start at the bottom and fizzle out at the top and curve. In contrast the large purple bubbles have a high concentration at the top and scoop downward underneath the majority of the visualization.
Why is all this? The answer I came up with is that although homes built before 1940 are smaller they maintain a higher resale value. Newer homes have far more space but don’t seem to hold their value.
Labelling and Refining
The example bar chart that I am using for this section is a segmented bar chart which is best for two categorical variables. In our case a tertiary variable was shown. This is based on foreign and total world wide grosses for movies. When the foreign amount is stacked on the world wide gross, the remaining section of the bar becomes the domestic gross. This results in our three continuous variables: foreign gross, domestic gross, and total world wide gross. Each has a valuable story to tell.
You can achieve this by not designating the two different bar charts to different axes calls. It's important to note, however, that when you do this there will be requirements like both bar charts needing to have the same features in order to appease matplotlib.
- Storing the plot in f and ax. Remembering that this is an object-oriented library, we can store all of our plotting state in a single object that can be manipulated. Creating subplots in this initial call can be helpful not only to create subplots, but, in the case of this graph, to stack two graphs on top of each other.
- The color options in seaborn are endless. Not only can you choose from a variety of palettes as well as colors, but you can adjust the literal hue or tone of the colors. Using set_color_codes in this example I set it to options like “muted” and “pastel”.
- Going back to our ax call, it can be multifunctional. Here we are using it to set labels for the overall chart, but this is also where we could specify which axes the plot is on (see subplots below).
- This is a picky detail that I add to almost any visualization that is not small integers. Rotating the labels on a 45 is one quick and easy way to always have pretty visuals.
- A little bonus: this call automatically saves the image as a PNG to your main Jupyter notebook page so it can be used in reports like a README.
Subplots
Subplots are an effective way to visualize what is occurring in multiple features simultaneously. Let’s quickly look over two different ways to achieve subplots; one in seaborn and one in plain matplotlib. The example below was pulled from a project where I was trying to predict if a customer would accept a bank loan or not. Using a side by side boxplot this is a great example of one categorical and one continuous variable.
The trick to subplots in seaborn is specifying which axes each plot is on. There is no need to prefix the sns call with anything, but inside the list of arguments you need to specify ax=. Remember that Python starts all indexing at 0, so a call to [0,0] refers to the first row in the first column within the axes grid of the subplots.
In this second example of subplots, seaborn was not implemented. Instead just plain matplotlib was used. Notice the difference in how to call subplots to their respective positions. plt.subplot(number of rows, number of columns, index)
I want to point out in this particular example that the first argument used is 2 even though we have 1 row. Why? The best answer that I can give is that to get my visuals to look the way I wanted it became a game of balancing the figsize and creating rows and columns out of that fig size. If you were to change the figsize or number of rows here the graph would become distorted. You will have to play around with this yourself and eventually you will devise a system.
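Both indexing patterns described above can be sketched together in a few lines. This uses plain matplotlib with the non-interactive Agg backend so it runs without a display; the data and filenames are made up for the example:

```python
import matplotlib
matplotlib.use("Agg")              # headless backend: no display needed
import matplotlib.pyplot as plt

# Pattern 1: plt.subplots returns a grid of axes indexed like axes[row, col]
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
axes[0, 0].bar(["a", "b"], [3, 5])        # first row, first column
axes[1, 1].plot([1, 2, 3], [2, 4, 6])     # second row, second column
fig.tight_layout()
fig.savefig("subplots_grid.png")

# Pattern 2: plt.subplot(rows, cols, index) -- note the index starts at 1
plt.figure(figsize=(8, 3))
plt.subplot(1, 2, 1)
plt.hist([1, 1, 2, 3, 3, 3])
plt.subplot(1, 2, 2)
plt.plot([1, 2, 3], [3, 2, 1])
plt.savefig("subplots_index.png")
```

Seaborn calls slot into pattern 1 the same way, by passing ax=axes[row, col].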
A note on variability of imports.
import matplotlib.pyplot as plt vs from matplotlib import pyplot
One thing that drove me nuts when I was first starting out with creating visualizations is that sometimes (after going through many error messages) Python would want me to use import matplotlib.pyplot as plt vs from matplotlib import pyplot.
The answer is that on a basic level all of the ways shown below are interchangeable.[2] Matplotlib is the library and pyplot is the interface. How you store and use them theoretically does not matter to Python. You do have 5 different methods listed below within your toolkit. Just don't pull a rookie move like me and try calling plots via different incarnations in the same notebook: if you define it as plt, use plt; if you define it as pyplot, use pyplot.
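You can verify the interchangeability yourself; both spellings bind the very same module object:

```python
import matplotlib
matplotlib.use("Agg")  # avoid needing a display if a plot is ever drawn
import matplotlib.pyplot as plt
from matplotlib import pyplot

# Both names refer to the same module, so mixing them only hurts readability.
print(plt is pyplot)   # True
```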
Plotly Express
Plotly Express is a graphical library whose main feature (other than producing seamless visuals) is that the visualization becomes interactive. It is based out of its parent library Plotly. Plotly Express heavily reduces the amount of code needed to create the visualizations.
Below is the plotly express graph that was made to demonstrate the weekly grosses of the Broadway show Matilda. In the upper right hand corner you can see that there are different icons that make certain applications possible without code like zooming in or taking a photo of it. When you hover the mouse over the data we get a pop up of the exact week and what the grosses were. Extremely user friendly and usable in google docs.
Plotly is an extremely robust and complex library. For a more in-depth look at how to use Plotly, please visit here.
Conclusion
Regardless of which library or packages you use to visually interpret your data, having something that is visually pleasing to the eye is important. I believe that you will always have those quick visualizations you do that are meant for you personally to understand the data on a deeper level. On the other side of the coin you will have stakeholders and other non technical business partners that need a clear and concise visual of the story you are trying to tell.
Resources
[1]
[2]
[3]
Source: https://ozbunae.medium.com/creating-an-elegant-plot-17de19a3550c?readmore=1&source=user_profile---------7----------------------------
Opened 3 years ago
Closed 3 years ago
#18552 closed Uncategorized (needsinfo)
Django test client should not use MULTIPART_CONTENT for POST-requests by default.
Description
Currently, there's this code in django test client:
def post(self, path, data={}, content_type=MULTIPART_CONTENT,
         **extra):
That caused me to find another bug, related to multipart content :)
Change History (3)
comment:1 Changed 3 years ago by k_bx
- Cc k_bx added
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 3 years ago by lukeplant
Why shouldn't it send multipart content by default? I believe it makes it easier to send files, and changing it might require lots of fixes to any test code that sends files.
comment:3 Changed 3 years ago by aaugustin
- Resolution set to needsinfo
- Status changed from new to closed
I don't see how this is a problem either.
Posted by M Rule at 29.11.11. Labels: animated fractals
This past Friday I had the chance to meet Mark Largent, a historian of science at Michigan State University, who after writing an excellent history of American eugenics, is working on a history of the anti-vaccination movement. The anti-vaccination movement is one of the more contentious flashpoints in popular culture, with views on vaccines ranging from the deliberate poisoning of children by doctors, to anti-science nonsense that threatens to reverse a century of healthcare gains. Largent’s methodology is to look at the people involved and try to see the world as they believe it, without doing violence. The question of whether vaccines cause autism is scientifically and socially irrelevant. But it is a proxy for a wider and more important spectrum of beliefs about personal responsibility and biomedical interventions, the interface between personal liberty and public goods, and the political consequences of these beliefs.
Some numbers: currently, 40% of American parents have delayed one or more recommended vaccines, and 11.5% have refused a state-mandated vaccine. 23 states, containing more than half the population, allow “philosophical exemptions” to mandatory vaccination, which are trivial to obtain. The number of inoculations given to children has increased from 9 in the mid-1980s to 26 today. As a single father, Largent understands the anti-vaccine movement on a basic level: babies hate shots, and doctors administer dozens of them from seconds after birth to two years old.
The details of “vaccines-cause-autism” are too complex to go into here, but Largent is an expert on Andrew Wakefield, the now-discredited British physician who authored the withdrawn Lancet study that suggested a link between the MMR vaccine and autism, and Jenny McCarthy, who campaigned against the mercury-containing preservative thimerosal in the US. Now, as for the scientific issue, it is settled: vaccines do not cause autism. Denmark, which keeps comprehensive health records, shows no difference in autism cases between the vaccinated, partially vaccinated, and un-vaccinated. We don't know what causes autism, or why cases of autism are increasing, but it is probably related to more rigorous screening and older mothers, as opposed to any external cause. Certainly, the epidemiological cause-and-effect for vaccines and autism is about as strong as the link between cellphones and radiation, namely non-existent. Rather than relitigating that question, Largent proposed that we need a wider social debate on the number and purpose of vaccines, and on the relationship between doctors, parents, and the teachers and daycare workers who are the first line of vaccine compliance.
Now, thinking about this in the context of my studies, this looks like a classic issue of biopolitics and competing epistemologies, and is tied directly into the consumerization of the American healthcare system. According to Foucault, modernity was marked by the rise of biopolitics. “One might say that the ancient right to take life or let live was replaced by a power to foster life or disallow it to the point of death.” While the sovereign state—literally a man in a shiny hat with a sword—killed his enemies to maintain order, the modern state tends to the population like a garden, keeping careful statistics and intervening to maintain population health.
From a bureaucratic rationalist point of view, vaccines are an ideal tool, requiring a minimal intervention and having massive and observable effects on the rolls of births and deaths, and on the frequency and severity of epidemics. Parents don't see these facts, particularly when vaccines have been successful. What they do see is that babies hate vaccines. I'm not being flip when I say that the suffering of children is of no account to the bureaucratic perspective; the official CDC claim is that 1/3 of babies are “fretful” after receiving vaccines. This epistemology justifies an unlimited expansion of the vaccination program, since any conceivable amount of fretfulness is offset by even a single prevented death. For parents and pediatricians, who must deal with the expense, inconvenience, and suffering of each shot, the facts appear very different. These mutually incompatible epistemologies mean that pro- and anti-vaccine advocates are talking past each other.
The second side of the story is how responsibility for maintaining health has been increasingly shifted onto patients. From the women's health movement of the 1970s, with Our Bodies, Ourselves, to the 1997 Consumer Bill of Rights and Responsibilities, to Medicare Advantage plans, ordinary people are increasingly expected to take part in healthcare decisions that were previously the sole province of doctors. The anti-vaccine movement has members from the Granola Left and the Libertarian Right, but it is overwhelmingly composed of upper-middle-class women, precisely the people who have seen the greatest increase in medical knowledge and choice over the past few decades. Representatives of the healthcare system should not be surprised that after empowering patients to make their own decisions, they sometimes make decisions against medical advice.
So how to resolve this dilemma? The pro-vaccine advocates suggest we either force people to get vaccinated, a major intrusion of coercive power into a much more liberalized medical system, or we somehow change the epistemology of parents. Both of these approaches are unworkable. Likewise, anti-vaccine advocates should lay off vaccines-cause-autism. They may have valid complaints, but at this point, the science is in, and continuing to push that line really pisses scientists off. Advocates need to understand the standards of scientific knowledge, and what playing in a scientific arena entails.
In the vaccine controversy, as in so many others, what we need is a forum that balances both scientific and non-scientific knowledge, so that anti-vaccine advocates can speak their case without mangling science in the process. I don't know what that forum would look like, who would attend, or how it would achieve this balance, but the need for better institutional engagement between science and society is clear.
Posted by Michael BF at 21.11.11. Labels: mark largent, politics, science, vaccines
A Shepard tone is an auditory illusion which appears to indefinitely ascend or descend in pitch without actually changing pitch at all.
Shepard tones work because they actually contain multiple tones, separated by octaves. As tones get higher in pitch, they fade out. New tones fade in at the lower pitches. The net effect is that it sounds like all the constituent tones are continually increasing in pitch -- and they are, but pitches fade in and out so that, on average, the pitch composition is constant.
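The octave-weighting trick can be checked numerically. Below is a small Python sketch (the parameter values are arbitrary choices of mine, not taken from the post) showing that shifting every component up a full octave reproduces the original set of weights, which is why the mixture seems to rise forever:

```python
import math

def gaussian(x):
    # Standard Gaussian bump; used as the loudness envelope in log-frequency.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def weights(shift, octaves=8, center=4.0, sigma=1.0):
    # Component k sits at log2-frequency (k + shift); weight it by a
    # Gaussian envelope centered on `center` in log-frequency.
    return [gaussian((k + shift - center) / sigma) for k in range(octaves)]

w0 = weights(0.0)  # weights before the shift
w1 = weights(1.0)  # weights after everything rises one full octave

# After a full-octave shift, component k takes over the weight that
# component k+1 had before, so the overall mixture is unchanged:
assert all(abs(w1[k] - w0[k + 1]) < 1e-12 for k in range(7))
```

The same envelope-in-log-frequency idea is what the Java code below applies to spatial frequencies instead of pitches.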
Since 2D quasicrystals can be rendered as a sum of plane waves, it is possible to form the analogue of a Shepard tone with these visual objects. Each plane wave is replaced with a collection of plane waves at 2, 4, 8, 16, … times the spatial frequency of the original. The relative amplitudes of the plane waves are set so that the apparent spatial frequency stays approximately the same even as the underlying waves are scaled. The result is a quasicrystal that appears to zoom in or out indefinitely, without fundamentally changing in structure. There is no particular reason to demonstrate this effect using quasicrystals, as it would be evident even with a single plane wave; however, I find the interplay between the infinite scaling and the emergent patterns of quasicrystals particularly appealing.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import static java.lang.Math.*;
public class QuasiZoom {

    // Defines a Gaussian function. We will use this to define the
    // envelope of spatial frequencies
    public static double gaussian(double x) {
        return exp(-x*x/2)/sqrt(2*PI);
    }

    public static void main(String[] args) throws IOException {
        int k = 5;          // number of plane waves
        int stripes = 3;    // number of stripes per wave
        int N = 500;        // image size in pixels
        int divisions = 40; // number of frames to divide the animation into
        int N2 = N/2;
        BufferedImage it = new BufferedImage(N, N, BufferedImage.TYPE_INT_RGB);
        // the range of different spatial frequencies
        int[] M = new int[]{1,2,4,8,16,32,64,128,256};
        // the main (central) spatial frequency
        double mean = log(16);
        // the spread of the spatial frequency envelope
        double sigma = 1;
        // counts the frames
        int ss = 0;
        // iterate over spatial scales, scaling geometrically
        for (double sc = 2.0; sc > 1.0; sc /= pow(2, 1./divisions)) {
            System.out.println("frame = " + ss);
            // adjust the wavelengths for the current spatial scale
            double[] m = new double[M.length];
            for (int l = 0; l < M.length; l++)
                m[l] = M[l]*sc;
            // modulate each wavelength by a Gaussian envelope in log
            // frequency, centered around the aforementioned mean with the
            // defined standard deviation
            double sum = 0;
            double[] W = new double[M.length];
            for (int l = 0; l < M.length; l++) {
                W[l] = gaussian((log(m[l]) - mean)/sigma);
                sum += W[l];
            }
            sum *= k;
            for (int i = 0; i < N; i++) {
                for (int j = 0; j < N; j++) {
                    double x = j - N2, y = i - N2; // cartesian coordinates
                    double C = 0;                  // accumulator
                    // iterate over all k plane waves
                    for (double t = 0; t < PI; t += PI/k) {
                        // compute the phase of the plane wave
                        double ph = (x*cos(t) + y*sin(t))*2*PI*stripes/N;
                        // take a weighted sum over the different spatial scales
                        for (int l = 0; l < M.length; l++)
                            C += cos(ph*m[l])*W[l];
                    }
                    // convert the summed waves to a [0,1] interval
                    // and then convert to [0,255] greyscale color
                    C = min(1, max(0, (C*0.5 + 0.5)/sum));
                    int c = (int)(C*255);
                    it.setRGB(i, j, c | (c << 8) | (c << 16));
                }
            }
            ImageIO.write(it, "png", new File("out" + (ss++) + ".png"));
        }
    }
}
Posted by M Rule at 1.11.11. Labels: fractals, graphics, optical illusions, programming, quasicrystal, shepard tone
3.3. Concise Implementation of Linear Regression¶
Broad and intense interest in deep learning over the past several years has inspired companies, academics, and hobbyists alike to develop a variety of mature open source frameworks for automating the repetitive work of implementing gradient-based learning algorithms. In the previous section, we relied only on (i) ndarray for data storage and linear algebra; and (ii) autograd for calculating derivatives. In practice, because data iterators, loss functions, optimizers, and neural network layers (and some whole architectures) are so common, modern libraries implement these components for us as well.
In this section, we will show you how to implement the linear regression model from Section 3.2 concisely by using Gluon.
3.3.1. Generating the Dataset¶
To start, we will generate the same dataset as in the previous section.
import d2l
from mxnet import autograd, gluon, np, npx
npx.set_np()

true_w = np.array([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)
3.3.2. Reading the Dataset¶
Rather than rolling our own iterator, we can call upon Gluon's data module to read data. The first step will be to instantiate an ArrayDataset. This object's constructor takes one or more ndarrays as arguments; here, we pass in features and labels. Next, we will use the ArrayDataset to instantiate a DataLoader, which also requires that we specify a batch_size and a boolean value shuffle indicating whether or not we want the DataLoader to shuffle the data on each epoch.

# Saved in the d2l package for later use
def load_array(data_arrays, batch_size, is_train=True):
    """Construct a Gluon data iterator."""
    dataset = gluon.data.ArrayDataset(*data_arrays)
    return gluon.data.DataLoader(dataset, batch_size, shuffle=is_train)

batch_size = 10
data_iter = load_array((features, labels), batch_size)

To verify that data_iter is working, we can read and print the first minibatch of instances.
for X, y in data_iter:
    print(X, '\n', y)
    break
[[ 0.9300072 -1.5918756 ]
 [-0.5309896 -0.9410424 ]
 [ 0.732445 0.03482824]
 [-1.446752 -0.35596415]
 [-1.3278849 0.5360521 ]
 [ 0.32338732 -0.2544687 ]
 [-0.5782212 -0.2026513 ]
 [-0.3703454 1.2845367 ]
 [-2.7521503 0.5926551 ]
 [-0.0538918 -1.022586 ]]
[11.463218 6.3573055 5.544941 2.5046496 -0.29630652 5.7129297 3.7305255 -0.87415606 -3.310943 7.574081 ]
3.3.3. Defining the Model¶
When we implemented linear regression from scratch (in Section 3.2), we defined our model parameters explicitly and coded up the calculations to produce output using basic linear algebra operations. You should know how to do this. But once your models get more complex, and once you have to do this nearly every day, you will be glad for the assistance. The situation is similar to coding up your own blog from scratch. Doing it once or twice is rewarding and instructive, but you would be a lousy web developer if every time you needed a blog you spent a month reinventing the wheel.

For standard operations, we can use Gluon's predefined layers, which allow us to focus on the layers used to construct the model rather than on their implementation. We will first define a model variable net, which will refer to an instance of the
Sequential class. In Gluon,
Sequential defines a container for several layers that will be
chained together. Given input data, a
Sequential passes it through
the first layer, in turn passing the output as the second layer’s input
and so forth. In the following example, our model consists of only one
layer, so we do not really need
Sequential. But since nearly all of
our future models will involve multiple layers, we will use it anyway
just to familiarize you with the most standard workflow.
from mxnet.gluon import nn
net = nn.Sequential()
Recall the architecture of a single-layer network as shown in Fig. 3.3.1. The layer is said to be fully-connected because each of its inputs is connected to each of its outputs by means of a matrix-vector multiplication. In Gluon, the fully-connected layer is defined in the Dense class. Since we only want to generate a single scalar output, we set that number to 1.

net.add(nn.Dense(1))

It is worth noting that, for convenience, we do not need to tell Gluon how many inputs go into this linear layer. When we first try to pass data through our model, e.g., when we execute net(X) later, Gluon will automatically infer the number of inputs to each layer. We will describe how this works in more detail in the chapter “Deep Learning Computation”.
3.3.4. Initializing Model Parameters¶
Before using net, we need to initialize the model parameters, such as the weights and bias in the linear regression model. We will import the init module from MXNet and specify that each weight parameter should be randomly sampled from a normal distribution with mean 0 and standard deviation 0.01. The bias parameter will default to zero, and both the weight vector and bias will have attached gradients.
from mxnet import init
net.initialize(init.Normal(sigma=0.01))
The code above may look straightforward but you should note that something strange is happening here. We are initializing parameters for a network even though Gluon does not yet know how many dimensions the input will have! It might be 2 as in our example or it might be 2000. Gluon lets us get away with this because behind the scenes, the initialization is actually deferred. The real initialization will take place only when we attempt to pass data through the network for the first time. Just be careful to remember that since the parameters have not been initialized yet, we cannot access or manipulate them.
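The deferred-initialization behavior described above can be illustrated with a toy class in plain Python. This is emphatically not Gluon's implementation — just a sketch of the idea that parameter shapes are materialized on the first forward pass:

```python
import random

class LazyDense:
    """Toy illustration of deferred (shape-inferred) initialization:
    the weight matrix is allocated only when data first flows through,
    because only then is the input dimensionality known."""
    def __init__(self, units):
        self.units = units
        self.weight = None  # shape unknown until the first forward pass

    def __call__(self, x):
        if self.weight is None:
            in_dim = len(x)  # infer input dimensionality from the data
            self.weight = [[random.gauss(0, 0.01) for _ in range(in_dim)]
                           for _ in range(self.units)]
        return [sum(w_i * x_i for w_i, x_i in zip(row, x))
                for row in self.weight]

layer = LazyDense(1)
assert layer.weight is None        # parameters not yet materialized
out = layer([1.0, 2.0])            # first call triggers initialization
assert len(layer.weight[0]) == 2   # input dimension inferred as 2
```

Gluon does the same bookkeeping internally, which is why accessing parameters before the first forward pass fails.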
3.3.5. Defining the Loss Function¶
In Gluon, the loss module defines various loss functions. We will replace the imported module loss with the pseudonym gloss, to avoid confusing it with the variable holding our chosen loss function. In this example, we will use the Gluon implementation of squared loss (L2Loss).
from mxnet.gluon import loss as gloss
loss = gloss.L2Loss()  # The squared loss is also known as the L2 norm loss
3.3.6. Defining the Optimization Algorithm¶
Minibatch SGD and related variants are standard tools for optimizing neural networks, and thus Gluon supports SGD alongside a number of variations on this algorithm through its Trainer class. When we instantiate the Trainer, we will specify the parameters to optimize over (obtainable from our net via net.collect_params()), the optimization algorithm we wish to use (here, 'sgd'), and a dictionary of hyperparameters required by our optimization algorithm. SGD just requires that we set the value learning_rate (here, 0.03).

from mxnet import gluon
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.03})

3.3.7. Training¶
You might have noticed that expressing our model through Gluon requires comparatively few lines of code. We did not have to individually allocate parameters, define our loss function, or implement stochastic gradient descent. Once we start working with much more complex models, Gluon's advantages will grow considerably. However, once we have all the basic pieces in place, the training loop itself is strikingly similar to what we did when implementing everything from scratch.
To refresh your memory: for some number of epochs, we will make a complete pass over the dataset (train_data), iteratively grabbing one minibatch of inputs and the corresponding ground-truth labels. For each minibatch, we go through the following ritual: generate predictions by calling net(X) and calculate the loss l (the forward pass); calculate gradients by calling l.backward() (the backward pass); and update the model parameters by invoking our SGD optimizer (note that trainer already knows which parameters to optimize over, so we just need to pass in the minibatch size). For good measure, we compute the loss after each epoch and print it to monitor progress.

num_epochs = 3
for epoch in range(1, num_epochs + 1):
    for X, y in data_iter:
        with autograd.record():
            l = loss(net(X), y)
        l.backward()
        trainer.step(batch_size)
    l = loss(net(features), labels)
    print('epoch %d, loss: %f' % (epoch, l.mean().asnumpy()))

epoch 1, loss: 0.025064
epoch 2, loss: 0.000091
epoch 3, loss: 0.000051
Below, we compare the model parameters learned by training on finite
data and the actual parameters that generated our dataset. To access
parameters with Gluon, we first access the layer that we need from
net and then access that layer’s weight (
weight) and bias
(
bias). To access each parameter’s values as an
ndarray, we
invoke its
data method. As in our from-scratch implementation, note
that our estimated parameters are close to their ground truth
counterparts.
w = net[0].weight.data()
print('Error in estimating w', true_w.reshape(w.shape) - w)
b = net[0].bias.data()
print('Error in estimating b', true_b - b)
Error in estimating w [[ 0.00061464 -0.00016069]]
Error in estimating b [0.00034666]
3.3.8. Summary¶
Using Gluon, we can implement models much more succinctly.
In Gluon, the data module provides tools for data processing, the nn module defines a large number of neural network layers, and the loss module defines many common loss functions.
MXNet's initializer module provides various methods for model parameter initialization.
Dimensionality and storage are automatically inferred (but be careful not to attempt to access parameters before they have been initialized).
3.3.9. Exercises¶
If we replace l = loss(output, y) with l = loss(output, y).mean(), we need to change trainer.step(batch_size) to trainer.step(1) for the code to behave identically. Why?
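The first exercise hinges on a scaling argument: averaging the loss scales every gradient by 1/batch_size, and trainer.step(n) divides the accumulated gradient by n, so the two normalizations must be kept consistent. A numerical sketch of this reasoning in plain Python (no MXNet required, with arbitrary illustrative numbers):

```python
# Gradient of mean(l_i) w.r.t. a parameter equals (1/B) * gradient of sum(l_i).
# trainer.step(B) divides the accumulated gradient by B; if the loss is
# already averaged, the division must be by 1 instead.
B = 10
per_example_grads = [0.5] * B
grad_of_sum = sum(per_example_grads)   # what backward() on a summed loss yields
grad_of_mean = grad_of_sum / B         # what backward() on an averaged loss yields

lr = 0.03
update_sum_form = lr * grad_of_sum / B   # summed loss + trainer.step(B)
update_mean_form = lr * grad_of_mean / 1  # averaged loss + trainer.step(1)
assert abs(update_sum_form - update_mean_form) < 1e-12
```

Either convention produces the same parameter update; mixing them scales the effective learning rate by the batch size.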
Review the MXNet documentation to see what loss functions and initialization methods are provided in the modules gluon.loss and init. Replace the loss with Huber's loss.
How do you access the gradient of dense.weight?
Getting Started Guide
TopBraid Composer Getting Started Guide
Version 2.0
July 21, 2007
TopBraid Composer, Copyright 2006 TopQuadrant, Inc.
Revision History
- August 1 — Initial version
- September 20, 2006 — Correction to Exercise 18
- July 21, 2007 — Update with respect to the latest TopBraid Composer features
Table of Contents
1 Introduction
1.1 Conventions
2 Installation and Set Up
2.1 Requirements
2.2 Installation
2.2.1 Windows Platform Installation
2.2.2 Mac OS X Platform Installation
2.2.3 Non-Windows Platforms Installation
2.3 Workspace Configuration
2.3.1 Create an Eclipse Project
2.3.2 Download Example Ontologies
2.3.3 Becoming Familiar with TBC Views
2.3.4 Organize the Workspace
2.3.5 Open Existing Local Ontologies
2.3.6 Set Up Preferences
3 Building Your First Ontology with TopBraid Composer
3.1 Create Classes
3.2 Create Properties
3.3 Create Instances
3.4 Execute SPARQL Queries
3.5 Extend the Ontology
4 Working with Imports and Multiple Ontologies
5 Defining Classes with OWL DL
5.1 Restriction Keywords
5.2 Boolean Class Constructors
5.3 Enumerated Classes
5.4 Complex Class Expressions
Appendix A: Semantic Web Standards
A.1 RDF
A.2 RDFS
A.3 OWL
A.4 SPARQL
A.5 SWRL
Figures
Figure 1: Selecting TopBraid Perspective
Figure 2: Creating Eclipse Project - step 1
Figure 3: Creating Eclipse Project - step 2
Figure 4: Import dialog
Figure 5: Downloading example OWL/RDFS libraries
Figure 6: geotravel.owl in TopBraid Composer
Figure 7: Using hyperlinks to navigate
Figure 8: Opening multiple ontologies
Figure 9: Creating a folder in the workspace
Figure 10: Creating a linked folder
Figure 11: Preferences dialog for classes
Figure 12: Create OWL/RDF file dialog
Figure 13: Initial screen after creating person ontology
Figure 14: Classes view - menu options
Figure 15: Creating the first class
Figure 16: Initial class hierarchy for the Person Ontology
Figure 17: Classes view buttons
Figure 18: Buttons and options available for TBC forms
Figure 19: Properties view buttons
Figure 20: Create new property
Figure 21: Create firstname property with an automatically generated rdfs:label
Figure 22: Defined firstname property
Figure 23: Domain view for the Person class
Figure 24: Instances view
Figure 25: Resource form for William Shakespeare
Figure 26: SPARQL View
Figure 27: SPARQL View
Figure 28: Inferences for haschild property
Figure 29: SPARQL Query Library
Figure 30: Imports view buttons
Figure 31: Import local OWL/RDF file dialog
Figure 32: Travel ontology with import of the person ontology
Figure 33: Re-factoring name changes
Figure 34: New shakespeare.rdf file
Figure 35: Moving resources in to shakespeare.rdf file
Figure 36: Confirm move resources dialog
Figure 37: Edit Restriction dialog
Figure 38: Defining somevaluesfrom Restriction for the Adventurer class
Figure 39: Defining Adventurer Class as an intersection of a restriction and Person class
Figure 40: Inferred class hierarchy for the Adventurer class
Figure 41: Create enumerated class members
Figure 42: ActivityRating with the created enumerated class
Figure 43: RDF Graph - example 1
Figure 44: RDF Graph - example 2
Figure 45: OWL and RDFS classes
1 Introduction
This guide introduces TopBraid Composer for defining, testing and managing semantic models using the W3C standard languages RDFS, OWL, SWRL and SPARQL. Throughout this document the terms TopBraid Composer, TBC and Composer are used interchangeably.
Section 2 provides detailed installation and configuration instructions for TBC. It shows how to open existing ontologies, how to download ontologies from the web, as well as how to set up virtual folders for working with ontologies outside of the workspace.
Section 3 focuses on building a simple ontology (limited to RDFS vocabulary) and running test queries. It contains an exercise involving RDFS inferencing.
Section 4 explains import features and approaches to working with multiple ontologies.
Section 5 describes key OWL constructs including restrictions.
Appendix A provides additional information on the standards supported by TBC: RDF, RDFS, OWL, SPARQL and SWRL. Readers who are new to these technologies will benefit from starting with the Appendix prior to moving on to Section 2.
Composer is shipped with a comprehensive Help system. Many features not covered by this guide are explained in the help files. To access them, select the Help -> Help Contents menu and then click on TopBraidComposer.
1.1 Conventions
Class, property and individual names are written in a sans serif font like this. Names for user interface widgets and menu options are presented in a style like this. Where exercises require information to be typed into TBC, a verdana font is used like this.
Exercises and required tutorial steps are presented like this:
Exercise N: Accomplish this
1. Do this.
2. Then do this.
3. Now do this.
Tips and suggestions for using TBC and building ontologies are presented like this.
Potential pitfalls and warnings are presented like this.
General notes are presented like this.
Advanced features are presented like this. We recommend that readers skip advanced features when they first follow this guide.
2 Installation and Set Up
2.1 Requirements
TopBraid Composer is implemented as an Eclipse plug-in. Eclipse is a powerful open-source platform for all kinds of modeling and programming languages. The integration of Composer into this platform means that you can exploit the benefits of an integrated development environment. Eclipse also provides an update mechanism that allows users to conveniently update plug-ins such as TopBraid Composer when a new version becomes available.
System requirements are the same as for the Eclipse 3.2 platform. Eclipse 3.1 will work as well, but we would strongly recommend using Eclipse 3.2.
2.2 Installation
2.2.1 Windows Platform Installation
You can get a TopBraid Composer installer available on:
This installer includes a suitable Java Virtual Machine. You can just run the installer to install TopBraid Composer on your computer.
2.2.2 Mac OS X Platform Installation
For Mac OS X, you can get a preconfigured Eclipse 3.2 installation with TopBraid Composer from the zip file available on:
Unzip the file, for example to the /Applications/TopBraidComposer directory, and execute eclipse.exe. By default your workspace directory is located at [TopBraid Composer directory]/workspace. You can change this at File -> Switch Workspace.
2.2.3 Non-Windows Platforms Installation
If you want to install TopBraid Composer on any non-Windows platform, or you want to add the Composer plugin to an existing Eclipse installation, please follow the steps described on the download web page:
2.3 Workspace Configuration
(For the Windows installer and Mac OS X zip file, the workspace is initially configured in line with the following two sections, so if you want, you can skip to the section Becoming Familiar with TBC Views.)
2.3.1 Create an Eclipse Project
Now that Eclipse and TopBraid Composer are set up, you need to create an empty project to hold your files.
Project is Eclipse's notion; it is not something TopBraid Composer generally uses. TBC stores all ontologies in either .rdf or .owl files. In addition to the file mode, TBC offers database backends. Using a database backend, ontologies can be stored in a number of supported DBMS systems. Project is used by TopBraid Composer only to store customizable information. Even though Composer does not use .project files, you will need to create at least one project for the Workspace to be operational.
When you first start Eclipse, you may see the Welcome screen. Close it. Select Window -> Open Perspective -> Other; then select TopBraid from the dialog.
Figure 1: Selecting TopBraid Perspective
Select File -> New -> Project... from the menu. On the next screen select General / Project (or Simple / Project in Eclipse 3.1) and enter a suitable name.
Figure 2: Creating Eclipse Project - step 1
You now can:
- download example ontologies from the web
- copy your existing files into the workspace
- set up folders in the workspace
- create your own ontologies
Figure 3: Creating Eclipse Project - step 2
2.3.2 Download Example Ontologies
Right-click on the project you have just created and select Import... Alternatively, select the File -> Import menu. Expand Other to see the available options.
Figure 4: Import dialog
If you select OWL/RDFS File from the Web, you will need to enter the URL of the file. You can also select OWL/RDFS Library from the Web. TopQuadrant maintains a set of example ontologies on its site. Picking the library option will download these files. As you can see, TBC can import not just web ontologies, but it can also import (convert) information from other sources like UML and XML Schema files. Consult the help files for details on specific import options. Finish the dialog to let TopBraid Composer download some RDF and OWL files.
Figure 5: Downloading example OWL/RDFS libraries
After the library files have been downloaded, go to the Navigator view, open the Examples folder and double-click on any example file to open it, for example geotravel.owl (called travel.owl in older versions). Your screen should now look like the one shown in the figure below.
Figure 6: geotravel.owl in TopBraid Composer
2.3.3 Becoming Familiar with TBC Views
TBC consists of the following main views:
- The Navigator shows the files in the Eclipse workspace
- Classes, Properties and Associations views display the hierarchies of the current model
- Instances shows the instances of the class selected in the Classes view
- Domain shows those properties that have the selected class in their domain
- Inheritance shows the superclasses and inherited restrictions of the current class
- Imports shows the imports of the ontology (hierarchically)
- Change History shows the recent edit steps
- Resource Editor is the main work area; it shows the currently selected resource:
  - It includes tabs for a Form, Graph and Source Code
  - When the selected resource is a class, there is also a Diagram tab
  - When the selected resource is an ontology, there are tabs for Statistics and Overview
- SPARQL provides an interface to run SPARQL queries
- Rules shows all rules in the model (either in SWRL or Jena format)
- File Registry shows a list of all files together with their namespaces
- Basket can be used as a flexible drag-and-drop area for many purposes
In the default configuration not all the views are shown. To display a hidden view, select Window -> Show View. All Composer views can be dragged and rearranged as necessary. They can also be resized, collapsed and expanded. To move a view, click on the tab with the view's name (the tab color will become blue) and move it to the desired location.
There is always one selected ontology model, and one selected ontology resource (class, property, individual, etc.). The selected resource must always be from the selected model. The Resource Editor displays the currently selected resource, which is also shown in the toolbar.
Exercise 1: Navigating in TBC
1. Select the Accommodation class. This can be done in a number of ways:
a. Double-click on Accommodation in the Classes view.
b. Single-click on the gold circle icon in front of Accommodation.
c. Alternatively, if a tree or list of resources has the keyboard focus, you can press Alt+UP/DOWN to change the global selection.
d. Finally, you can enter Accommodation in the toolbar field right above the Classes view. Here, CTRL+Space will help you enter a full name if you only enter the first few characters.
2. Now select the BackpackersDestination class.
3. Note the expression in the owl:equivalentclass widget in the Class Form. Hover over the hasaccommodation property in the expression and press CTRL. As shown in the next figure, hasaccommodation will become hyperlinked. Click on it. You should now have hasaccommodation displayed in the form.
Figure 7: Using hyperlinks to navigate
4. Notice the backward and forward arrows on Composer's toolbar. They enable quick navigation between current and previously selected resources. Click on the backward arrow. Now click on the forward arrow.
Exercise 2: Switching between ontologies
1. Go to the Navigator view and open the Pizza ontology (pizza.owl). Your screen should look similar to the one below.
Figure 8: Opening multiple ontologies
2. Note that pizza.owl is your currently selected ontology, but geotravel.owl is still open. Switch back to the travel ontology by clicking on the tab with its name.
3. Close the travel ontology. This can be done either by clicking on the cross next to its name or by using the File -> Close menu.
2.3.4 Organize the Workspace
Exercise 3: Create folders in the workspace
1. Select the Navigator view, right-click on the project you have just created and select New -> Folder
2. Alternatively, select the File -> New -> Folder menu
3. Give your new folder a name of your choosing and click Finish
4. You will see a new folder appear in the Navigator view.
5. Open your operating system's file management system (such as Windows Explorer) and navigate to your workspace directory; observe that a new folder has been created.
Figure 9: Creating a folder in the workspace
2.3.5 Open Existing Local Ontologies
Copy ontologies into the workspace
Exercise 4: Copy file to the workspace and open it
1. Place a file you want to add to the workspace into a copy buffer (by selecting any of the copy commands supported by your operating system)
2. Select the Navigator view, right-click on the folder you have just created and select Paste
3. You can now double-click on the file to open it
You can also copy the file to any of the workspace folders using your computer's file management system, such as Windows Explorer. However, you will not see the file in the Navigator view until you refresh the appropriate folder. Right-click on a folder and select Refresh. If you update a file using a different program, TBC will not know that the file version has changed unless you do a refresh.

Create linked folders

Sometimes you may want to work with files that are stored outside of the workspace. For example, you may be sharing CVS or another version control system with other team members. Instead of duplicating these files in the workspace, you can access them in place by using linked folders.

Exercise 5: Create a linked folder
1. Select the Navigator view, right-click on the project you have just created and select New -> Folder.
2. Alternatively, select the File -> New -> Folder menu.
3. Give your new folder a name of your choosing and click Advanced.
4. Check Link to folder in the file system.
5. Click Browse.
6. Select any folder.
7. Click Finish.

You can now see and use the folder as if it were part of the workspace.
Figure 10: Creating a linked folder

2.3.6 Set up preferences

TBC is highly configurable. What is shown in many of the views is governed by the user preferences. Preferences are accessible from the Window -> Preferences menu. Expand the tree under TopBraid Composer to see the available preference dialogs. The preference dialog for the Classes view is shown in the next diagram.
Figure 11: Preferences dialog for classes
3 Building Your First Ontology with TopBraid Composer

This chapter describes how to create a very simple ontology about people and their children.

Exercise 6: Create a new file
1. Select the Navigator view, right-click on any of the folders you have created and select New -> OWL/RDFS File.
2. Type in the Base URI and the file name in the Create dialog.
3. Click Finish.

Figure 12: Create OWL/RDF file dialog

After a short amount of time, a new file will be created.

3.1 Create classes

When a new file is created, the screen should resemble the screen in the next figure. The initial Classes view should contain two classes - owl:Thing and owl:Nothing. Depending on your setup, rdfs:Resource may be shown as the root class, but you can change this in the drop-down menu of the Classes view.
Figure 13: Initial screen after creating the person ontology

OWL classes are interpreted as sets of individuals (or sets of objects). The class owl:Thing represents the set containing all individuals. Because of this, all classes are subclasses of owl:Thing. Let's add some classes to the ontology.

Exercise 7: Create classes Person, FemalePerson and MalePerson
1. Right-click on owl:Thing.
2. Press the Create subclass button shown in the next figure. This button is used to create a new class as a subclass of the selected class (in this case we want to create a subclass of owl:Thing).
Figure 14: Classes view - menu options

3. The default name shown in the Create class dialog will be Thing_1. Rename it to Person and click OK.
Figure 15: Creating the first class

4. Drag and drop rdfs:comment from the Properties view into the Annotations section of the Class Form.
5. Type Human being in the rdfs:comment widget. Click OK or simply press Enter.
6. Change the datatype of Human being: click on the menu button next to Human being and select Change datatype. In the Change datatype to... dialog, pick xsd:string and click OK. This will change the datatype of Human being to xsd:string.
7. Repeat steps 2 and 3 to add the classes FemalePerson and MalePerson, ensuring that Person is selected before the Create subclass button is pressed so that the classes are created as subclasses of Person.
8. Click on MalePerson in the Classes view to show its Class Form. On the Class Form, hover your mouse over the icon near Person, and click on the plus sign that appears. This will show the nested Class Form for Person inside the Class Form for MalePerson. The class hierarchy and the Class Form for MalePerson should now look as in the next figure.
Figure 16: Initial class hierarchy for the Person Ontology

9. Notice the star in front of the ontology name above the Class Form - *person.owl. This means that the ontology has been modified, but not saved. Select File -> Save.

In Composer, classes have a gold circle icon displayed in front of the class name. The selected class (the one currently shown in the form) has an arrow overlaying the gold circle icon. Observe that MalePerson in Figure 16 has an arrow in the icon.

The Classes view has a number of buttons, as explained in the next figure:
- Add new class as a subclass of the selected class
- Add new class as a sibling of the selected class
- Show Classes view menu
- Delete selected class
Figure 17: Classes view buttons

The form has a number of buttons and options; these are explained in the next figure.

Figure 18: Buttons and options available for TBC forms

3.2 Create Properties

OWL properties represent relationships between two individuals. There are two main types of properties: object properties and datatype properties. Object properties link an individual to an individual. Datatype properties link an individual to an XML Schema datatype value. OWL also has a third type of property - annotation properties. Annotation properties are typically used to store information that is irrelevant for reasoning tools, for example to add information (metadata - data about data) to classes, individuals and object/datatype properties. In Exercise 7 we used the annotation property rdfs:comment to add a comment to the Person class.

In Composer, properties have rectangular icons displayed in front of their names. Object properties are indicated using blue icons, datatype properties have green icons and annotation properties have yellow icons. The property currently shown in the form has an arrow overlaying the rectangular icon.

The Properties view has a number of buttons, as explained in the next figure.
- Add new property
- Delete selected property
- Show Properties view menu

Figure 19: Properties view buttons

Properties may be created using the Create property button in the Properties view shown in the next figure. Irrespective of what kind of property is being created, the same button is used. The property type is selected in the Create property dialog.

Exercise 8: Create datatype properties called firstName and lastName
1. Press the Add new property button. The Create property dialog will appear as shown in the next figure.
2. Select owl:DatatypeProperty. Rename the new property to firstName.

Figure 20: Create new property

3. Click the Add row button in the Annotations Template section.
4. A Select annotation screen will pop up. Select rdfs:label and click OK.
5. Type {name} in the Initial Value field. Your screen should now look like the figure below.
Figure 21: Create firstName property with an automatically generated rdfs:label

6. Click OK.
7. Observe that the new property now has an automatically generated "first name" label.
8. Add the Person class to the domain of the newly created property. This can be done in one of the following ways:
   a. Drag the Person class and drop it over rdfs:domain in the form.
   b. Click on the Show widget menu button next to rdfs:domain and select Add empty row. (The Show widget menu button is located next to each widget on the form, as shown in Figure 18.) Type Person and click OK.
   c. Click on the Show widget menu button next to rdfs:domain and select Add existing... Select Person from the tree in the dialog and click OK.
9. Add xsd:string to the range of the newly created property. Click on the Show widget menu button next to rdfs:range and select Set to xsd:string.
10. Your screen should now look like the one shown in the next figure.
Figure 22: Defined firstName property

11. Repeat the steps above to create the lastName property.
12. Select the Person class.
13. Click on the Domain view. Observe (as shown in the next figure) that the newly created properties appear in the view.
Figure 23: Domain view for the Person class

Properties can be organized as hierarchies.

Exercise 9: Create object properties called hasDaughter, hasSon and hasChild
1. Press the Add new property button. The Create property dialog will appear.
2. Select owl:ObjectProperty. Rename the new property to hasDaughter and click OK.
3. Set the domain of the newly created property to be Person and the range to be FemalePerson.
4. Add another object property called hasSon.
5. Set the domain of the newly created property to be Person and the range to be MalePerson.
6. Add a third object property called hasChild.
7. Make hasChild a parent of hasDaughter and hasSon. This can be done in either of the following ways:
   a. Select hasDaughter. Drag and drop hasChild over the rdfs:subPropertyOf widget.
   b. In the Properties view select hasSon, and drag and drop it under hasChild.
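In Turtle notation, the definitions produced by Exercises 8 and 9 look roughly as follows. This is a sketch: the person: prefix and its URI are assumptions (the tutorial only introduces a prefix later, in Exercise 12), so substitute whatever base URI you chose when creating the file.

```turtle
@prefix person: <http://www.example.org/person#> .   # hypothetical base URI
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .

# Datatype property from Exercise 8
person:firstName a owl:DatatypeProperty ;
    rdfs:domain person:Person ;
    rdfs:range  xsd:string .

# Object property hierarchy from Exercise 9
person:hasChild a owl:ObjectProperty .

person:hasDaughter a owl:ObjectProperty ;
    rdfs:subPropertyOf person:hasChild ;
    rdfs:domain person:Person ;
    rdfs:range  person:FemalePerson .

person:hasSon a owl:ObjectProperty ;
    rdfs:subPropertyOf person:hasChild ;
    rdfs:domain person:Person ;
    rdfs:range  person:MalePerson .
```

You can compare this against what TBC itself shows in the Source Code tab (which may use a different serialization, such as RDF/XML).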
Declaring domains and ranges has a specific meaning in RDFS. For example, the definitions we have just made mean that individuals that are subjects of an RDF triple with the predicate hasDaughter (or are used on the left-hand side of the hasDaughter property) will be inferred to be members of the class Person. Any individuals that are objects of such a triple (or are used on the right-hand side of the hasDaughter property) will be inferred to be members of the class FemalePerson. Consult the RDFS section of Appendix A for additional information.

It is possible to specify multiple classes as the domain or range for a property. One can, for example, drag and drop multiple classes over the rdfs:domain widget. Multiple classes are interpreted as an intersection. For example, if the domain of a property has two classes, MalePerson and FemalePerson, any instance that is in the domain of the property will be inferred as being of both types. See the example in the figure below. If you want to say that the domain is a union of both sets, where an instance can be either a MalePerson or a FemalePerson, you should put MalePerson and FemalePerson on the same line and type or between them, as shown in the figure below.
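Sketched in Turtle (again assuming a hypothetical person: prefix and an illustrative property p), the two variants differ as follows:

```turtle
@prefix person: <http://www.example.org/person#> .   # hypothetical base URI
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .

# Two separate domain statements: the domain is the INTERSECTION.
# Any subject of person:p is inferred to be both a MalePerson and a FemalePerson.
person:p rdfs:domain person:MalePerson ;
         rdfs:domain person:FemalePerson .

# A single owl:unionOf class expression: the domain is the UNION.
# Any subject of person:p is inferred to be a MalePerson or a FemalePerson.
person:p rdfs:domain [
    a owl:Class ;
    owl:unionOf ( person:MalePerson person:FemalePerson )
] .
```

The union variant is what entering "MalePerson or FemalePerson" on one line of the rdfs:domain widget produces.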
Try both definitions and click on the Source Code tab to see the difference between the two versions. To delete the extra domain, click the Show widget menu button at the end of the line you want to delete and select Delete value. Alternatively, you can just delete the text from the line and click OK.

Keep in mind that while you have full control over your own ontology and can ensure unions for multiple domains and ranges, merging ontologies that use the same properties and specify domains or ranges for them will result in the intersections of those domains and ranges. This is often not the desired or expected behavior. For this and other reasons we recommend that domains and ranges be used judiciously.

3.3 Create instances

We are now ready to add a few instances to the ontology.

Exercise 10: Create instances of the Person class: SusannaShakespeare, JudithShakespeare, HamnetShakespeare and WilliamShakespeare
1. Select the FemalePerson class.
2. Click on the Instances view. Press the Add new instance button shown in the next figure. When the Create FemalePerson dialog pops up, replace the default name of the new instance with SusannaShakespeare.
- Add new instance
- Delete instance
- Show Instances view menu

Figure 24: Instances view

3. Repeat for JudithShakespeare.
4. Select the MalePerson class and add the instances HamnetShakespeare and WilliamShakespeare. At this point your screen should look similar to the one shown in the next figure.

Figure 25: Resource form for William Shakespeare

5. Drag HamnetShakespeare over the hasSon widget in the William Shakespeare form.
6. State that Judith and Susanna are William's daughters. This can be done in a number of ways. Try each to experience different ways of working with TBC:
   a. Use the basket:
      - Select the FemalePerson class.
      - Click on the Instances view.
      - Drag JudithShakespeare and SusannaShakespeare into the Basket view. If you do not see the Basket view, go to Window -> Show view and select the Basket view.
      - Select the MalePerson class.
      - Click on the Instances view and select WilliamShakespeare.
      - Drag JudithShakespeare and SusannaShakespeare from the Basket view over the hasDaughter widget in the William Shakespeare form.
   b. Use the Add existing menu option:
      - Click on the Show widget menu button next to hasDaughter and select Add existing.
      - Select JudithShakespeare and SusannaShakespeare from the Add existing screen and click OK.
   c. Type in the required information:
      - Click on the Show widget menu button next to hasDaughter and select Add empty row. Type in JudithShakespeare.
      - Repeat for Susanna, trying the auto-complete feature: type in Sus and hold the CTRL key while pressing SPACE.

You can easily change the type of any resource after it has been created. Let's say, for example, that you have created SusannaShakespeare as an instance of Person. You want to say she is a FemalePerson. Simply select SusannaShakespeare and drag FemalePerson over the rdf:type widget.

3.4 Execute SPARQL Queries

SPARQL is a proposed standard for querying RDFS/OWL data. TBC comes with a built-in query engine. Let's try some queries now.

Exercise 11: Run a default query
1. Click on the SPARQL view. Its options and layout are explained in the next figure.

Figure 26: SPARQL View
2. A sample query is already in the query panel. It will retrieve all triples of the pattern x rdfs:subClassOf y and will display x and y. In other words, it will get all resources that are subclasses of another class. The result will include the resources (subjects) as well as the classes they are subclasses of (objects).
3. Run the query. Observe that the results include the classes we have just created as well as built-in OWL and RDFS classes.

Let's write a query to retrieve all people who have daughters. Following the example above, a query to return all parents with their daughters should look like:

   SELECT ?subject ?object
   WHERE { ?subject hasDaughter ?object }

There is one issue with this query. The SPARQL syntax requires all names to be explicitly qualified with a namespace. Even if we talk about resource names from the default namespace, like hasDaughter, we need to use a prefix. In the simplest case we need to enter :hasDaughter with a leading : character. Alternatively, we could either use a fully qualified URI or create a prefix for the namespace we are using and append it to the property name, for example person:hasDaughter. Introducing a new prefix is particularly useful if you are working on a project that consists of multiple namespaces and modules. To see how this works in Composer, let's create a prefix for the namespace.

Exercise 12: Create a namespace prefix
1. In the form, click on the Show form menu button (shown in Figure 18). Select Navigate to ontology.
2. You will see an Ontology Overview form. Press the Add button.
3. Type person in the Prefix column. Type the ontology's namespace URI in the Namespace URI column.
4. Press Enter.

Exercise 13: Query for all parents of females
1. Click on the SPARQL view and type the query.
2. Press the Run query button. Your screen should look similar to the one shown in the next figure.
Figure 27: SPARQL View

3. Change person:hasDaughter to person:hasChild.
4. Run the query. You should receive no results.

Why have we received no results? hasDaughter and hasSon are subproperties of hasChild; therefore, according to the RDFS inference rules (explained in Appendix A), the query should have returned William Shakespeare with all his children. There is a simple explanation. We have been querying over the asserted (or stated) triples only, not over the inferred triples. Let's now run the inferencing and see how it changes the query results.

Exercise 14: Run inferences and query for all parents
1. Select the Inference -> Run Inferences menu option.
2. The inferred triples will appear in the Inferences view. Your screen should look similar to the one shown in the next figure.
Figure 28: Inferences for the hasChild property

3. In the SPARQL view, click on the Toggle between using currently configured inferences or not using them button (shown in Figure 26). This will enable the inferences to be used in the next query. Then run the query from the previous exercise again. You should see William Shakespeare with all three of his children.
4. Click on WilliamShakespeare and observe that his form now has inferred properties. These are shown on a light blue-gray background.

TBC maintains asserted and inferred graphs. If you were to close the ontology now and re-open it, you would see that all inferred statements disappear. It is possible, however, to make individual inferred statements persistent by turning them into assertions. An entire inferred graph can also be saved: by clicking on the Show widget menu button next to an inferred statement, you can select the Assert inferred statement option. Alternatively, the menu option Inference -> Save inference graph will save the entire graph.

5. Save the query for future use:
   - Open the Imports view and click on the world button with the plus sign, which is the Import from URL button. (The Imports view will be explained later in detail.)
   - In the Import from URL dialog, enter the following URL and click OK:
   - After you add the import, you will see the sparql:query property in the Properties view.
   - Select the person:hasChild property.
   - Add sparql:query to the form by dragging and dropping it, or by right-clicking in the body of the Annotations section of the form and selecting sparql:query.
   - Enter the query in the comment field. Press OK or ENTER.

There are two ways to run this query:
   a. Click on the Show widget menu button next to the text you have just entered. Select Execute as SPARQL query.
   b. Select the Query Library tab in the SPARQL view, where you will see this and other saved queries in the ontology. Make sure that the checkbox near this query is checked and the Toggle between using currently configured inferences or not using them button is selected. Run the query using the Run query button. The Query Library tab will appear with the result as in the following figure:

Figure 29: SPARQL Query Library

TBC will recognize SPARQL syntax in sparql:query fields and provide menu options to execute them.
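Written out with the person: prefix from Exercise 12, the saved hasChild query would look roughly like this. The prefix URI is an assumption; use whatever namespace URI you entered in the Ontology Overview form.

```sparql
PREFIX person: <http://www.example.org/person#>   # hypothetical namespace URI

SELECT ?subject ?object
WHERE {
    ?subject person:hasChild ?object .
}
```

With the inference toggle enabled, the triples asserted through the subproperties hasDaughter and hasSon match this pattern as well, which is why William Shakespeare is returned with all three children.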
3.5 Extend the ontology

In this section we will add two more object properties and observe the inferences they entail.

Exercise 15: Create an object property called hasSpouse
1. Press the Add new property button. The Create property dialog will appear.
2. Expand the tree under owl:ObjectProperty. Select owl:SymmetricProperty.
3. Rename the new property to person:hasSpouse.
4. Set the rdfs:domain field of the newly created property to be Person.
5. Observe that the owl:inverseOf property and the rdfs:range of hasSpouse are automatically inferred.

Prior to this exercise, only the RDFS vocabulary has been used in the person ontology (see Appendix A for more information on RDFS and OWL). This is the first time an OWL construct is being used. Even though we have defined classes as OWL classes and properties as OWL properties, we have not done any modeling that required OWL expressivity. All the exercises thus far could have been accomplished without using any OWL statements. For example, instead of creating subclasses of owl:Thing, we could have created subclasses of rdfs:Resource, which are declared as RDFS classes. Composer can be configured as an RDFS-only editor by hiding all OWL constructs. This can be done by selecting Window -> Preferences and then making the appropriate selections under TopBraid Composer -> Classes View and Properties View.

A symmetric property entails the following inferences:
1. If property p is symmetric and there is a triple a p b, it will be inferred that b p a. In our example, if a hasSpouse b, then b hasSpouse a.
2. From this rule it can be concluded that if p rdfs:domain a, then p rdfs:range a. Similarly, if p rdfs:range a, then p rdfs:domain a.

Most inferences will only appear in Composer if you explicitly run the inferencing. However, there are a limited number of trivial inferences that Composer performs interactively, or just in time. These are:
- Coordination of inverses.
  If it is stated that property p owl:inverseOf q, TBC will infer that property q owl:inverseOf property p.
- Coordination of domains and ranges of inverse properties. If it is stated that p rdfs:domain a and p owl:inverseOf q, TBC will infer that q rdfs:range a.
- Since saying that p rdf:type owl:SymmetricProperty is the same as saying p owl:inverseOf p, TBC will (as demonstrated by the previous exercise) infer that p owl:inverseOf p and, if p rdfs:domain a, that p rdfs:range a.

Because TBC maintains these automatic inferences, we recommend that if you use inverse properties, you specify domains and ranges only for the properties going in one direction and let TBC maintain the domains and ranges for their inverses.
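The symmetric-property entailments above can be sketched in Turtle. The person: prefix and the AnneHathaway individual are hypothetical, used only to illustrate the pattern:

```turtle
@prefix person: <http://www.example.org/person#> .   # hypothetical base URI
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .

# Asserted in Exercise 15:
person:hasSpouse a owl:SymmetricProperty ;
    rdfs:domain person:Person .

# A hypothetical assertion:
person:WilliamShakespeare person:hasSpouse person:AnneHathaway .

# Entailed (shown here as comments, not assertions):
#   person:hasSpouse owl:inverseOf person:hasSpouse .    (symmetric = its own inverse)
#   person:hasSpouse rdfs:range person:Person .          (range mirrors the domain)
#   person:AnneHathaway person:hasSpouse person:WilliamShakespeare .
```

The first two entailments are among the "just in time" inferences TBC maintains automatically; the third appears when you run the inferencing.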
Exercise 16: Prove that the second inference is true
1. Prove the second inference rule for symmetric properties, as described in the General Note. Use the information in Appendix A and/or any other sources on RDFS and OWL.

Exercise 17: Create an object property called hasFamilyMember
1. Press the Add new property button. The Create property dialog will appear.
2. Select owl:ObjectProperty. Rename the new property to person:hasFamilyMember.
3. Make hasChild and hasSpouse subproperties of hasFamilyMember.

When subproperties have their domains and ranges defined, it is usually unnecessary, and even inadvisable, to define a domain and range for the parent property. Another ontology design pattern (less commonly used, but applicable in some cases) is to define the domain and range for a parent property and leave out the domain and range definitions for the subproperties.
4 Working with Imports and multiple ontologies

In this chapter we will connect the geotravel.owl and person.owl models to describe the fact that people may be interested in different leisure activities. The geotravel.owl ontology in the Examples folder of the downloaded library already describes different activities. Rather than replicate this information in person.owl, we will show how to import and re-use it. The end goal may be to create an application that recommends vacation destinations to people based on their preferences and the preferences of their family members.

OWL ontologies may import one or more other OWL ontologies. When an ontology imports another ontology, not only can classes, properties and individuals be referenced by the importing ontology, but the axioms and facts contained in the imported ontology are actually included in the importing ontology. OWL allows ontology imports to be cyclic; so, for example, the travel ontology may import the person ontology and the person ontology may import the travel ontology. For our exercise we have decided to import the person ontology. Notice the distinction between referring to classes, properties and individuals in another ontology using namespaces, and completely importing an ontology.

Exercise 18: Import the person ontology and make changes
1. In the Navigator view, open the geotravel.owl file.
2. Click on the Imports view and press the Import local file button shown in the next figure.

- Import local file
- Import from URL
- Show Imports view menu
- Download model to local file
- Remove selected import

Figure 30: Imports view buttons

3. When the Import local file dialog pops up, expand the workspace folders until you've located the person ontology, select it and click OK. The dialog should look similar to the following figure. Alternatively, you can drag and drop the person.owl file from the Navigator into the Imports view.
Figure 31: Import local OWL/RDF file dialog

4. Your screen should now look similar to the one shown in the next figure. Note that some classes and properties (such as, for example, Person) are displayed using washed-out icons and fonts. These are the resources that come from the imported model.
5. Create an object property called hasFavoriteActivity. Set its domain to Person and its range to Activity. You have just created a bridge between the two models!
6. Save your changes using File -> Save.
Figure 32: Travel ontology with import of the person ontology

7. Select the Person class and modify its name to be person:HumanBeing. Press ENTER.
8. The dialog shown in the next figure will pop up. Press OK.

Figure 33: Re-factoring name changes

9. Close all ontologies you have open by using File -> Close All.
10. A dialog will pop up offering to save the changes. Since we do not want to save the most recent change, click on Deselect All and then press OK.
When working with multiple ontologies, it is important to know where new statements and/or changes to old statements are saved. Composer follows these rules:

- New statements are added to the currently selected ontology. The property hasFavoriteActivity was added to geotravel.owl.
- Forms for imported resources can be edited. Any changes to existing statements are written into the ontology they come from. For example, if we were to remove the fact that FemalePerson is a subclass of Person (essentially remove the triple person:FemalePerson rdfs:subClassOf person:Person), the change would go into the person.owl file.
- If we were to say that the domain of hasSon is no longer Person but Parent (a new class we could define for this purpose), the location of the new triple depends on how the change is made:
  o If we were to overtype Person with Parent, the change would be saved in person.owl as an update to the previously existing triple in that file.
  o If we were to delete the entry about the domain and then add a new one, the deletion would be done in person.owl and the new triple would be saved in geotravel.owl.
- When we changed the URI of the Person class to person:HumanBeing, the change was made to the Person class definition in person.owl. TBC resolved and updated all the references to this class in person.owl. It also scanned to see whether any other ontologies import it.

When working with modular imported ontologies, it is therefore possible to intentionally or accidentally make changes to imported files. If imported files come from the web, such changes will be lost when you close the model. With local files they can be saved. Composer keeps a log of all changes, accessible from the Change History view. Unsaved changes can be rolled back using Edit -> Undo.
When working with local models that belong to other parties, a good practice is to lock them (make them read-only) to prevent accidental updates. This can be done by clicking on the file in the Navigator view and pressing the lock button.

The person ontology has some general (schema-level) information about people. It also has some very specific information about the Shakespeare family. Since we are interested in the general cases of relationships between people and their travel interests, it makes sense to separate the information about the Shakespeare family into a file of its own.

Exercise 19: Move resources between ontologies
1. In the Navigator view, create a new RDF file and call it shakespeare.rdf. Invent a base URI of your choice.
2. Import person.owl. Your screen should now look similar to the one shown in the next figure.
Figure 34: New shakespeare.rdf file

3. Select the File -> Save All menu option. Notice that the stars in front of the file names disappear.
4. Open person.owl.
5. Select the MalePerson class and click on the Instances view.
6. Drag WilliamShakespeare and HamnetShakespeare into the Basket.
7. Select the FemalePerson class.
8. Drag JudithShakespeare and SusannaShakespeare into the Basket.
9. Select all 4 resources in the Basket and drag and drop them over the shakespeare.rdf file in the Navigator, as shown in the next figure.
Figure 35: Moving resources into the shakespeare.rdf file

10. The Confirm move resources dialog will pop up. Press Yes.

Figure 36: Confirm move resources dialog

11. Observe that the classes in person.owl no longer have instances associated with them.
12. Switch to the shakespeare.rdf file and locate the William Shakespeare resource. Observe that the connections between him and his children are still in place.
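After the move, the content of shakespeare.rdf would look roughly like the following Turtle sketch. Both namespace URIs are hypothetical (you invented the base URI in step 1), and the import corresponds to step 2 of the exercise:

```turtle
@prefix shakespeare: <http://www.example.org/shakespeare#> .  # hypothetical base URI
@prefix person:      <http://www.example.org/person#> .       # hypothetical base URI
@prefix owl:         <http://www.w3.org/2002/07/owl#> .

<http://www.example.org/shakespeare> a owl:Ontology ;
    owl:imports <http://www.example.org/person> .

# The moved individuals keep their types and connections,
# but now live in the shakespeare namespace.
shakespeare:WilliamShakespeare a person:MalePerson ;
    person:hasSon      shakespeare:HamnetShakespeare ;
    person:hasDaughter shakespeare:JudithShakespeare ,
                       shakespeare:SusannaShakespeare .
```

Note that the class and property names still come from the person namespace; only the individuals were moved.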
Notice that in the previous exercise all 4 individuals were moved at once. Let's examine what would happen if we were to move person:WilliamShakespeare first and then move person:SusannaShakespeare:

- Saying yes to the "should the namespace be adjusted" question would change the record id of person:WilliamShakespeare to shakespeare:WilliamShakespeare.
- The record id for person:SusannaShakespeare would stay the same. Therefore, the shakespeare.rdf file would now have the following triple:
  shakespeare:WilliamShakespeare person:hasDaughter person:SusannaShakespeare
- If we were now to move Susanna and say yes to the namespace adjustment, her record id would change to shakespeare:SusannaShakespeare and the connection between her and William would be lost.

If you are able to move all resources at once, Composer will maintain the relationships between connected resources as their URIs are modified. Say yes to the "adjusting the namespace" question. If you are not able to move all the connected resources at once, then, for all the moves except the very first one, say no to the "adjusting the namespace" question and, after the move, manually modify the resource ids so that they include the correct namespaces.

As demonstrated in Exercise 18, if you change the URI of a resource, TBC will check to see whether any files import this model and may therefore be impacted by the renaming. A dialog with the (potentially) impacted models will be shown. Under your direction, TBC will propagate the change to all affected files.

For more granular control over the moving operation, use the Triples view, accessed by selecting Window -> Show View -> Triples.
5 Defining Classes with OWL DL

As noted before, all RDFS declarations about properties are global. If it is stated that Person is in the domain of the hasChild property, this declaration remains true everywhere the hasChild property is used. In other words, it defines the property hasChild and not the class Person. Let's consider the following example:

- We have already said that Person hasFavoriteActivity Activity.
- We now want to define a special class of people - adventurers - and say that at least some of their favorite activities are adventure activities.
- If we re-use the same property and say that Adventurer hasFavoriteActivity Adventure, then any statement of the form x hasFavoriteActivity y will result in the inferences x rdf:type Adventurer and y rdf:type Adventure. Everyone who has a favorite activity will become an adventurer, and any activity liked by anyone will become an adventure.

This is where OWL restrictions come in. Unlike domains and ranges, restrictions define classes. They are used to restrict the individuals that belong to a class. OWL supports the following restrictions:

- Quantifier restrictions - allValuesFrom [1] and someValuesFrom [2]
- Cardinality restrictions - minCardinality, cardinality and maxCardinality
- hasValue restrictions

In addition to selecting a type of restriction, a decision will need to be made whether the restriction is to be declared using rdfs:subClassOf or owl:equivalentClass statements.
Consider the difference:

Saying that US Citizen is a subclass of all things for which the value of the nationality property equals (hasvalue) USA means that:
o if it is known that an individual is a US Citizen, it can be inferred that his nationality is USA.

Saying that US Citizen is equivalent to all things for which the value of the nationality property equals (hasvalue) USA means that:
o if it is known that an individual is a US Citizen, it can be inferred that his nationality is USA AND
o if it is known that an individual's nationality is USA, it can be inferred that he is a US Citizen.

Let's use the example introduced at the beginning of the section and create a restriction.

Exercise 20: Create a somevaluesfrom restriction using the Edit Restriction dialog
1. Create the Adventurer class in the person ontology
2. Open the travel ontology and select the Adventurer class
3. At the Class Form, near the owl:equivalentclass widget, click on the Show widget menu button (shown in Figure 18) and select Create Restriction. The Edit Restriction dialog will pop up.
4. At the dialog, select the hasfavoriteactivity property from the On Property tree and somevaluesfrom (some) from the Restriction Type options. At the Filler, enter the value Adventure. Click on OK. The dialog will look as in the following figure:

1 Also called universal quantifiers
2 Also called existential quantifiers
Figure 37: Edit Restriction dialog

5. Alternatively, you can enter the above restriction by adding an empty row in the owl:equivalentclass widget and entering your value. Your screen should now look like the one shown in the next figure:

Figure 38: Defining a somevaluesfrom Restriction for the Adventurer class
We want to further qualify the definition of Adventurer by saying that adventurers are people who like some adventure activities. One option would be to make Adventurer a subclass of Person. This, however, may result in unexpected inferences: any time it is asserted that x hasfavoriteactivity y and y rdf:type Adventure, it would be inferred that x rdf:type Person, even if it is known that x is, for example, a dog. Instead, let's define Adventurer as a class equivalent to all people who like some adventures.

Exercise 21: Combine a somevaluesfrom restriction with a Boolean operator
1. Place your cursor right after Adventure and type and person:person
2. Press OK or ENTER. The Class Form should now look like the one shown in the next figure.

Figure 39: Defining the Adventurer Class as an intersection of a restriction and the Person class

3. Select Inference -> Run Superclass Inferences only and notice that, as shown in the next figure, it has been inferred that Adventurer is a subclass of Person.
4. Notice the list of inferred triples in the Inferences view.
Last updated 2006-02-02.
The GNU Pascal Coding Standards were designed by a group of GNU Pascal project volunteers. The aim of this document is to extend the GNU Coding Standards with information specific to Pascal programming. As a matter of fact, the information contained in the GNU Coding Standards mainly pertains to programs written in the C language. On the other hand, the GNU Coding Standards also explain many of the rules and principles that are useful for writing portable, robust and reliable programs. Most of those general topics can be shared with this document with just a few Pascal-specific notes, thus cross references are provided which will lead you to the more extensive information contained in the GNU Coding Standards.
This release of the GNU Pascal Coding Standards was last updated 2006-02-02.
The GNU Pascal Coding Standards are available as part of the GPC distribution – in binary distributions as info files, in source distributions also as Texinfo files from which further formats such as HTML, PostScript and PDF can be generated. An HTML version is also available on GPC's home page.
Corrections or suggestions for this document should be sent to the GNU Pascal Compiler Documentation mailing list, gpc-doc@gnu.de. If you make a suggestion, please include a suggested new wording for it; our time is limited. A context diff to the “source” Texinfo file would be very appreciated, if at all possible. If you can't provide the context diff, please feel free to mail your suggestion anyway.
These people are the tyrants who have been imposing their coding style on the community so far: Peter Gerwinski `peter(at)gerwinski.de', Frank Heckenbach `frank(at)pascal.gnu.de', Markus Gerwinski `markus(at)gerwinski.de', Dominik Freche `dominik.freche(at)gmx.net', Nicola Girardi `nicola(at)g-n-u.de'.
This chapter from the GNU Coding Standards discusses how you can make sure that GNU software avoids legal difficulties, and other related issues. See Intellectual Property.
This chapter discusses some of the issues you should take into account when designing your program.
We support the idea that a variety of programming languages is a good thing and that different languages are appropriate for different kinds of tasks. Unlike the GNU Coding Standards (see Source Language), we do not try to persuade you to use C, Pascal or any single language for everything.
If you're reading this, you have probably already decided to use Pascal for some project or are considering using it. This documentation will suggest you how to format your Pascal code when you do so.
You can link a C library or C object code to your Pascal program or unit. Please note the description in the GPC manual on how to do so (see Other Languages).
In particular, to access C libraries, we strongly recommend using C wrappers. This is a portability issue. There might be changes in different versions of the library which might affect direct `external' declarations in Pascal code. You should update the wrappers so that Pascal programs or units work with whatever version of the library you have.
There are times when you deal with large packages and you can't easily retain compatibility to different versions of the packages themselves. In this case, you can link directly to the library you're going to work with, and link a supplementary C file which does nothing but the version check. This is an example:
#include <foo.h>

#if FOO_MAJOR != 1 || FOO_MINOR != 2
#error The GPC interface for libfoo was only written for libfoo-1.2.
#error Please get libfoo-1.2 or check for a version of the GPC interface
#error matching your version of libfoo.
#endif
Note the use of `!=' instead of `<' or `>', to perform a very strict version check. Please keep in mind that this is alright if there is only one implementation of a library, i.e., you can do this with GTK, but you can't with libc, libm, curses etc.
An automatic header translator is planned which would make the C wrappers superfluous. This is a highly non-trivial job and it's not sure that it's at all possible, so it will at least take some time to be available.
You can assume the GNU C Compiler is used to compile the wrappers and, in general, any C code snippet you link to your Pascal code. The reason for this assumption is that only the GNU C Compiler is guaranteed to have all conventions compatible with the GNU Pascal Compiler on every platform they run on, as they share the same backend. Also, the GNU Pascal Compiler is always built together with the GNU C Compiler, so `gcc' can be assumed to be available wherever `gpc' is.
Many GNU Pascal facilities are provided which extend the standard Pascal language. Whether to use these extensions in implementing your program is a difficult question.
On the one hand, using the extensions can make a cleaner program. On the other hand, people will not be able to build the program unless the GNU Pascal Compiler is available. This might cause the program not to compile with other compilers.
In general, it is best to retain compatibility with other compilers or with the language standards, if this compatibility is easy to achieve. Sadly enough, achieving compatibility often brings considerable drawbacks. For example, you might have to add lots of `{$ifdef}'s to cater for some non-standard compilers, which make the code harder to read, write, test and maintain. Moreover, `{$ifdef}'s themselves are non-standard extensions, so you don't win much this way.

In the end, we suggest not to bother too much about compatibility. All of the GNU Pascal Compiler interfaces (compiler and Run Time System) are open. This means they can be implemented for other compilers when needed, or even the same sources can be used, provided the license is preserved (read more in the GNU General Public License), rather than crippling the code by not using the extended features. A (limited) example of this strategy is the `gpc-bp' unit for Borland Pascal, distributed with the GNU Pascal Compiler. You might want to look at its interface to see what exactly it contains. It's easy to extend it with more compatibility features when needed, though there are features that cannot easily be emulated (in particular those that have a special syntax).
Please do not use the following features, especially the ones that were implemented just for backward compatibility:
Str (Foo, s); s := 'Hello ' + s;
Using

FillChar (s, SizeOf (s), 0);

to clear a string is wrong in GNU Pascal and inefficient even in Borland Pascal, since the following could be used:
s := '';
This would only clear the length field of the string `s'.
The GNU Coding Standards have nice statements on this topic. See Using Extensions.
This chapter from the GNU Coding Standards describes conventions for writing robust software. It also describes general standards for error messages, the command line interface, and how libraries should behave. We encourage you to read that part of the GNU Coding Standards. See Program Behavior.
Here are special notes for Pascal programming, anyway.
The choice between signal functions, discussed in the GNU Coding Standards, is done in the Run Time System so you needn't care about it.
Another discrepancy with the GNU Coding Standards is the default behavior for error checks that detect "impossible" conditions. We don't suggest just aborting: aborting implies that every user can be a programmer, but we don't believe this is realistic. Our advice is to print a reasonable error message, so that users can send bug reports to programmers who didn't notice the bug themselves or could not reproduce it.
Also, the GNU Coding Standards suggest checking every system call for an error return. That applies to C. In Pascal, error checking is often automatic, thus you needn't bother with error checking. Many I/O routines don't return a value (for example, `Reset'), but those that do should usually be checked.
Of course you can disable automatic error checks and see to them for yourself. In fact, some errors might cause the program to automatically abort with an error message. Instead, especially in units or modules, you might want to report errors and give the user a chance to intervene and fix things up. To do so, you must use the `{$I-}' compiler directive, and check the value of `IOResult' (see IOResult) or the global error variables such as `InOutRes' (see InOutRes). Note that I/O routines return immediately if `InOutRes' is set, so it's not necessary to check it after each operation, so the following is possible:
{$local I-}
Rewrite (f, 'bla');
WriteLn (f, 'foo');
WriteLn (f, 'bar');
WriteLn (f, 'baz');
Close (f);
{$endlocal}
if InOutRes <> 0 then
  begin
    WriteLn (StdErr, GetIOErrorMessage);
    ...
  end;
However, you might also want to check `Rewrite' and other opening calls individually, since they are the most likely to fail, and doing so avoids unnecessary further calls.
There is a set of routines in the GPC unit for naming temporary files, configuration files and many other file name related stuff. The advantages of using these are that they work for different kinds of systems (for example Unix and DOS), and that future problems can be corrected in one place in the Run Time System rather than in several different programs or units.
As far as libraries are concerned, we suggest that you don't put each routine in a separate file. Hopefully someday the GNU Pascal Compiler will do this automatically on the linker level. At the moment, we believe that the convenience to the programmer is much more important than binary size. We also recommend not using a name prefix, as name conflicts can be resolved by qualified identifiers (`UnitName.RoutineName').
This chapter provides advice on how best to use the Pascal language when writing software. Of course, the rules apply to published code only – if you for example want to comment things out with old style comments like `(* this one *)', you should do it temporarily and remove it before distributing your code. But since you never know if and when you are going to publish your code, it's a good idea to stick to the rules from the beginning.
Pascal code file names should have the `.pas' suffix. The file name without the suffix should usually correspond to the name of the program/unit/module, but all in lower case. There should be only one program/unit/module in a file.
Code must compile with the `-Wall' flag, with and without the `-O3' flag with no warnings. (See Compiler Directives, for how to intentionally disable certain warnings if really necessary.)
Don't use the automatic `Result' variable in functions. If you want one, just declare it:
function Foo (...) = Bar: Integer;
Use the declaration with `=', not without it, unless you want to be strictly PXSC compatible.
If a function returns a `Boolean' to indicate success, `True' should mean success and `False' failure, unlike some C routines where `0' means success.
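A minimal sketch of a routine following this convention – the routine name and the configuration-file scenario are hypothetical, not part of any GPC interface; it also uses the named function result form recommended above:

{ Hypothetical example: True means success, False means failure. }
function TryOpenConfig (const FileName: String; var f: Text) = OK: Boolean;
begin
  {$local I-}
  Assign (f, FileName);
  Reset (f);
  {$endlocal}
  OK := IOResult = 0  { True on success, unlike C's 0-means-success }
end;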
Avoid `goto' and similar statements, like `Exit', `Return', `Break', `Continue'. Avoid `goto' at any price (except possibly a non-local `goto' to return from deeply nested, recursive functions in case of error). Avoid the others if this is possible with reasonable effort. If it would require an additional `Boolean' variable, this counts as an excuse for using those statements if you really want. Note that often, code becomes significantly simpler by not using `Break' etc. and instead using a better loop condition or a different kind of loop.
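As an illustration of trading `Break' for a better loop condition, consider a hypothetical linear search (`a', `n' and `Key' are assumed declarations):

{ With Break (works, but discouraged): }
i := 1;
while i <= n do
  begin
    if a[i] = Key then
      Break;
    Inc (i)
  end;

{ Without Break, using a better loop condition. `and_then'
  guarantees short-circuit evaluation, so a[i] is not
  accessed when i > n: }
i := 1;
while (i <= n) and_then (a[i] <> Key) do
  Inc (i);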
Never modify `for' loop counters, or rely on their value after the loop. (Well, that's not merely a coding style, that's the definition of Pascal. Doing such things will produce undefined results.)
Never rely on undefined behavior. For example, that global variables seem to be initialized to `0' at the start of the program, or perhaps sometimes newly allocated memory seems to be initialized, or memory after deallocation still seems to hold some values, or that `for' loop counters seem to have a certain value after the loop – none of these is guaranteed, and the behaviour may change when you change compiler or its version, or when you change platform. Undefined means undefined, and the fact that such things might seem to work on all systems you have checked and with 42 other compilers means exactly nothing.
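For instance, instead of relying on a global variable seeming to start at `0', initialize it explicitly (a hypothetical sketch):

program InitDemo;

var
  Total: Integer;  { do not assume this starts at 0 }

begin
  Total := 0;  { make the initial value explicit }
  Inc (Total, 42);
  WriteLn (Total)
end.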
In comparisons put the “more variable” expression on the left side:
for i := 1 to 10 do
  if a[i] = Foo then
    for j := 1 to 10 do
      if b[j] = a[i] then
        ...
Considering the second line of the above example, the expression on the left (a[i]) varies each turn, but the right side (Foo) does not. (In this case we assume that Foo is a constant or a function which doesn't depend on i or some other global data. Otherwise it might make sense to put Foo on the left, and perhaps use an extra comment to point this out.)

The last line of the above example might look strange, because b[j] and a[i] might look as though they have the same level of "variableness". But in fact, j ranges more often than i, i.e. each time i changes, j has already changed 10 times.
Avoid code duplication. It is easy to copy the code, but it becomes a maintenance nightmare to change several similar places. Use routines or subroutines, units or modules, whatever. Plan each part of the code so that it can be extended. Don't pull too clever tricks in places that will likely be changed later.
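As a sketch of what this means in practice (the routine `Error' and the variable `FileName' are invented for this illustration):

```pascal
{ Instead of repeating the same statement in several places ... }
WriteLn (StdErr, 'error in ', FileName, ': missing header');
...
WriteLn (StdErr, 'error in ', FileName, ': bad checksum');

{ ... factor it into a routine: }
procedure Error (const Message: String);
begin
  WriteLn (StdErr, 'error in ', FileName, ': ', Message)
end;
```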
Do not surround single statements with `begin' and `end', unless you have to avoid the dangling else problem or the single line statement forms a whole routine body! See the following examples:
if foo then
  begin
    if bar then
      baz
  end  { Avoid the dangling else problem. }
else
  qux  { Single line statement. }
Do not write empty unit initializers. This is what not to do:
...

procedure Foo;
begin
  ...
end;

begin
end.
Instead, simply:
...

procedure Foo;
begin
  ...
end;

end.
Do not write unused declarations, unless in interfaces which are meant to be used by the importer.
Remember that `Boolean's are `Boolean's. Please use `if Foo then' instead of `if Foo = True then', and `if not Foo then' instead of `if Foo = False then'. Also, use `until False' in place of `until 1 = 0' – this looks smarter. Another common situation is `Foo := Expression' instead of `if Expression then Foo := True else Foo := False'.
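To summarize these points with a sketch:

```pascal
{ Avoid: }
if Foo = True then Bar;
if Foo = False then Baz;
if Expression then Foo := True else Foo := False;
repeat ... until 1 = 0;

{ Write: }
if Foo then Bar;
if not Foo then Baz;
Foo := Expression;
repeat ... until False;
```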
Avoid duplicate global identifiers, i.e. don't overload a built-in identifier, although the GNU Pascal Compiler allows this, and don't use the same global identifier in several units or modules. (Thanks to “qualified identifiers” such identifiers pose no problem to the compiler but still can confuse humans.)
We discourage the use of global variables for non-global purposes (e.g., the use of a variable `Counter' as a counter in various local routines). Declare a counter variable for each routine that needs it, instead. In general, this also allows for better optimization of the code generated.
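For instance (a sketch; `ProcessAll' and `n' are invented names):

```pascal
{ Avoid a shared global counter: }
var
  i: Integer;  { used as a loop counter by several routines -- bad }

{ Instead, give each routine its own: }
procedure ProcessAll;
var
  i: Integer;
begin
  for i := 1 to n do
    ...
end;
```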
When you need an infinite loop (which may be left with `Break'), we suggest you use a `repeat' rather than a `while' loop because it shifts your code less to the right (at least, if there's more than one statement in the loop). That is:
repeat
  ...
until False
Instead of:
while True do
  begin
    ...
  end
As stated in the GNU C library documentation (see Consistency Checking), when you're writing a program, it's often a good idea to put in checks for violations of basic assumptions. Consider the following procedure in Pascal:
procedure DoSomethingOnAPString (StrPtr: PString);
You may implicitly assume that the above procedure will never be called with `nil' as its argument, but it is safer to check for the “impossible condition”, i.e. check that `StrPtr' is other than `nil', like this:
procedure DoSomethingOnAPString (StrPtr: PString);
begin
  Assert (StrPtr <> nil);
  ...
end;
When this check fails, the program produces a runtime error. You then may infer that the code which calls this procedure is buggy (or that you need to extend this particular routine), so this may indeed be helpful in locating the problem. In other words, checking for basic assumptions at the beginning of a routine body or other strategic places is the right way to make sure a function isn't being misused.
The GNU C library provides the `assert' macro for this kind of check. GNU Pascal provides a Pascal counterpart called `Assert', which behaves a little differently: `Assert' won't abort your program, but rather cause a runtime error (see Assert) which, e.g., you can catch using the `Trap' unit (see Trap).
Once you think your program is debugged, you can disable the error checks performed by the `Assert' routine by recompiling with the `--no-assertions' switch. No change to the source code is needed in order to disable these checks. Side-effects in the argument to `Assert' are still evaluated (unlike in C), so it is alright to write:
Assert (MyFunction (Foo, Bar) > 0)
This will always call `MyFunction', but only make sure that its result is positive if `--no-assertions' is not given.
However, it is recommended that you don't disable the consistency checks unless you can't bear the program to run a little slower.
First of all, avoid unnecessary spaces at the end of the lines. Also remember not to save the file with TAB characters, as different editors or different configurations will interpret them with a different amount of spaces, thus breaking indentations. (If you use GNU Emacs, the `untabify' function comes in handy; if you use VIM, the option `expandtab' (`:set et'); in PENG, the option `Expand tabs' can be used.)
Please avoid the use of any control characters, except newline, of course. This means no form feed characters (`#12'), i.e. new page characters. They are recommended in the GNU Coding Standards to separate logical parts of a file, but don't use them, at least in Pascal code. No `SUB' character (`#26') either, misused as an end-of-file indicator by DOS. Older DOS editors put that character at the end of each file for no good reason, even though the FAT file system knows the end of a file on its own.
We recommend a maximum line length of 68 characters, so that it can be printed in TeX with default font on A4, or 78 characters, for 80-column wide screens. This is not a fixed rule because breaking lines too often decreases readability of source code.
Use empty lines between blocks. Blocks are long comments, `type', `const', `var', `label' sections, routine bodies, unit/module initializers/finalizers, `program', `unit', `interface', `implementation', `module', `export', `uses', `import' lines, global compiler directives. As for long comments that refer to the following declaration, put only an empty line before the comment, not between the comment and the declaration itself. A special exception is between blocks within the same routine – do not use empty lines there. For example:
procedure Short;
var
  Foo: Integer;
  Bar: Char;
begin
  ...
end;
But remember to use empty lines to separate subroutines, like the following:
procedure Long;

const
  ...

var
  variables used by Sub ...

  procedure Sub;
  var
    ...
  begin
    ...
  end;

var
  variables not used by Sub ...

begin
  ...
end;
Note that you shouldn't put an empty line after the main routine declaration, unless a subroutine declaration immediately follows. Otherwise the main routine declaration would look like a forward declaration.
Notice that in the code snippet above we separated local variables (or constants) before and after the subroutine – this is not mandatory.
Of course, what we said for subroutines is also valid for sub-subroutines at any depth.
An empty line should be put between declarations of the same type, where appropriate, to separate them logically. In case there is a comment before the declaration, the empty line must be before the comment. Otherwise, the empty line goes before the declaration.
Empty lines can be used in long comments to separate paragraphs.
No empty lines at the beginning or end of a file, and only one newline at the end. No multiple empty lines.
The comments should be placed in braces like this:
{ This is a nice comment. }
Do not use the old style comment between brackets and asterisks, like this:
(* This is an ugly comment. One you mustn't write. *)
Also, do not use comments introduced by the double slash:
// Another kind of comment not to write.
Although ISO Pascal explicitly allows mixed comments, the GNU Pascal Compiler doesn't even accept them unless you turn on the option with the compiler directive `{$mixed-comments}' – but you don't want to do that. Here are a couple of examples of mixed comments, which you should not follow:
(* This ... } { ... and that. *)
Also, try to avoid nested comments, like `{ { This one } }'. These are alright if you want to put some TeX in a comment or something more exotic. Whatever reason you have to use nested comments, you need to turn on the option with the appropriate compiler switch, which is `{$nested-comments}'. Do not use the `--nested-comments' command line option. Put all such options in the source, so that someone else trying to compile it doesn't have to figure out what command line switches are needed, and because command line options would affect all source files, e.g. when compiling a project with multiple units/modules.
Please write the comments in English.
You should adopt “French Spacing”, i.e. only one space at the end of a sentence. This way, you can't use the GNU Emacs `M-a' and `M-e' key combinations to move through sentences. We hope that you can live without that. Also, please put just one space after the comment opening brace and before the closing brace.
If a comment regards only one line of code, possibly write it after the line of code, in the same line, separated from the code with two spaces. This is also allowed for the interface section of a unit and for global variables. Most often you are likely to write this sort of comment beside record/object fields. In other cases, comments go in one or more lines of their own, like this:
{ foo bar baz }
Or longer:
{ foo bar
  baz }
Or with paragraphs:
{ foo bar

  baz qux }
The comments need to be placed before the code they describe, and they need to get the same indentation level. This example should make this clear:
{ My types. }
type
  ...

type
  { My first types. }
  Foo = Integer;
  ...

begin
  { My first statement. }
  Bla;

  { Start of loop. }
  repeat
    { Body of loop. }
    ...
  { Finish when Something happens. }
  until Something
end;
Note the position for the comment to `until'.
Comments describing a global declaration should be on one or more lines of their own, immediately before the declaration. For example:
{ This is Foo. It does this and that. }
procedure Foo;
Do not write “trivial” comments, like the ones listed in the examples above. You should avoid comments by writing clear code. Linus Torvalds points this out strongly in the Linux kernel coding style.
(Note that we otherwise deviate quite a bit from Linus's coding style.)
“Tricky” code is worth commenting. We call code “tricky” when it does non-obvious things, relies on non-obvious assumptions, has non-obvious implications, needs special care when being changed, is not what it looks like at first sight, has a side effect, or requires other parts of the source file to be changed simultaneously with it. Tricky code should be used very sparingly.
In the case that a comment refers to some other place in the code, either in the same file or in a different file, please refer to it not by line number (this will change too often) but by routine name or context. Also, think whether it is useful to put a comment in the other place pointing back. (Not always, but sometimes this has proved useful to us.)
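For example, a hypothetical pair of comments pointing at each other by routine and type name rather than by line number:

```pascal
{ Keep the order of these fields in sync with SaveHeader below. }
...
{ The write order here must match the fields of TFileHeader above. }
```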
To comment out parts of code that should not be compiled, you need to surround it with `{$if False} ... {$endif}' rather than using a comment.
To separate logical parts within big modules or units, you can use a special comment – we suggest this fixed pattern as it's easily searchable:
{@section Name of the section}

{@subsection Name of the subsection}
Note that no space follows the opening brace nor precedes the closing brace in this case.
A module or unit or library should have a comment for each of its interface declarations, so that the interface part of the source file is a reliable source of documentation. This is optional for any declarations introduced only in the implementation section or in `program's. Of course, several related declarations (e.g., groups of constants) can share a comment.
A utility called `pas2texi' will be written to build Texinfo files from Pascal comments. This will allow certain kinds of markup within comments. They will be described in the documentation of `pas2texi' and/or in future versions of this document.
You can use “fixme” comments to point out things to be fixed in the code, or in a library (or module, or unit, or compiler used) which directly affects the code, requiring a work-around. These comments start with at least two `@' – add more `@' as the urgency of the issue increases.
These comments may contain more or less obscure details about the problem, especially if the root of the problem is elsewhere. For example, the comment `{ @@fjf226 }' declares the following code a work-around to a GNU Pascal Compiler problem which is demonstrated by the GNU Pascal Compiler test program `fjf226.pas'. (It is a file you can find in the GNU Pascal Compiler source package.)
“Fixme” comments should not be mixed with ordinary comments. If you need both kinds, use them separately, even if directly after each other. They can be used everywhere, even within statements, since they are temporary in nature. Most normally they happen to fall in the body, unless they influence interfaces. In particular, interfaces that are likely to be changed should have a `@@' comment immediately before their description comment.
Please start each file with a comment containing, in this order:
In general, you should follow this order for declaration blocks:
You may deviate from this order when it is necessary or makes the code more readable. This is an example where the order can't be respected:
type
  TSomething = record
    This, That: Integer
  end;

const
  SomeConst = SizeOf (TSomething);
The rules above apply to declaration blocks within routines, too.
When there are several, more or less independent parts, especially in a large unit or module, you may apply this order within each part. Do not put, for example, constants of all parts together. You have to keep the code readable.
Variables that are used only in the main program must be declared globally in Pascal, although GNU Pascal offers an extension for declaring variables at arbitrary places in the code (see var). In this case, in contrast to the previous general rule, it is often better to put their declaration just before the main program's `begin', after all routines etc., especially when there are more than a few such variables and the size of the source file is not small. Thus, the variable declaration block is easier to see and change for the programmer when editing the main program, and you make sure that routines don't use them accidentally.
When you declare a type together with its pointer type, declare the pointer first. It is easier to recognize especially if the type is a long record or object. Also, it makes possible using recursive structures (i.e., using pointers to a type within this type). You should prepend a `T' to the type name and a `P' to the associated pointer type. See the example:
type
  PMyInt = ^TMyInt;
  TMyInt = Integer;

  PStrList = ^TStrList;
  TStrList = record
    Next: PStrList;
    s: TString
  end;
Note that the `Next' field is specified first. We suggest always putting it as the first field in recursive types, as it allows some generic list routines and may be a little more efficient to walk the list, i.e. no offsets.
We suggest putting all pointer types within each `type' declaration first, although we don't consider this mandatory. This is an example:
type
  { Pointer types }
  PFoo = ^TFoo;
  PBar = ^TBar;
  PBaz = ^TBaz;

  { Some custom integer types }
  TFoo = Integer attribute (Size = 16);
  TBar = Cardinal attribute (Size = 16);
  TBaz = Cardinal attribute (Size = 32);
Within object types you can have three declaration areas. There are three reserved words for introducing these areas: `public', `protected', `private'. Within each of these areas follow this order:
In the object implementation part, put the routine bodies in the same order in which they appear in the declaration in the interface. This also applies to units and modules, in which the implementation should reflect the interface declarations.
Do not use the trailing `;' at the end of a block, i.e. before `end', `until', etc. except `case' – the last branch before the `else' branch (or the last branch if there is no `else' branch) should have a `;', to avoid problems like:
case ... of
  Foo:
    if Bar then
      { later inserted }
      begin
        ...
      end  { if there's no semicolon here ... }
  else     { ... this will be mistaken as the `then''s `else' }
    ...
end
(Same if the `if' was there for longer and the `else' branch of the `case' is later inserted.)
In an object, it may look strange to omit the `;' after the last item which is most often a method. Therefore we allow it, and for consistency also in records.
Reserved words should be all lower case, including directives, i.e. words that are reserved only in some contexts, like `protected'. If you use directives as identifiers (which is likely to cause you pain) outside of their contexts, write them like identifiers.
As a special exception, you can use capitalized `File' when used as a type of its own, i.e. an untyped file, unlike `file of Char'. The same can't be said for `procedure' as a type (Borland Pascal style) since `File' can be a proper type, while `procedure' is a type constructor, i.e.:
procedure Foo (var a: File);       { This works. }
procedure Foo (var a: procedure);  { This doesn't. }
Next issue is capitalization of identifiers. There's no difference between built-in and user-defined identifiers. Only the first letter should be capital, or, if there are concatenated words or acronyms, the first letter of each word should be capital – do not use underscores. Acronyms that have become part of the natural language can be written like that. For example, `Dos' or `DOS'; but always `GPC', not `Gpc'. Here are some examples of identifiers: `Copy', `Reset', `SubStr', `BlockRead', `IOResult', `WriteLn', `Sqr', `SqRt', `EOF', `EOLn'.
These rules apply to constant identifiers, too, unlike C macros.
Also note that very small identifiers can be written lower case, like `i' or `s1' or `xx'. Such short identifiers should be used only locally. They can be used for parameters of global routines, because the scope of such parameters is local as well, and their names in fact don't matter at all to the caller. The use of such identifiers in a global context should be avoided, especially in units or modules or libraries (because the author doesn't know in which contexts they will be used).
Please be consistent with your capitalization. You know that Pascal will not hurt you if you change capitalization for an identifier throughout the code, but please stick to the same capitalization.
For identifiers for the values of enumeration types and for blocks of constants, i.e. places where you introduce a lot of identifiers, it can be useful to use a two-letter lower-case prefix and `_', in contrast to the previous rules:
type
  TFooBar = (fb_Foo, fb_Bar, fb_Baz, fb_Qux);
{ My Foos }
const
  mf_Foo = 1;
  mf_Bar = 3;
  mf_Baz = 42;
In object oriented code (especially.
As far as macros are concerned, we strongly recommend that you do not use them: macros are evil, and your code should get along without them. That said, if you still dare to use a macro, write it entirely in capitals and separate words with underscores. Since macros do not follow Pascal's scoping rules, it makes sense to write them differently. This applies to conditionals, too.
We generally suggest using as few compiler directives as reasonably possible, because they make the code harder to understand (e.g., when checking for side-effects) and to modify (e.g., when moving parts of code into or out of the scope of compiler directives). The directives should be invoked like in the example:
{$your-compiler-directive}
Definitely not this way (see Comments):
(*$do-not-use-such-a-compiler-directive*)
Also, definitely not this way, which is dependent on line breaks, unlike Pascal normally is:
#your-compiler-directive
Same goes for macro definitions:
{$define ...}
This also saves the ending backslash before line breaks, in contrast to `#define'. But you will not use macros, will you? (see Capitalization)
As far as spacing is concerned, don't type a space before the closing brace, as there can't be one after the opening brace. If you concatenate many directives together, don't put a space between each of them, a single comma is enough.
No comments should be inserted within the directives. Write them separately, instead, like this:
{$X+}  { We need extended syntax. }
Borland Pascal allows mixing comments with directives, but it's really a misuse.
Short forms for calling the directives are alright, but long forms are at least as good, not to say preferred. Short forms must be written in caps, while long forms in lower case (except for case-sensitive arguments like messages and file names – of course, file names must always be treated as case-sensitive, even on DOS, to preserve code portability).
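For instance (assuming `extended-syntax' is the long form of the `X' directive, as in the GPC manual):

```pascal
{$X+}               { Short form: capitals. }
{$extended-syntax}  { Long form: lower case. }
```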
You can combine several directives, also mixing short and long ones, in a single call, for example like the following:
{$gnu-pascal,I-,X+}
Any unit or module should have `{$gnu-pascal,I-}' or `{$gnu-pascal,I+}' near the beginning (after the head comment with description and license). `{$gnu-pascal}' lets the unit be compiled without dialect options even if the main program is compiled with some. `{$I-}' or `{$I+}' indicates to the user (even though one of them is the default) whether the unit handles/returns input/output errors or lets them cause runtime errors. The former is preferable for most units. For programs, this item is optional. Routines that return input/output errors should have the attribute `iocritical' (see attribute):
procedure CriticalRoutine; attribute (iocritical);
`{$W-}' (no warnings) must only be used locally and must have a “fixme” comment (see Comments) because it indicates a problem with the code or the compiler.
Please, don't disable warnings when you're just too lazy to write the code that does not produce warnings.
Any compiler flags that are not set globally (for example, together with `{$gnu-pascal}', see above) should be set with `{$local ...}'. In other words, not this way:
{$I-}
Reset (f);
{$I+}
But this way:
{$local I-}
Reset (f);
{$endlocal}
The former is wrong if `{$I-}' was set already. Even if a programmer might know and take into account which is the global setting, this might be changed sometime, or part of the code may be copied or moved. The latter form is safer in these cases.
To make it even clearer, from the last two rules it follows:
{$local W-} Foo; {$endlocal}  { @@ GPC produces a superfluous warning }
Again, try to avoid local directives. `{$I-}' is sometimes needed. `{$X+}' might be used if really, really necessary (as locally as possible); avoid pointer arithmetic.
Don't use `{$X+}' to ignore function results, don't use `{$ignore-function-results}', either. It is too easy to ignore a result one should not ignore. Sometimes, especially when linking to a foreign C library, you might have to deal with functions which have a superfluous result, which you probably don't want to check. You can declare such functions with the `ignorable' attribute, so that their results are silently ignored.
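For example, importing a C function whose result is rarely interesting (a sketch; the exact declaration of `unlink' may differ on your system):

```pascal
function unlink (FileName: CString): CInteger;
  attribute (ignorable); external name 'unlink';
```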
Also use dummy variables if you want to ignore the result of a particular call to a function whose result should in general not be ignored. But in such cases check carefully whether the result can really be ignored safely. If, however, an unexpected result would indicate an “impossible” situation, it's usually better to check the result and print a warning or abort in the unexpected case, at least if `DEBUG' is defined (see Compiler Directives).
Linker directives, i.e. `{$L}' for libraries and C (or other language) source files should be put near the start in programs and shortly after the `implementation' line in units or modules. Several libraries and C source files in one directive are possible when they belong logically together (for example, a library and its C wrappers), but not for separate things. This directive should not be mixed with other directives (which doesn't even work if `L' comes first – the other way around it might work, but shouldn't be used). The external declaration of the library or C routines should immediately follow the directive (except in a unit or module for those that go in the interface). Using `{$L}' in programs is often not a good idea, making a unit is often better for abstraction and reuse.
Conditional compilation might be useful sometimes, but you should use as few `{$ifdef}''s as possible, as they decrease readability. When conditionals are used for differences between systems, check for features (for example, `__BYTES_LITTLE_ENDIAN__') or groups of systems (for example, `__OS_DOS__') rather than individual systems, to better cater for systems you don't know or that may not even exist yet.
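For example, checking for a group of systems rather than for each individual one (`MyDirSeparator' is an invented name for this sketch):

```pascal
{$ifdef __OS_DOS__}
const MyDirSeparator = '\';
{$else}
const MyDirSeparator = '/';
{$endif}
```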
If possible (this might not be available), use the predefined constants (for example, `BytesBigEndian', `OSDosFlag') rather than defines – for code this is possible (the “always false” branch will be optimized away, but you still get its syntax checked as an additional benefit besides not using the preprocessor); for type declarations it is usually not possible and you have to use the defines. A good example is the declaration of `TWindowXY' in the CRT unit:
TWindowXYInternalCard8 = Cardinal attribute (Size = 8);
TWindowXYInternalFill = Integer attribute (Size = BitSizeOf (Word) - 16);
TWindowXY = packed record
{$ifdef __BYTES_BIG_ENDIAN__}
  Fill: TWindowXYInternalFill;
  y, x: TWindowXYInternalCard8
{$elif defined (__BYTES_LITTLE_ENDIAN__)}
  x, y: TWindowXYInternalCard8;
  Fill: TWindowXYInternalFill
{$else}
{$error Endianness is not defined!}
{$endif}
end;
The `DEBUG' flag should be used for (and only for) code to help debugging, i.e. code which doesn't change the real functionality. Programs must compile with and without setting `DEBUG'. The latter may run slower and may produce useful additional messages in a suitable form, i.e. clearly marked as debug messages, for example prefixed with `DEBUG: ', and may abort when it detects erroneous or dubious conditions.
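For example (a sketch with invented names; the message is clearly marked as a debug message, and the block disappears entirely when `DEBUG' is not defined):

```pascal
{$ifdef DEBUG}
WriteLn (StdErr, 'DEBUG: entering Foo, n = ', n);
{$endif}
```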
Conditionals can also be used to make different versions of some code, for example, using GMP numbers if a condition is satisfied and using normal integers or reals otherwise (GMP is a library for working with arbitrarily large numbers). In this case, the name and meaning of all such defines used in a file must be explained in a comment near the top. (For examples, see the `__BP_TYPE_SIZES__', `__BP_RANDOM__' and `__BP_PARAMSTR_0__' in the System unit.) The code must compile with any combination of those conditionals set, which means you have to test exponentially many cases – here is a good reason to keep their number as small as possible.
Another similar use of conditionals is to select between different implementations. You should adopt this strategy only if all of the implementations are really supported or planned to be supported. Otherwise, you'd better move the old implementations into your “museum” and keep the code clean. The notes about code compilation of the previous rule apply here as well.
When you need to deal with complicated conditionals use Pascal syntax, i.e. format the conditionals according to the rules for Pascal code, rather than C syntax. This is a silly example:
{$if defined (Foo) or False}
Instead, this is an example not to follow:
{$if defined (Foo) || 0}
Or even worse:
#if defined (Foo) || 0
A special conditional can be used to comment out code temporarily. Here's the appropriate syntax:
{$if False} ... {$endif}
A standard conditional statement should be used in programs or units or modules you distribute to make sure that the appropriate version of the GNU Pascal Compiler is used. You can follow this template:
{$if __GPC_RELEASE__ < 20020510}
{$error This unit requires GPC release 20020510 or newer.}
{$endif}
In general, no multiple spaces should be used except for indentation and as indicated below.
A single space goes before and after operators, and `:=' and `..' as well as `:' in `Write', `WriteLn' and `WriteStr'; after the comma and other `:'. This example ought to make it clearer:
var
  Foo: Integer;

...

begin
  Foo := 42;
  WriteLn (Foo + 3 : 5, ' bar')
end;
No space should go before unary `-'. In fact, these are the correct forms: `x - 1', `-x', `-1'.
A space must go before the opening parenthesis (`(') and after the closing parenthesis (`)'), unless adjacent to further parentheses, brackets, `^', `;', `,'. In other words, a space goes between identifiers or keywords and the opening brace (`('). (All the other spaces in this example are implied by the previous rule already.) See:
Foo (Bar^(Baz[Qux * (i + 2)]), Fred (i) + 3);
For indexing arrays actually don't use a space before the opening bracket, i.e. `Foo[42]' rather than `Foo [42]'. However, insert a space before the opening bracket in array declarations, like:
Foo: array [1 .. 42] of Integer;
A space goes before the opening bracket of a set constructor in some situations – those brackets should be treated like parentheses, unlike the brackets used in array indexing. For example:
x := [0, 2 .. n];
But:
Foo ([1, 2, 3]);
No spaces for `.' and `^':
Rec.List^.Next^.Field := Foo
As we already pointed out, a single space goes after the opening brace and before the closing brace in comments, but not in compiler directives. Also, as mentioned before, two spaces go before a comment that follows a line of code. For example:
Inc (x); { Increment x. }
Optionally use additional spaces to make “tabular” code. In our opinion, this increases readability a lot because the human eye and brain are trained to recognize such structures, so similarities and differences between the lines can be seen more easily, and when changing the code, it's easier to find related places. An application of this principle can be seen in interface declarations (not so much applicable when separated by comments, but, for example, when described by a shared comment above them all):
function Pos             (const SubString, s: String): Integer;
function LastPos         (const SubString, s: String): Integer;
function PosCase         (const SubString, s: String): Integer;
function LastPosCase     (const SubString, s: String): Integer;
function CharPos         (const Chars: CharSet; const s: String): Integer;
function LastCharPos     (const Chars: CharSet; const s: String): Integer;
function PosFrom         (const SubString, s: String; From: Integer): Integer;
function LastPosTill     (const SubString, s: String; Till: Integer): Integer;
function PosFromCase     (const SubString, s: String; From: Integer): Integer;
function LastPosTillCase (const SubString, s: String; Till: Integer): Integer;
Also possible:
procedure Foo;
function  Bar ...;
procedure Baz;
And, of course:
const
  FooBar = 1;
  Baz    = 2;
  Quux   = 3;
The same “tabular” strategy used in interfaces and const declarations can be used in initializers:
const
  Foo: TBarArray = (('Foo'    ,  3),
                    ('Bar baz', 42),
                    (''       , -1));
And in `case' statements:
case ReadKeyWord of
  kbLeft            : if s[n] > l then Dec (s[n]) else s[n] := m[n];
  kbRight           : if s[n] < m[n] then Inc (s[n]) else s[n] := l;
  kbUp              : if n > 1 then Dec (n) else n := 5;
  kbDown            : if n < 5 then Inc (n) else n := 1;
  kbHome            : s[n] := l;
  kbEnd             : s[n] := m[n];
  kbPgUp, kbCtrlPgUp: n := 1;
  kbPgDn, kbCtrlPgDn: n := 5;
  kbCR              : Done := True;
end
And optionally in other code:
WriteCharAt (1, 1, 1,     Frame[1], TextAttr);
WriteCharAt (2, 1, w - 2, Frame[2], TextAttr);
WriteCharAt (w, 1, 1,     Frame[3], TextAttr);
A line break is optional after local `const', `type', `var' declarations if they contain only a single declaration (but it is possible to have multiple identifiers in a single line).
procedure Baz;
var Foo, Bar: Integer;
begin
  ...
end;
Of course, this is also accepted:
procedure Baz;
var
  Foo, Bar: Integer;
begin
  ...
end;
But don't follow this example:
procedure Baz;
var Foo, Bar: Integer;
    Qux: Real;
begin
  ...
end;
If you have many declarations you can break lines several ways. The following is the preferred form for `var' declarations:
var
  Foo, Bar, Baz, Qux, Quux, Corge, Grault, Garply, Waldo, Fred,
    Plugh, Xyzzy, Thud: Integer;
or:
var
  Foo, Bar, Baz, Qux, Quux, Corge, Grault, Garply, Waldo: Integer;
  Fred, Plugh, Xyzzy, Thud: Integer;
This one, instead, is more suitable to `record' and public `object' fields, especially if there's a comment for many or each of them:
var
  Foo,
  Bar,
  Baz,
  Qux: Integer;
No line break after `var' declarations within statement blocks, because they allow only one declaration, and doing a line break would look like further ones were allowed.
Foo := Bar;
var Baz: array [1 .. Foo] of Integer;
Since these declarations are a GNU Pascal extension, use them sparingly, for example for variables whose size depends on values computed within the routine, or for variables within unit or module initializers or finalizers (to avoid global variables), although in that case you might want to consider using a subroutine instead.
Do not insert a line break after `label'. This is how you should declare labels:
label Foo, Bar, Baz;
And, for completeness, here's how not to do it:
label
  Foo, Bar, Baz;
Several declarations in different lines don't even work:
label Foo;
      Bar;
      Baz;
Here's an example of how to use line breaks within a `case' statement:
case
  foo:
    begin
      ...
    end;
  bar, baz .. qux:
    ...
  else
    ...
end;
Or (“tabular”):
case
  foo:             begin
                     ...
                   end;
  bar, baz .. qux: ...
  else             ...
end;
Long statements or declarations should be broken either always before operators or always after them (where the extent of always is at least one routine) or after a comma, with indentation such as to make the meaning clear:
if (x = y) and (foo
    or (bar and (baz or qux))
    or fred) then
or:
if (x = y) and (foo or
    (bar and (baz or qux)) or
    fred) then
Here's how to use line breaks within if-then-else statements. Another use for them is where you would use a `case' statement if it were possible, but it isn't possible (for example because the types are not ordinal, the values to be compared to are not constant, the comparison involves a function such as `StrEqualCase', or there are additional conditions).
if ... then
  a
else if ... then
  b
else
  c
If `a' and non-`a' are main cases, and `b' and `c' are sub-cases of non-`a', use the following (the distinction might be a matter of taste sometimes):
if ... then
  a
else
  if ... then
    b
  else
    c
The following (biologically quite incomplete) example contains a mixture of both forms which we consider reasonable:
if Habitat = 'Water' then
  { Animals living in water }
  WriteLn ('Is it a fish?')
else if Habitat = 'Air' then
  { Animals living in air }
  WriteLn ('Is it a bird?')
else
  { Animals living on land }
  if Legs = 8 then
    WriteLn ('Is it a spider?')
  else
    WriteLn ('Is it a gnu?')
The main cases are determined by the habitat, and the number of legs determines some sub-cases.
For normal control loops here's a brief list of possibilities:
for ... do
  ...

while ... do
  ...

repeat
  ...
until ...
If there is only one command after the `if' clause, or in a `for' or `while' loop, or between `repeat' and `until', and if that command is short enough, you can put the statement on one line only, like this:
if ... then ...
for ... do ...
while ... do ...
repeat ... until ...
Here's how to behave when `begin' and `end' are involved.
if ... then
  begin
    ...
  end

for ... do
  begin
    ...
  end

while ... do
  begin
    ...
  end
The indentation is 2 characters wide, for each `begin', `then', `else', `case', `do' (`for', `while', `with', `to begin', `to end'), `repeat', `record', `object', `type', `const', `var', `label'.
The bodies and local variables etc. of global routines must not be indented, just like global variables etc. Each subroutine (header and body) and its declarations, on the contrary, must be indented.
program Prog;

var
  GlobalVar: Integer;

procedure GlobalProc;
var
  LocalVar: Integer;

  procedure LocalProc;
  var
    LocalLocalVar: Integer;
  begin
    WriteLn ('This is a local procedure.')
  end;

begin
  WriteLn ('This is a global procedure.')
end;

begin
  WriteLn ('This is the main program.')
end.
Variant records should be indented as follows:
type
  Foo = record
    NonVariant: Foo;
    case Discriminant: Bar of
      Val1: (Variant1: Baz;
             Variant2: Qux);
      Val2: (Variant3: Fred)
  end;

var
  Foo: record
    [ as above ]
  end = [ initializer ]
Bigger indentation, i.e. more than 2 characters wide, can be used to break statements or declarations or to get “tabular” code.
Conditionals (`{$ifdef}') should be on the same indentation level as the code they affect:
begin
  {$ifdef DEBUG}
  WriteLn ('Debugging version');
  {$endif}
  ...
end;
Short conditionals which affect only an expression can be written within a single line:
Foo := {$ifdef DEBUG} 'debug' {$else} 'release' {$endif};
If they are intentionally used in a way contrary to normal syntactic rules, put them where they seem to fit best and write a comment:
begin
  { Do the code unconditionally if debugging }
  {$ifndef DEBUG}
  if SomeCondition then
  {$endif}
    begin
      ...
    end
end;
Most times you will find a nicer and not less efficient way of writing the same statements. In this case, it can be done this way:
begin
  if {$ifdef DEBUG} True {$else} SomeCondition {$endif} then
    begin
      ...
    end
end;
Or better yet:
{ globally }
const
  DebugFlag = {$ifdef DEBUG} True {$else} False {$endif};

begin
  if DebugFlag or SomeCondition then
    begin
      ...
    end
end;
Most rules we have covered so far do not apply within strings. In general, messages contained in strings should follow the GNU Coding Standards, for example, put quoted names within ``' and `'', although this means you have to double the `'' in a Pascal string. See Errors, for more information.
Normally you should use strings enclosed in single quotes, like `'this nice string that you are reading''. Use strings in double quotes when you need C style escape sequences like `"\t"'. Note that `NewLine' (`"\n"') is predefined, so using `NewLine' is preferable unless you have to use a C style string for other purposes.
You can use multiline strings like the following:
WriteLn ('Hello
world')
or (perhaps preferable, especially if the text in the string contains paragraphs and/or indentation itself):
WriteLn (
'Hello
world')
However, it is also possible to use:
WriteLn ('Hello' + NewLine + 'world')
(Note that the above example won't compile without using the GPC unit.)
Or, of course:
WriteLn ('Hello'); WriteLn ('world')
When you want to check if a string is empty, use this syntax:
if s = '' then ...
The GNU Pascal Compiler will eventually optimize it to the following more efficient test, hence you can use the previous, shorter one with no regret:
if Length (s) = 0 then ...
The same applies for `<>', of course, and even for assignments where `s := ''' is the recommended form and will be optimized by GPC to `SetLength (s, 0)'.
Please note the description in the GPC manual on how to do so (see I18N).
This section of the GNU Coding Standards also applies to GNU Pascal. Remember that `mmap' actually means `MemoryMap' in this context. See Mmap.
We recommend reading the respective section in the GNU Coding Standards, as it applies to this context, too. See Documentation. There are some notes worth writing here, though.
As far as man pages are concerned, it would be nice to have a man page referring to the Info documentation. There is a GNU program, called `help2man', which generates a man page based on the `--help' and `--version' outputs of a program. It works well, except that it always prints `FSF' which is not correct for all programs compiled with the GNU Pascal Compiler, but the output can easily be changed (for example, automatically using `sed').
However, don't put too much effort in man pages. They might be feasible initially, but keeping them up to date together with the Texinfo files means a lot of work. On top of that, if you don't keep them updated, they are likely to cause more confusion than they help.
On the one hand, if man pages are shortened too much they are likely to miss important information. On the other hand, if not shortened, they get hard to navigate.
In other words, devote your effort to Info (i.e., Texinfo) documentation.
Please read the respective chapter in the GNU Coding Standards. Note that the huge auto-tools effort of C is not needed for normal GNU Pascal programs. Also Makefiles are often not necessary in GNU Pascal. See Managing Releases.
For your Pascal project you probably won't need large `Makefile's, and you won't need to use `autoconf' or `automake'. You can give the `--automake' option to the GNU Pascal Compiler so that it takes care of dependencies for you. (As of this writing, the GNU Pascal Compiler's `automake' feature has some slight bugs, but they will be fixed. There is also a plan for a utility called `gp', currently under development, which will simplify the compilation process a lot more. Stay tuned. In any case, you usually don't need to write complex `Makefile's yourself.)
A simple Makefile may be in order, like:
GPC_FLAGS=-O2

all: foo

foo: foo.pas unit1.pas
	gpc --automake $(GPC_FLAGS) foo.pas

mostlyclean:
	-rm -f *.o *.gpi *.gpd core

clean: mostlyclean
	-rm -f foo

distclean: clean

extraclean: distclean
	-rm -f *~*

maintainer-clean: extraclean
You may, however, want to put other rules into a `Makefile' to build documentation, data files, making distributions or whatever. Such things are outside of the scope of this text. You can usually do the Pascal compilations with a single `gpc --automake' call per program.
Routines are `procedure's, `function's, `constructor's, `destructor's or (user-defined) operators.
Declarations are those parts of a program that “announce” the existence and properties of certain objects like constants, types, variables, routines, units, modules and the program.
Statements are those parts of a program that actually “do” something. A single statement is an assignment, a procedure call, a jumping statement (`goto', `Exit', `Return', `Break', `Continue'), an assembler statement, or a compound statement (`begin' ... `end', `if', `case', `repeat', `while', `for', `with') which in turn may contain one or several statements.
Identifiers are those language elements that give names to objects like routines, constants, types, variables, units and modules. They can be (locally) redefined, unlike keywords, which are part of fixed syntactic constructs (for example `if' ... `then' ... `else') and cannot be redefined. Macros are not language elements at all, since they are expanded by the preprocessor and never seen by the compiler.
Endianness means the order in which the bytes of a value larger than one byte are stored in memory. This affects, e.g., integer values and pointers while, e.g., arrays of single-byte characters are not affected. (see Endianness)
Note: Other items may be inserted here when it appears useful. If you'd like a definition of some other term, let us know.
https://www.mirbsd.org/htman/sparc/manINFO/gpcs.html
{-# LANGUAGE CPP #-}
module Data.Semigroup.Reducer
  ( Reducer(..)
  , foldMapReduce, foldMapReduce1
  , foldReduce, foldReduce1
  , pureUnit
  , returnUnit
  , Count(..)
  ) where

import Control.Applicative
import qualified Data.Monoid as Monoid
import Data.Semigroup as Semigroup
import Data.Semigroup.Foldable
import Data.Semigroup.Instances ()
import Data.Hashable
import Data.Foldable
import Data.FingerTree
import qualified Data.Sequence as Seq
import Data.Sequence (Seq)
import qualified Data.Set as Set
import Data.Set (Set)
import qualified Data.IntSet as IntSet
import Data.IntSet (IntSet)
import qualified Data.IntMap as IntMap
import Data.IntMap (IntMap)
import qualified Data.Map as Map
import Data.Map (Map)

#ifdef LANGUAGE_DeriveDataTypeable
import Data.Data
#endif

--import Text.Parsec.Prim

-- | This type may be best read infix. A @c `Reducer` m@ is a 'Semigroup' @m@
-- that knows how to absorb values of type @c@.
class Semigroup m => Reducer c m where
  unit :: c -> m
  snoc :: m -> c -> m
  cons :: c -> m -> m

pureUnit :: (Applicative f, Reducer c n) => c -> f n
pureUnit = pure . unit

newtype Count = Count { getCount :: Int } deriving
  ( Eq, Ord, Show, Read
#ifdef LANGUAGE_DeriveDataTypeable
  , Data, Typeable
#endif
  )

instance Hashable Count where
  hash = hash . getCount
  hashWithSalt n = hashWithSalt n . getCount

instance Semigroup Count where
  Count a <> Count b = Count (a + b)
  times1p n (Count a) = Count $ (fromIntegral n + 1) * a

instance Monoid Count where
  mempty = Count 0
  Count a `mappend` Count b = Count (a + b)

instance Reducer a Count where
  unit _ = Count 1
  Count n `snoc` _ = Count (n + 1)
  _ `cons` Count n = Count (n + 1)

instance (Reducer c m, Reducer c n) => Reducer c (m, n) where
  unit x = (unit x, unit x)
  (m, n) `snoc` x = (m `snoc` x, n `snoc` x)
  x `cons` (m, n) = (x `cons` m, x `cons` n)

instance (Reducer c m, Reducer c n, Reducer c o) => Reducer c (m, n, o) where
  unit x = (unit x, unit x, unit x)
  (m, n, o) `snoc` x = (m `snoc` x, n `snoc` x, o `snoc` x)
  x `cons` (m, n, o) = (x `cons` m, x `cons` n, x `cons` o)

instance (Reducer c m, Reducer c n, Reducer c o, Reducer c p) => Reducer c (m, n, o, p) where
  unit x = (unit x, unit x, unit x, unit x)
  (m, n, o, p) `snoc` x = (m `snoc` x, n `snoc` x, o `snoc` x, p `snoc` x)
  x `cons` (m, n, o, p) = (x `cons` m, x `cons` n, x `cons` o, x `cons` p)

instance Reducer c [c] where
  unit = return
  cons = (:)
  xs `snoc` x = xs ++ [x]

instance Reducer c () where
  unit _ = ()
  _ `snoc` _ = ()
  _ `cons` _ = ()

instance Reducer Bool Any where
  unit = Any

instance Reducer Bool All where
  unit = All

instance Reducer (a -> a) (Endo a) where
  unit = Endo

instance Reducer a (Seq a) where
  unit = Seq.singleton
  cons = (Seq.<|)
  snoc = (Seq.|>)

instance Reducer Int IntSet where
  unit = IntSet.singleton
  cons = IntSet.insert
  snoc = flip IntSet.insert -- left bias irrelevant

instance Ord a => Reducer a (Set a) where
  unit = Set.singleton
  cons = Set.insert
  -- pedantic about order in case 'Eq' doesn't implement structural equality
  snoc s m | Set.member m s = s
           | otherwise = Set.insert m s

instance Reducer (Int, v) (IntMap v) where
  unit = uncurry IntMap.singleton
  cons = uncurry IntMap.insert
  snoc = flip . uncurry . IntMap.insertWith $ const id

instance Ord k => Reducer (k, v) (Map k v) where
  unit = uncurry Map.singleton
  cons = uncurry Map.insert
  snoc = flip . uncurry . Map.insertWith $ const id

instance Monoid m => Reducer m (WrappedMonoid m) where
  unit = WrapMonoid
http://hackage.haskell.org/package/reducers-0.1.5/docs/src/Data-Semigroup-Reducer.html
Apache Geronimo - OSGi EEG RFC 124 support? (2 messages)

Support appears to be growing in the Geronimo community for an implementation of OSGi EEG RFC 124, "A Component Model for OSGi". From the postings on the Geronimo community it looks as if the goal is for it to be implemented alongside the current J2EE support Geronimo is known for. Pretty leading-edge stuff. Would the Java community consider this as valuable as I would?
Threaded Messages (2)
- RFC 124 ~= Spring by Neil Bartlett on April 20 2009 09:50 EDT
- Re: RFC 124 ~= Spring by Gary Struthers on April 20 2009 11:21 EDT
RFC 124 ~= Spring[ Go to top ]
If you put it like that, I doubt you'll see much interest because not many people know that RFC 124 is the specification that essentially standardises Spring within OSGi. So I think that Geronimo implementing RFC 124 essentially means they will build an alternative (but compatible) implementation of the core of the Spring Framework.
- Posted by: Neil Bartlett
- Posted on: April 20 2009 09:50 EDT
- in response to Nero Mada
Re: RFC 124 ~= Spring[ Go to top ]
I skimmed RFC 124 and it looks like Spring context xml with an OSGI namespace. What's the point? If I have to use Spring, is this OSGI xml a substitute for Spring's or would I have to have 2 equivalent xml files. Can't I just use Peaberry and ignore RFC 124 and Spring?
- Posted by: Gary Struthers
- Posted on: April 20 2009 11:21 EDT
- in response to Neil Bartlett
http://www.theserverside.com/discussions/thread.tss?thread_id=54306
X++, C# Comparison: Loops [AX 2012]
Updated: March 30, 2011
Applies To: Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012
This topic compares the loop features between X++ and C#.
Similarities
The following features are the same in X++ and C#:
Declarations for variables of the int primitive data type. Declarations for other primitive types are almost the same, but the types might have different names.
while statement for loops.
break statement to exit a loop.
continue statement to jump up to the top of a loop.
<= (less than or equal) comparison operator.
Differences
The following table lists X++ features that are different in C#.
The X++ code samples in this topic use the print function to display results. In X++ the print statement can display any primitive data type without your having to call functions that convert it to a string first. This makes print useful in quick test situations. Generally, though, the Global::info method is used more often than print. The info method can only display strings, so the strfmt function is often used together with info.
A limitation of print is that you cannot copy the contents of the Print window to the clipboard (such as with Ctrl+C). Global::info writes to the Infolog window which does support copy to the clipboard.
The while keyword supports looping in both X++ and C#.
X++ Sample of while
Output
The output in the X++ Print window is as follows:
1 2 3 4
C# Sample of while
using System;

public class Pgm_CSharp
{
    static void Main(string[] args)
    {
        new Pgm_CSharp().Rs002a_CSharp_ControlOFlowWhile();
    }

    void Rs002a_CSharp_ControlOFlowWhile()
    {
        int nLoops = 1;
        while (nLoops <= 88)
        {
            Console.Out.WriteLine(nLoops.ToString());
            Console.Out.WriteLine("(Press any key to resume.)");
            // Paused until user presses a key.
            Console.In.Read();
            if ((nLoops % 4) == 0)
                break;
            ++nLoops;
        }
        Console.Beep();
        Console.In.Read();
    }
}
Output
The console output from the C# program is as follows:
[C:\MyDirectory\]
>> Rosetta_CSharp_1.exe
1
(Press any key to resume.)
2
(Press any key to resume.)
3
(Press any key to resume.)
4
(Press any key to resume.)
The for keyword supports looping in both X++ and C#.
X++ Sample of for
In X++ the counter variable cannot be declared as part of the for statement.
static void JobRs002a_LoopsWhileFor(Args _args)
{
    int ii; // The counter.
    for (ii = 1; ii < 5; ii++)
    {
        print ii;
        pause; // You must click the OK button to proceed
               // beyond a pause statement.

        // ii is always less than 99.
        if (ii < 99)
        {
            continue;
        }
        print "This message never appears.";
    }
    pause; // X++ keyword.
}
Output
The output in the X++ Print window is as follows:
1 2 3 4
C# Sample of for
using System;

public class Pgm_CSharp
{
    static void Main(string[] args)
    {
        new Pgm_CSharp().Rs002a_CSharp_ControlOFlowFor();
    }

    void Rs002a_CSharp_ControlOFlowFor()
    {
        int nLoops = 1, ii;
        for (ii = 1; ii < 5; ii++)
        {
            Console.Out.WriteLine(ii.ToString());
            Console.Out.WriteLine("(Press any key to resume.)");
            Console.In.Read();
            if (ii < 99)
            {
                continue;
            }
            Console.Out.WriteLine("This message never appears.");
        }
        Console.Out.WriteLine("(Press any key to resume.)");
        Console.In.Read();
    }
}
Output
The console output from the C# program is as follows:
1
(Press any key to resume.)
2
(Press any key to resume.)
3
(Press any key to resume.)
4
(Press any key to resume.)
(Press any key to resume.)
https://msdn.microsoft.com/EN-US/library/cc967439.aspx
KeychainDB: Persistent Database in the Keychain
Hey all,
Thought I'd share a quick script I wrote called KeychainDB. It's a wrapper for the keychain which allows you to use the keychain as a persistent (i.e. maintains state across app launches) database.
Each entry in the keychain has three elements: service, username, and password. KeychainDB behaves like a Dictionary, using the username field to store keys and the password field to store values. Service is set to DB_NAME, 'KeychainDB' by default, for all entries to separate KeychainDB entries from legitimate password storage.
Since a KeychainDB behaves like a dict, it's very easy to use and requires no knowledge of database systems. Example usage:
from keychaindb import KeychainDB

kdb = KeychainDB()
kdb['test'] = 'example'
kdb['test'] #=> 'example'
If you kill and reopen Pythonista and execute kdb['test'], KeychainDB will return 'example'.
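The dict-like behavior is easy to sketch. The following is an illustrative sketch only, not the actual KeychainDB source: FakeKeychain stands in for Pythonista's keychain module, and the wrapper maps __setitem__/__getitem__ onto set_password/get_password with the service fixed to DB_NAME, as described above.

```python
DB_NAME = 'KeychainDB'

class FakeKeychain:
    """In-memory stand-in for the keychain: (service, username) -> password."""
    def __init__(self):
        self._store = {}

    def set_password(self, service, username, password):
        self._store[(service, username)] = password

    def get_password(self, service, username):
        return self._store.get((service, username))

class KeychainDBSketch:
    """Dict-like wrapper: keys go in the username field, values in password."""
    def __init__(self, backend):
        self._backend = backend

    def __setitem__(self, key, value):
        self._backend.set_password(DB_NAME, key, value)

    def __getitem__(self, key):
        return self._backend.get_password(DB_NAME, key)

kdb = KeychainDBSketch(FakeKeychain())
kdb['test'] = 'example'
print(kdb['test'])  # example
```

Swapping FakeKeychain for the real keychain module is what makes the real class persistent across app launches.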
Hope this is helpful and you guys can use it in interesting ways. The script is located here:
- filippocld223
Very good!
https://forum.omz-software.com/topic/404/keychaindb-persistent-database-in-the-keychain/2
getsid - get the process group ID of a session leader
The getsid() function shall obtain the process group ID of the process that is the session leader of the process specified by pid. If pid is (pid_t)0, it specifies the calling process.
Upon successful completion, getsid() shall return the process group ID of the session leader of the specified process. Otherwise, it shall return (pid_t)-1 and set errno to indicate the error.
The getsid() function shall fail if:

[EPERM] The process specified by pid is not within the same session as the calling process, and the implementation does not allow access to the process group ID of the session leader of that process from the calling process.

[ESRCH] There is no process with a process ID equal to pid.
None.
exec(), fork(), getpid(), getpgid(), setpgid(), setsid(), the Base Definitions volume of IEEE Std 1003.1-2001, <unistd.h>
First released in Issue 4, Version 2.
Moved from X/OPEN UNIX extension to BASE.
http://pubs.opengroup.org/onlinepubs/009604499/functions/getsid.html
$ cnpm install fileswap-stream
Write to a writable file-stream that swaps out its underlying file resources according to swapper and naming functions. This can be used for a persistent log stream: just stream to it 24/7 and let it swap out to new files whenever you trigger it to.
var swapStream = require('../.')
var RandomStream = require('random-stream')

var options =
  { namer   : namer
  , swapper : swapper
  , tdelta  : 1000
  , path    : "."
  , fsops   : { flags: "a", encoding: "utf8" }
  }

var ss = swapStream(options)
RandomStream().pipe(ss)

function namer () {
  return "t-" + getSecond() + "-" + getMinute()
}

function swapper () {
  return Math.round(parseInt(getSecond()) / 5)
}

function getMinute () {
  var d = new Date()
  return ('0' + d.getMinutes()).slice(-2)
}

function getSecond () {
  var d = new Date()
  return ('0' + d.getSeconds()).slice(-2)
}
This will produce files full of random junk:

t-27-07 t-28-07 t-33-07 t-38-07 t-43-07

one every five seconds, as determined by swapper.
Another useful idea for swapper would be a function that returns a different value every day:

function swapper () {
  var d = new Date()
  return ('0' + d.getUTCDate()).slice(-2)
}
Another way to think about swapper is as a function that is repeatedly called so its return value can be checked against the last value. When the return value changes, the underlying file resource being written to is swapped out, with a new name given by namer. In this way the file swap can be controlled by anything, such as time, network activity or CPU heat.
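The check-against-the-last-value pattern just described can be sketched in a few lines. This is illustrative only, not the module's actual internals, and makeSwapDetector is a hypothetical name:

```javascript
// Wraps a swapper function; each call to the returned checker reports
// whether the swapper's value changed since the previous call.
function makeSwapDetector (swapper) {
  var last = swapper()
  return function check () {
    var now = swapper()
    var changed = now !== last
    last = now
    return changed // true => time to open a new file named by namer()
  }
}

// Example: a swapper driven by an external counter instead of the clock.
var tick = 0
var check = makeSwapDetector(function () { return Math.floor(tick / 5) })
tick = 3
console.log(check()) // still in the same 5-unit bucket
tick = 7
console.log(check()) // bucket changed: swap
```

In the real stream, this check would run on a timer every tdelta milliseconds.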
The options object has the following fields:

namer: Called to name the new file. This function is called every time the internal swapper is triggered, and should return a string.

swapper: Called every tdelta milliseconds and compared to the previous return value of swapper. If it is different, a new file resource is constructed, with the name supplied by namer.

tdelta: The delay time, in milliseconds, between checks of the swapper state.

path: Optional path parameter for creating new files.

fsops: Options passed to fs.createWriteStream(fsops).
npm install fileswap-stream
MIT
https://developer.aliyun.com/mirror/npm/package/fileswap-stream
Group a list by the values in Python
Here we will learn how to group a list by the values in Python.
Grouping a list by values converts a flat list of [name, value] pairs into a list of groups, using the value stored as the second element of each pair; the first element is the name and the second is its value.
For example,
Consider the marks obtained by 8 students in Computer Science. Now we have to group the students who have obtained the same marks.
[["Aisha", 30], ["Bhavs", 40], ["Cat", 35], ["Sam", 40], ["Andre", 35], ["Trina", 40], ["Robbie", 30], ["Beck", 35]] will give the output [['Aisha', 'Robbie'], ['Cat', 'Andre', 'Beck'], ['Bhavs', 'Sam', 'Trina']].
In this tutorial, I will take a tour of how to group a list using 2 methods.
Group a list in Python using itemgetter and groupby function
Here we import two functions, itemgetter and groupby.

itemgetter: builds a callable that takes an iterable object as input and fetches the n-th item out of it.

You can refer to the operator module documentation to get more ideas about the itemgetter function.

groupby: groups consecutive items of an iterable by a key function, combining object splitting, function application, and blending of the results.
from operator import itemgetter
from itertools import groupby

students = [["Aisha", 30], ["Bhavs", 40], ["Cat", 35], ["Sam", 40],
            ["Andre", 35], ["Trina", 40], ["Robbie", 30], ["Beck", 35]]
students.sort(key=itemgetter(1))  # groupby needs the data sorted by the key
res = [[name for name, mark in group]
       for key, group in groupby(students, key=itemgetter(1))]
print(res)

Here we group the list of students based on their marks.
Output
[['Aisha', 'Robbie'], ['Cat', 'Andre', 'Beck'], ['Bhavs', 'Sam', 'Trina']]
List grouping in Python using set function
In this method, there is no need to import any modules. We extract all the values into a list and keep only the unique values using the set function. Then we iterate through the unique values and, whenever another list's value matches, append that name to the group.
Finally, we print the result.
stu_details = [["Aisha", 30], ["Bhavs", 40], ["Cat", 35], ["Sam", 40],
               ["Andre", 35], ["Trina", 40], ["Robbie", 30], ["Beck", 35]]
all_values = [student[1] for student in stu_details]
unique_values = set(all_values)
result = []
for value in unique_values:
    this_group = []
    for student in stu_details:
        if student[1] == value:
            this_group.append(student[0])
    result.append(this_group)
print(result)
As we can see Aisha and Robbie have scored 30 marks, so we can group them together. Similarly, Bhavs, Sam, and Trina have scored 40 marks and those 3 are grouped together. So we are grouping the student’s lists based on their marks obtained.
Output
[['Bhavs', 'Sam', 'Trina'], ['Cat', 'Andre', 'Beck'], ['Aisha', 'Robbie']]
With either of these two methods, you can group a list based on the values given.
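Beyond the two methods above, a third approach (not covered in the original article) is a single pass with collections.defaultdict, which needs neither sorting nor a nested loop:

```python
from collections import defaultdict

stu_details = [["Aisha", 30], ["Bhavs", 40], ["Cat", 35], ["Sam", 40],
               ["Andre", 35], ["Trina", 40], ["Robbie", 30], ["Beck", 35]]

# Each mark maps to the list of names that obtained it; missing keys
# start as empty lists automatically.
groups = defaultdict(list)
for name, mark in stu_details:
    groups[mark].append(name)

print(list(groups.values()))
# [['Aisha', 'Robbie'], ['Bhavs', 'Sam', 'Trina'], ['Cat', 'Andre', 'Beck']]
```

The group order follows first appearance of each mark rather than sorted order, which may or may not matter for your use case.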
https://www.codespeedy.com/group-a-list-by-the-values-in-python/
One great side-effect of traveling is that you notice how places (including your home) smell. You normally don't notice it because you get very used to it. But it's really yet another dimension that a place can have. Here are a few smells I encountered (of course this is subjective):
Example: If you live in Canada, and think that Americans are conservative and mean spirited, go to Central America, where almost all public transportation is regularly interrupted by police/military searches of passengers and luggage.
If you live in Europe and think of the United States as one big city with suburbs, fly to Casper, Wyoming, rent a car and drive north or west. For a long time. Try to find a group of people larger than 15 after dark.

At the same time, the things you will probably hate when you travel will later endear your "old" home to you. If it is too big (Shanghai), then Atlanta will seem smaller. If it is too small (much of the western United States), you will appreciate London. If it is too expensive (Hong Kong), you will probably long for inexpensive places like Iowa.
Food will either taste better when you return (by comparison) or you will now have a taste for food you never liked before (Cajun food anyone?). Either way, your life will have changed.
Movies, literature, music all change your perspective, but nothing touches travel. Better, worse, bothered, inspired, ..... who knows ... but you will not be the same.
Travel changes. Period.
http://everything2.com/title/Travel+changes+your+brain
13 January 2010 07:26 [Source: ICIS news]
By Judith Wang
SHANGHAI (ICIS news)--China’s surprise monetary tightening may temper the buoyant mood in regional trading of petrochemicals on concerns that it would translate to weaker demand, analysts and industry sources said on Wednesday.
But its real negative impact would likely be concentrated on speculative trades, where excess money could be flowing, analysts said.
The country’s central bank – the People’s Bank of China (PBoC) took clear moves recently to curb aggressive lending, which may have boosted the local banks’ new loan portfolio by nearly CNY10,000bn ($1,460bn) in 2009 based on estimates of analysts.
The polymers futures market was spooked, with contracts for linear low density polyethylene for May delivery closing 3.7% lower on the Dalian Commodity Exchange (DCE) on Wednesday morning trade.
Imported LLDPE was selling as low as CNY11,300/tonne ex-warehouse in north
On Tuesday, PBoC announced that it would raise its reserve requirement - a portion of deposit that banks must park with the central bank – by 50 basis points to 16% on 18 January.
It has also raised the interest rates for its one-year bills by eight basis points to 1.8434%.
These followed an unexpected increase in PBoC’s three-month bills late last week, which may have signalled an end to
“Investors’ money will be drained in the near term, but in the long run the impact will not be significant,” said Xu Chao, an analyst at Shanghai-based brokerage house Dalu Futures Company.
The recent move should soak up more than CNY200bn in liquidity from the financial system, according to Ma Jun, Hong Kong-based chief economist for Greater China at Deutsche Bank.
“In the longer term, there will be a minimal impact on the petrochemicals market, which is not just dependent on the domestic economy but [on] the overall global economy and the fundamentals for petrochemicals in
“A tightening of the monetary policy helps reduce an asset bubble, and this is good for the Chinese economy in the long term,” said Shum.
For the whole of 2009, the economy may beat its own expectations and post an annual growth of 8.5%, said David Cohen, chief economist at research firm Action Economics.
The country witnessed an unparalleled credit boom in 2009, likely logging a year-on-year growth of 30%, he said. In the first week of January 2010 alone, Chinese banks reportedly expanded their loan pie by another CNY600bn.
“The government wants to moderate credit growth this year,” said Cohen, citing that the higher auction rates for the central bank’s three-month bills and the hike in reserve requirement on Tuesday were consistent with this aim.
Analysts expect Chinese banks to cut their lending by about a fifth this year to CNY7,500bn.
“The general picture shows policy makers are getting a little more wary of [economic] overheating. But I think the withdrawal of economic stimulus would be slow,” he added.
Policymakers across the world need to toe the line of supporting the economic recovery underway and being mindful of inflationary pressures and of bloating the fiscal deficits, analysts said.
Aggressive policy actions are more likely in the second half of the year, when the strength of the economic recovery was ascertained, they said.
($1 = CNY6.83)
With additional reporting by Dolly Wu, Felicia Loo and Chow Bee Lin
http://www.icis.com/Articles/2010/01/13/9325153/speculative-trades-at-risk-as-china-moves-to-curb-lending.html
Next, we combine what we have learned about convection and diffusion and apply it to the Burger's Equation. This equation looks like —and is— the direct combination of both of the PDE's we had been working on earlier.$$ \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2} $$
We can discretize it using the methods we have developed previously in steps 1-3: forward difference for the time derivative, backward difference for the spatial derivative, and our second-order scheme for the second derivative. This yields:$$ \frac{u^{n+1}_i - u^n_i}{\Delta t} + u_i^n \frac{u^{n}_i - u^n_{i-1}}{\Delta x} = \nu \frac{u^{n}_{i+1} -2u^n_i + u^n_{i-1}}{\Delta x^2} $$
Given that we have full initial conditions as before, we can solve for our only unknown $u^{n+1}_i$ and iterate through the equation that follows:$$ u^{n+1}_i = u^n_i - u^n_i \frac{\Delta t}{\Delta x} (u^n_i - u^n_{i-1}) + \nu \frac{\Delta t}{\Delta x^2} (u^n_{i+1} - 2u^n_i + u^n_{i-1}) $$
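The update rule above can be sketched directly in plain NumPy. This is an illustrative sketch with a simple placeholder initial condition (a shifted sine wave), not the sawtooth condition derived below; the variable names are illustrative:

```python
import numpy as np

nx = 101
dx = 2 * np.pi / (nx - 1)
nu = .07
dt = dx * nu           # time step scaled with the grid, as in the text
nt = 100               # number of time steps

x = np.linspace(0, 2 * np.pi, nx)
u = np.sin(x) + 2      # placeholder initial condition

for n in range(nt):
    un = u.copy()      # values at time level n
    for i in range(1, nx - 1):
        u[i] = (un[i] - un[i] * dt / dx * (un[i] - un[i - 1])
                + nu * dt / dx**2 * (un[i + 1] - 2 * un[i] + un[i - 1]))
    # periodic boundary: u(0) = u(2*pi), so the left neighbor of node 0
    # is the last interior node, un[-2]
    u[0] = (un[0] - un[0] * dt / dx * (un[0] - un[-2])
            + nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-2]))
    u[-1] = u[0]
```

The two endpoint updates are what enforce the periodic boundary condition discussed next.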
Burgers' equation is far more interesting than the previous ones. To get a better feel for its properties, it helps to use different initial and boundary conditions than in the previous steps. Our initial condition will be:$$ u = -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 $$
Our boundary conditions will be:$$ u(0) = u(2 \pi) $$
This is a periodic boundary condition which we must be careful with.
Evaluating the derivative of this initial condition by hand would be relatively painful; to avoid this we can calculate it using SymPy. SymPy is essentially a Mathematica-style symbolic algebra library whose results can be fed back into Python calculations.
We start by loading all of the Python libraries we will need for the project, along with a setting to make SymPy print our functions in LaTeX.
# Adding inline command to make plots appear under comments
import numpy as np
import sympy
import matplotlib.pyplot as plt
import time, sys
%matplotlib inline
sympy.init_printing(use_latex=True)
Next we define the symbolic variables in our initial conditions and type out the full expression.
x, nu, t = sympy.symbols('x nu t')
phi = (sympy.exp(-(x - 4 * t)**2 / (4 * nu * (t + 1))) +
       sympy.exp(-(x - 4 * t - 2 * np.pi)**2 / (4 * nu * (t + 1))))
phi
phiprime = phi.diff(x)
phiprime
Now that we have an expression for $ \frac{\partial \phi}{\partial x} $, we can finish writing the full initial condition equation and translate it into a usable Python expression. To do this we use the lambdify function, which takes a SymPy symbolic equation and turns it into a callable function.

u = -2 * nu * (phiprime / phi) + 4
ufunc = sympy.utilities.lambdify((t, x, nu), u)
print(ufunc(1, 4, 3))
3.4917066420644494
# New initial conditions
grid_length = 2
grid_points = 101
nt = 150
dx = grid_length * np.pi / (grid_points - 1)
nu = .07
dt = dx * nu  # Dynamically scaling dt based on grid size to ensure convergence

# Initializing the array containing the shape of our initial conditions
x = np.linspace(0, 2 * np.pi, grid_points)
un = np.empty(grid_points)
t = 0
u = np.asarray([ufunc(t, x0, nu) for x0 in x])
plt.figure(figsize=(11, 7), dpi=100)
plt.plot(x, u, marker='o', lw=2)
plt.xlim([0, 2 * np.pi])
plt.ylim([0, 10])
plt.xlabel('x')
plt.ylabel('u')
plt.title('Burgers Equation at t=0');
This new function is known as a sawtooth function.
The biggest difference between this step and the previous ones is the use of periodic boundary conditions. If you experimented with steps 1-2, you will have seen that eventually the wave moves out of the picture to the right and no longer shows up in the plot.
With periodic BC, what happens now is that when the wave hits the end of the frame it wraps around and starts from the beginning again.
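The wrap-around neighbor access can also be written in vectorized form. The following is a small sketch of my own (a toy grid and toy values, not the notebook's arrays) showing how numpy.roll supplies the u[i-1] and u[i+1] neighbors with periodic wrap-around, so the endpoints need no special-casing:

```python
import numpy as np

# Toy periodic grid: one explicit convection-diffusion update.
u = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
dt, dx, nu = 0.01, 1.0, 0.07

u_left = np.roll(u, 1)    # u[i-1]; the last value wraps around to the front
u_right = np.roll(u, -1)  # u[i+1]; the first value wraps around to the back

# Same update formula as the discretization above, applied to every point at once
u_new = u - u * dt / dx * (u - u_left) + nu * dt / dx**2 * (u_right - 2 * u + u_left)
print(u_new)
```

Because the roll wraps, the update at index 0 automatically uses the last value as its left neighbor, which is exactly the periodic condition the loop version must apply by hand at the endpoints.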
Now we will apply the discretization as outlined above and check out the final results.
for n in range(nt):  # Runs however many timesteps you set earlier (nt)
    un = u.copy()
    for i in range(1, grid_points - 1):
        u[i] = un[i] - un[i] * dt / dx * (un[i] - un[i-1]) \
               + nu * dt / dx**2 * (un[i+1] - 2 * un[i] + un[i-1])
    # Periodic boundary conditions: the endpoints wrap around
    u[0] = un[0] - un[0] * dt / dx * (un[0] - un[-2]) \
           + nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-2])
    u[-1] = u[0]

# Analytical solution at the final time, for comparison
u_anal = np.asarray([ufunc(nt * dt, xi, nu) for xi in x])
plt.figure(figsize=(11, 7), dpi=100)
plt.plot(x, u, marker='o', lw=2, label='Computational')
plt.plot(x, u_anal, label='Analytical')
plt.xlim([0, 2 * np.pi])
plt.ylim([0, 10])
plt.xlabel('x')
plt.ylabel('u')
plt.title('Burgers Equation at t=10');
plt.legend();
# Imports for animation and display within a jupyter notebook
from matplotlib import animation, rc
from IPython.display import HTML

# Generating the figure that will contain the animation
fig, ax = plt.subplots()
fig.set_dpi(100)
fig.set_size_inches(9, 5)
ax.set_xlim((0, 2 * np.pi))
ax.set_ylim((0, 10))
comp, = ax.plot([], [], marker='o', lw=2, label='Computational')
anal, = ax.plot([], [], lw=2, label='Analytical')
ax.legend()
plt.xlabel('x')
plt.ylabel('u')
plt.title('Burgers Equation time evolution from t=0 to t=10');

# Resetting the U wave back to initial conditions
u = np.asarray([ufunc(0, x0, nu) for x0 in x])
# Initialization function for funcanimation
def init():
    comp.set_data([], [])
    anal.set_data([], [])
    return (comp, anal,)
# Main animation function, each frame represents a time step in our calculation
def animate(j):
    global u
    un = u.copy()
    for i in range(1, grid_points - 1):
        u[i] = un[i] - un[i] * dt / dx * (un[i] - un[i-1]) \
               + nu * dt / dx**2 * (un[i+1] - 2 * un[i] + un[i-1])
    u[0] = un[0] - un[0] * dt / dx * (un[0] - un[-2]) \
           + nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-2])
    u[-1] = u[0]
    u_anal = np.asarray([ufunc(j * dt, xi, nu) for xi in x])
    comp.set_data(x, u)
    anal.set_data(x, u_anal)
    return (comp, anal,)
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nt, interval=20)
anim.save('../gifs/1dBurgers.gif', writer='imagemagick', fps=60)
#HTML(anim.to_jshtml())
This concludes our examination of 1D simulations, and boy oh boy was this cool! This last model in particular shines in the animation, which shows the behavior and properties of Burgers' equation quite well.
Next, we will start our move to 2D, but before that, a quick detour on array operations in NumPy.
https://nbviewer.jupyter.org/github/Angelo1211/CFDPython/blob/master/Lessons/S04_Burgers_EQ.ipynb
Struts <s:include> - Struts
Does Struts not execute tags inside a fetched page?
The same include code works fine when I use it in a static page.
Please help me...
http://www.roseindia.net/tutorialhelp/comment/332
I am writing a simple struct array program. A string is given and I want to parse it. A string consists of few characters.
For example, the string "a:bc:D:E" has 5 unique characters. A colon ":" indicates that the preceding character has a value.
Struct array size is 256 ((option[256])) which includes all ASCII characters.
From given string, I want to find the characters and fill the struct array with value “1” at their ASCII position. If character is not present in the string then assign the value “0”.
Further, I want to set the "hasVal" field of this struct. For example, a = 1 (has a colon after it in the given string), b = 0 (no colon after "b" in the string), c = 1, D = 1, E = 1.
Lastly, print this structure as shown in the expected output.
I am not very good at programming; I just started learning the C language. I tried this, but I am not getting the expected result. I apologize if I have not conveyed my problem statement clearly.
Any help is much appreciated. Thanks in advance.
sample.c
#include <stdio.h>
#include <stdlib.h>
#define MAX_CHAR 256
typedef struct {
int hasVal;
char *defaultVal;
char *desc;
} validOpt;
validOpt option[MAX_CHAR] = {};
char *optStr = "a:bc:D:E";
int main() {
int i;
for(i = 0; *(optStr + i); i++)
{
/* Not Sure how to check this....
* check the "char" and ":",
* if both are present, set the field "hasVal" to 1 or "0".
*/
if((optStr[i]++) == ":")
option[optStr[i]--].hasVal = 1;
else
option[optStr[i]--].hasVal = 0;
}
printf("Printing structure...\n");
printf("\n");
for(i=0; i< MAX_CHAR; i++)
{
if(option[optStr[i]].hasVal == 1) {
printf(" %d -- %c\n", i , option[optStr[i]].hasVal);
}
}
return 0;
}
[rock12/C_Prog]$ ./sample
Printing structure…
1) If user enters invalid character, give an error.
For Example, "q" -> not valid option
2) For Valid options, print:
a - 1
b - 0
c - 1
D - 1
E - 1
if(option[optStr[i]].hasVal == 1) {
    printf(" %d -- %c\n", i , option[optStr[i]].hasVal);
This condition is not enough. You need to distinguish between characters which appear and don't appear in the string, and the characters which have or don't have values.
You need an additional variable, for example is_shown, to figure out whether the character occurs or not.
#define MAX_CHAR 256
The char range is from 0 to 127, so 128 entries are sufficient. In your example it would even be enough to go from A to z, but let's keep it at 128.
for(i = 0; *(optStr + i); i++) {
    if((optStr[i]++) == ":") {
        ...
    }
}
There is an error above. When you reach the last character in the string, you increment once more to check the next character, and that goes out of bounds.
Modify the code as follows:
#include <stdio.h>

typedef struct {
    int is_show;
    int hasVal;
    char *defaultVal;
    char *desc;
} validOpt;

#define MAX_CHAR 128

int main() {
    validOpt option[MAX_CHAR] = { 0 };
    char *optStr = "a:bc:D:E";

    while (*optStr) {
        char ch = *optStr;
        option[ch].hasVal = 0;
        if (ch != ':')
            option[ch].is_show = 1;
        if (*(optStr + 1))
            if (*(optStr + 1) == ':')
                option[ch].hasVal = 1;
        optStr++;
    }

    printf("Printing structure\n\n");
    int i;
    for (i = 0; i < 128; i++)
        if (option[i].is_show == 1)
            printf(" %c -- %d\n", i, option[i].hasVal);
    return 0;
}
Output is as follows:
 D -- 1
 E -- 0
 a -- 1
 b -- 0
 c -- 1
You have to sort the output to get it in the order you want. Note that E's value is not set, because there is no ":" after E, which is the last character.
https://codedump.io/share/JJwJBb6EhCaw/1/set-struct-array-in-c
I'm building a USB box with 2 buttons and a SparkFun Micro Pro. One button sends SPACE to the computer, and the other button sends ESC.
I copied the code from the SparkFun website, but it only has one key written into it...
Can anyone help me write the code to include the 2 buttons? Maybe we could use pin 10 for the ESC key.
The code I'm using is:
#include <Keyboard.h>
int buttonPin = 9; // Set a button to any pin
void setup()
{
pinMode(buttonPin, INPUT); // Set the button as an input
digitalWrite(buttonPin, HIGH); // Pull the button high
}
void loop()
{
if (digitalRead(buttonPin) == 0) // if the button goes low
{
Keyboard.write(' '); // send a ' ' to the computer via Keyboard HID
delay(1000); // delay so there aren't a kajillion z's
}
}
As you have already noticed, I have no clue how this language works.
Any help is highly appreciated!
Thanks!
https://forum.sparkfun.com/viewtopic.php?t=46582&who_posted=1
Hi,
I am currently taking C++ classes, and I have been given a simple (erm...) task of creating a C++ application that takes one input, which is a number to one decimal place, e.g. 1.8, 1.9, 2.0, 2.1!
A user enters a one-decimal-place number between 0 and 20, and the app uses a loop to create a table up to that number in the following format (if a user enters 1.3):
____ 1.0 | 1.1 | 1.2 | 1.3
1.0__1.0___1.1__ 1.2___1.3
1.1__1.1___1.21_ 1.32__1.43
1.2__1.2___1.32__1.44__1.56
1.3__1.3___1.43__1.56__1.69
(I had to use underscores to align the table because double spaces get removed in my post!)
I need to hand this in, and I've been burning my brain for hours!!!
I've currently got this going:
Thanks in advance.
Code:
#include <iostream>
#include <string>
using namespace std;

int main()
{
    double MAX = 20.00;
    cout << "Please enter scale of the multiplication table..." << endl;
    cin >> MAX;
    for(double i = 0.00; i < MAX; i = i + 0.1)
    {
        cout << i << "\t";
    };
    return 0;
}
VD
http://cboard.cprogramming.com/cplusplus-programming/17806-cplusplus-multiplication-table-generator.html
On Fri, 2011-05-13 at 15:59 +0530, Asankha C. Perera wrote:
> Hi Oleg
> >> If this is possible, it will be fine. However it seems like this is hard
> >> coded in NHttpConnectionBase.createContentDecoder() .. is there an easy
> >> way I did not notice to register my own decoder?
> > What I was thinking of was something along the lines
> >
> > public class ContentDecoderChannel implements ReadableByteChannel {
> >
> > ...
> > Would that work the problem around for you?
> Thanks! that will work fine.. Lets get on with the release :)
>
> asankha
>
Great! Shall include the fix in 4.1.1 or not?
Oleg
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org
http://mail-archives.apache.org/mod_mbox/hc-dev/201105.mbox/%3C1305282958.31660.19.camel@ubuntu%3E
#include <sys/sunddi.h>

ddi_taskq_t *ddi_taskq_create(dev_info_t *dip, const char *name, int nthreads, pri_t pri, uint_t cflags);
void ddi_taskq_destroy(ddi_taskq_t *tq);
int ddi_taskq_dispatch(ddi_taskq_t *tq, void (* func)(void *), void *arg, uint_t dflags);
void ddi_taskq_wait(ddi_taskq_t *tq);
void ddi_taskq_suspend(ddi_taskq_t *tq);
boolean_t ddi_taskq_suspended(ddi_taskq_t *tq);
void ddi_taskq_resume(ddi_taskq_t *tq);
Interface Level: Solaris DDI specific (Solaris DDI)
dip: Pointer to the device's dev_info structure. May be NULL for kernel modules that do not have an associated dev_info structure.
name: Descriptive string. Only alphanumeric characters can be used in name and spaces are not allowed. The name should be unique.
nthreads: Number of threads servicing the task queue. Note that the request ordering is guaranteed (tasks are processed in the order scheduled) if the taskq is created with a single servicing thread.
pri: Priority of threads servicing the task queue. Drivers and modules should specify TASKQ_DEFAULTPRI.
cflags: Should pass 0 as flags.
func: Callback function to call.
arg: Argument to the callback function.
dflags: Possible dflags are:
DDI_SLEEP: Allow sleeping (blocking) until memory is available.
DDI_NOSLEEP: Return DDI_FAILURE immediately if memory is not available.
tq: Pointer to a task queue (ddi_taskq_t *).
Pointer to a thread structure.
A kernel task queue is a mechanism for general-purpose asynchronous task scheduling that enables tasks to be performed at a later time by another thread. There are several reasons why you may utilize asynchronous task scheduling:
You have a task that isn't time-critical, but a current code path that is.
You have a task that may require grabbing locks that a thread already holds.
You have a task that needs to block (for example, to wait for memory), but you have a thread that cannot block in its current context.
You have a code path that can't complete because of a specific condition, but also can't sleep or fail. In this case, the task is immediately queued and then is executed after the condition disappears.
A task queue is a simple way to launch multiple tasks asynchronously. The ddi_taskq_create() function creates a task queue instance.
The ddi_taskq_dispatch() function places a task on the list for later execution. The dflag argument specifies whether the dispatch is allowed to sleep waiting for memory. DDI_SLEEP dispatches can sleep and are guaranteed to succeed. DDI_NOSLEEP dispatches are guaranteed not to sleep but may fail (return DDI_FAILURE) if resources are not available.
The ddi_taskq_destroy() function waits for any scheduled tasks to complete, then destroys the taskq. The caller should guarantee that no new tasks are scheduled for the closing taskq.
The ddi_taskq_wait() function waits for all previously scheduled tasks to complete. Note that this function does not stop any new task dispatches.
The ddi_taskq_suspend() function suspends all task execution until ddi_taskq_resume() is called. Although ddi_taskq_suspend() attempts to suspend pending tasks, there are no guarantees that they will be suspended. The only guarantee is that all tasks dispatched after ddi_taskq_suspend() will not be executed. Because it will trigger a deadlock, the ddi_taskq_suspend() function should never be called by a task executing on a taskq.
The ddi_taskq_suspended() function returns B_TRUE if taskq is suspended, and B_FALSE otherwise. It is intended to ASSERT that the task queue is suspended.
The ddi_taskq_resume() function resumes task queue execution.
The ddi_taskq_create() function creates an opaque handle that is used for all other taskq operations. It returns a taskq pointer on success and NULL on failure.
The ddi_taskq_dispatch() function returns DDI_FAILURE if it can't dispatch a task and returns DDI_SUCCESS if dispatch succeeded.
The ddi_taskq_suspended() function returns B_TRUE if taskq is suspended. Otherwise B_FALSE is returned.
All functions may be called from user or kernel context.
Additionally, ddi_taskq_dispatch() may be called from interrupt context only if the DDI_NOSLEEP flag is set.
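The create/dispatch/wait/destroy lifecycle described above has close user-space analogues. As a rough illustration only — this is not kernel code, and it carries none of the DDI semantics such as the DDI_SLEEP/DDI_NOSLEEP distinction — Python's standard thread pool mirrors the same four steps:

```python
from concurrent.futures import ThreadPoolExecutor

# ddi_taskq_create(): create a queue backed by a fixed number of worker threads
taskq = ThreadPoolExecutor(max_workers=4)

# ddi_taskq_dispatch(): place tasks on the queue for later execution
futures = [taskq.submit(lambda n: n * n, i) for i in range(8)]

# ddi_taskq_wait(): block until previously scheduled tasks complete
results = sorted(f.result() for f in futures)

# ddi_taskq_destroy(): wait for scheduled work, then tear down the pool
taskq.shutdown(wait=True)
```

The analogy is loose: submit() has no sleeping/non-sleeping dispatch modes, and suspension (ddi_taskq_suspend()) has no direct counterpart in this API.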
This article is a pithy introduction to the concept of Behavior-Driven Development. In it, I present the way that the C# model of a bank account, with its trademark Deposit() and Withdraw() methods, can be created using BDD techniques. To do this, I will use the NBehave framework for BDD, and MbUnit as the underlying test framework.
Please note that for this introductory article, I'm avoiding the issue of mock objects – something that shouldn't be avoided in real projects, of course. Also, I will not talk about the issue of code coverage which is, of course, also important if you want to get your tests to cover all possible scenarios.
Okay, you know what TDD is, right? TDD, also known as Test-Driven Development, is a very simple concept. In TDD, you write a Unit Test for some not-yet-implemented functionality, see it fail, add the necessary functionality, then see it succeed. Using TDD, your Unit Test for a withdrawal from a bank account might look like this:
[Test]
public void WithdrawalTest()
{
Account a = new Account(100);
a.Withdraw(30);
Assert.AreEqual(a.Balance, 70);
Assert.Throws<InsufficientFundsException>(() => a.Withdraw(100));
}
This Unit Test is fine, but it tells you very little about what you are actually testing. What's essentially happening above is a comparison of states. For example, after the first withdrawal, you're comparing the Balance state of the account with the value 70. There's no notion of “the balance being reduced by the amount withdrawn” here. This test is very mechanical, and is not particularly descriptive.
Enter the notion of BDD. Basically, BDD is designed around the idea that, instead of describing the code under test as some sort of finite state machine, you actually give it qualities related to its behavior. What this means is that you describe, in English, what each step in the Unit Test does, and associate the particular code with this step. You also provide additional metadata about the user story, i.e., what the particular user would want to happen in this test, and why it's important to them.
Sounds confusing? We'll take a look at a practical example in a moment, but before that, let's briefly discuss the libraries you need to add BDD to your application.
First of all, you need NBehave, the library that actually allows BDD in the first place. Although the NBehave homepage is here, you should get the latest version from their Google Code repository. This is essential, because only version 0.4 contains the support for MbUnit that we need. If you get an earlier version, you won't be able to get MbUnit and NBehave to play together.
NBehave runs on top of a ‘conventional' Unit Testing framework, and I'm going to use MbUnit for this article. You can get MbUnit here, as part of the Gallio automation framework. The download link is right on the front page, so I'd just go for that.
As with conventional Unit Testing, you need some sort of test runner to actually run the tests and report results. If you have ReSharper, you've got nothing to worry about, since Gallio comes with a ReSharper plug-in for running all sorts of Unit Tests, MbUnit included. If you haven't got ReSharper, you can use Gallio itself, as it comes with its own test runner. All you have to do is open the assembly containing your Unit Tests, and you're set.
Let's define an entity that models a bank account. It will have the following features:
Withdraw(int amount) – withdraws the given amount, throwing an InsufficientFundsException if the balance is too low
Deposit(int amount) – adds the given amount to the balance
Transfer(Account from, Account to, int amount) – a static method that moves the given amount from one account to another
For the above methods, let us also agree to throw an ArgumentException if the amount passed to any function is non-positive.
This very simple model is enough for us to get started with. In fact, I can code the interface for this class right now, since we need it anyway (without an interface, we cannot write tests – even failing ones).
public sealed class Account
{
private int balance;
public int Balance
{
get { return balance; }
set { balance = value; }
}
public void Deposit(int amount)
{
}
public void Withdraw(int amount)
{
}
public static void Transfer(Account from, Account to, int amount)
{
}
}
For the sake of completeness, here is the InsufficientFundsException class:
public class InsufficientFundsException : Exception
{
public InsufficientFundsException(int requested, int available)
{
AmountAvailable = available;
AmountRequested = requested;
}
public int AmountRequested { get; set; }
public int AmountAvailable { get; set; }
}
With the model in place, we can finally get started with BDD! Hooray!
Let's get started by adding the references to the project:
First, add a reference to the MbUnit assembly. It's in the GAC, so no searching is required. We only need the MbUnit.Framework namespace from this assembly for some essential attributes, such as [SetUp] – everything else is handled directly by NBehave.
We need three assemblies from NBehave – these are NBehave.Narrator.Framework, NBehave.Spec.Framework, and NBehave.Spec.MbUnit. The first of these assemblies imports the API for the so-called Narrator – an interface which mimics a narrator telling the user story. Basically, you'll be using the code to say things like “as a user, I want account withdrawal to work properly”. The second framework contains the base definition of a SpecBase class – this class is important because our test classes will need to derive from it. Finally, the third framework is – you've guessed it – a bridge between MbUnit and NBehave. In actual fact, this framework relies on NBehave.Spec.Framework, in that it contains a specialization of SpecBase for MbUnit.
One thing to note about NBehave.Spec.MbUnit is that, in addition to the SpecBase class, which we simply subclass and forget about (for the most part), this assembly also contains extension methods that mirror, to some extent, the Assert functionality of MbUnit. Here's what I mean:
// in MbUnit, we write this
Assert.AreEqual(a.Balance, 70);
// but in NBehave, we write this
a.Balance.ShouldEqual(70);
The above statements are equivalent, but NBehave's version is perhaps clearer in expressing what it actually means. Please note, however, that the extension methods which enable this behavior do not, at the time of this writing, cover every single feature of MbUnit's Assert class. Thus, however nice this syntax is, you won't always be able to use it – particularly if you rely on the more advanced functionality of MbUnit.
To cut the story short, here is the skeleton of my test class:
[
Author("Dmitri", "dmitrinesteruk@gmail.com"),
Wrote("Account operation tests"),
TestsOn(typeof(Account)),
For("Banking system")
]
public class AccountTest : SpecBase
{
public Account account;
public Account account2;
[SetUp]
public void Initialize_before_each_test()
{
account = new Account();
account2 = new Account { Balance = 100 };
}
}
There isn't anything interesting happening here. Our test class derives from SpecBase, and is decorated with fairly standard MbUnit test attributes that have been redefined (see the next section) to make them more readable. In the test class itself, I create two accounts – one empty, and one with $100 on it. You'll notice that although I'm using the old-fashioned [SetUp] attribute, the method name is a bit strange. In fact, it's one of the TDD conventions to have method names that actually describe what's going on, in English rather than some shorthand notation. So, that's precisely what's being done here.
It all begins with a story. Once upon a time, NBehave developers wrote the (somewhat clever) class they called Story. This class was designed to describe a particular set of usage scenarios which you are trying to test. For example, the story would describe the Deposit action on the account as follows:
[Story, That, Should("Increase account balance when money is deposited")]
public void Deposit_should_increase_account_balance()
{
Story story = new Story("Deposit");
story.AsA("User")
.IWant("The bank account balance to increase by the amount deposited")
.SoThat("I can deposit money");
// scenarios here
}
What's going on here, then? First of all, the attributes that decorate the test method are really using the familiar xUnit testing attributes (e.g., [Test]), but for BDD, many people (myself included) redefine them using C#'s using syntax:
using That = MbUnit.Framework.TestAttribute;
using Describe = MbUnit.Framework.CategoryAttribute;
using For = MbUnit.Framework.CategoryAttribute;
using Wrote = MbUnit.Framework.DescriptionAttribute;
using Should = MbUnit.Framework.DescriptionAttribute;
The only reason for these redefinitions is to make the tests more readable. This trend permeates NBehave – get used to it. Now, let's take a look at what we did with the Story class. There is no Unit Test here! All we're doing is describing a user story, using English and NBehave's fluent interface. As you'll see later, most of NBehave uses a fluent interface, i.e., an interface where each function returns this, allowing long chains of calls to be made.
Just to be completely clear, we wrote the above story definition in order to reflect a particular requirement. For example, the requirements state that a user deposits money and their bank account grows accordingly, so we wrote just that. This story definition will appear in the output of our Unit Tests, making it somewhat easier to identify what it is we are testing.
We've created a story definition, so let's write a Unit Test and see it fail. In order to test it, we have to provide something called a scenario – a description of one possible thing that can happen. For example, you might withdraw from an empty bank account, or one that doesn't have enough money. Or you might withdraw an amount you actually have. Or you might try to withdraw a negative amount. All these cases are scenarios, and to get 100% coverage, you would need to test each one. However, let us start with something simple:
story.WithScenario("Money deposit")
.Given("My bank account is empty", () => { account.Balance = 0; })
.When("I deposit 100 units", () => account.Deposit(100))
.Then("The account balance should be 100", () => account.Balance.ShouldEqual(100));
All right, so this snippet probably exposes 99% of what BDD is about. Essentially, we're defining a scenario (a money deposit) and then describing the preconditions, the test itself, and the post-conditions – all in one C# statement! Here's what happens in our code:
WithScenario() names the scenario being tested.
Given() describes the precondition; like the other clauses, it takes a string description together with an Action that actually executes the corresponding code.
When() describes the action being performed on the object under test.
Then() describes the expected outcome, i.e., the assertion.
You have probably noticed by now that there's a fair bit of lambda syntax in the test. This is because the parameters in each of the clauses are of Action variety, so using a lambda syntax is somewhat more concise than using the delegate keyword.
Since we're doing BDD here, let's run the test to see it fail. On my system, I get the following error message:
*** DebugTrace ***
Story: Deposit
Narrative:
As a User
I want The bank account balance to increase by the amount deposited
So that I can deposit money
Scenario 1: Money deposit
Given My bank account is empty
When I deposit 100 units
Then The account balance should be 100 - FAILED
MbUnit.Core.Exceptions.NotEqualAssertionException:
Equal assertion failed: [[0]]!=[[100]]
Can you see how the test runner took the specification we wrote and actually output it as a readable scenario? It also shows us the point where it fails, so we don't have to decipher cryptic MbUnit messages (they are still available, so if you feel like it, be my guest). So, now that we have a failing test, let's add the missing functionality and try again:
public void Deposit(int amount)
{
balance += amount;
}
Simple enough. Now, when we run the test, it succeeds. That's all there is to TDD/BDD, really! But, let's look at a more complex scenario – depositing a negative amount. We should get an exception, and our bank account balance should remain unchanged. Here's how such a test would look:
story.WithScenario("Negative amount deposit")
.Given("My bank account is empty", () => { account.Balance = 0; })
.When("I try to deposit a negative amount", () => { })
.Then("I get an exception",
() => typeof(Exception).ShouldBeThrownBy(() => account.Deposit(-100)))
.And("My bank account balance is unchanged",
() => account.Balance.ShouldEqual(0));
There are two things to note here. First, we use the ShouldBeThrownBy() extension method to ensure that when calling Deposit() with a negative amount, we do, in fact, get an exception. Also, we use the And() method to make sure that, in addition to the exception being called, the account balance remains unchanged. Running the test on our code, we get the following output:
*** DebugTrace ***
Story: Deposit
Narrative:
As a User
I want The bank account balance to increase by the amount deposited
So that I can deposit money
Scenario 1: Money deposit
Given My bank account is empty
When I deposit 100 units
Then The account balance should be 100
Scenario 2: Negative amount deposit
Given My bank account is empty
When I try to deposit a negative amount
Then I get an exception - FAILED
The output is more or less expected, but we've had to bend the paradigm somewhat in order to get it. Let me explain. First of all, I didn't provide an Action in the When() clause, where I was expected to actually deposit the amount. Instead, I used an empty lambda:
.When("I try to deposit a negative amount", () => { })
Basically, I cannot attempt a negative deposit in this clause, because I also intend to catch the exception and check that it is of the type I'm expecting – something more suitable to a Then() clause. On the other hand, I don't want to leave the When() clause action-less because if I do, the textual output (i.e., the “I try…” first parameter) will not appear in the output. This may be a bug or a feature, but in any case, I use an empty lambda to make sure that doesn't happen.
At this stage, I can simply finish off my naive implementation of the Deposit() function and run the test again. It would look something like this:
public void Deposit(int amount)
{
if (amount <= 0)
throw new Exception();
balance += amount;
}
Here's a situation where you cannot use the nice extension methods. Suppose you are testing a transfer between one bank account and another. You want to make sure that, if the transfer is possible (i.e., there is sufficient amount of money, it's a non-negative amount, etc.), that no exception should be thrown when the transfer happens. Since there is no ShouldNotThrow() extension method, we end up writing the following:
story.WithScenario("Valid transfer")
.Given("I have 100 dollars", () => { account.Balance = 100; })
.And("You have 100 dollars", () => { account2.Balance = 100; })
.When("I give you 50 dollars",
() => Assert.DoesNotThrow(() => Account.Transfer(account, account2, 50)))
.Then("I have 50 dollars left", () => account.Balance.ShouldEqual(50))
.And("You have 150 dollars", () => account2.Balance.ShouldEqual(150));
By now, you've probably figured out that, apart from the way things are described (fluent interfaces, English descriptions), there is nothing new in NBehave. In fact, some people are annoyed that BDD is essentially a kind of 'verbose xUnit' that does the same things, but insists on describing everything you do. However, the benefit you get from this is traceability. For example, the words in your Unit Tests can refer to Use Cases in your requirements specification, thus making it easier to show that your product conforms to a particular spec. In fact, it's probably feasible to write a transformation tool that takes an English sentence and turns it into a skeleton NBehave scenario. If you write such a tool, please let me know!
This is it for this article. Thanks for reading. Comments and suggestions are welcome.
Lecture 11: Risk-Neutral Valuation. Steven Skiena.
1 Lecture 11: Risk-Neutral Valuation. Steven Skiena, Department of Computer Science, State University of New York, Stony Brook, NY.
2 Risk-Neutral Probabilities We can use an arbitrage argument to set the right probability of an upward move (β) as a function of the risk-free rate. At any point, investors can either (a) hold $1 of stock or (b) invest $1 at the risk-free rate r. A risk-neutral investor would not care which portfolio they owned if they had the same return. Setting the return from the stock, βα + (1 − β)/α, equal to that of the risk-free portfolio, 1 + r, we can solve for β to determine the risk-neutral probability. But in truth, investors are not risk-neutral: in order to take the riskier investment they must be paid a premium.
3 Single-Step Option Pricing Binomial trees price options using the idea of risk-neutral valuation. Suppose a stock price is currently at $20, and will either be at $22 or $18 in three months. What is the price of a European call option for a strike price of $21? Clearly, this reduces to determining the probability of the upward price movement.
4 Risk-Neutral Valuation The risk-neutral-investor argument for setting this probability can be applied if we set up two portfolios which are provably of equal risk and value. We will construct two riskless portfolios, one involving the stock and the other the risk-free rate.
5 Using Options to Eliminate Risk A riskless portfolio can be created by buying Δ shares of stock and selling a short position in 1 call option, such that the value of the portfolio is the same whether the stock moves up or down. If the stock moves to $22, our portfolio will be worth 22Δ − 1, since we must pay out the return of the option we sold. If the stock moves to $18, our portfolio will be worth 18Δ − 0, since the option we sold is worthless. A riskless portfolio is therefore constructed by buying Δ = 0.25 shares, since Δ = 0.25 is the solution of 22Δ − 1 = 18Δ.
6 Valuing the Portfolio Whether the stock goes up or down, this portfolio is worth $4.50 at the end of the period. The discounted value of this portfolio today, V, can be computed given the risk-free interest rate r: V = 4.50 e^(−rT). Since V equals the cost of owning Δ = 0.25 shares of stock at $20 per share minus the value f of the option, f = 20Δ − V = 5 − V.
7 The General Case In general, if there is an upward price movement, the value of the portfolio at the end of the option's life is S_0 u Δ − f_u, where S_0 u (f_u) is the price of the stock (option) after an upward movement. If there is a downward price movement, the value is S_0 d Δ − f_d. Setting them equal and solving for Δ yields Δ = (f_u − f_d) / (S_0 u − S_0 d).
8 The present value of the portfolio with a risk-free rate of r is (S_0 u Δ − f_u) e^(−rT), and it can be set up today for a cost of S_0 Δ − f. Equating these two and solving for f yields f = S_0 Δ − (S_0 u Δ − f_u) e^(−rT). By definition, the value of f must also be f = e^(−rT) (β f_u + (1 − β) f_d), where β is the probability of an upward movement. Solving for β we get β = (e^(rT) − d) / (u − d).
9 Interpreting this Probability The expected stock price at time T implied by these probabilities is S_0 e^(rT). This implies that the stock price earns the risk-free rate. The value of an option is its expected payoff in a risk-neutral world, discounted at the risk-free rate.
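The slides above translate directly into a one-step pricer. The sketch below reprices the $20 / $22 / $18 call with strike $21; since the slides leave the risk-free rate unspecified, a 12% annual rate over the three-month life is assumed here purely for illustration:

```python
import math

def one_step_call(S0, Su, Sd, K, r, T):
    """Risk-neutral valuation of a European call over one binomial step."""
    u, d = Su / S0, Sd / S0
    p = (math.exp(r * T) - d) / (u - d)   # risk-neutral up probability (slide 8)
    fu = max(Su - K, 0.0)                 # option payoff after an up move
    fd = max(Sd - K, 0.0)                 # option payoff after a down move
    return math.exp(-r * T) * (p * fu + (1 - p) * fd)

price = one_step_call(S0=20, Su=22, Sd=18, K=21, r=0.12, T=0.25)
```

With these assumed inputs the call is worth about $0.63, and the hedge ratio Δ = (f_u − f_d)/(S_0 u − S_0 d) = (1 − 0)/(22 − 18) = 0.25 matches the 0.25 shares found on slide 5.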
10 Irrelevance of Stock's Expected Return When we value an option in terms of the price of the underlying asset, the probability of up and down movements in the real world is irrelevant, since they can be hedged. This is an example of a more general result stating that the expected return (drift) on the underlying asset in the real world is irrelevant. The option has to have the risk-neutral valuation, because if not, there exists an arbitrage opportunity from buying the right portfolio.
11 Pricing Options with Binomial Trees The value of the option can be worked backwards from the terminating (basis) condition, level by level. The value of the option at the leaf/terminating level is known, because the option price at expiration is completely determined by the stock and strike prices.
12 Finer Gradations Adding additional levels to the tree allows finer price gradations than just a single up or down. The price of an option generally converges after about n = 30 levels or so. Note that the number of shares needed to hedge each option (Δ) changes at each node/level in the binomial tree. Thus to maintain a riskless portfolio, shares must be bought and sold continuously, a process known as delta hedging.
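A multi-level tree is priced by the same backward recursion. In the sketch below, u and d are fixed by the Cox–Ross–Rubinstein choice u = e^(σ√Δt), d = 1/u — one common convention; the lecture defers this choice to the final slide — and a closed-form Black–Scholes value (not derived in the lecture) serves only as a convergence benchmark. All numeric parameters are illustrative assumptions:

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    """European call on an n-level binomial tree (Cox-Ross-Rubinstein u and d)."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # option payoffs at the leaves, indexed by the number j of up moves
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # work backwards level by level, discounting risk-neutral expectations
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

def black_scholes_call(S0, K, r, sigma, T):
    """Closed-form benchmark for the convergence check."""
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

tree = crr_call(100, 100, 0.05, 0.2, 1.0, n=100)
exact = black_scholes_call(100, 100, 0.05, 0.2, 1.0)
```

With n = 100 levels the tree price agrees with the closed form to within a few cents, consistent with the claim that roughly n = 30 levels already gives good convergence.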
13 Generalizing the Model This binomial tree model can be generalized to include the effects of (1) dividends, by changing the magnitude of the moves in the levels corresponding to dividend periods, (2) changing interest rates, by using the rate appropriate on a given yield curve. It can also be generalized to allow more than two price movements from each node, say increase, decrease, and unchanged.
14 Pricing American Options American options permit execution at any intermediate time point. It pays to exercise a non-dividend paying American put early if the underlying stock price is sufficiently low (say 0) due to time-value of money. In general, it pays to exercise now whenever the payoff from immediate execution exceeds the value computed for the option at that point. The options can be priced by using the higher of the two possible valuations at any point in the tree.
15 American Put Example Observe the difference between evaluating a put (S_0 = 50, strike price K = 52) as European vs. American: the price at each node is the maximum of K − S_T and its European evaluation.
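The early-exercise rule — take the maximum of immediate exercise and the discounted continuation value at every node — can be sketched as follows. The slide gives only S_0 = 50 and K = 52, so the remaining parameters (u = 1.2, d = 0.8, r = 5%, two one-year steps) are assumed here for illustration:

```python
import math

def binomial_put(S0, K, r, u, d, steps, dt, american=False):
    """Price a put on a binomial tree; optionally allow early exercise."""
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # payoffs at the leaves, indexed by the number j of up moves
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for level in range(steps - 1, -1, -1):
        new = []
        for j in range(level + 1):
            hold = disc * (p * values[j + 1] + (1 - p) * values[j])
            if american:
                # early exercise: take the better of holding and exercising now
                hold = max(hold, K - S0 * u**j * d**(level - j))
            new.append(hold)
        values = new
    return values[0]

eu = binomial_put(50, 52, 0.05, 1.2, 0.8, steps=2, dt=1.0)
am = binomial_put(50, 52, 0.05, 1.2, 0.8, steps=2, dt=1.0, american=True)
```

Under these assumed parameters the European put is worth about 4.19 while the American put is worth about 5.09 — the extra value comes from exercising early at the low node, where K − S = 12 exceeds the continuation value.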
16 Early Exercise for American Calls It can be proven that it never pays to exercise an American call option early. Consider a single period for an American call. Start at S_0 and end at S_0 u or S_0 d, with payoffs f_u and f_d, where 0 ≤ f_d ≤ f_u. The no-exercise condition e^(−rT) (p f_u + (1 − p) f_d) > S_0 − K clearly holds for K > S_0. The two other cases are: S_0 d < K ≤ S_0 and K ≤ S_0 d.
17 Case I: S_0 d < K ≤ S_0. Since f_d ≥ 0 and f_u ≥ S_0 u − K, we have e^(−rT) (p f_u + (1 − p) f_d) ≥ e^(−rT) p f_u ≥ e^(−rT) p (S_0 u − K). Using p = (e^(rT) − d)/(u − d), we therefore need to prove e^(−rT) p (S_0 u − K) > S_0 − K. Proof: rearranging gives (1 − e^(−rT) p) K > (1 − e^(−rT) p u) S_0, i.e., (u − d − 1 + d e^(−rT)) K > (u e^(−rT) − 1) S_0 d. Now, since K > S_0 d,
(u − d − 1 + d e^(−rT)) K > (u − d − 1 + d e^(−rT)) S_0 d (1)
= (u − d − (u − d) e^(−rT) + u e^(−rT) − 1) S_0 d (2)
= ((u − d)(1 − e^(−rT)) + u e^(−rT) − 1) S_0 d (3)
> (u e^(−rT) − 1) S_0 d. (4)
This completes the proof and shows that an American call would never be exercised early in this case.
18 Case II: K ≤ S_0 d. We have f_u ≥ S_0 u − K and f_d ≥ S_0 d − K. We therefore have
e^(−rT) (p f_u + (1 − p) f_d) ≥ e^(−rT) (p (S_0 u − K) + (1 − p) (S_0 d − K)) (5)
= e^(−rT) ((p u S_0 + (1 − p) d S_0) − K) (6)
= e^(−rT) (e^(rT) S_0 − K) (7)
= S_0 − e^(−rT) K (8)
> S_0 − K. (9)
Here step (7) uses p u + (1 − p) d = e^(rT). This shows that an American call can never be exercised early in this case either.
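The two-case argument above can be checked numerically. The sketch below evaluates a one-period American call over a small grid of illustrative parameters (all chosen so that u > e^(rT) > d) and confirms that the continuation value never falls below the immediate-exercise payoff:

```python
import math

def continuation_minus_exercise(S0, K, r, u, d, T):
    """Continuation value minus immediate-exercise payoff for a one-period call."""
    p = (math.exp(r * T) - d) / (u - d)
    fu = max(S0 * u - K, 0.0)
    fd = max(S0 * d - K, 0.0)
    cont = math.exp(-r * T) * (p * fu + (1 - p) * fd)
    return cont - max(S0 - K, 0.0)

# a grid of strikes and (u, d) pairs; each u exceeds e^(rT) and each d is below it
gaps = [continuation_minus_exercise(100, K, 0.05, u, d, 1.0)
        for K in (60, 90, 110)
        for u, d in ((1.3, 0.8), (1.2, 0.9), (1.5, 0.6))]
min_gap = min(gaps)
```

A positive minimum gap across the grid is exactly what the proof predicts: holding the call is always at least as good as exercising it early.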
19 Why Monte Carlo Simulation? Monte Carlo simulation is simpler than dynamic programming to conceive or implement. When the number of levels gets too high for exhaustive dynamic-programming computation (say n = 1,000,000), Monte Carlo random walks can still be used to sample the distribution. Dynamic programming cannot as readily be applied to compute path-dependent distributions (such as Hurst random walks or pricing Asian options), since the state at each node depends on the path used to get there.
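As a concrete example of the Monte Carlo approach — with the lognormal price dynamics and all numeric parameters being illustrative assumptions, not taken from the lecture — a European call can be priced by sampling terminal stock prices under the risk-neutral measure and discounting the average payoff:

```python
import math
import random

def mc_call(S0, K, r, sigma, T, n_paths, seed=1):
    """Monte Carlo estimate of a European call under risk-neutral lognormal dynamics."""
    rng = random.Random(seed)
    drift = (r - sigma ** 2 / 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        # sample one terminal price along a risk-neutral random walk
        ST = S0 * math.exp(drift + vol * rng.gauss(0, 1))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

est = mc_call(100, 100, 0.05, 0.2, 1.0, n_paths=200_000)
```

With 200,000 paths the estimate lands within a small standard error of the closed-form value (about 10.45 for these inputs); unlike a tree, the same sampling loop extends directly to path-dependent payoffs such as Asian options.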
20 How Much is Up (and Down)? To complete the model, we need to set the magnitude of up and down movements in the binomial tree. If S_0 u < S_0 e^(rT), the upside of stock ownership is too low, and we are better off investing at the risk-free rate. If S_0 d > S_0 e^(rT), holding stock guarantees a better return than the risk-free rate! Thus S_0 u > S_0 e^(rT) > S_0 d; otherwise the probability formula gives numbers outside of [0, 1]. This leaves us considerable freedom in setting the u, d, and p parameters.
Caput Derivatives: October 30, 2003
Caput Derivatives: October 30, 2003 Exam + Answers Total time: 2 hours and 30 minutes. Note 1: You are allowed to use books, course notes, and a calculator. Question 1. [20 points] Consider an investor
Financial Modeling. Class #06B. Financial Modeling MSS 2012 1
Financial Modeling Class #06B Financial Modeling MSS 2012 1 Class Overview Equity options We will cover three methods of determining an option s price 1. Black-Scholes-Merton formula 2. Binomial trees
Option Valuation. Chapter 21
Option Valuation Chapter 21 Intrinsic and Time Value intrinsic value of in-the-money options = the payoff that could be obtained from the immediate exercise of the option for a call option: stock price
Numerical Methods for Option Pricing
Chapter 9 Numerical Methods for Option Pricing Equation (8.26) provides a way to evaluate option prices. For some simple options, such as the European call and put options, one can integrate (8.26) directly }
Two-State Option Pricing
Rendleman and Bartter [1] present a simple two-state model of option pricing. The states of the world evolve like the branches of a tree. Given the current state, there are two possible states next
Lecture 3.1: Option Pricing Models: The Binomial Model
Important Concepts Lecture 3.1: Option Pricing Models: The Binomial Model The concept of an option pricing model The one and two period binomial option pricing models Explanation of the establishment and
Lecture 21 Options Pricing
Lecture 21 Options Pricing Readings BM, chapter 20 Reader, Lecture 21 M. Spiegel and R. Stanton, 2000 1 Outline Last lecture: Examples of options Derivatives and risk (mis)management Replication and Put-call
Part V: Option Pricing Basics
erivatives & Risk Management First Week: Part A: Option Fundamentals payoffs market microstructure Next 2 Weeks: Part B: Option Pricing fundamentals: intrinsic vs. time value, put-call parity introduction
BINOMIAL OPTION PRICING
Darden Graduate School of Business Administration University of Virginia BINOMIAL OPTION PRICING Binomial option pricing is a simple but powerful technique that can be used to solve many complex option-pricing
Financial Options: Pricing and Hedging
Financial Options: Pricing and Hedging Diagrams Debt Equity Value of Firm s Assets T Value of Firm s Assets T Valuation of distressed debt and equity-linked securities requires an understanding of financial
Lecture 5: Put - Call Parity
Lecture 5: Put - Call Parity Reading: J.C.Hull, Chapter 9 Reminder: basic assumptions 1. There are no arbitrage opportunities, i.e. no party can get a riskless profit. 2. Borrowing and lending are possible,
Properties of Stock Options. Chapter 10
Properties of Stock Options Chapter 10 1 Notation c : European call option price C : American Call option price p : European put option price P : American Put option price S 0 : Stock price today K : Strike
Chapter 5 Financial Forwards and Futures
Chapter 5 Financial Forwards and Futures Question 5.1. Four different ways to sell a share of stock that has a price S(0) at time 0. Question 5.2. Description Get Paid at Lose Ownership of Receive Payment
Determination of Forward and Futures Prices
Determination of Forward and Futures Prices 3.1 Chapter 3 3.2 Consumption vs Investment Assets Investment assets assets held by significant numbers of people purely for investment purposes Examples: gold,
Option Pricing Basics
Option Pricing Basics Aswath Damodaran Aswath Damodaran 1 What is an option? An option provides the holder with the right to buy or sell a specified quantity of an underlying asset at a fixed price (called
Valuing Options / Volatility
Chapter 5 Valuing Options / Volatility Measures Now that the foundation regarding the basics of futures and options contracts has been set, we now move to discuss the role of volatility in futures
CHAPTER 21: OPTION VALUATION
CHAPTER 21: OPTION VALUATION PROBLEM SETS 1. The value of a put option also increases with the volatility of the stock. We see this from the put-call parity theorem as follows: P = C S + PV(X) + PV(Dividends)
Two-State Model of Option Pricing
Rendleman and Bartter [1] put forward a simple two-state model of option pricing. As in the Black-Scholes model, to buy the stock and to sell the call in the hedge ratio obtains a risk-free portfolio.
Options Pricing. This is sometimes referred to as the intrinsic value of the option.
Options Pricing We will use the example of a call option in discussing the pricing issue. Later, we will turn our attention to the Put-Call Parity Relationship. I. Preliminary Material Recall the payoff
FINANCIAL OPTION ANALYSIS HANDOUTS
FINANCIAL OPTION ANALYSIS HANDOUTS 1 2 FAIR PRICING There is a market for an object called S. The prevailing price today is S 0 = 100. At this price the object S can be bought or sold by anyone for any
Finance 436 Futures and Options Review Notes for Final Exam. Chapter 9
Finance 436 Futures and Options Review Notes for Final Exam Chapter 9 1. Options: call options vs. put options, American options vs. European options 2. Characteristics: option premium, option type, underlying
Chapter 2 Questions Sample Comparing Options
Chapter 2 Questions Sample Comparing Options Questions 2.16 through 2.21 from Chapter 2 are provided below as a Sample of our Questions, followed by the corresponding full Solutions. At the beginning of
Call Price as a Function of the Stock Price
Call Price as a Function of the Stock Price Intuitively, the call price should be an increasing function of the stock price. This relationship allows one to develop a theory of option pricing, derived
DERIVATIVE SECURITIES Lecture 2: Binomial Option Pricing and Call Options
DERIVATIVE SECURITIES Lecture 2: Binomial Option Pricing and Call Options Philip H. Dybvig Washington University in Saint Louis review of pricing formulas assets versus futures practical issues call options
Options Markets: Introduction
Options Markets: Introduction Chapter 20 Option Contracts call option = contract that gives the holder the right to purchase an asset at a specified price, on or before a certain date put option = contract. + Concepts and Buzzwords. Readings. Put-Call Parity Volatility Effects
+ Options + Concepts and Buzzwords Put-Call Parity Volatility Effects Call, put, European, American, underlying asset, strike price, expiration date Readings Tuckman, Chapter 19 Veronesi, Chapter 6 Options,
OPTIONS TRADING (ADVANCED) MODULE
OPTIONS TRADING (ADVANCED) MODULE PRACTICE QUESTIONS 1. Which of the following is a contract where both parties are committed? Forward Future Option 2. Swaps can be based on Interest Principal and Interest
Lecture 17/18/19 Options II
1 Lecture 17/18/19 Options II Alexander K. Koch Department of Economics, Royal Holloway, University of London February 25, February 29, and March 10 2008 In addition to learning the material covered in
Finance 350: Problem Set 8 Alternative Solutions
Finance 35: Problem Set 8 Alternative Solutions Note: Where appropriate, the final answer for each problem is given in bold italics for those not interested in the discussion of the solution. All payoff
S 1 S 2. Options and Other Derivatives
Options and Other Derivatives The One-Period Model The previous chapter introduced the following two methods: Replicate the option payoffs with known securities, and calculate the price of the replicating
Lecture 3: Forward Contracts Steven Skiena. skiena
Lecture 3: Forward Contracts Steven Skiena Department of Computer Science State University of New York Stony Brook, NY 11794 4400 skiena Derivatives Derivatives are financial
Lecture 4: Properties of stock options
Lecture 4: Properties of stock options Reading: J.C.Hull, Chapter 9 An European call option is an agreement between two parties giving the holder the right to buy a certain asset (e.g. one stock unit)
Chapter 7: Option pricing foundations Exercises - solutions
Chapter 7: Option pricing foundations Exercises - solutions 1. (a) We use the put-call parity: Share + Put = Call + PV(X) or Share + Put − Call = 97.70 + 4.16 − 23.20 = 78.66 and PV(X) = 80 e^(−0.0315) =
CHAPTER 21: OPTION VALUATION
CHAPTER 21: OPTION VALUATION 1. Put values also must increase as the volatility of the underlying stock increases. We see this from the parity relation as follows: P = C + PV(X) S 0 + PV(Dividends). Given
Martingale Pricing Theory
IEOR E4707: Financial Engineering: Continuous-Time Models Fall 200 c 200 by Martin Haugh Martingale Pricing Theory These notes develop the modern theory of martingale pricing in a discrete-time, discrete-space
FIN-40008 FINANCIAL INSTRUMENTS SPRING 2008
FIN-40008 FINANCIAL INSTRUMENTS SPRING 2008 Options These notes consider the way put and call options and the underlying can be combined to create hedges, spreads and combinations. We will consider the
Contents. iii. MFE/3F Study Manual 9th edition Copyright 2011 ASM
Contents 1 Put-Call Parity 1 1.1 Review of derivative instruments................................................ 1 1.1.1 Forwards............................................................ 1 1.1.2 Call:
BUS 316 NOTES AND ANSWERS BINOMIAL OPTION PRICING
BUS 316 NOTES AND ANSWERS BINOMIAL OPTION PRICING 3. Suppose there are only two possible future states of the world. In state 1 the stock price rises by 50%. In state 2, the stock price drops by 25%. The
The Black-Scholes
Chapter 21 Valuing Options
Chapter 21 Valuing Options Multiple Choice Questions 1. Relative to the underlying stock, a call option always has: A) A higher beta and a higher standard deviation of return B) A lower beta and a
Contents. iii. MFE/3F Study Manual 9 th edition 10 th printing Copyright 2015 ASM
Contents 1 Put-Call Parity 1 1.1 Review of derivative instruments................................. 1 1.1.1 Forwards........................................... 1 1.1.2 Call and put options....................................:
Other observable variables as arguments besides S.
Valuation of options before expiration Consider European options with time t until expiration. Value now of receiving c T at expiration? (Value now of receiving p T at expiration?) Have candidate
I've got a huge list of words (every single word on its own line in a txt file) and certain words need to get capitalized manually (i.e. by hand), so I was looking for a shortcut in Notepad++ (my current editor) to capitalize the first letter of a line, but couldn't find one. Is there none? If not, can you advise me of an alternative Windows program to quickly do this using a simple shortcut (so I can go through the file with the arrow-down key and use the shortcut whenever needed on a specific word)? Thanks a lot!
This is the assignment my professor assigned:
Use a stack to reverse the words of a sentence. Keep reading in words and adding them to the stack until you have a word that ends in a period. When that happens, pop the words off the stack and print them. For example, for the input "It was a period of civil war." you should output "War civil of period a was it." Pay attention to the capitalization and punctuation changes.
I have the program so far reversing the order of the words but I don't know how to have it stop at the period, change the capitalization, and move the punctuation. How do I do that?
import java.util.Scanner;
import java.util.Stack;
import java.util.regex.Pattern;

public class ReverseWordsInString {
    // main method
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.printf("1. Enter string to reverse : ");
        String inputString = scanner.nextLine();
        if (inputString == null || inputString.length() == 0) {
            System.out.println("Enter the valid string");
            return;
        }
        String reverse = reverseStringWordWise_Stack(inputString);
        System.out.printf("\n3. Reverse string using stack is : %s", reverse);
    }

    // reverses the word order of the string using a stack
    private static String reverseStringWordWise_Stack(String inputString) {
        String[] arrString = inputString.trim().split(Pattern.quote(" "));
        Stack<String> stack = new Stack<>();
        for (String input : arrString) {
            stack.push(input);
        }
        StringBuilder builder = new StringBuilder();
        while (!stack.isEmpty()) {
            builder.append(stack.pop()).append(" ");
        }
        return builder.toString();
    }
}
I am new to MVC and have not found a solution for this online.
I have the html as :
@Html.DisplayFor(model => model.Address1) <br />
I want all the first letter of address1 to be capital letters e.g. Something Road instead of something road.
Now I have a class client and property Address1 and using EF to get the address as follow:
public class MyDBContext : DbContext { public DbSet<Client> Clients { get; set; } }
Hope it makes sense.
This question already has an answer here:
I was trying to write something to capitalize each word in a sentence. And it works fine, as follows:
print " ".join((word.capitalize() for word in raw_input().strip().split(" ")))
If the input is 'hello world', the output would be :
Hello World
But I tried writing it differently as follows :
s = raw_input().strip().split(' ')
for word in s:
    word.capitalize()
print ' '.join(s)
And its output would be wrong :
hello world
So what's wrong with that, why the result isn't the same ?! Thank you.
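A likely explanation: str.capitalize() returns a new string rather than modifying word in place (Python strings are immutable), so the loop in the second version simply discards each result. A minimal sketch of a fix, written in Python 3 syntax with a hard-coded string standing in for raw_input():

```python
# str.capitalize() returns a NEW string; it never mutates the original word.
# Writing the capitalized copies back into the list makes the loop version work.
s = "hello world".strip().split(" ")   # stand-in for raw_input(), so this runs anywhere
s = [word.capitalize() for word in s]  # keep the new strings instead of discarding them
print(" ".join(s))                     # -> Hello World
```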
I have a name input box. I would like to auto-capitalise the first letter of the name as the user types, but allow the user to override the change for names such as "de Salis".
I see here that this is impossible with CSS, because
text-transform:capitalize; will capitalise every word and can't be overridden.
A
.keyup handler can fix up the first letter as you type, and there are a bunch of solutions found here to do that.
What I can't see is how the original capitalisation can be overridden easily. I guess I need a flag, but if the function is one that can be attached to multiple elements, where should the flag live? I could append a DOM element as a flag, but this seems pretty ugly.
A fiddle of the non-override model is found here.
Suggestions?
This guide describes the Sharing Saved Objects effort, and the breaking changes that plugin developers need to be aware of for the planned 8.0 release of Kibana.
Saved objects (hereinafter "objects") are used to store all sorts of things in Kibana, from Dashboards to Index Patterns to Machine Learning Jobs. The effort to make objects shareable can be summarized in a single picture:
Each plugin can register different object types to be used in Kibana. Historically, objects could be isolated (existing in a single space) or global (existing in all spaces), there was no in-between. As of the 7.12 release, Kibana now supports two additional types of objects:
Ideally, most types of objects in Kibana will eventually be shareable; however, we have also introduced share-capable objects as a stepping stone for plugin developers to fully support this feature.
To implement this feature, we had to make a key change to how objects are serialized into raw Elasticsearch documents. As a result, some existing object IDs need to be changed, and this will cause some breaking changes to the way that consumers (plugin developers) interact with objects. We have implemented mitigations so that these changes will not affect end-users if consumers implement the required steps below.
Existing, isolated object types will need to go through a special conversion process to become share-capable upon upgrading Kibana to version 8.0. Once objects are converted, they can easily be switched to become fully shareable in any future release. This conversion will change the IDs of any existing objects that are not in the Default space. Changing object IDs itself has several knock-on effects:
To be perfectly clear: these effects will all be mitigated if and only if you follow the steps below!
External plugins can also convert their objects, but they don’t have to do so before the 8.0 release.
If you’re still reading this page, you’re probably developing a Kibana plugin that registers an object type, and you want to know what steps you need to take to prepare for the 8.0 release and mitigate any breaking changes! Depending on how you are using saved objects, you may need to take up to 5 steps, which are detailed in separate sections below. Refer to this flowchart:
There is a proof-of-concept (POC) pull request to demonstrate these changes. It first adds a simple test plugin that allows users to create and view notes. Then, it goes through the steps of the flowchart to convert the isolated "note" objects to become share-capable. As you read this guide, you can follow along in the POC to see exactly how to take these steps.
Do these objects contain links to other objects?
If your objects store any links to other objects (with an object type/ID), you need to take specific steps to ensure that these links continue functioning after the 8.0 upgrade.
⚠️ This step must be completed no later than the 7.16 release. ⚠️
Ensure all object links use the root-level
references field
If you answered "Yes" to Question 1, you need to make sure that your object links are only stored in the root-level
references field. When a given object’s ID is changed, this field will be updated accordingly for other objects.
The image below shows two different examples of object links from a "case" object to an "action" object. The top shows the incorrect way to link to another object, and the bottom shows the correct way.
If your objects do not use the root-level
references field, you’ll need to add a migration
before the 8.0 release to fix that. Here’s a migration function for the example above:
function migrateCaseToV716(
  doc: SavedObjectUnsanitizedDoc<{ connector: { type: string; id: string } }>
): SavedObjectSanitizedDoc<unknown> {
  const {
    connector: { type: connectorType, id: connectorId, ...otherConnectorAttrs },
  } = doc.attributes;
  const { references = [] } = doc;
  return {
    ...doc,
    attributes: {
      ...doc.attributes,
      connector: otherConnectorAttrs,
    },
    references: [...references, { type: connectorType, id: connectorId, name: 'connector' }],
  };
}

...

// Use this migration function where the "case" object type is registered
migrations: {
  '7.16.0': migrateCaseToV716,
},
Reminder, don’t forget to add unit tests and integration tests!
Are there any "deep links" to these objects?
A deep link is a URL to a page that shows a specific object. End-users may bookmark these URLs or schedule reports with them, so it is critical to ensure that these URLs continue working. The image below shows an example of a deep link to a Canvas workpad object:
Note that some URLs may contain deep links to multiple objects, for example, a Dashboard and a filter for an Index Pattern.
⚠️ This step will preferably be completed in the 7.16 release; it must be completed no later than the 8.0 release. ⚠️
Update your code to use the new SavedObjectsClient
resolve() method instead of
get()
If you answered "Yes" to Question 2, you need to make sure that when you use the SavedObjectsClient to fetch an object
using its ID, you use a different API to do so. The existing
get() function will only find an object using its current ID. To make sure
your existing deep link URLs don’t break, you should use the new
resolve() function; this
attempts to find an object using its old ID and its current ID.
In a nutshell, if your deep link page had something like this before:
const savedObject = await savedObjectsClient.get(objType, objId);
You’ll need to change it to this:
const resolveResult = await savedObjectsClient.resolve(objType, objId);
const savedObject = resolveResult.saved_object;
See an example of this in step 2 of the POC!
The SavedObjectsResolveResponse interface has three fields, summarized below:
saved_object - The saved object that was found.
outcome - One of the following values:
'exactMatch' | 'aliasMatch' | 'conflict'
alias_target_id - This is defined if the outcome is
'aliasMatch' or
'conflict'. It means that a legacy URL alias with this ID points to an object with a different ID.
The SavedObjectsClient is available both on the server-side and the client-side. You may be fetching the object on the server-side via a
custom HTTP route, or you may be fetching it on the client-side directly. Either way, the
outcome and
alias_target_id fields need to be
passed to your client-side code, and you should update your UI accordingly in the next step.
You don’t need to use
resolve() everywhere, you should only use it for deep
links!
⚠️ This step will preferably be completed in the 7.16 release; it must be completed no later than the 8.0 release. ⚠️
Update your client-side code to correctly handle the three different
resolve() outcomes
The Spaces plugin API exposes React components and functions that you should use to render your UI in a consistent manner for end-users. Your UI will need to use the Core HTTP service and the Spaces plugin API to do this.
Your page should change according to the outcome:
See an example of this in step 3 of the POC!
Update your plugin’s
kibana.json to add a dependency on the Spaces plugin:
... "optionalPlugins": ["spaces"]
Update your plugin’s
tsconfig.json to add a dependency on the Spaces plugin's type definitions:
... "references": [ ... { "path": "../spaces/tsconfig.json" }, ]
Update your Plugin class implementation to depend on the Core HTTP service and Spaces plugin API:
interface PluginStartDeps {
  spaces?: SpacesPluginStart;
}

export class MyPlugin implements Plugin<{}, {}, {}, PluginStartDeps> {
  public setup(core: CoreSetup<PluginStartDeps>) {
    core.application.register({
      ...
      async mount(appMountParams: AppMountParameters) {
        const [coreStart, pluginStartDeps] = await core.getStartServices();
        const { http } = coreStart;
        const { spaces: spacesApi } = pluginStartDeps;
        ...
        // pass `http` and `spacesApi` to your app when you render it
      },
    });
    ...
  }
}
In your deep link page, add a check for the
'aliasMatch' outcome:
if (spacesApi && resolveResult.outcome === 'aliasMatch') {
  // We found this object by a legacy URL alias from its old ID; redirect the user to the page with its new ID, preserving any URL hash
  const newObjectId = resolveResult.alias_target_id!; // This is always defined if outcome === 'aliasMatch'
  const newPath = `/this/page/${newObjectId}${window.location.hash}`; // Use the *local* path within this app (do not include the "/app/appId" prefix)
  await spacesApi.ui.redirectLegacyUrl(newPath, OBJECT_NOUN);
  return;
}
Note that
OBJECT_NOUN is optional, it just changes "object" in the toast to whatever you specify — you may want the toast to say "dashboard" or "index pattern" instead!
And finally, in your deep link page, add a function that will create a callout in the case of a
'conflict' outcome:
const getLegacyUrlConflictCallout = () => {
  // This function returns a callout component *if* we have encountered a "legacy URL conflict" scenario
  if (spacesApi && resolveResult.outcome === 'conflict') {
    // We have resolved to one object, but another object has a legacy URL alias associated with this ID/page. We should display a
    // callout with a warning for the user, and provide a way for them to navigate to the other object.
    const currentObjectId = savedObject.id;
    const otherObjectId = resolveResult.alias_target_id!; // This is always defined if outcome === 'conflict'
    const otherObjectPath = `/this/page/${otherObjectId}${window.location.hash}`; // Use the *local* path within this app (do not include the "/app/appId" prefix)
    return (
      <>
        {spacesApi.ui.components.getLegacyUrlConflict({
          objectNoun: OBJECT_NOUN,
          currentObjectId,
          otherObjectId,
          otherObjectPath,
        })}
        <EuiSpacer />
      </>
    );
  }
  return null;
};

...

return (
  <EuiPage>
    <EuiPageBody>
      <EuiPageContent>
        {/* If we have a legacy URL conflict callout to display, show it at the top of the page */}
        {getLegacyUrlConflictCallout()}
        <EuiPageContentHeader>
        ...
);
- Generate staging data and test your page’s behavior with the different outcomes.
Reminder, don’t forget to add unit tests and functional tests!
⚠️ This step must be completed in the 8.0 release (no earlier and no later). ⚠️
Update your server-side code to convert these objects to become "share-capable"
After Step 3 is complete, you can add the code to convert your objects.
The previous steps can be backported to the 7.x branch, but this step — the conversion itself — can only take place in 8.0! You should use a separate pull request for this.
When you register your object, you need to change the
namespaceType and also add a
convertToMultiNamespaceTypeVersion field. This
special field will trigger the actual conversion that will take place during the Core migration upgrade process when a user installs the
Kibana 8.0 release:
See an example of this in step 4 of the POC!
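The shape of that registration change can be sketched like this. Note this is a hedged illustration: the field names namespaceType and convertToMultiNamespaceTypeVersion are the ones this guide describes, but the surrounding registration options (mappings, hidden, the hypothetical "note" type) are assumptions to verify against your Kibana version.

```typescript
// Hypothetical "note" saved-object type, before and after the 8.0 conversion.
const noteTypeBefore = {
  name: 'note',
  hidden: false,
  namespaceType: 'single' as const, // isolated: the object exists in exactly one space
  mappings: { properties: { title: { type: 'text' } } },
};

const noteTypeAfter = {
  ...noteTypeBefore,
  namespaceType: 'multiple-isolated' as const, // share-capable ('multiple' would be fully shareable)
  convertToMultiNamespaceTypeVersion: '8.0.0', // triggers the conversion during the 8.0 upgrade
};

// In a real plugin this object would be passed to core.savedObjects.registerType(...).
console.log(noteTypeAfter.namespaceType, noteTypeAfter.convertToMultiNamespaceTypeVersion);
```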
Reminder, don’t forget to add integration tests!
Are these objects encrypted?
Saved objects can optionally be encrypted by using the Encrypted Saved Objects plugin. Very few object types are encrypted, so most plugin developers will not be affected.
⚠️ This step must be completed in the 8.0 release (no earlier and no later). ⚠️
Update your server-side code to add an Encrypted Saved Object (ESO) migration for these objects
If you answered "Yes" to Question 3, you need to take additional steps to make sure that your objects can still be decrypted after the conversion process. Encrypted saved objects use some fields as part of "additionally authenticated data" (AAD) to defend against different types of cryptographic attacks. The object ID is part of this AAD, and so it follows that after the object's ID is changed, the object will not be able to be decrypted with the standard process.
To mitigate this, you need to add a "no-op" ESO migration that will be applied immediately after the object is converted during the 8.0 upgrade process. This will decrypt the object using its old ID and then re-encrypt it using its new ID:
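A toy model may help make the AAD issue concrete. This is emphatically not the EncryptedSavedObjects API (real ESO uses proper authenticated encryption, and the migration is registered through the plugin's own helpers); the sketch only shows why a decrypt-with-old-ID, re-encrypt-with-new-ID step is needed:

```typescript
// Toy model of "additionally authenticated data" (NOT the real EncryptedSavedObjects API).
// The object ID is mixed into an authentication tag, so a blob created under the old ID
// fails its integrity check if you try to decrypt it under the new ID.
function authTag(objectId: string, text: string): number {
  let h = 0;
  for (const ch of objectId + '|' + text) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

interface EncryptedBlob { data: string; tag: number; }

function encrypt(plaintext: string, objectId: string): EncryptedBlob {
  // A real implementation would also encrypt `data`; the tag is what matters for this sketch.
  return { data: plaintext, tag: authTag(objectId, plaintext) };
}

function decrypt(blob: EncryptedBlob, objectId: string): string {
  if (authTag(objectId, blob.data) !== blob.tag) throw new Error('AAD check failed');
  return blob.data;
}

// The "no-op" ESO migration, conceptually: decrypt with the OLD ID, re-encrypt with the NEW ID.
const stored = encrypt('secret-api-key', 'old-id-123');
const migratedBlob = encrypt(decrypt(stored, 'old-id-123'), 'new-id-abc');
console.log(decrypt(migratedBlob, 'new-id-abc')); // secret-api-key
```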
Reminder, don’t forget to add unit tests and integration tests!
Update your code to make your objects shareable
This is not required for the 8.0 release; this additional information will be added in the near future!
We implemented the share-capable object type as an intermediate step for consumers who currently have isolated objects, but are not yet ready to support fully shareable objects. This is primarily because we want to make sure all object types are converted at the same time in the 8.0 release to minimize confusion and disruption for the end-user experience.
We realize that the conversion process and all that it entails can be a not-insignificant amount of work for some Kibana teams to prepare for by the 8.0 release. As long as an object is made share-capable, that ensures that its ID will be globally unique, so it will be trivial to make that object shareable later on when the time is right.
A developer can easily flip a switch to make a share-capable object into a shareable one, since these are both serialized the same way. However, we envision that each consumer will need to enact their own plan and make additional UI changes when making an object shareable. For example, some users may not have access to the Saved Objects Management page, but we still want those users to be able to see what space(s) their objects exist in and share them to other spaces. Each application should add the appropriate UI controls to handle this.
This is because of how isolated objects are serialized to raw Elasticsearch documents. Each raw document ID today contains its space ID (namespace) as a prefix. When objects are copied or imported to other spaces, they keep the same object ID, they just have a different prefix when they are serialized to Elasticsearch. This has resulted in a situation where many Kibana installations have saved objects in different spaces with the same object ID:
Once an object is converted, we need to remove this prefix. Because of limitations with our migration process, we cannot actively check if this would result in a conflict. Therefore, we decided to pre-emptively regenerate the object ID for every object in a non-Default space to ensure that every object ID becomes globally unique:
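The serialization difference can be sketched as follows. This is a simplified illustration; the real logic lives in Kibana's saved-objects serializer, and the exact prefix format is an assumption here.

```typescript
// Simplified sketch of how a saved object's raw Elasticsearch _id is built.
// Isolated types prefix the space (namespace); converted/shareable types do not,
// which is why object IDs must become globally unique after the conversion.
function rawId(namespace: string | undefined, type: string, id: string, isolated: boolean): string {
  if (isolated && namespace && namespace !== 'default') {
    return `${namespace}:${type}:${id}`; // e.g. "marketing:dashboard:123"
  }
  return `${type}:${id}`; // e.g. "dashboard:123" (Default space, or a converted type)
}

// Before conversion, the same object ID can exist in two spaces without clashing:
console.log(rawId('marketing', 'dashboard', '123', true)); // marketing:dashboard:123
console.log(rawId(undefined, 'dashboard', '123', true));   // dashboard:123

// After conversion the prefix is gone, so both copies would collide at "dashboard:123";
// this is why IDs of objects in non-Default spaces are regenerated during the upgrade.
console.log(rawId('marketing', 'dashboard', '123', false)); // dashboard:123
```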
As mentioned in Question 2, some URLs may contain multiple object IDs, effectively deep linking to multiple objects. These should be handled on a case-by-case basis at the plugin owner’s discretion. A good rule of thumb is:
- The "primary" object on the page should always handle the three
resolve() outcomes as described in Step 3.
Any "secondary" objects on the page may handle the outcomes differently. If the secondary object ID is not important (for example, it just functions as a page anchor), it may make more sense to ignore the different outcomes. If the secondary object is important but it is not directly represented in the UI, it may make more sense to throw a descriptive error when a
'conflict' outcome is encountered.
Embeddables should use
spacesApi.ui.components.getEmbeddableLegacyUrlConflict to render conflict errors:
Viewing details shows the user how to disable the alias and fix the problem using the _disable_legacy_url_aliases API:
- If the secondary object is resolved by an external service (such as the index pattern service), the service should simply make the full outcome available to consumers.
Ideally, if a secondary object on a deep link page resolves to an
'aliasMatch' outcome, the consumer should redirect the user to a URL
with the new ID and display a toast message. The reason for this is that we don’t want users relying on legacy URL aliases more often than
necessary. However, such handling of secondary objects is not considered critical for the 8.0 release.
As depicted above, when an object is converted to become share-capable, if it exists in a non-Default space, its ID gets changed. To preserve its old ID, we also create a special object called a legacy URL alias ("alias" for short); this alias retains the target object’s old ID (sourceId), and it contains a pointer to the target object’s new ID (targetId).
Aliases are meant to be mostly invisible to end-users by design. There is no UI to manage them directly. Our vision is that aliases will be used as a stop-gap to help us through the 8.0 upgrade process, but we will nudge users away from relying on aliases so we can eventually deprecate and remove them.
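A legacy URL alias can be pictured as a tiny record of its own. The shape below is an assumption for illustration (in Kibana the alias is itself stored as a saved object), but it captures the sourceId/targetId relationship described above:

```typescript
// Illustrative shape of a legacy URL alias created during conversion (assumed fields).
interface LegacyUrlAlias {
  sourceId: string;        // the object's old ID, from before the conversion
  targetId: string;        // the object's new, regenerated ID
  targetType: string;      // e.g. 'dashboard'
  targetNamespace: string; // the space the object lives in
  disabled?: boolean;      // aliases can be turned off via the _disable_legacy_url_aliases API
}

const alias: LegacyUrlAlias = {
  sourceId: '123',
  targetId: 'a1b2c3',
  targetType: 'dashboard',
  targetNamespace: 'marketing',
};

// Resolving 'dashboard' with ID '123' in the 'marketing' space would follow this alias
// and report an 'aliasMatch' outcome with alias_target_id === 'a1b2c3'.
console.log(`${alias.targetType}:${alias.sourceId} -> ${alias.targetType}:${alias.targetId}`);
```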
The
resolve() function checks both if an object with the given ID exists, and if an object has an alias with the given ID.
- If only the former is true, the outcome is an
'exactMatch' — we found the exact object we were looking for.
- If only the latter is true, the outcome is an
'aliasMatch' — we found an alias with this ID, that pointed us to an object with a different ID.
- Finally, if both conditions are true, the outcome is a
'conflict' — we found two objects using this ID. Instead of returning an error in this situation, in the interest of usability, we decided to return the most correct match, which is the exact match. By informing the consumer that this is a conflict, the consumer can render an appropriate UI to the end-user explaining that this might not be the object they are actually looking for.
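The decision logic in the three bullets above can be condensed into a small function (an illustrative sketch, not Kibana's actual implementation):

```typescript
type ResolveOutcome = 'exactMatch' | 'aliasMatch' | 'conflict' | 'notFound';

// Given whether an object exists with this exact ID, and whether a legacy URL
// alias with this ID points at some object, pick the resolve() outcome.
function resolveOutcome(exactIdExists: boolean, aliasExists: boolean): ResolveOutcome {
  if (exactIdExists && aliasExists) return 'conflict'; // both found: return the exact match, but flag the conflict
  if (exactIdExists) return 'exactMatch';              // found the exact object we were looking for
  if (aliasExists) return 'aliasMatch';                // the alias redirects us to the object's new ID
  return 'notFound';                                   // neither found: Kibana's resolve() reports an error instead
}

console.log(resolveOutcome(true, false)); // exactMatch
console.log(resolveOutcome(false, true)); // aliasMatch
console.log(resolveOutcome(true, true));  // conflict
```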
Outcome 1
When you resolve an object with its current ID, the outcome is an
'exactMatch':
This can happen in the Default space and in non-Default spaces.
Outcome 2
When you resolve an object with its old ID (the ID of its alias), the outcome is an
'aliasMatch':
This outcome can only happen in non-Default spaces.
Outcome 3
The third outcome is an edge case that is a combination of the others. If you resolve an object ID and two objects are found — one as an
exact match, the other as an alias match — the outcome is a
'conflict':
We actually have controls in place to prevent this scenario from happening when you share, import, or copy objects. However, this scenario could still happen in a few different situations, if objects are created a certain way or if a user tampers with an object’s raw ES document. Since we can’t 100% rule out this scenario, we must handle it gracefully, but we do expect this will be a rare occurrence.
It is important to note that when a
'conflict' occurs, the object that is returned is the "most correct" match — the one with the ID that
exactly matches.
Reading through this guide, you may think it is safer or better to use
resolve() everywhere instead of
get(). Actually, we made an
explicit design decision to add a separate
resolve() function because we want to limit the effects of and reliance upon legacy URL
aliases. To that end, we collect anonymous usage data based on how many times
resolve() is used and the different outcomes are
encountered. That usage data is less useful if
resolve() is used more often than necessary.
Ultimately,
resolve() should only be used for data flows that involve a user-controlled deep link to an object. There is no reason to
change any other data flows to use
resolve().
External plugins (those not shipped with Kibana) can use this guide to convert any isolated objects to become share-capable or fully shareable! If you are an external plugin developer, the steps are the same, but you don’t need to worry about getting anything done before a specific release. The only thing you need to know is that your plugin cannot convert your objects until the 8.0 release.
Towards the end of the 20th century, computers could already work with several programs simultaneously. This is normally called multitasking.
However, only one program could execute on the single computing unit at a given time. So the risk was that one program could monopolize the CPU, causing other applications and the Operating System itself to wait forever.
So the Operating System designers decided to split the physical unit into several virtualized processors, each of which could be given a certain amount of processor time for an executing program.
In addition, the Operating System itself had to be given higher-priority access to the processor, so that it could schedule CPU access across the different programs.
This implementation is what we call a thread.
A thread is a path of execution that exists within a process. Threads are like virtual processors assigned to specific programs; each program then runs its threads independently.
A process is an instance of a program that is being executed. A process can comprise one or more threads.
Processors were becoming faster and faster, being able to execute more instructions per second.
But later on, modern processors started to gain more computing cores instead of becoming faster. However, programs written in the older way could not take advantage of this increase in power via multiple cores, since they were designed to run on a single-core processor.
So nowadays it is important to write programs that can use more than one computing core. This allows them to effectively utilize the modern processors power.
To achieve this we can execute tasks using multiple threads. The threads should then be able to properly communicate and synchronize with each other.
Creating a Thread
The
Thread class is defined in the
System.Threading namespace.
using System.Threading;
Let’s look at an example of creating a thread that prints out stars defined in an array. In fact we have two arrays: one stars array and the other nebulas array.
The stars array will be created and rendered from our background thread, while the nebulas array will be created and printed out from the main thread.
using System;
using System.Threading;

namespace MsThread
{
    class Program
    {
        static void Main()
        {
            Thread t = new Thread(showStars);
            t.Start();
            showNebulas();
            Console.ReadLine();
        }

        static void showNebulas()
        {
            Console.WriteLine("Main Thread...................");
            string[] nebulas = { "Horse Head", "Ghost Head", "Orion", "Pelican", "Witch Head", "Helix", "Boomerang", "Bernad 68" };
            foreach (var nebula in nebulas)
            {
                Console.WriteLine(nebula);
            }
            Console.WriteLine();
        }

        static void showStars()
        {
            Console.WriteLine("Starting Stars Thread.............");
            string[] stars = { "UY Scuti", "VY Canis Majoris", "VV Cephei A", "NML Cygni", "Betelgeuse", "Antares", "Rigel", "Aldebaran" };
            foreach (var star in stars)
            {
                Console.WriteLine(star);
            }
        }
    }
}
Result (output order may vary between runs, since the two threads run concurrently)
Main Thread...................
Horse Head
Ghost Head
Orion
Pelican
Witch Head
Helix
Boomerang
Bernad 68
Starting Stars Thread.............
UY Scuti
VY Canis Majoris
VV Cephei A
NML Cygni
Betelgeuse
Antares
Rigel
Aldebaran
We create a thread by instantiating the
System.Threading.Thread class.
Then we pass an instance of
ThreadStart or
ParameterizedThreadStart delegate via the constructor.
The C# compiler will then create an object behind the scenes as long as we pass the name of the method we want to run in the different thread.
To start the thread we use the
Start() method.
The showNebulas() method, on the other hand, will run on the main thread.
Best Regards.
Provided by: libzip-dev_1.5.1-0ubuntu1_amd64
NAME
zip_file_set_encryption — set encryption method for file in zip
LIBRARY
libzip (-lzip)
SYNOPSIS
#include <zip.h>

int zip_file_set_encryption(zip_t *archive, zip_uint64_t index, zip_uint16_t method, const char *password);
DESCRIPTION
The zip_file_set_encryption() function sets the encryption method for the file at position index in the zip archive to method using the password password. The method is the same as returned by zip_stat(3). For the method argument, currently only the following values are supported:

ZIP_EM_NONE       No encryption.

ZIP_EM_AES_128    Winzip AES-128 encryption.

ZIP_EM_AES_192    Winzip AES-192 encryption.

ZIP_EM_AES_256    Winzip AES-256 encryption.

If password is NULL, the default password provided by zip_set_default_password(3) is used. The current encryption method for a file in a zip archive can be determined using zip_stat(3).
RETURN VALUES
Upon successful completion 0 is returned. Otherwise, -1 is returned and the error information in archive is set to indicate the error.
ERRORS
zip_file_set_encryption() fails if:

[ZIP_ER_ENCRNOTSUPP]  Unsupported encryption method requested.

[ZIP_ER_INVAL]        index is not a valid file index in archive, or the argument combination is invalid.

[ZIP_ER_MEMORY]       Required memory could not be allocated.

[ZIP_ER_RDONLY]       Read-only zip file, no changes allowed.
SEE ALSO
libzip(3), zip_set_default_password(3), zip_stat(3)
HISTORY
zip_file_set_encryption() was added in libzip 1.2.0.
AUTHORS
Dieter Baron <dillo@nih.at> and Thomas Klausner <tk@giga.or.at>
iOS Swift Login
Sample Project
Download a sample project specific to this tutorial configured with your Auth0 API Keys.
- CocoaPods 1.2.1
- Xcode 8.3.2 (8E2002)
- iPhone 7 - iOS 10.3 (14E269)
The first step in adding authentication to your iOS application is to provide a way for your users to log in. The fastest, most secure, and most feature-rich way to do this with Auth0 is to use the login page.
Install the callback
Auth0 will need to handle the callback of this authentication. Add the following to your
AppDelegate:

Implement the Login
First, import the
Auth0 module in the file where you want to present the hosted login page:
import Auth0
Then present the hosted login screen, like this:
// HomeViewController.swift
Auth0
    .webAuth()
    .audience("")
    .start {
        switch $0 {
        case .failure(let error):
            // Handle the error
            print("Error: \(error)")
        case .success(let credentials):
            // Do something with credentials e.g.: save them.
            // Auth0 will automatically dismiss the hosted login page
            print("Credentials: \(credentials)")
        }
    }
Upon successful authentication, the user's
credentials will be returned. The hosted login page is the recommended way to implement login (as covered in this tutorial), but if you wish to embed the Lock widget directly in your application, you can follow the Embedded Login sample.
Old /dpt/ at >>51494191 What are you working on?
First for C
How do i memory optimization?
>creating a thread more than 30 minutes after an identical one when the bump limit hasn't been reached
kill yourself
>>51501312
that code in the pic is making me bilious.
>>51501407
THANKS FOR POSTING IN BETTER THRED
REEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
Getting myself ready to do Launchcode.org's "coding challenge", basically the first step towards getting an interview with them
they say to watch this
and then do this practice test
and then the actual thing is timed at 90 minutes with three sections and you can use whatever language you want. I dunno hopefully I don't buttfuck it so hard they laugh in the office and never call me
>>51501450
what kind of bullshit test is this?
How am I supposed to solve their number counter challenge if I can't edit the main loop?
>>51501312
something to replace notepad
>>51501312
shows how microsoft didn't advance
they still have the same shit on the gui
nothing progressed since win 3.11
its f*cking garbage
>>51501450
I can't help but notice this guy seems like a fucking idiot about solving these problems. He was given a plain English description of exactly what he needed to do, and yet was acting as if he was trying to find out what it was that the question was asking him to do.
I can fucking solve the goddamn challenge faster than him.

void countUp(int start) {
    for (int i = 1; i < 10; i++) {
        printf("%d then ", start + i);
    }
    printf("%d", start + 10);
}
why isn't there a sticky for these threads
how do i get started with this programming thing lads
>>51502186
go back to reddit you cancerous faggot
>>51502186
>why isn't there a sticky for these threads
Because they are cyclical. No mod wants to have to sticky a thread, and then delete and re-sticky a thread every time they get too large to load.
>how do i get started with this programming thing lads
Learn C. I recommend C Primer Plus as a choice of books.
>>51502452
fuck c
python
>>51502590
Pythonfags are by far the worst memesters.
>>51502452
>C Primer Plus
thanks
>>51502601
why is python meme
>tfw solely used notepad to program C for 3 years
it was a learning experience
>>51502641
Python is not a meme. C is the biggest meme.
>>51502671
>learning experience
What did you learn by doing that?
>>51502733
>What did you learn by doing that?
pain.
>>51501368
a) Primarily by moving you're code over to Sepples.
b) see a)
>>51502733
i had previously used IDEs and let them handle the compiling and whatnot, so i didn't completely understand what was going on when i hit the "compile and run" button
i learned to use gcc and make
now i also make fewer simple mistakes (the kind that IDEs handle for you)
also this >>51502763
>>51502826
Why didn't you use Vim? You would learn all that without pain.
>>51502452
Why C? C++ is the first language I learned and I don't think it was necessarily helpful to do so. I think C would have been even less helpful to learn first.
>>51502902
I'm not so sure either. But c/c++ teaches you very well that everything is basically nothing but a series of pointers pointing at arrays.
Knowing that changes how you program, but javascript guys get things done more quickly
>>51502926
It's not really helpful 'knowing' that IMO. I don't know if that changed how I program because I started with it, but these days I expend a lot of effort into not thinking about things like that.
>>51502926
You are ignorant.
Modern C++ (the proper way of writing C++) doesn't use raw pointers. Also, everything is not a series of pointers. Pointers are just features that some programming languages support.
JavaScript guys will get things done more quickly and more safely than C guys.
>>51502950
Pointers still form the foundation of C++. People did overuse pointers in the past, although even with C++98 that was poor form. Thinking about memory as locations that can be accessed with pointers is... still as 'helpful' as it has ever been.
should I learn python as my first language?
i learned a bit of java and I fucking hated it
println is WAY better than Systen.Out.NiGgErS.ToUnGE[].!My.Angu$
>>51503108
Don't fall "Java is bad" meme. It is good.
Also, you will like Python if you don't like typing much.
>>51503118
thank god
is it marketable / useful?
>>51503108
Java doesn't fuck about, its names are famously verbose.
Python has shorter names and is generally shorter than Java (and more verbose in all its friggin docs, look forward to being eternally spoon-fed you entitled swine).
>>51503128
If you are to have a palette of different languages, a scripting language like Python is usually worth having. If I had to chose one scripting language it would be Python.
>>51503108
>i learned a bit of java and I fucking hated it
I used to hate it as well but now it's my language of choice next to c++.
Can someone tell me why the resource is not applied to this button? I have this in my Page.Resources:

<Style x:Key="sideButton" TargetType="Button">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="Button">
<Grid Background="{TemplateBinding Background}">
<ContentPresenter HorizontalAlignment="Center"
VerticalAlignment="Center"/>
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
<Setter Property="Background" Value="#FFF"/>
<Setter Property="Height" Value="50"/>
</Style>
and this in my codebehind:

Button sideButton = new Button();
sideButton.Name = "btn";
sideButton.Content = "HelloWorld";
sideButton.Style = Application.Current.Resources["sideButton"] as Style;
sideButton.Click += sideButton_Click;
rightPanel.Children.Add(sideButton);
Not sure if i'm missing something simple or completely off the mark
>>51501312
> pic
Should have typed this in MS Word.
do you ever have a moment where you remember why you love programming?
I was sitting around with my roommate who is working on a compiler and we were sitting and whiteboarding out ideas and I was just kind of in awe at how much fun I was having. Programming is just so satisfying when you're doing logic heavy stuff.
does anyone else scrape websites only using substrings and indexof?
is there a better way? I've thought about using regex but I'm already so good at it this way
>>51503302
relevant
>>51503298
notepad is plenty retarded to write code in
>>51503302
I only have these moments on my own.
>>51503302
>>51503339
i want normies to leave
>>51503355
Dunno about normies but I feel like the people who say stuff like "love programming" are deluding themselves.
I don't love programming, I do it because it's the kind of habit I get into trying to make things happen. Sometimes I appreciate the result but I don't 'love' programming, that's just a step in the journey.
>>51503355
Oh, you'll get your time.
Soon enough I'll have to do some work for a living and you can have your toys to yourself.
What advantages would switching from sublime to emacs or vim bring me other than 1337 cred
>>51503384
Wow you really are cancer
>>51503420
Lots of editors have okay to decent vim bindings provided as plugins or whatnot, so learning vim could be quite advantageous in being able to work with lots of different editors efficiently.
Er and bash and gdb use commands similar to emacs so I guess that's helpful. Not really that helpful.
>>51501312
>inb4 bait
>void main
>returning a value
>returning anything other than 0 for a successful run
>not using a semicolon ;
why would i use anything other than this beauty?
>>51503463
>emacs
Well if you can use it, sure. When you get RSI you can come join the vim club.
>>51503461
You missed some.
>incorrect preprocessor syntax
>string[]
>indentation
>pipe
>using true in what is probably meant to be C
is there a built in function in directx for transforming a vector of normalized device coordinates to screens space using a D3D11_VIEWPORT structure or do i have to do that manually?
>>51503463
>const correctness
Missing some consts mate
Is this as fast as it can get? The actual code is in C, I pasted it in pseudopython just to make it less verbose:

running_queue = []
finished_queue = []
def worker()
while True:
# i make a local copy so that other threads
# can still push stuff into running_queue
lock(running_queue)
local_queue = running_queue.pop_all_items()
unlock(running_queue)
while local_queue:
task = local_queue.pop()
task.run()
if task.finished():
lock(finished_queue)
finished_queue.push(task)
unlock(finished_queue)
else:
lock(running_queue)
running_queue.push(task)
unlock(running_queue)
def main()
while True:
push_some_tasks()
lock(finished_queue)
while finished_queue:
task = finished_queue.pop()
task.callback()
unlock(finished_queue)
I am the only one that find low level programming easier than high level programming ?
>>51504017
No.
>>51504017
No. I think it's because the lower you get the more well defined the things are.
>>51502072
Not <=10 to save that last line???
>>51504052
>>51504053
I've spent two days trying to create a simple app for my Ubuntu Touch in QML. I never thought a language could be that bad.
Whatever. I'm going to look for another language and library to create something good.
I really wonder how people are still developing in .net and other bullshit like that.
>>51503463
nice warnings faggot
>>51501450
>Just for fun. No pressure. Watch this video for tips on how to rock your interview.
not sure if socially retarded or catering to pussy ass faggots
>>51504165
then it would print an extraneous " then " at the end
>>51502950
>>51503096
It's important to differentiate between 'user applications' (ie, the stuff being written largely by the ppl on /g/), and 'library code'.
Library code relies heavily on low-level coding approaches such as heavy pointer use. This allows them to have the efficiency demanded by library code.
User applications, however, aren't the place for low-level code. Rely on the abstractions provided by good library code. That way user application code is easier to reason about, and still efficient, without getting bogged down in the details of low-level structures, implementations, etc.
This is the essence of modern C++. But make no mistake, pointers are still very important, but don't typically use them you'reself.
I would say python then c++ and if you still enjoy it by then pick up C# or java.
Those three langs above make for a well rounded programming toolkit.
Python is easiest -- the simple syntax and readability will allow you to focus on implementing logic into your program without getting too frustrated.
C++ is actually a lot like python, just a lot more low level too. You have to be more explicit with C++ in telling it what you want. It won't hold your hand as much as python. You'll learn about memory management and making the most out of properly designed OOP. Plus you *essentially* learn a bit of C while learning C++ (C++ = C plus a bunch of extra library stuff)
Lastly with C# or Java, it's kinda the best of both worlds. These langs meet halfway between python and C++. These are good for making a full project but still doing so with some ease. Python is a scripting language and is good for quick problem solving but may not be the best choice for a single solution to a large project.
C/C++ could also be used for entire projects but could be more work than necessary - often, only speed critical parts of a program are written in C/C++ (as it's very fast).
C# and Java are very popular in the 'real' world and a lot of enterprise code is written with these langs.
>>51504359
>python
get out
>>51503211
If you want to end up as jobless faggots like these two though..
>>51504371
>>51504394
..then just learn Lisp and call it a day
>>51503108
absolutely disgusting please fuck off to >>>/vg/agdg
>>51504371
Plus.
"[since 2014] Pythonis currently the most popular language for teaching introductory computer science courses at top-ranked U.S. departments.
Specifically, eight of the top 10 CS departments (80%), and 27 of the top 39 (69%), teach Python in introductory CS0 or CS1 courses."
>>51504416
>>>/trash/ with your speech impediment
>>51504440
as if that's a good thing
for my first language, should I learn java, python, or C?
>inb4 assembly
>>51504440
python really goes well with diversity doesn't it mate
>>51504440
>eight of the top 10 CS departments (80%)
that's because those cs people can't code for shit, they are competent at cs but not at coding, the professors just want to make their life easier.
>>51504461
Out of those three, C
But really you should start with C++.
If you learn C++, you will learn most C along the way.
>>51504461
>>51504482
Ignore the Java fag. It's literally the worst language.
>>51504440
It's simply an artifact of institutions trying to cater to women and other minorities. However industry wants less and less to do with CS programs (predictably) when looking for developers. They go increasingly to CE and EE graduates, because they have been exposed to real-world problems and those programs stress solid engineering approaches, not "muh cute fluffy bunnies".
Python itself is a nice language when coupled with good libraries, but programs using it to introduce programming are not.
>>51504469
Thats also a good point. Its easier on the sudents, the professors and it teaches basic OOP and allows students to take their knowldge further with other languages.
So whats the problem again?
>>51504461
none of the above?
>>51504474
this.
>>51504522
Fuck off Bernie
>>51504536
I swear im not bernie sanders
Why doesn't /dpt/ use Common Lisp? Any reason except "I don't know lisp and it looks daunting to learn"?
>>51504612
Because some of us need to support ourselves rather than relying on mommy and her spare basement.
>>51504612
See
>>51504416
>>51504612
It's meme tier and lacks actual api
>>51503252
you really need some MVVM up in this, yo. it'll make things a lot easier in the long run
>>51504482
.
literally this except python is a horrible language regardless
>>51504612
why would i use common lisp? any reason except "hurr i'm a neckbeard neet"?
>>51501312
Building a Lisp, implemented in the following way with the help of Objective-C, YaCC, Lex, and itself:
This Lisp is called Valutron. It is architected as an integrated compiler-VM; essentially, this means that code, when entered at the REPL, is first converted into Cons trees, and then mapped down to primitive operations at the stage of evaluation. These primitive operations control a register VM, which operates on Cons cells and Objective-C objects representing Lisp values, and which is implemented as threaded code for maximum performance.
This form was chosen to make it possible to implement continuations easily and to promote good performance. At this point I have described how it implements a basic Scheme-esque Lisp. The VM, as I explained, operates natively on Cons cells. Implemented in Valutron itself are more extensible versions of some primitives like eval; these functions are registered with the VM and, from there on, replace the primitives with themselves.
Here is where Valutron offers a feature called V-expressions. V-expressions are an alternative to S-expression notation for code. They are low in the number of brackets characteristic of S-expressions, and are instead primarily algebraic-style infix. These are implemented by using reader macros. I am going to look into integrating a shift/reduce GLR parser that may have productions and rules dynamically altered at runtime, which will allow the extension and modification of V-expression syntax as well as even creating new DSLs with unique syntax.
The pièce de résistance is an object model inspired by the CL Object System, the first ever standardised object-oriented system, which was designed as part of Common Lisp. This is implemented in terms of the Objective-C runtime and hence the full dynamism of Smalltalk, Objective-C, and CLOS are available. The Objective-C runtime used was extended in order to support double dispatch, making available CLOS' characteristic multi-methods.
>>51504995
gay
>>51505038
2nded
>>51505038
this
should i make a website or a game
>>51505067
website. video games are for degenerates. the world will be better off when the kikes are dead and video games are a thing of the past.
>>51505073
games are equal to movies
>>51505093
video games are a waste of time that kikes want you to believe are a work of art. like anime, vidya is how the kikes are trying to destroy the white race.
>>51505110
like movies, they are a medium to pass information
>>51505118
what info? like how you should be liberal and race mix or be a faggot? fuck that.
>>51505135
k, what should the website do/ what should it contain
>>51505067
a game
>>51505073
web dev/web design is top plebdom
>>51505135
how to contain anit-liberal information there
>>51504669
>>51504678
>>51504866
>>51504936
I said except the "I don't know lisp" reason.
>>51505404
lisp is literally useless you faggot
the only functional languages worth learning are Erlang/Elixir and even those have very limited usefulness.
>>51505404
why don't you obey Allah and the words of the prophet Mohammed? any reason except "i don't know islam"?
>>51505426
this
>>51505446
>>51505426
>literally useless
>no explanation
Again, do you have any reason except "I don't know lisp and can't be bothered because I'm a web developer"?
>>51505446
>any reason except "i don't know islam"
No, none. It's the same for you and Lisp? If so, don't bother replying because I specifically asked for reasons other than "I don't know".
>>51505426
>the only functional languages
>lisp
>functional
So I was right, you don't know Lisp. Please refrain from replying if you don't understand what the adults are talking about.
>>51505545
(>(>(>(/(t(r(a(s(h(/))))))))))
>>51505555
>muh twitter
ishiggy...
I'm currently learning C# at college - right now it's all programming for Windows phones using Visual Studio. I'm having a blast with if statements.
Also following that Bob Tabor guy's videos on C# fundamentals - they're pretty good. I've just made a super simplistic program with yes/no inputs from the user to generate new "situations" - imagine a "choose your own adventure" book but it was only 3 pages long.
God I can't wait to learn more!
>>51505426
>lisp
>functional
m8...
what's the most difficult language
I want to display an image when I click on a button. But I want to display it on the same page where the button is located. How do I achieve that ?
>>51505578
I don't have time for social media nor do I care about what any of those people have to say.
I read this shithole while I'm at work.
My quads don't lie, lisp is fucking useless. Go stroke your ego on reddit.
>>51505555
>I don't know Lisp
>I don't know C
>I'm not a programmer
what DO you know, webfag? CSS?
>>51505606
javascript?
>>51505605
haskell by far
>>51505613
>while I'm at work
did you get the CSS to render acceptably on both safari and ie8? hard stuff, wageslave, better put in some more hours!
>>51505606
CSS
don't use fucking javascript ever
>>51505613
>lisp is fucking useless
How would you know when you admitted you don't have a clue about Lisp? Did somebody you follow said that without offering some context? Don't be so gullible, anon!
>>51505615
Oh, I'm not that guy. My first reply in the thread was the quads one. I do in fact know C, it's the main language I program in when I have time. I don't work as a programmer..
>>51505641
>wageslave
Here's your (You) while you sit at your parents house wearing out your parens, buttercup!
I'll be in my office.
>>51505650
found the framework fag
>>51505622
>>51505650
I'm working with ASP.NET MVC and C#. But I guess it can be done with simple HTML.
Any code ideas ?
This is what I have so far but it opens the image in a separate cshtml.

<input type="button" value="Create" onclick="location.href='@Url.Action("DrawChart")'" />
>>51505673
>I don't work as a programmer
>I had my first website
>javascript fireworks
Your opinion is irrelevant here. Go post in a homescreen thread.
>>51505681
javascript fags are the framework fags you fucking tard. if you can't code CSS by hand you fail at life.
>>51505694
i bet you'll get better answers in >>>/g/wdg
>>51505707
Thank you.
>>51505694
Mate, you have no fucking clue what you're doing. You're trash, you're not a programmer. The "simple HTML" contains javascript, web monkey.
>ASP.NET MVC and C#
You're in over your head. Better quit and go flip some burgers.
>>51505038
>>51505044
>>51505056
How is it gay?
>>51505598
It's probably one of the key aspects that make a Lisp a Lisp that the language is built on the lambda calculus, a model of computation that focuses explicitly on the application of functions.
>Here's your (You) while you sit at your parents house wearing out your parens, buttercup!
>I'll be in my office.
Have fun with that alienation and insignificance.
>>51505707
>code CSS by hand
badass over here! great skillz, webtrash!
>>51505728
>>51505701
So because I showed interest at a young age in creating stuff, don't have a job as a codemonkey and program in C for my own pleasure and knowledge, I belong in a homescreen thread and my opinion (which it is just that, as is yours) is irrelevant?
Damn. I guess I'll be on my way then. I clearly don't belong here since I don't get paid to program and built a website almost 17 years ago while you were still probably swimming in your father's balls.
>>51505728
>Have fun with that alienation and insignificance.
i've seen some projection in my life but holy shit
>>51505728
>It's probably part of the key aspects
It's not. Please, if you don't know what you're talking about, do us all a favor and stop publicly embarrassing yourself.
>>51505613
>I'm at work
so how does it feel to be a wagec.u.c.k wasting your time working? I'm a NEET and have unlimited freedom not being enslaved by my jew boss to make him more money.
>>51505743
>young age in creating stuff
>built a website
>don't work as a programmer
>I'll be on my way then
Sounds about right.
>>51505722
U wot m8 ?
Give me a suggestion then.
>>51505743
>don't have a job
>website
>homescreen thread
>opinion is irrelevant
>don't belong here
git gud, fgt cuk
>>51505743
>26 year old trying to play the oldfag card
.
tippity top kek
>>51505673
>can't write 2 sentences to defend opinion on lisp
>blog post about his "early life as webfag"
Seriously, m8? And you expect not to be ridiculed?
>>51505779
So what you're saying is you literally have no argument. Good shit.
>>51505771
Feels pretty fucking good. I'm treated with respect here. Wasting my time? I enjoy coming to work. Much more fulfilling earning my own money, supporting myself, you know, like an adult. Stay manchild.
>>51505798
Don't have a job as a programmer. Context, reading comprehension, git non-retarded.
>>51505728
No alienation. No insignificance, at all. Project and assume more.
I'm done entertaining lispfaggots now. You all have a wonderful day.
>>51505743
Nice blog. How about elaborating on why "lisp is useless"? Can't? Oh, you don't actually know anything except the HTML you "created" 17 years ago? Figures.
>>51505827
Your "ridiculing" is weak at best and will be forgotten by the end of my shift. You're the insignificant one here.
>>51505817
>trying to play the oldfag card
How'd you extrapolate that?
>>51505842
Missed the part where I said I know C, hm? I didn't say that was all I knew either, but I'm not wasting my time "proving" myself to a) lisptards b) people in a thread who struggle with fizzbuzz.
On this note, I'm off to take an early lunch, maybe go for a walk, get paid for it; that ol' tune.
>>51505846
>built a website almost 17 years ago while you were still probably swimming in your father's balls
???
you're acting as if you made one of the first websites from scratch and hosted the server yourself back in the 90's or someshit. if you had an angelfire website or someshit with copy-pasted javascript code that's nothing worth mentioning
>>51505830
>>51505846
>treated with respect
>webfag
hahahahaha literally the trash of the industry
>don't have a job as a programmer
clearly, you're not good enough for it
>insignificant
yes, you are
>I know C
>struggle with fizzbuzz
bwahahahaahha, that's you level? no wonder you can't find a job as a programmer, htmlboy
>proving myself
what you've proven is that your shit; stay pleb
>>51505876
I wrote the HTML myself and clearly stated I did not write the JS. At nine, seeing my site online was pretty great.
>>51505885
You can't read so our conversation is over.
>>51505842
>How about elaborating on why "lisp is useless"?
>>51505871
>I said I know C
top lels (also, no, you don't know C, we can all see through your bullshit; try lying about javascript next time)
>>51505902
how fucking fascinating. please, tell us more.
>>51505902
>I wrote the HTML myself
such achievement! hahahahahaha
>>51505902
>discussion on lisp
>I wrote the HTML
dem mad skillz tho
>>51505930
What'd you do at nine? Experiment with your dicks at a best friends sleepover party?
Or maybe you turned on a computer one night after dreaming in code and churned out some Enterprise Yava software? Do tell.
let it go
>>51505959
>yava software
anon...
>>51505974
Aight.
>Go lacks muh generics
>>51505902
>I wrote the HTML myself
>>51505959
i made a website on a free hosting site for fun at like age 10 and each of my neopets had its own webpage faggot
>>51505959
>at nine
>experiment with dicks
>best friends
>Yava software
this anon clearly has discovered the path to enlightenment :^)
>>51505990
>I have selective reading issues!
>>51506017
>HTML
>programming
ebin meme fam
>>51506079
Could you please point me to the post where I said HTML was programming as opposed to a mark-up language? Never mind. I see you're just shitposting. Have fun with that, champ.
>>51506094
>programming thread
>lisp question
>muh html achievements
you dun goofed, weeb
>>51506017
>buttflummoxed
>>51506116
Now you're assuming I like anime, which I absolutely don't.
You also must be the person who I've deemed unable to read, since my "html achievements" were in reply to probably you, who assumed I didn't know lisp/C/I'm not a programmer, assumed I'm a webfag, and assumed I only did CSS.
All you have are half-assed assumptions based on nothing to defend yourself mixed in with some ad hominem. It's great.
>>51506145
you keep posting multiple line comments "bragging" about stuff you've "done" but can't write 2 sentences regarding a simple question on lisp; everyone can see you're full of shit and don't actually amount to much
so which one? , anons
>>51506145
>implying I know lisp and C
>not a word about lisp nor C programming
>shitposted half the thread about my html prowess
anon, this looks fishy...
>>51506145
>csscuk
why live?
enough
>>51506168
>keep posting multiple line comments bragging
>literally mentioned one thing which keeps being addressed yet that's my problem it's being dwelled upon.
>>51506202
I don't even know CSS. When I had the slightest interest in making a webpage (and I only made one), CSS wasn't around.
>>51506079
>Putting tags into a text file
Literally this desu senpai
>>51506218
Nevermind, I was looking at counter-strike source.
Apparently CSS was around but I hadn't heard of it then. Only for a year or so it looks like.
>>51506218
>I don't even know CSS. When I had the slightest interest in making a webpage (and I only made one), CSS wasn't around.
>>51506248
>Nevermind, I was looking at counter-strike source.
oh shit nigger could you BE any more clueless
>>51506262
I blame Google. Maybe my not paying attention a little bit as well.
Honest mistake which I corrected.
>>51505755
What the fuck is it then? John McCarthy himself says as much in his 1960 paper about Lisp, Recursive Functions of Symbolic Expressions and Their Computation by Machine.
>>51506315
It's multi-paradigm.
All you guys constantly ridiculing web devs are acting like retards and need to grow up. Absolutely disgusting.
>>51506462
I know you are but what am I?
>>51506462
we ridicule all retarded faggots, not just web devs
>>51506462
Well. At least I'm not a web dev.
>>51504612
GC and massive binaries.
>>51506544
Same as Java, C#, Go.
>>51506325
What does that have to do with its foundation upon the Lambda Calculus?
>>51506646
>functional
>>51506620
Those are all shit as well of course.
>>51505981
Well I spent 10 minutes typing up a response only to have a window pop up and backspace take me back a page. Fuck built-in keybinds. All programs should have fully user customizable keybinds.
Anyway, it isn't type safe. Imagine a func that takes two or more maps and performs a union on their elements with key precedence going to maps listed first. Well you could have the function declaration look like this:

func mapUnion(a, b map[interface{}]interface{}, rest ...map[interface{}]interface{}) (interface{}, error)

but there are problems with this. Well, the user must explicitly convert their maps to map[interface{}]interface{} before the function call. To alleviate this you can make the arguments just pure interface{}. Now what must be done by the function? It has to first check that all parameters are maps and that all the maps have the same type (or you could do it so that if the types are different you put everything into a map[interface{}]interface{}, but that will be impossible to deal with as I'll get to).
Here's where shit gets gay. If you use the type switch you talked about you either 1) have to write a 400+ line type switch to deal with all the built-in types as both keys and values, AND not be able to cover any user defined types, or 2) use the reflect library which is slow and stupid unsafe.
Finally you have to return the finished map, which you can do as a map[interface{}]interface{} or as an interface{}. The second is better because you can just do a simple type assert instead of a copy and assert. However you can't just do that, you have to make the user check the error before doing the type assert or you might be type asserting a nil, which is bad. And of course doing this precludes the ability to mesh maps of different types because the type of the map output is indeterminate unless you know the types of all the maps going in.
The entire process is shitty as fuck. Thank god that go generate exists so you can do template generics.
>>51501368
nice bomb Ahmed
>>51501368
hey, what are the 3 circles
>>51506544
>>51506680
Then the "answer" doesn't answer the question. In case you're blind, the question was "Why doesn't /dpt/ use Common Lisp?"
In case you're an idiot, logic works this way:
/dpt/ uses Java, C# and Go.
/dpt/ doesn't use Common Lisp.
"GC and massive binaries" does not discriminate the two sets, thus is not an answer.
Wanna try again?
>>51506743
Detonator buttons.
>>51506771
c'mon, I had some electrical engineering
>>51506743
Not him, but probably +1, -1, reset to 0.
>>51506785
They are buttons. Do you think I would lie to you anon?
for programming, I see
>>51506544
>GC and massive binaries.
Garbage collection is much more optimised than the atrocious excuses for programs and libraries pumped out by C fanatics who don't understand that C must be treated with extreme respect in order to avoid anything from leaking memory to segmentation faults.
In fact, manual memory management with malloc() and free() directly is one of the least efficient forms available. There's a reason why applications where high control of memory is important, for example game engines, usually implement arena/zone allocators and deletion queues instead of throwing malloc() and free(), primitive instruments, all over the place.
Manually managing your memory today, without doing so only so you can implement a more advanced system atop (like the systems found in game engines,) or in situations like OSdev where you have no choice, is nothing more than cargo-cult posturing.
>>51506667
What is your point?
>>51506843
>Garbage collection is much more optimised
Wrong. Stopped reading right there. Nice blog tho.
>>51506843
>usually implement arena/zone allocators and deletion queues
Because they can, faggot! Try to do the same in your shit GC language.
>manual memory management lets you manually manage memory
Any more revelations, snowfag?
Seems /dpt/ doesn't use Lisp because they're a bunch of retards that accidentally stumbled upon this thread while looking for /wdg/.
>>51506913
"Nice blog" is a concise way to say "I'm a dumb shitposter"
>>51507028
Nah, it's a concise way of saying "you're rambling again, sperg, if you need that many words for spewing drivel there's no wonder you can't learn anything and can't be taken seriously so I won't waste time bringing you counter arguments that you won't understand anyway".
>>51506756
/dpt/ does not use Java, C# or Go.
Wanna try again?
>>51507212
Is this your first day?
>>51506843
>C must be treated with extreme respect in order to avoid anything from leaking memory to segmentation faults.
>usually implement arena/zone allocators and deletion queues instead of throwing malloc() and free(), primitive instruments, all over the place.
You're contradicting yourself you retard.
Implementing memory arenas is easy and makes memory management dead simple (and very efficient).
>>51507212
>i thought this is the homescreen thread
fam...
>>51507230
Is it yours?
>>51507262
my what, anon?
>>51501312
>>51506997
>Because they can, faggot! Try to do the same in your shit GC language.
I already did. Specifically, I wrote a precise-tracing incremental garbage collector with copying for Valutron. I implement the core of the VM in Objective-C; I maintain my own Objective-C runtime library that includes a conservative tracing garbage collector, which is bypassed for much of Valutron since I have written a superior GC for its use, unconstrained by the limitations of the C memory model.
This is wildly different to "hurr durr i can use malloc and free directly lets use them instead of leaving the work to a library that knows better :D."
>>51506913
Take a look here.
Note that, until Chen had wildly optimised his dictionary to a degree that is totally inappropriate for all but the very most performance-intensive applications, the garbage-collected dictionary had superior performance. I won't get too much into how this works, but it is probably a product of techniques like deletion queues (which allow the program to free a whole set of garbage at a convenient time, rather than inline one-at-a-time, which is slow) and generational collection (which is designed to allow the very quick freeing of objects that aren't used for long.)
>>51507252
What are you talking about? Of course implementing arenas is easy. It is also a style that is NOT suited to everything; good luck taking your average piecemeal-deallocating program and trying to convert it to the 'all-at-once' arena model! It really has to be designed with such paradigms in mind.
Recommend me a new language
New
Old
Experimental
Anything.
I've been trying Lobster today, its pretty interesting. I'd be interested to see if anyone has done any performance testing with it.
>>51507358
>It is also a style that is NOT suited to everything;
Yes it is.
>good luck taking your average piecemeal-deallocating program
You would never do that in the first place.
Actually even arenas are overkill most of the time, a large chunk of everyday programs can get by with ZERO dynamic allocation (i.e. allocate everything you need at startup statically, program runs to completion with no allocation/deallocation at runtime).
This is especially true on 64-bit platforms, which have massive address spaces.
>>51507424
Try Java
>>51507424
brainfuck and ook.
>>51507424
haskell
>>51507424
Why don't you master one anon, knowing one or two languages real well is better paying than knowing 6
>>51507358
>Valutron
Who? Ah, your toy shit that can't hold a candle to fucking javascript? hahahaha
>core of the VM in Objective-C
You already failed, faggot.
>my own Objective-C runtime library
hahahha; yo dawg, I put a GC on your GC so you can randomly freeze while you randomly freeze
>limitations of the C memory model
Now you're just throwing buzzwords, you fucking baboon can't possibly know what a "memory model" is, much less the "C memory model".
>garbage-collected dictionary had superior performance
well memed
>>51507358
>to a degree that is totally inappropriate for all but the very most performance-intensive applications,
This way of looking at things is why we have web browsers which use 2 gigs of RAM and choke on low-powered laptops, Visual Studio takes longer to cold start than my OS, and despite the fact that computers have gotten several orders of magnitude faster the general computing experience has gotten more sluggish.
For people who care about software quality and the users experience, performance is ALWAYS important.
>>51507468
I do know a language or two well. I just like trying new languages. I really want to try something that makes me think in a fundamentally different way and I think maybe I should actually try to learn >>51507466 instead of just messing around with it (because people often say it requires a different kind of thinking to program in it)
>>51507539
don't learn haskell it's a meme language
learn elixir instead
>>51507539
yep, it's more mathematical. is this language useful though
>tfw no programmer bf
>Java "programmers"
>>51507632
>lgbt invasion
>>51507654
Just a brainwashed generation with little-to-no hope.
>>51507654
what does a straight cis female wanting a programmer bf have to do with lgbt?
>>51507698
Just because you wear your mother's bra and kneesocks and undergo HRT doesn't make you female.
>>51507698
rule 30
>>51507499
>toy shit
It's been turing-complete for a while now.
>You already failed, faggot.
Cry more. Sorry you don't like Objective-C
>randomly freeze
Do you really think that, after 5 decades of research, that would *still* be a problem in well-designed garbage collectors? Hint: it isn't.
>fucking baboon can't possibly know what a "memory model" is, much less the "C memory model".
Lol. You seem upset. Like many terms in computer science, it is overloaded with multiple definitions. In this instance, this baboon is referring to C's very low-level structure, the inability to discern pointers from integers, and the layout and existence of the stack. An obsequious FizzBuzz champion like yourself would fail to immediately realise what is meant and start rattling out a random insult.
>well memed
Read for yourself.
>>51507531
Is that memory management to blame, or architecture? I would suggest the latter, of which memory management is just a component. Both Firefox and Visual Studio are huge and there are no doubt myriad ways they can be improved, and looking to their memory management is one approach.
That being said, I can't support optimisation until a program becomes totally incomprehensible and gains a large and confusing codebase except where eking out the final 2% gain is truly important.
>>51507448
>Yes it is.
No it isn't. Allocating memory in blocks that are released at once is not the right way for many situations where you need piecemeal control of data. That's not to say that there are projects that can benefit from arena allocators and projects that can't - in most projects of any size, there are areas where arena allocation is sensible. But it's not a panacea.
>a large chunk of every day programs can get by with ZERO dynamic allocation
I don't disagree that many can. I don't usually write them. Most of my programs deal with indeterminate input and have no set point of termination.
>>51507725
>projecting
>>51507729
are you 12 years old?
>>51507758
I don't think you understand what projecting is.
>>51503463
>all that boilerplate
>>51507736
>It's been turing-complete
srs bsns
>*still* be a problem in well-designed garbage collectors? Hint: it isn't
what is HotSpot
>memory model
>overloaded with multiple definitions
no, you just don't know what you're talking about and want to sound fancy
>C's very low-level structure
>inability to discern pointers from integers
you don't know C
>layout and existence of the stack
no such thing in C
>memory management to blame, or architecture? I would suggest the latter
yes, architecture, like using a gc language
>final 2% gain
more like 50% gain over gc languages
>piecemeal control of data
not in gc languages
>my programs deal with indeterminate input and have no set point of termination
if you don't know that dynamic allocation is not required for that there's no wonder you're spewing so much ignorant crap
>>51507785
"Psychological projection, also known as blame shifting, is a theory in psychology in which humans defend themselves against their own unpleasant impulses by denying their existence while attributing them to others."
either YOU don't understand what projecting is, or you have some serious difficulties with reading comprehension
>>51507584
not that anon, nor do I know Haskell, so I'm talking out of my ass, but it's supposed to make you solve problems at a higher level before you start writing code. If that's true, it could benefit his coding in his primary languages.
>>51507882
Precisely. Nice google definition since you clearly couldn't explain it without a reference.
I have no "unpleasant impulses" nor am I denying a want to be a sissy faggot. I was born male, I'll always be a male, and you're deluded to think wearing clothes and taking drugs changes what YOU'RE running from.
>>51507886
idk, I've looked into it and it seems more mathematical, C++ seems closer to the machine.
e.g. in Haskell you can define a list of natural numbers [1,2..] which is infinite and only evaluated lazily, as elements are demanded
>>51507634
have huge tits? I don't understand
>>51507926
get ladies
>>51507654
>lgbt invasion of a thread dedicated to a lgbt field
What, do you think Alan Turing and Grace Hopper were straight?
>>51507916
>I have no "unpleasant impulses"
you obviously do if your first reaction to someone calling themselves cis was to claim the opposite
only someone obsessed with the idea would make such a challenge
>I was born male, I'll always be a male
sounds like denial to me
>>51507970
faggot, or something?
>>51508005
>sounds like denial to me
Sounds like acceptable, which clearly you're not used to
>first reaction
Look where you are and look at the flamboyant statistics. Assumption, sure; without reason? Absolutely not.
>>51508011
uh?
Did you all read Programming: Principles and Practice Using C++ by Stroustrup as someone first starting to get into programming?
>>51508046
Acceptance*
Sup, recommended reading for C#?
>>51508076
what are the best "c for dummies" kind of websites
>>51508076
>>51508105
suicide://kil.l-yourself.read/a.book
>>51503252
Listen to this guy
>>51504871
MVVM makes this so much easier.
Also for some extra fun try out prism with unity and make your stuff modular.
>>51507736
>Allocating memory in blocks that are released at once is not the right way for many situations where you need piecemeal control of data.
What situations?
If you need tighter or hierarchical control you just nest them in a stack.
There's never a reason to have arbitrary allocations happening all over the place, that is just plain bad design from the start.
>>51508046
that is definitely not what acceptance sounds like. there is literally no reason to be as defensive as you are, if it weren't for the fact that you find the reality of actually having these thoughts uncomfortable
>>51507867
>no, you just don't know what you're talking about and want to sound fancy
I'd say that's the same of you
>you don't know C
But I do. I know it better than you, it seems - do you just write FizzBuzz all day?
0x1234
0x1234
Which is a pointer? Which is an integer?
>no such thing in C
I've never seen a C implementation that doesn't use a stack. The concept is thoroughly intertwined with C to the heart.
>yes, architecture, like using a gc language
No, FizzBuzzlord, that's not what I'm talking about. I am thinking instead about considerations like XUL, the process model, the system of IPC, the interaction of these.
>>51508164
Consider the representation of contacts in an instant messenger.
>if you don't know that dynamic allocation is not required for that there's no wonder you're spewing so much ignorant crap
You are the one spewing ignorant crap. I doubt you've ever written anything other than FizzBuzz. Maybe a Sudoku solver at most.
How do YOU propose to solve this problem? Don't say 'use static pools' because they cannot cope with indeterminate memory usage, and you would be implementing a dynamic allocator, however crude, atop these. Don't say stack allocation either - the stack is, in fact, much like a stack of arenas, from which you may dynamically allocate memory.
>>51508100
Thanks
>>51508147
Nope
>>51508198
>Which is a pointer? Which is an integer?
A pointer and integer are synonymous in C.
Is it okay to write macros for malloc() and fopen() that print an error if they return NULL?
>>51508198
>Which is a pointer?
none of them
>Which is an integer?
both of them
>I've never seen
you haven't seen a lot of stuff
>concept is thoroughly intertwined with C
not at all
>Maybe you'd get a 200% loss v.s. garbage collection
will never happen
>50% is not a huge gain
that's what all gc fags say
>Stop saying GC
>it betrays your lack of understanding as to what garbage collection
gc = garbage collection; it's a shorthand, you moron
>copy data
slow
>a very complex algorithm usually
slow
>Don't say 'use static pools' because they cannot cope
they can
>from which you may dynamically allocate memory.
you have no idea what you're talking about
>>51508249
They're not. Stop teaching him garbage if you don't know C either. His knowledge is already fucked up, don't make it worse.
>>51508262
You can do whatever the fuck you want. You can write macros for "return" and "+" if you like.
>>51507736
>Is that memory management to blame, or architecture?
Memory management is certainly a huge factor because that's the part which hasn't really gotten faster (in fact it's actually gotten slower relative to cpu speed, back in the day a memory fetch would just be a cycle or two, now it's dozens or up to hundreds of cycles, (but of course in raw numbers memory is faster today))
The other big factor is of course nested interpreters interpreting other interpreters (be it a full vm or some data format being converted to/from), it's truly insane how many levels of indirect execution you have to slog through to draw a single pixel of a 'modern web site' (I mean fucking hell, people have started using javascript to decode and render images, certain inline image viewing/expansion pegs a full fucking core to render it slow as hell, whereas opening in a new tab is instant and barely any cpu usage at all)
I've got a weird question: Is there some place/website where only handsome programmers are allowed to post?
>>51508249
>A pointer and integer are synonymous in C.
WTF?
>>51508307
>>>/hm/
>>51508307
Yes. You don't qualify though.
>>51508289
How are they not synonymous? (apart from integers being signed)
>>51508358
Read a book, nobody has time to teach you the basics of C. You'll also find out how you can add two numbers and print the result on the screen.
>>51508198
>Consider the representation of contacts in an instant messenger.
That's a candidate for static preallocation, or at the very least no need to ever deallocate (just keep allocating and forget about it).
Having thousands of contacts is pathological.
>>51508358
Integers (ints) and pointers are also of different size.
>>51508379
>You'll also find out how you can add two numbers and print the result on the screen.
But can you calculate the average of two ints?
>>51508379
It sounds to me like you don't know what the fuck you're talking about.
>>51508407
Integer =/= int
>>51508358
int i = 1;
i += 3; // this is allowed and adds 3 to 1
void* p = (void*) 1;
p += 3; // this is not
int* ip = (int*) 1;
ip += 3; // this is allowed and adds sizeof(int) * 3 to 1
>>51508432
>Integer =/= int
Doesn't matter, pointer types are different from integer types (long long, long or short).
>>51508407
>Integers (ints) and pointers are also of different size.
They MAY be of different sizes. Regardless, the numbers originally posted are both capable of being ints or pointers.
>>51507634
>tfw no huge-titted qtπgf
>>51508500
>They MAY be of different sizes.
No, they MAY be of the same size.
The only guarantee you have is that ints are at least 16 bits. Pointer size, on the other hand, is completely implementation specific.
>Regardless, the numbers originally posted are both capable of being ints or pointers.
Literals are irrelevant. See >>51508449
Integer types do not behave in the same way as pointer types, because they aren't the same thing.
>>51508432
>It sounds to me like
You don't have enough knowledge for an informed opinion. What it "sounds" to you is irrelevant.
>>51508268
>>51508316
>>51508289
Actually, they both may be either a pointer or an integer.
What I have discovered here is that none of you know how a computer works. I don't care about the type system here because it is totally irrelevant once the code has been compiled. Both void * p = 0x1234 and long p = 1234 have the same representation in memory on all but the most byzantine of architectures (it may not be the case on 16-bit DOS). Now, try and deny that.
You know, why am I even trying to tell you this? This is basic knowledge. It's so basic that academic articles on garbage collection casually reference it (The Magpie Source-to-Source Transformer ensures that "[...] no integers or dead registers will accidentally or maliciously retain the object.")
Note that this article describes a tool that preprocesses C source code so that their precise garbage collector can differentiate pointers and integers.
>will never happen
See the Chen dictionary.
>gc = garbage collection; it's a shorthand, you moron
Whoops. I had meant to say 'gc language' there. You should've guessed that from the following sentences. Too bad FizzBuzzlords can only remember one word at a time.
>slow
No. Very fast thanks to decades of research and development.
>>51508305
I can't disagree with you. The modern browser has become an extremely complex virtual machine, really.
>>51508407
Integer doesn't mean int. An int is one type of integer.
>>51508407
>can you calculate the average of two ints
Yes.
>>51508449
>>51508539
Whoops, you missed the point. The compiler is not the runtime representation in memory.
>>51508500
>the numbers originally posted are both capable of being ints or pointers
Wrong.
>>51508571
>Actually, they both may be either a pointer or an integer.
No, actually they're both literals, not a type. You assign the literal to a type, but the literal is just a literal.
>Both void * p = 0x1234 and long p = 1234 have the same representation in memory on all but the most byzantine of architectures (it may not be the case on 16-bit DOS). Now, try and deny that.
It doesn't matter. The fact that it can be different on one system means that they are not the same. That's the whole point of standardising stuff in the first place, so that what is defined on one system is also defined on the other.
>Integer doesn't mean int. An int is one type of integer.
Weasel words. long long ints, long ints or short ints, doesn't matter. They're not pointers.
>>51508598
>Whoops, you missed the point
I think not.
>>the numbers originally posted are both capable of being ints or pointers
This is wrong
What you mean are values. Integers, floating points, addresses, they're all values when it comes to representation in memory. But when you're talking about types (ints, pointers, etc), you're in the domain of C (not computer architecture).
>>51508571
>they both may be either a pointer or an integer
wrong
>none of you know how a computer works
keep telling yourself that
>I don't care about the type system
of course, you don't understand it
>Both void * p = 0x1234 and long p = 1234 have the same representation in memory
they dont
>all but the most byzantine of architectures
64-bit windows is not "byzantine"
>This is basic knowledge
imagined
>Whoops. I had meant to say
yeah, I bet you meant to say a lot of stuff but you still posted only garbage
>No. Very fast
you forgot your medication?
>An int is one type of integer.
first correct thing you've posted
>>51508571
€50 on this dumb poster being a transethnic haloniggress turbofemme woman "programmer"
>>51508571
>Both void * p = 0x1234 and long p = 1234 have the same representation in memory
No they don't.
1234 = 0x4d2
0x1234 = 4660
Also, the number of preceding 0s also matters. A pointer type must be as wide as an address (word), the long p must only be at least 16 bits (meaning you can fit it into a halfword).
>>51508623
Meaningless distinctions don't matter. This topic came up when I was discussing the implementation of garbage collectors in C.
And the fact that it can be the same on one system means that they are the same for all practical purposes. The point is that I cannot precisely differentiate a pointer from an integer, because in memory, they tend to look the same.
>>51508672
>wrong
Not wrong. Learn something other than FizzBuzz.
>they dont
They do on most platforms. It's an assumption central to the operation of Boehm GC. UINT_PTR exists for a reason, by the way.
>imagined
No. Basic knowledge.
>yeah, I bet you meant to say a lot of stuff but you still posted only garbage
What I posted made more sense than your whining tripe. Go back to making FizzBuzz faster.
>you forgot your medication?
Ableist insults because you've been cornered. Nice work.
>>51508721
Whoops. I forgot to prepend 0x to 1234 in the second case. Anyway, this assumption works on systems that use the LP64 model, which is most.
>>51508571
> Both void * p = 0x1234 and long p = 1234 have the same representation in memory on all but the most byzantine of architectures
64-bit Windows is a byzantine architecture? (on that platform, pointers are 64-bit, "long" is only 32-bit).
>>51503355
You know, even though your social group is nonexistent you are still a pretty little snowflake because of your mad programming skills. No need to fear other people on here.
>>51508764
>The point is that I cannot precisely differentiate a pointer from an integer, because in memory, they tend to look the same.
In memory literally everything looks exactly the same.
>>51508764
>Meaningless distinctions don't matter.
They're not meaningless, the distinctions are there because they're not the same thing.
>And the fact that it can be the same on one system means that they are the same for all practical purposes.
But that's massively wrong. They're not even the same on the system you're currently using.
>The point is that I cannot precisely differentiate a pointer from an integer, because in memory, they tend to look the same
The difference is in the size. An integer can be a bit, a nibble, a byte, a halfword, a word.
An address on the other hand, can only be a word (unless you're using some super weird architecture where addresses are of variable length [yes these do exist]).
Telling the difference is easy. One contains a legal address that the program is allowed to access, dereferencing the other on the other hand leads to unspecified or undefined behaviour.
>>51508764
Your whole argument is basically that when loaded into memory everything is just 0s and 1s, therefore nothing exist because you can't tell them apart.
You have to be the stupidest person to post in a /dpt/ ever.
>>51508764
>They do on most platforms
I see you skipped the 64-bit windows part
>UINT_PTR exists for a reason
only on windows
>No. Basic knowledge.
you don't have that
>Whoops. I forgot
you keep "forgetting", are you sure about that medication thing?
>Anyway, this assumption
so now it's just an assumption
Question:
in C11, what's the best way to check if a variable is NOT NULL?
is if(variable) sufficient?
>>51508878
underrated post
>>51508878
This
>>51508914
Yes.
How do people store data more complicated than 2 strings in a key-value db?
>>51508914
I have a pointer which is : 0x0000000000000b
The pointer, which i access like this: ptr->hi, i already made NULL earlier
however when I do if(var) and if(var != NULL), it still passes through
>>51508972
Then it's not null
> 0x0000000000000b
>b
It's not null
>>51508764
>I cannot precisely differentiate a pointer from an integer
sounds like your problem; C on the other hand, can, because pointers are not integers
>>51508972
post the code that doesn't work
>>51508969
that is a bit vague, can you give some examples?
they store JSON / serialized data in the value. which ends up being interpreted as real data types
there are also column-based stores like cassandra.
>>51508837
Yes. Luckily in languages like Smalltalk this isn't a problem because the garbage collector knows the memory layout and can hence determine what is a pointer - without having to perform any horrible and unportable tricks.
>>51508854
Thank you for a thought-out reply.
>They're not meaningless, the distinctions are there because they're not the same thing.
They are meaningless to me when I want to write a garbage collector. That's why I (as well as Hans Boehm and others) had to go with a conservative design, where any word-aligned word that, interpreted as a uintptr_t, holds the address of allocated memory is assumed to point to that allocated memory.
>But that's massively wrong. They're not even the same on the system you're currently using.
They are. FreeBSD x64.
But it does matter. The language specification on the other hand does not. This is because the language specification's word isn't going to tell my garbage collector if something is a pointer or an integer.
>Telling the difference is easy. One contains a legal address that the program is allowed to access, dereferencing the other on the other hand leads to undefined behaviour.
The integer can be a legal address.
>>51508878
That's the fucking point. My whole argument is about what matters if you're writing a garbage collector for C. Because writing a garbage collector is all about looking at what's been loaded into memory. Too bad you have such a short memory.
>>51508999
>>51509009
what is the b?
>>51508914
how fucking hard is it to write != null? if(variable) is false even when, for instance, an int is 0; most types probably don't even have the unary operator for that. Also if it is null doesn't it just crash at that?
>>51508764
>pointers are integers
>I cannot precisely differentiate a pointer from an integer
>I can't tell if what I've found is an integer that happens to have the same value as the pointer, or an actual pointer
you also can't differentiate an integer from a string or a struct or an array; by your logic, integers are strings?
>>51508878
>Your whole argument is basically that when loaded into memory everything is just 0s and 1s, therefore nothing exist because you can't tell them apart.
Not him, but that is indeed the case.
It's why you can never drop-in a 100% correct GC into a C program because there's no way you can scan memory and know if something is a pointer to the heap, pointer to the stack or just some arbitrary integer or object data that happen to have the same bit pattern as a valid pointer.
You have to instrument all allocations with your own little tag system and then tell the users to only use those tagged allocators for ALL memory allocations, if they ever call malloc() (or some OS or 3rd party library api which allocates memory themselves) then the GC will potentially cause a false-positive or false-negative somewhere.
>>51509023
>I want to write a garbage collector
you don't have enough knowledge to do that
>FreeBSD x64
actually OSX, don't lie, applecuk
>writing a garbage collector for C
you're an idiot
>>51509023
>They are meaningless to me when I want to write a garbage collector.
Then your approach to garbage collection is stupid. Most people use reference counting.
>They are. FreeBSD x64.
Pointer type = 64 bit
Int type = 32 bit
Long long int = 64 bit
Short int = 16 bit
Clearly, integers and pointers are of different sizes.
>The integer can be a legal address.
Yes, and a random string can be a valid password... What's your point exactly?
You obviously have no idea of how garbage collectors are implemented
HINT: They do NOT search memory for random pointers, they keep track of reference counts.
>That's the fucking point
Then you are stupid.
> My whole argument is about what matters if you're writing a garbage collector for C.
WTF are you on about? See above.
> Because writing a garbage collector is all about looking at what's been loaded into memory.
No, you fucking don't retard.
C++ garbage collection = using RAII to implement reference counters
Python garbage collection = reference counters
Java garbage collection = reference counters + breadth first search
>>51509058
>why you can never drop-in
why would you do that?
>>51509031
A hex value.
>>51509058
>You have to instrument all allocations with your own little tag system
But that's how ALL garbage collectors are implemented.
>if they ever call malloc() (or some OS or 3rd party library api which allocates memory themselves) then the GC will potentially cause a false-positive or false-negative somewhere.
You know you can hook in calls to malloc, right? You know that's how valgrind works?
I swear, if people only ventured outside Windows for two fucking seconds, they would know a tad more about how computers work.
>>51509058
char *mypointer = malloc(size);
my_C_GC_release_control(mypointer, size); //atomic
my_C_GC_regain_control(mypointer, size); //atomic
free(mypointer);
How would this not work? It might be cumbersome and error-prone, but I don't see why it wouldn't work.
>>51509031
11 in hexadecimal
>>51509058
>It's why you can never drop-in a 100% correct GC into a C program
But you can without scanning memory...
How do you think valgrind keeps track of all your allocated and free'd bytes?
>>51509121
>>51509110
I have an array of struct instances.
What is the proper way to free/delete the struct instance so it's at a default state?
>>51509136
he doesn't know about valgrind, the cuk lies about being on freebsd and using UINT_PTR from the win32 api
>>51509116
Hell, GNU extensions even support this: __attribute__((cleanup))
>>51509110
>But that's how ALL garbage collectors are implemented.
Yeah but if you actually built it into the language's memory model (and don't expose raw pointers) it's a solved problem, no user data can ever accidentally become an alias to a heap pointer because the language has full control over everything.
>You know you can hook in calls to malloc, right? You know that's how valgrind works?
That doesn't help you at all.
The problem is not recording all instances of malloc() being called, it's figuring out when you can actually free something, and that means you need to know whether a certain address is 'alive' or not. And if some arbitrary data happens to show up as the bit pattern of a currently existing pointer you have created a false alias and will keep it alive even when it's supposed to be dead.
>>51509136
valgrind is not perfect, does not catch all memory leaks.
And it definitely can't tell you WHEN (as in at what wall clock moment) it's safe to free something or not (which is the main problem a GC has to solve).
>>51509206
>no user data can ever accidentally become an alias to a heap pointer because the language has full control over everything.
You can never safeguard against idiot programmers. I can access raw pointers in Java too by using the sun.misc.Unsafe package and make the GC start freeing parts of the JVM and crash the JVM itself... What's your point?
>The problem is not recording all instances of malloc() being called, it's figuring out when you can actually free something
REFERENCE COUNTING
ffs
>>51509206
>if you actually built it into the language's memory model
then you'd get a language with slow implementations, like all languages with gc
>>51509223
>valgrind is not perfect, does not catch all memory leaks.
Because no real-life implementation of anything is actually perfect.
>>51509240
your mom is perfect
>>51509116
>but I don't see why it wouldn't work.
It's been explained already.
C allows a user to create a pointer to any arbitrary address at any time, and even worse, any arbitrary integer (or object data) can take on the bit pattern of a valid pointer, which means you'll trip the GC up when it comes to tally reference counts.
>>51509251
Y-yours too
>>51509240
In other words: valgrind-type functionality is not sufficient for implementing a working GC for C.
Just as I initially said.
>>51509278
>In other words: valgrind-type functionality is not sufficient for implementing a working GC for C.
But it is
Perfect GC != working GC
You want some magic functionality that safeguards against programmers doing all sorts of arbitrary pointer stuff. Well, no language offers protection against this.
>>51509081
First, I've already written two garbage collectors. One for Objective-C, one for Valutron (a Lisp).
Second:
>>51509169
>actually OSX, don't lie, applecuk
Ok, see pic related though.
>you're an idiot
Ok
>>51509090
>Most people use reference counting.
I don't care to check the statistics but I find that dubious.
>Clearly, integers and pointers are of different sizes
int may be short for integer, but it has a different meaning. An int is just one type of integer. A long is 64-bit, like a pointer on FreeBSD x64.
>You obviously have no idea of how garbage collectors are implemented
It seems the one who has no idea is you.
From Golang FAQ:
>The current implementation is a parallel mark-and-sweep collector.
Mark and sweep literally means "look through the roots memory, if you find a pointer then mark the object it points to as living and now look through that object's memory.". In some languages, you can differentiate what addresses may contain pointers and what may not. In C, one has to scan the whole stack and assume anything that has a value representing the address of an object is a pointer. This is elementary. It is not even controversial. Boehm GC, TinyGC use this model directly; Golang and others use variations.
Regarding Valgrind, since it literally runs your program in what essentially constitutes a virtual machine, it doesn't seem relevant.
>>51509259
^ This guy is true.
>>51509228
>You can never safe guard against idiot programmers.
This is not about idiot programmers.
extern uintptr_t x;
x = rand();
// x might now contain the same bit pattern as a heap pointer
The point is that there's NOTHING you can do to account for things like that.
> I can access raw pointers in Java too by using the misc.sun.unsafe package and make the GC start freeing parts of the JVM and crash the JVM itself.
Exactly.
>REFERENCE COUNTING
Do you even know how that actually works?
>>51509301
>A long is 64-bit, like a pointer on FreeBSD x64.
That's wrong.
A long is 32. You mean long long.
>Regarding Valgrind, since it literally runs your program in what essentially constitutes a virtual machine, it doesn't seem relevant.
Yeah, VMs are totally not relevant when you're talking about virtualizing memory and doing elaborate garbage collection by memory inspection....
Retard
>>51509288
>But it is
Not even close.
>Perfect GC != working GC
You can't even create a working GC.
>You want some magic functionality that safeguards against programmers doing all sorts of arbitrary pointer stuff.
That's basically the point of a GC.
>Well, no language offers protection against this.
Language without arbitrary pointers does.
>>51509301
That browser reminds me of Netscape Navigator.
>>51509317
>using rand to generate a pointer address
>not about retarded programmers
That's the worst example ever.
>>51509344
>You can't even create a working GC.
Valgrind is basically memory virtualization. Of course you can use this to create a working GC.
>Language without arbitrary pointers does.
Neither Python nor Java has arbitrary pointers, it's still possible to circumvent the language internals and trip the GC.
>That's wrong.
>A long is 32. You mean long long.
FreeBSD is an LP64 platform. That means longs are 64-bit:
$ cat > test.c
#include <stdio.h>
int main()
{
printf("%d\n", sizeof (long));
return 0;
}
$ cc test.c
test.c:4:16: warning: format specifies type 'int' but the argument has type 'unsigned long' [-Wformat]
printf("%d\n", sizeof (long));
~~ ^~~~~~~~~~~~~
%lu
1 warning generated.
$ ./a.out
8
>Yeah, VMs are totally not relevant when you're talking about virtualizing memory and doing elaborate garbage collection by memory inspection....
Running your whole program in a virtual machine is totally different to periodically scanning the stack and allocated memory for things that look like pointers. Valgrind has a severe performance impact; the state-of-the-art Boehm-Demers-Weiser Garbage Collector for C and C++ does not.
>>51509317
>uintptr_t
That's not standard C.
>it's merely optional in C99
>>51509381
>state-of-the-art slow-as-fuck used-by-nobody
well, then
>>51509386
>not standard C
it is; you don't know what "standard" means
>>51509350
Are you retarded or do you not understand the example?
uintptr_t was just to make a suitable integer type, and rand() was just a simple example. Could be ANY function generating ANY integer value.
Could be just any int on a sizeof(int) == sizeof(pointer) platform.
errno is a global int always available which can ruin your day.
OOP question; say I have this example code:
# python example code
class Spell(object):
    def __init__(self, name, damage):
        self.name = name
        self.damage = damage

class FireBall(Spell):
    def __init__(self, name, damage, x1, x2):  # and 99 more....
        super(FireBall, self).__init__(name, damage)
        self.x1 = x1
        self.x2 = x2
        # and 99 more
    def y1(self):
        pass
    def y2(self):
        pass
    # and 99 more methods like these
I'm making FireBall() a child of Spell(), and FireBall() inherits everything from Spell() and makes each of those variables different for every child I make. Spell() only creates very few variables, while FireBall(), and other children of it, create a large amount of variables/functions. Is it even worth it to have a parent class like this, when it only does very few things, and they're all going to be overwritten for each child? Would it be fine just to delete Spell() and put those few variables in every class that would have inherited them? Or should I stick with the is-a thing and keep those few variables in a parent class?
>>51509440
that's why no sane person tries to jam a gc on c or c++
>>51509369
>Valgrind is basically memory virtualization. Of course you can use this to create a working GC.
valgrind can't even catch all memory leaks, it wouldn't have a chance to serve as a GC.
>Neither Python nor Java has arbitrary pointers, it's still possible to circumvent the language internals and trip the GC.
Keyword being 'circumvent' i.e you escape into unsafe blocks or FFIs.
All of those are isolated on a language level (either via special syntax or as a special library) and can be statically removed from the source code.
>>51509440
But making an arbitrary pointer is literally idiot programmers.
>>51509386
>>51509430
It is standard, you just can't expect it to work.
>>51509381
Pic
>>51509514
>But making an arbitrary pointer is literally idiot programmers.
You can't write a non-trivial C program without arbitrary pointers.
C noob here, when should I malloc? I haven't used it yet since valgrind never said I have a memory leak.
Any good resource explaining it in detail?
>>51509593
When you need it. You're welcome.
>>51509514
Are you on a 32-bit platform? The only platform I know of using the LLP64 model (which means a long integer is 32-bit and a long long integer is 64-bit) is Windows.
>But making an arbitrary pointer is literally idiot programmers.
The memory representation of an arbitrary number can match that of a legal pointer.
If your program counts to ~4 billion on a 32-bit system then it will have had, in its counter variable, a potential valid pointer for every address at some point. There will be no way for the garbage collector to tell whether that's an integer or a pointer.
>>51509556
You don't need to free all arbitrary pointers, only when the pointee data is no longer pointed at. That's fairly easy to figure out, but OH NOEZ YOU CAN'T VIRTUALIZE MEMORY BECAUSE MUH ARBITRARY DISTINCTIONS
>>51509612
What is your actual problem that made you type all this drivel for half a thread? That C doesn't have GC and you can't handle malloc/free? Pick another language, moron!
>>51509612
>Are you on a 32-bit platform?
No
>The only platforms I know using the LLP64 model (which means a long integer is 32-bit and a long long integer is 64-bit) are Windows.
>>51509514
lewd, jonas.
pic related, mfw.
>>51509514
long is at least 32 bits, not exactly 32 bits. it's 32 bits on your system and 64 bits on the other guy's system. don't be obtuse
>But making an arbitrary pointer is literally idiot programmers.
he's not making an arbitrary pointer he's making a random integer which a dumb mark-and-sweep gc could mistake for a pointer, even though it isn't one
>>51504612
everyone on /dpt/ who keeps hyping Lisp doesn't understand Lisp. Lisp is a metaprogramming language, a DSL for making DSLs. People who try to apply Lisp to building static applications are idiots. The most general use any Lisp could be put to is scripting.
>>51509669
it's even lewder if you consider that I name my computers after Calvin and Hobbes characters
>mfw fantasizing about little susie derkins
>>51509638
>only when the pointee data is no longer pointed at. That's fairly easy to figure out,
It's impossible to figure out with C's memory model.
>>51509701
It's not.
>keep track of calls to malloc, store what it returns
>zero out memory that your GC frees
>when no aligned (because pointer aliasing) pointer-sized value contains the address, make your GC free it
>>51509660
The only reasonable explanation remaining is that Apple Clang is producing 32-bit binaries by default, given that OS X is an LP64 architecture.
>>51509647
>le I Am So Leet Because I Use Malloc/Free Explicitly meme
your code is probably slower than the equivalent garbage-collected version, sorry. You just can't beat 50 years of GC research and development that easily. That's why the brightest minds like John McCarthy and Alan Kay went with GC.
>>51509737
>You just can't beat 50 years of GC research and development that easily.
>>51509701
>C's memory model.
you keep using those words without knowing what they mean
>>51509737
>your code is probably slower than the equivalent garbage-collected version
it's not
>You just can't beat 50 years of GC research and development
I do it everyday
>John McCarthy and Alan Kay went with GC.
they don't care about performance
>>51509736
>It's not.
It is.
>when no aligned (because pointer aliasing) pointer-sized value contains the address,
This is the part that's impossible.
Your GC is currently sweeping at 0x1000, it sees this:
addr      value
[0x1000]: 0x12345
[0x1004]: 0x12345
Which is the pointer, which is the integer (or float, or array of bytes, or some 4 byte struct, etc)?
>>51509737
>The only reasonable explanation remaining is that Apple Clang is producing 32-bit binaries by default, given that OS X is an LP64 architecture.
>>51509800
I know exactly what it means.
>>51509852
If that'd be true, you wouldn't be using it in the wrong context all the time.
In LLDB, why is my debugger printing the memory address of the object,
but when I print an array, it's structured perfectly.
How can I make a pointer print correctly, like it should, with the structure, in lldb?
>>51509902
>you wouldn't be using it in the wrong context all the time.
I haven't.
>>51509918
You did, everywhere.
>>51509931
Nope.
>>51509851
how curious
>>51509851
>LC_SEGMENT_64
Yes, it would be. Now please compile with the -m64 flag to specify generation of 64-bit code.
>>51509937
Where did you talk about threads?
>>51509937
Mods. It's time to call it on this guy. Better clean up.
He knows too much.
>>51509947
Indeed. Maybe I have some weird default flags?
>>51509949
Do you see the virtual addresses it loads the segment into, that is clearly a 64-bit address
Also, have the rest of the output.
>>51509968
>threads
Ah, it's you who appears to not know what memory model means.
>>51509994
But I do, I said threads. Your surprise upon hearing this shows your knowledge on the subject.
>>51509976
>>51509949
>>51509994
protip: memory model doesn't mean "representation of objects in memory"
>>51509968
>>51510007
>>51510028
#REKT
R
E
K
T
>>51510007
>But I do, I said threads.
Which means you don't know what it means.
>>51510028
No one has said that.
>>51510056
>I don't know what "memory model" means
>they are nice buzzwords so I keep mentioning them
>>51510087
Is that a projection?
>>51510056
What do you think "memory model" is?
>>51510056
>Which means you don't know
But I do, I said threads.
>>51510104
>>51510087
>>51510056
>>51510028
>>51509994
>>51510007
>you go first! i know what it means, but do you?!
>>51510099
>meme arrows
>projection
how new can you be, m8?
>>51509949
>>51509737
Well? >>51510011
>>51510118
one of them said it has to do with threads and wikipedia at least seems to support that, the other keeps rambling about gc in c and c++ and throwing the words around from time to time without explanations "gc is hard because memory model"
>>51510104
What kind of addressing modes there are, whether code and data are stored separately, segmentation or flat address space, and things like size of pointers.
We're not talking about memory models in the context of multi-threading, atomic operations, and happens-before type stuff, but in the hardware/software architectural context, since it's a GC we're talking about.
>>51510154
>kind of addressing modes there are, whether code and data are stored separately, segmentation or flat address space, and things like size of pointers
Those are not "memory model". You can call them whatever you want, though, but it's not accepted terminology for them. Memory model means something else to the entire programming community. Why not call them "banana"? That doesn't have an established meaning yet in programming.
>>51510154
>C memory model
>addressing modes
>code and data stored separately
>segmentation
>flat address space
none of those have anything to do with C; how can they be called "C memory model"?
>>51510189
>Those are not "memory model".
Yes they are.
>You can call them whatever you want, though, but it's not accepted terminology for them.
See excerpt from Intel Manuals.
>Memory model means something else to the entire programming community.
It has two widely accepted contexts: "setting the memory model" for x86 asm programmers means things like tiny, huge, flat, rip-relative (which correspond to x86 memory models, the size of pointers, whether segmentation is on or not, and whether addresses should default to PC-relative or not), etc.
You're welcome for having been educated.
>>51510154
>not talking about memory models in the context of multi threading of atomic operations and happens-before type stuff
that's literally what memory model is, you dumb fuck
>>51510257
>Intel Manuals
>x86 asm
That's not C; definitely not "C memory model".
>having been educated
On what? "C memory model"? Hardly, seeing you don't know what the fuck you're talking about.
>>51510257
>c memory model
>intel assembly manuals
>maximum backpedaling
>maximum damage control
You can't create a proper GC for C using
>>51509386
It is standard C. Even if optional.
itt: gc cuk getting told hard by c pros
happens every time
/thread
>>51510317
>>51510299
>>51510258
>>51510200
>>51510189
Maximum samefaggotry
>inb4 "no" and fake screenshot
>>51509381
>>51509514
>%lu
undefined behaviour faggot.
It's %zu, holy shit.
>>51510200
They have to do with the implementation of a GC.
A flat address space + unrestricted raw pointers means you cannot distinguish heap pointers from some other arbitrary data.
(C doesn't require flat addresses of course; a C implementation on a segmented memory model could potentially have a working GC, although not on x86, since segment selector:offset pairs can overlap)
>>51510353
see >>51510347
it's over, webfag
Anime app because I'm too lazy to constantly open kiss anime
>>51510381
>on a segmented memory model
no such thing
>>51510365
>C99
>>51510299
>That's not C; definitely not "C memory model".
It's a C implementation.
>On what? "C memory model"
On the two meanings of 'memory model'; you thought there was only one (having to do with threads, which I knew of as well, of course)
>>51510382
>it''s not samefaggotry because I samefagged one more time
>>51510400
for pre-C99 you have to do something like:
printf("%lu\n", (unsigned long int) sizeof (long));
And still not be sure if it is correct (in case size_t is bigger than unsigned long int). Uncasted versions are undefined behaviour.
>>51510397
x86 has a segmented memory model.
>>51510408
>It's a C implementation.
And? Still not "C memory model".
>>51510408
>I knew of as well of course
anon, if you knew, you wouldn't have said >>51509994
inb4: PRETENDING
>>51510408
>IA-32/x86 is a C implementation
I'm none of the people you are responding to, but what the actual fuck anon?
GCC is a C implementation
Clang is a C implementation
x86 is an architecture
>>51510433
>C memory model
>>51510437
It would be the C memory model on that implementation.
And on the software side of the architectural memory model you have things like the layout of stack, heap, globals, read-only data, etc.
>>51510424
>>51510365
>undefined behaviour
No, it's "implementation-defined". Read the standard.
>>51510473
>C memory model
no; the C memory model is about threads and shared access to data; it's the same on all implementations
>>51510473
The C memory model is defined on the standard anon.
>>51510440
>anon, if you knew, you wouldn't have said >>51509994 (You)
That's exactly why I said "ah", as in - I know you were probably gonna mention threads.
>>51510473
>layout of stack, heap, globals
no such thing in C
>>51510441
gcc or clang compiling C for x86 would have to follow those memory models.
>>51510479
You read the standard you fucking cocksucker. Passing wrong types to the wrong formattings is undefined behaviour.
What is implementation-defined is the size of size_t.
>>51510493
>"ah", as in - I know you were probably gonna
you said "ah" after the fact, faggot, there's no "probably gonna"
you got told. hard.
>>51510500
I'm talking about implementation details here.
>>51510527
>C memory model
just no
>>51509381
>>51509612
>>51509737
>>51509949
>>51510365
>>51510511
>make claim about architecture and C implementation
>can't explain >>51510011
>"hurr durr it's undefined behaviour anyway"
Typical /g/ autism
>>51510473
>architectural memory model
This faggot just loves these buzzwords. Well done, anon, well done.
>>51510519
>you said "ah" after the fact, faggot, there's no "probably gonna"
Nope.
There's absolutely no need to be upset, I've educated you all of something you were ignorant of.
Just take it in and move on.
>>51510557
>C memory model
how about no
>>51510548
>can't explain >>51510011
Explain what?
>>51510539
We've always been talking about how to implement a GC for C, keep up.
>>51510557
>I just found out "memory model" means something else
>welp, better act smug anyway
m8...
>>51510548
>>51510511
Admit that you were wrong and fuck off, or explain it, instead of back-pedalling and going on about undefined behaviour.
>>51510553
It's sad that you're so resistant to new information.
>>51510586
then stop using words that have nothing to do with neither gc nor c
>>51510596
Wrong about what?
Back-pedalling about what? Are you fucking dense?
Explain what?
>>51510594
Another projection?
Because that's literally what you're doing here.
>>51510608
>new information
>C memory model
hahahahaha
webcuks actually believe this
>>51510582
>"LP platform means 8 byte longs on 64-bit"
>long is 4 byte
>can't explain it and goes on about 32-bit binaries, flags and now undefined behaviour
>>51510615
>pretending to be someone else
How original...
>>51510615
>C
>memory
>model
>>51510632
Was I asked to explain anything you little shit? Kill yourself.
>>51510666
>muh memory model
>>51510660
>quoting random people
>>51510666
>Was I asked to explain anything you little shit? Kill yourself.
Uh, yeah?
>>51510140
>Well?
>>51510682
>Uh, yeah?
When?
>>51510676
Yep, a retard.
>>51510679
3 words:
C... memory... model...
>>51510613
Do you even know what a GC is?
The memory model has everything to do with implementing a GC, its job is pretty much managing the details of the memory model.
Anyone implementing a GC has to be intimately familiar with both the hardware memory model and the language memory model (which the GC is being implemented in), and of course these days GCs have to work in multi-threaded environments, so even if you want to willfully misinterpret the context in which 'memory model' has been used, I've still been right from the start.
>>51510697
>I got told
we know, anon, we know
>>51510697
>When?
I even quoted and linked the post for you, are you blind?
>>51510718
>no (you)
Consider killing yourself.
>>51510676
>muh data structures
>MUH ALGORITHMS
>>51510713
>the memory model has everything
>of the memory model
>hardware memory model
>language memory model
no "C memory model", anon?
>>51510730
>pretending to be another person
How original.
>>51510739
See >>51510730
>>51510713
>I've still been right from the start
>C memory model
>segmentation
hahahaha
>>51510750
See >>51510632
>>51510737
If you write the GC in C you of course need to adhere to the C memory model or you'll have undefined behavior.
>>51510713
>and of course
>so even if
>I've still
damage control: the 3 strikes
>>51510784
>damage control
Nice meme.
>>51510765
>muh backpedaling
>muh "difficult because of C memory model"
>I don't know the C memory model is defined in the standard document and it's the same for all implementations
>I confuse intel manuals with C specification
>webfag really
>>51510803
don't be butthurt anon, it's not the first time you get told and certainly not the last
>>51510596
Considering no one ever wanted to talk about OS X (I run FreeBSD and asserted correctly that FreeBSD x64 uses 64-bit long integers) you're lucky anyone's gratifying you at all.
Quite frankly I have no interest in your compiler. OS X on 64-bit systems is an LP64 system (source: Intel C++ 14.0 manual) and I don't care if you can get the wrong value somehow.
This has got to be the most desperate damage control I've ever seen. You guys were totally obliterated on the difference between pointers and integers, and proceeded then to be wrong about how tracing GCs work, how integer as a term relates to int and long, and now the size of a long on 64-bit OS X. It's astonishingly bad. Just give up and stop clutching at straws; it'll only hurt more if you keep this up.
>>51510818
>>51510629
>accuses others of being webcuk
>needs a third party library to average two ints
>>51510850
>third party library to average two ints
but anon, I dont
>>51510841
>I don't care
You mean you don't know.
C# horrors
Why is event system such a spaghetti shit? The syntax is horrible, the way around it is horrible and illogical.
>>51510596
>back-pedalling
Oh look, it's that shitposter again.
Long time no see.
>>51510841
>run freebsd
>screenshot names confirm osx
anon...
>>51510871
Then you're not a webcuk but a big dick C boss
>>51510886
>Long time no see
long long
>>51510886
>Oh look, it's that shitposter again.
Who do you think I am?
>>51510833
I've never been 'told'.
This is not debate club or a competition anon-kun, you don't score points if you think you've zinged someone (it's especially silly to act this way since this is an anonymous forum).
Clarifications are not 'backpedaling' or 'damage control', it's just responding to someone who apparently misunderstood your point or didn't get it.
>>51510897
correct terminology is "big dick C playa"
>>51510911
lol'd
>>51510929
oh, i will get it right next time
>>51510923
>Clarifications are not 'backpedaling' or 'damage control'
They are when you go from "C memory model" to "x86 memory model". That's literally the definition of backpedaling!
>>51510919
The "backpedaling" faggot. Still got no new vocabulary I see.
>>51510923
>I've never been 'told'
You got told on "C memory model". It doesn't mean what you thought. It's defined in the standard document for the C programming language, not in the Intel reference manuals.
So: told.
>>51510841
>be wrong about integer type size
>"oh but i meant longs, longs are integers too fuck you"
>be wrong about sizeof long as well
kek
>I run FreeBSD and asserted correctly that FreeBSD x64 uses 64-bit length long integers
I'll believe you when you post screenshot instead of copy paste.
bamp for ebin /dpt/
>>51510965
Can you be more specific? Specific topic? Thread? I'm fairly sure I'm not the only one using a common word such as backpedaling/back-pedaling.
>>51510994
>bumping when the thread is autosaging
>>51511013
you dun goofed
>>51510894
How do they confirm OS X? >>51509301 was saved directly from KSnapshot of Trinity Desktop.
>>51510878
Both statements apply. I don't know and I don't care.
Another note: 'memory model' means 'memory model'. It's quite clearly a context-dependent term. Those who object are politely instructed to leave the thread immediately and never come back.
>>51510976
Integer means integer. It does not mean int. Longs are indeed integers. So are shorts. I'm not interested in why someone is getting sizeof long = 4, I don't know and don't care. I have already provided a reliable source for my claim. You haven't.
>>51511019
stfu or I'll bamp with traps next time
>>51510944
Backpedal from what?
I've originally always been talking about the issues of implementing a GC for C.
In this context it's the issue of how raw pointers with a flat address space works that makes it impossible to create a drop-in GC for an arbitrary C program.
Prior to C11 "C memory model" would refer to exactly these kinds of points (just google it, put -c11) there are papers written about "C memory models" which have nothing to do with threads but things like pointer aliasing and object/type representations.
A Precise Yet Efficient Memory Model For C - Electronic Notes in Theoretical Computer Science (ENTCS) archive
Volume 254, October, 2009
A Formal C Memory Model Supporting Integer-Pointer Casts - Jeehoon Kang, Chung-Kil Hur, William Mansky et al
Is this when I'm supposed to go "LELELEL REKT FAGGOT!!!!!!!!!"?
>>51511037
>'memory model' means 'memory model'
How about "C memory model", anon? You know, the one you "got told" about.
>>51511059
>Is this when I'm supposed to go
You have nowhere to go, you're in full damage control mode.
>>51511037
>I don't know and I don't care.
Unlike you I'm not content with not knowing.
>Integer means integer. It does not mean int. Longs are indeed integers. So are shorts.
Exactly, so assuming that an integer type is large enough to hold pointer types is non-portable and results in unspecified or implementation-defined behaviour.
>Integer means integer. It does not mean int. Longs are indeed integers. So are shorts. I'm not interested in why someone is getting sizeof long = 4, I don't know and don't care. I have already provided a reliable source for my claim. You haven't.
What claim?
All I'm saying is that you falsely assumed that long types are large enough to hold pointer types on all architectures (except 16-bit DOS as you said in >>51508571)
I've provided more than my share of evidence proving it isn't so.
>>51511059
>I know I got rekt like fuck
>here's some worthless references of dudes that also used the term wrong, don't use the standard document for that!
>this makes me right
>>51511084
I've explained all the issues regarding GCs for C and cited two scientific papers.
You post memes and green texts.
>>51511120
>C memory model
yo got rekt on that tbh
>>51511141
Nope.
>>51511141
>no desu
Did Jackie 4chan remove word filters?
kek desu senpai
>>51511059
>I've originally always been talking about the issues of implementing a GC for C and incorrectly used the buzzwords "C memory model" and got my ass handed to me after that
ftfy
>>51511116
There's literally no reason to be upset.
I've given you tons of helpful information; take it in, read those papers, read up on Boehm GC (specifically its limitations), and maybe you'll learn something and become a better programmer.
>>51511159
but... you literally did
>>51511200
>read those papers
when are you gonna read the C standard? because, you know, "C memory model"
ITT and the last couple of /dpt/s
>We /prog/ now
mental midgets and toilet scrubbers has been replaced with webcuks and rekt
jews has been replaced with SJWs
racket has been replaced with haskell
ENTERPRISE QUALITY has been replaced with poo in loo
>>51511238
C... memory... model... tho!
>>51511238
>/prog/
>>51511294
We got 750+ in a thread a few years ago.
>>51511314
>>51511294
Not exactly surprising
It's just a bunch of autists fighting over semantics
https://4archive.org/board/g/thread/51501312/daily-programming-thread
import java.util.ArrayList;
import java.util.List;

/*
 * Aim:
 * Among all empty lands accessible by all buildings, find the one which has the
 * smallest sum of distances. Two subproblems: calculate distance and accessibility.
 *
 * Method:
 * For each building in the grid, use BFS to update its distance to all empty lands
 * accessible from it. Maintain a "map" matrix to save the sum of distances from all
 * buildings to a certain empty land. E.g. if the distance between buildingA and empty
 * spot (x, y) is disA, and the distance between buildingB and (x, y) is disB, then
 * map[x][y] = disA + disB. The smallest travel distance is the smallest value in the
 * map which is accessible by all buildings.
 *
 * Improvement, inspired by @StefanPochmann's idea: "Instead, I walk only onto the
 * cells that were reachable from all previous buildings. From the first building I
 * only walk onto cells where grid is 0, and make them -1. From the second building I
 * only walk onto cells where grid is -1, and I make them -2. And so on."
 * When updating the i-th building's distance to all empty spaces, consider only
 * positions which are accessible by all the previous i-1 buildings.
 *
 * Sum up:
 * n is the entry count of the input matrix.
 * Time O(n^2), Space O(n).
 * Real performance: 11 ms.
 */
public class Solution {
    public int shortestDistance(int[][] grid) {
        int height = grid.length;
        int width = grid[0].length;
        int[][] map = new int[height][width];
        int b_cnt = 0;
        int[] steps = {1, 0, -1, 0, 1};
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                if (grid[i][j] == 1) {
                    if (!BFS(b_cnt, grid, map, new boolean[height][width], i, j, steps)) {
                        return -1;
                    }
                    b_cnt--;
                }
            }
        }
        int shortest = Integer.MAX_VALUE;
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                if (grid[i][j] == b_cnt && shortest > map[i][j]) {
                    shortest = map[i][j];
                }
            }
        }
        return shortest == Integer.MAX_VALUE ? -1 : shortest;
    }

    public boolean BFS(int b_cnt, int[][] grid, int[][] map, boolean[][] visited,
                       int row, int col, int[] steps) {
        int distance = 1;
        List<Integer> curr = new ArrayList<>();
        curr.add(row);
        curr.add(col);
        boolean valid = false;
        while (!curr.isEmpty()) {
            List<Integer> next = new ArrayList<>();
            int size = curr.size();
            for (int i = 0; i < size; i += 2) {
                int r = curr.get(i);
                int c = curr.get(i + 1);
                for (int k = 1; k < steps.length; k++) {
                    int x = r + steps[k - 1];
                    int y = c + steps[k];
                    if (x >= 0 && x < map.length && y >= 0 && y < map[0].length
                            && !visited[x][y] && grid[x][y] == b_cnt) {
                        valid = true;
                        visited[x][y] = true;
                        grid[x][y]--;
                        map[x][y] += distance;
                        next.add(x);
                        next.add(y);
                    }
                }
            }
            curr = next;
            distance++;
        }
        return valid;
    }
}
BFS method with pruning, JAVA, Good performance
Source: https://discuss.leetcode.com/topic/70778/bfs-method-with-pruning-java-good-performance
passwd 0.3.1
crypt(3)-compatible (UNIX passwd style) password hashing algorithms
passwd
A D library for UNIX-style (crypt(3)) password hashing.
Features
Supported hashing algorithms:
- MD5-based crypt (algorithm "1")
- Bcrypt (algorithm "2a" and "2b") --- recommended
- SHA256-based crypt (algorithm "5")
- SHA512-based crypt (algorithm "6")
The bcrypt algorithm version 2a is a buggy historical version that was used on OpenBSD. It's only different from 2b for passwords that are much longer than practically anyone uses, but OpenBSD bumped the version number with its fix. This implementation has been regression tested against OpenBSD's 2a and 2b.
Usage
import passwd;
import passwd.bcrypt;

// Create a salt for bcrypt
auto salt = Bcrypt.genSalt();

// Create a hashed password
auto crypted = "hunter2".crypt(salt);

// Save the hashed password to a database or password file
// ...

// Test a password at login
auto password_guess = "hunter2";
assert(password_guess.canCryptTo(crypted));
Which algorithm should I use?
If you're asking, just use bcrypt. The other algorithms are for interoperating with existing software.
Bcrypt is the default for user passwords on most BSD systems. Most modern GNU/Linux systems use the SHA algorithms as the default for user passwords.
MD5 crypt(3) is supported by a lot of software, but it's not recommended for new code. Although MD5 is completely broken for things like certificate signing, brute force guessing is still the best known way to reverse an MD5-hashed password. However, brute forcing MD5 is relatively cheap and easy today, so it's not good enough for the weak passwords humans typically use. Just use another algorithm if you can.
Notes on error handling
The library throws exceptions defined in passwd.exception.
To help you meet any compliance requirements you might have, error messages don't display any part of the hashed password. If you're using hashes generated by this library, you should only get errors if your password database is corrupted, or something. However, for your own debugging sanity, it's a good idea to catch errors and log some kind of ID (e.g., user ID) you can use to track down the problem.
Installation
passwd can be added to a dub project with dub add passwd.
passwd requires libbsd for portable entropy generation. It's available on many systems; for example, you can install it on Debian with sudo apt-get install libbsd0. If you're not using dub, you'll need to add -L-lbsd to your D compiler command line.
Documentation
You can view the online documentation, or build the docs yourself using dub build --build=docs.
Contributing
New algorithms are welcome as long as they're well standardised for use in crypt(3) implementations (and preferably already in popular libcs). Please provide thorough test suites, and add links to algorithm specifications.
This library is licensed under the Mozilla Public License version 2.0. Parts of the library might be relicensed for inclusion in D's standard libraries in future. Don't contribute patches if you're not okay with them being relicensed that way.
- Registered by Simon Arneaud
- 0.3.1 released a year ago
- sarneaud/passwd
- MPLv2
- Dependencies: none
- System dependencies: libbsd
Source: https://code.dlang.org/packages/passwd
|
If the custom id/description field contains a bug number, it would be nice if the try server could automatically add a comment in bugzilla saying something like "New tryserver builds referencing this bug are available at". (Perhaps a new explicit bug # field could be added alongside description/etc.)
Want. Unless someone else wants to do this sooner I'll do it with the next round of try server upgrades (probably Q3).
Mass change of target milestone.
Futuring.
Mass move of bugs from Release Engineering:Future -> Release Engineering. See for more details.
(Closed during triage a few weeks ago, but no comment, so not sure why. Reopening while we figure out what we were thinking!) from irc, - not sure if this is still needed, because we now have email notifications for tryserver jobs? - we keep tryserver jobs for 14 days, so those buglinks will quickly break. Maybe thats ok for users of tryserver, but just to be clear.
I think that the more important aspect of this bug is adding the status of the try server run, rather than a download link. I can see a lot of value in having a permanent record of "did this patch pass on try?"
Yep, even if the link said "valid until m/d/y" it would still be helpful, because the tryserver run would be useful for people who are actively tracking that bug. This would require a bit more work though, such as waiting until all the try server builds for the patch were done before notifying/adding, or until a category of them was done (which would be great for email too, incidentally; "Build finished, success on Mac and Linux, failure on Windows" etc. instead of the dozen+ emails now). Anyway, could use a bit of design work to figure out what ideal tryserver/bug/etc. flow could look like.
Adding this to the bugs tracked by the try_enhancements bug. When we get to doing a road map for the next iteration of try improvements it will be visible there for review and prioritization.
Created attachment 500387 [details] [diff] [review] try bugposter v.1 I've added this to braindump in a try-related dir. Tested locally (connected to build-vpn) and have it working. Currently set POST_TO_BUGZILLA to false so as not to overpost to but as you can see, the bug posting works. I'll need to substitute the API info and add a local config with cltbld's bugzilla credentials. Also will need to broadcast to developers the ability to add a bug # and a '--post-to-bugzilla' flag to their push comment.
Created attachment 501450 [details] [diff] [review] try bugposter v.1 (with consistent 4 space indenting)
Comment on attachment 501450 [details] [diff] [review]
try bugposter v.1 (with consistent 4 space indenting)

Bit of a lengthy review here.

>diff --git a/try-related/bz_utils.py b/try-related/bz_utils.py
>new file mode 100644
>--- /dev/null
>+++ b/try-related/bz_utils.py

It may be worth considering putting bz_utils.py into tools, or somewhere where other scripts can make use of it.

>diff --git a/try-related/e2e.html b/try-related/e2e.html
>new file mode 100644
>--- /dev/null
>+++ b/try-related/e2e.html

These two test files should go into a 'tests' directory or something.

>+import re, os, ast, urllib
>+try:
>+    import simplejson as json
>+except ImportError:
>+    import json
>+import bz_utils as BugzillaAPI

Why rename the module here?

>+
>+POST_TO_BUGZILLA = False
>+
>+### BUGZILLA API GLOBALS ###
>+### TODO: Read this from local config
>+
>+        if os.path.isfile(filename):
>+            print "Located existing cache file, reading contents"
>+            with open(filename, 'r') as f:
>+                for line in f.readlines():
>+                    try:
>+                        (build,info) = line.split(':',1)
>+                        loadedCache[build] = "%s" % ast.literal_eval(info.strip())
>+                    except:
>+                        errors += "%s is not in correct format and cannot be read into the list" % line
>+        else:
>+            print "No existing cache file, one will be created to track incomplete builds"
>+        return (loadedCache,errors)

Inconsistent indentation here in this function. Also, please use the logging module instead of print statements. You also need spaces after ',' in a few of your tuples.

>+
>+    def UpdateCache(self, loadedCache, builds):
>+        # accepts a dictionary from cache, and the dictionary of current incomplete builds
>+        # returns a tuple of builds to be tracked in cache and the ones ready to post on bugzilla
>+        outgoing = {}
>+        bugmail = {}
>+        for c in loadedCache.keys():
>+            if c in builds.keys():
>+                outgoing[c] = loadedCache[c]
>+                print "Keeping %s in cache, still not complete" % c
>+            else:
>+                bugmail[c] = loadedCache[c]
>+                print "Added %s to bugmail list" % c
>+        for b in builds.keys():
>+            if b not in outgoing.keys():
>+                outgoing[b] = builds[b]
>+                print "Added new buildrun %s to outgoing cache list" % b
>+        return outgoing,bugmail

Inconsistent indentation, missing spaces, and print statements here too.

>+
>+    def WriteCache(self, filename, builds):
>+        results = None
>+        # write to file each revision:bug_number that has running/pending builds
>+        try:
>+            with open(filename, 'w') as f:
>+                for b in builds.keys():
>+                    f.write("%s:%s\n" % (b, builds[b]))
>+        except:

>+        e2e_report = self.GetBuildReport(e2e_url)
>+        if e2e_report.has_key('build_runs'):
>+            for buildrun in e2e_report['build_runs']:
>+                if e2e_report['build_runs'][buildrun]["is_complete"] == "no":
>+                    revision_report = self.GetBuildReport(revision_url, buildrun)
>+                    for r in revision_report["build_requests"]:
>+                        if bug_number is "":
>+                            bug_number = self.GetBugNumber(r["comments"])
>+                        else:
>+                            continue
>+                    builds[buildrun.encode("utf8", "replace")] = bug_number
>+        return builds

Indentation. Also, this looks like the first bug number detected will be applied to all incomplete revisions?

>+
>+    def BuildRunInfo(self, revision_report):
>+        # Accepts the json information for a revision
>+        # return a tuple containing the formatted info on the build run and the bug number from the pushlog comments if one is present:
>+        # "Try run for {revision} had {total_builds} build requests and {total_complete} completed. The results are:
>+        # success: {success_num}
>+        # warnings: {warning_num}
>+        #   * {list_of_warning_builders}
>+        # failed: {failure_num}
>+        #   * {list_of_failure_builders}
>+        # unknown: {unknown}
>+        #   * {list_unknown}
>+        # More information is available here: {link_to_logs? tbpl?}
>+
>+        if revision_report:
>+            if revision_report.has_key('complete'):
>+                complete = revision_report["complete"]
>+            total_builds = revision_report["total_build_requests"]
>+            success_num = 0
>+            warn_num = 0
>+            fail_num = 0
>+            unknown = 0
>+            list_warn = []
>+            list_fail = []
>+
>+            if revision_report.has_key('build_requests'):
>+                build_requests = revision_report["build_requests"]
>+                for build in build_requests:
>+                    result = build["results"]
>+                    bug_number = self.GetBugNumber(build["comments"], FLAG)
>+                    revision = build["revision"]
>+                    buildername = build["buildername"].encode("utf8", "replace")
>+                    if result == 0:
>+                        success_num += 1
>+                    elif result == 1:
>+                        warn_num += 1
>+                        list_warn.append(buildername)
>+                    elif result == 2:
>+                        fail_num += 1
>+                        list_fail.append(buildername)
>+                    else:
>+                        unknown += 1
>+                        list_unknown.append(buildername)

>+
>+    c = TryBugPoster()
>+    # gather incomplete build revisions in that report
>+    print "Gathering incomplete builds list from %s" % E2E_URL
>+    incomplete = c.IncompleteBuildRevisions(E2E_URL, REVISION_URL)
>+    print "Checking for an existing cache file"
>+    # check for a cache file
>+    loadedCache = c.LoadCache(FILENAME)
>+    # proceed if no errors
>+    if loadedCache[0] and not loadedCache[1]:
>+        # compare what's in cache to the new list to make two lists: what's done and what's incomplete
>+        print "Comparing cache to current list of incomplete builds"
>+        (newIncomplete,outgoing) = c.UpdateCache(loadedCache[0], incomplete)
>+    elif loadedCache[1]:
>+        print "Errors in loading cache: %s" % loadedCache[1]
>+        return 0
>+    else:
>+        print "Nothing in cache, continuing with only the new list of incomplete builds"
>+        newIncomplete = incomplete
>+        outgoing = None
>+
>+    # write to cache with the incomplete
>+    print "Writing results to %s" % FILENAME
>+    results = c.WriteCache(FILENAME, newIncomplete)
>+    if results:
>+        print results
>+
>+    # send bugmail for each revision with a bug number (and flag) in the outgoing list
>+    if outgoing:
>+        print "Posting bug comments for complete builds"
>+        ## need to do a per bug loop here if the outgoing revision has a bug number
>+        for revision in outgoing.keys():
>+            revision_report = c.GetBuildReport(REVISION_URL, revision)
>+            (comment, bug_number) = c.BuildRunInfo(revision_report)
>+            if bug_number:
>+                bugmail = c.PostBugComment(bug_number, comment)
>+                print "Posted to bug number %s for %s" % (bug_number, revision)
>+            else:
>+                print "No bug number provided for %s" % revision
>+    else:
>+        print "No outgoing list to post to for this run"
>+
>+    return 1

Use logging instead of print statements in this function, and here's where you would do the configuration file handling instead of using global variables.

>+
>+if __name__ == '__main__':
>+    results = RunTryBugPoster()
>+    if results:
>+        print "Run is complete without error"
>+    else:
>+        print "Run finished with errors"

If there are errors, the process should exit with non-zero status.

>+
>+#### FUTURE TODO ####
>+# check comments on a bug to make sure we are not putting up a duplicate comment?
>+# allow for an email option instead/as well and search try syntax for an --email flag
Created attachment 504991 [details] [diff] [review] try_bugposter v2 What's new: * Everything is now in tools/scripts/tryserver * Tests and files needed for testing are in a tests directory * No longer renaming the bz_utils module * Uses a configuration file instead of globals * Comments are docstrings, clearer explanation of each function * Uses more robust bug parser (added to bz_utils) * Consistent indentation * Using logging module instead of print statements * Added spaces after ',' in a few of the tuples * Added comment to explain how the first bug number detected (in a set of builds for a build_request) will be applied only to the revisions -- this is a response to the previous patch's feedback * If there are errors, the process exits with non-zero status.
My 2c, which may or may not be overkill: * Generate a simple html page with links to files/logs with build status of each. Optionally color coded. Bonus points for *why* there was an error/warning. * Upload this html page to the same directory as the files/logs * The bug comment would say something like "Try run ____: X successful Y warning Z failed."
The comment could just be a link to TBPL: * Without fully automated starring, a "permanent record" consisting of "40 successful 3 warning 0 failed" doesn't really tell me anything. I have to visit TBPL and click around to see if any of the failures were new. * It's easy to get from TBPL to hg.mozilla.org, to see exactly what was submitted. * Once bug 631448 is fixed, it will be easy to get from TBPL to a download link as well.
I want links directly to the logs for the failing jobs even if there is also a link to the TBPL summary for the push, because TBPL is still subject to odd failure modes along the lines of "gets stuck at 75% loaded forever", and is quite painfully slow if you go back too far. I kinda wish the list of candidate known failures that TBPL generates were in the output of showlog.cgi. I'd happily trade that feature for *everything* showlog.cgi currently does, by the way.
TBPL's single-push view has been reliable for me since. There should be no "pain from going back too far" now. TBPL doesn't deal well with Try resets, but I'd prefer we fixed that in TBPL than add mountains of spam to bugs as a workaround.
I hope you will understand why, given the track record, I do not want to rely on Tinderbox, TBPL, or anything related to them, for this information.
Created attachment 511880 [details] [diff] [review] try_bugposter v3 This script has been running in staging for a while and I've been checking the output. Added handling for 404 errors, also retries for both posting to bugzilla and getting the json content from cruncher. Right now the default behaviour is to look for '--post-to-bugzilla' in the syntax before trying to post to bugzilla.
This is now part of the larger project detailed here: and here: This should not only poll for completed try runs, but for anything with autoland-$bugnumber in the reason and post back to that bug For try syntax, this tools should allow someone to use the syntax to a) include list of the bug(s) that they would like their try results posted to and b) turn off email notifications
Status update: has my most recent work on this bug. I had tried to run it in production but quickly discovered a significant flaw in that the comments for a push *start* ok (bug number present and grabbed by the poller) but then once tests or talos show up the comment that is returned from the db is the more recent (and blank) comment from those builders which means that at the time of trying to send the results back to the bug there is no bug to report to. I have made notes to myself and when I return from vacation on July 5th I will get back to writing tests for this and working out a solution.
Currently running on cruncher - have first bug post up:
Created attachment 546664 [details] [diff] [review] adds try_bugposting to try syntax options in trychooser webpage
Comment on attachment 546664 [details] [diff] [review] adds try_bugposting to try syntax options in trychooser webpage ^5
Comment on attachment 546664 [details] [diff] [review] adds try_bugposting to try syntax options in trychooser webpage
This is currently running as a cronjob on cruncher: */10 * * * * source /home/lsblakk/autoland/bin/activate && cd /home/lsblakk/autoland/tools/scripts/autoland && time python schedulerDBpoller.py -b try -f -c schedulerdb_config.ini which is running at the moment. While this is not "fixed", the functionality has been added to the try syntax helper page and developers have been informed through Twitter, blog post (planet), IRC, and as a choice on the syntax helper webpage that they can try it out. I will continue to monitor the performance of this script while also continuing to develop it as part of bug 657828. When that code lands in our repos I will circle back to this to close..
(In reply to comment #27) That's an option, and one I considered in the initial design. What can happen is that a ton of stuff fails and the bug comment gets very long. One idea would be to add another flag for putting the buildernames (and therefore which test/talos/build failed) into the comment; then if that comment is long - you asked for it :) Or perhaps a threshold at which putting the names in the comment is not allowed? Some try runs can generate 200+ builder results. Other ideas?
(In reply to comment #28) > Or perhaps a threshold at which putting the names in the comment is not > allowed? Some try runs can generate 200+ builder results. Other ideas? Just cut it off at N lines, for some reasonable N, say 25? And I'm much less interested in "Linux debug mochitest-1 failed" than I am in "test-foo-api.html failed."
>?
(In reply to comment #30) > >? I don't think the buildername is all that useful ... but others may disagree.
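The truncation being discussed above is simple to do; here is a hypothetical sketch of it (function and variable names are illustrative only, this is not the actual tryserver code):

```python
def summarize_failures(failed_builders, max_lines=25):
    """Keep a bug comment short: list at most max_lines failing builders,
    then note how many more were omitted."""
    shown = failed_builders[:max_lines]
    lines = ["  * %s" % name for name in shown]
    hidden = len(failed_builders) - len(shown)
    if hidden > 0:
        lines.append("  ... and %d more (see the full report)" % hidden)
    return "\n".join(lines)


# A run with 30 failing builders, cut off at 3 lines:
print(summarize_failures(["builder-%d" % i for i in range(30)], max_lines=3))
```

The cutoff keeps the comment readable even for try runs that generate 200+ builder results, at the cost of hiding the tail behind a link to the full report.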
Examples from the 'wild':.) The feature Kyle requested would sure be nice, but would not remove the need for direct links to the logs.
Some of my Try builds are actually so other people can download a build and test the changes - eg, someone from UX (they'd like for us to do Try builds more often), or someone with a bug I can't reproduce locally, etc. In those cases, it's far more friendly to have a link to the download directory directly in the bug comment, rather than having to go through TBPL (where the link well is hidden).
> In those cases, it's far more friendly to have a link to the download
> directory directly in the bug comment

Definitely easy to add that into the comment string, I can include it in the next round of improvements.
(In reply to comment #33) >.) so it sounds like two bugs here - one to do some non-trivial log parsing (which can be filed as a blocking bug to the try_enhancements tracking bug) and then something for that TBPL loading issue, I'm not sure what the cause is but sounds very frustrating.
For log parsing, you might look at or talk to :jgriffin, though I agree that this should probably go as a separate bug.
> In those cases, it's far more friendly to have a link to the download > directory directly in the bug comment, rather than having to go through TBPL > (where the link well is hidden). This is now live, so any future use of the --post-to-bugzilla flag will give an ftp link in the bug comment (this link will be broken after 4 days though until there's a fix on the moving of old try builds to /old is fixed)
Could we make the syntax here more human-author friendly? Something like: --bug 675107 is a lot easier to remember and write than: --post-to-bugzilla Bug 675107
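Either spelling is easy to recognize in the push comment; a hypothetical parser accepting both forms might look like this (the regex and function name are illustrative, not the shipped schedulerDBpoller code):

```python
import re

# Matches the current "--post-to-bugzilla Bug 675107" as well as the
# proposed shorter "--bug 675107" form.
BUG_RE = re.compile(r"--(?:post-to-bugzilla\s+bug|bug)\s+(\d+)", re.IGNORECASE)


def extract_bug_number(push_comment):
    """Return the bug number from a try push comment, or None."""
    m = BUG_RE.search(push_comment)
    return int(m.group(1)) if m else None


print(extract_bug_number("try: -b do -p all --post-to-bugzilla Bug 675107"))  # 675107
print(extract_bug_number("try: -b do -p all --bug 675107"))                   # 675107
```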
Try run for 21417b3e45d9 is complete. Detailed breakdown of the results available here:
Results:
    success: 151
    warnings: 18
    failure: 3
Total buildrequests: 172
Builds available at
I tried checking the checkbox and then entering a number but it doesn't fill in the number as you type. You have to uncheck and then recheck the box to get the bug number to be filled in in the syntax box.
(In reply to Robert Longson from comment #41) > I tried checking the checkbox and then entering a number but it doesn't fill > in the number as you type. You have to uncheck and then recheck the box to > get the bug number to be filled in in the syntax box. The code for the trychooser page lives here: If you have a better way to get the number into the syntax box, please submit a patch. What's there now is my best attempt at javascript as someone with very little experience with it so it's just "good enough".
I expected to result in a post to the bug, but it didn't; why it didn't isn't clear to me.
(In reply to David Baron [:dbaron] from comment #43) > I expected to result in > a post to the bug, but it didn't; why it didn't isn't clear to me. Logs show the bug number was not picked up in the bug_commenting script, it's not obvious to me as to why. I will try to reproduce this in tests as this script is still in active development. thanks for letting me know about this anomaly.
Created attachment 579544 [details] [diff] [review] big patch for posting to bug when builds are complete This is my first attempt to parcel out some of the review process for what is currently running and posting to bugs (and has been since mid-August) but it's also part of the larger Autolanding system modules. So I stripped out everything that wasn't needed by schedulerDBpoller to run the tests and to work on cruncher. This script is currently running as a cronjob on cruncher and posts to bugs, you can see the bugs it has posted to by looking at /home/lsblakk/autoland/tools-new/scripts/autoland/postedbugs.log I'd like to find a way to land the schedulerDBpoller-relevant stuff so that future landings for Autoland components (Tracking bug 657828) will be slightly easier. If it helps with the binaries for the .sqlite files, this same patch is also up at my user repo:
Created attachment 580595 [details] [diff] [review] big patch for posting to bug when builds are complete Most recent version, with more fixes and cleanups than previously. This is still running live on cruncher and posting to bugs - logs are viewable in /home/lsblakk/autoland-2.6/tools/scripts/autoland
I had to disable the cronjob on cruncher for repeated comments, see bug 629668 comment #29 through to 34. Also bug 534963 and bug 704855 since Friday night, according to .../scripts/autoland/postedbugs.log.
5 identical comments : to 16, 05-10 seconds apart.
This is now landed as part of autoland: and this functionality has been available to developers since August 2011. Resolving. Docs:
I pushed a job to Try and upon completion the results weren't posted to the bug. Here's the url:
Source: https://bugzilla.mozilla.org/show_bug.cgi?id=430942
|
IRC log of ws-ra on 2011-03-22
Timestamps are in UTC.
19:27:41 [RRSAgent]
RRSAgent has joined #ws-ra
19:27:41 [RRSAgent]
logging to
19:27:43 [trackbot]
RRSAgent, make logs public
19:27:43 [Zakim]
Zakim has joined #ws-ra
19:27:45 [trackbot]
Zakim, this will be WSRA
19:27:45 [Zakim]
ok, trackbot; I see WS_WSRA()3:30PM scheduled to start in 3 minutes
19:27:46 [trackbot]
Meeting: Web Services Resource Access Working Group Teleconference
19:27:46 [trackbot]
Date: 22 March 2011
19:28:11 [Zakim]
WS_WSRA()3:30PM has now started
19:28:18 [Zakim]
+Bob_Freund
19:28:29 [Ashok]
Ashok has joined #ws-ra
19:28:37 [dug]
dug has joined #ws-ra
19:28:50 [Ram]
Ram has joined #ws-ra
19:30:20 [Zakim]
+[Microsoft]
19:30:29 [Zakim]
+Doug_Davis
19:31:22 [Zakim]
+Ashok_Malhotra
19:32:06 [Zakim]
+Yves
19:32:20 [Zakim]
+ +1.908.696.aaaa
19:34:29 [Ashok]
scribenick: Ashok
19:34:33 [dug]
agenda:
19:34:49 [Ashok]
Topic: Agenda bashing
19:35:00 [Ashok]
No changes to agenda
19:35:34 [dug]
19:35:44 [Ashok]
Topic: Approval of minutes for March 1
19:35:54 [Ashok]
19:36:15 [Ashok]
RESOLUTION: Minutes approved w/o objection
19:36:47 [Ashok]
Topic: Issue-12112 Enum: MaxCharacters behavior is ambiguous
-Burkhart
19:37:01 [Ashok]
Ram: Proposal looks good!
19:37:31 [Ashok]
RESOLUTION: Issue 12112 resolved as proposed in bugzilla
19:37:56 [Ashok]
Topic: Issue-11776 Enum: inconsistency between Items and Elements
-Davis
19:38:24 [Ashok]
No objections to proposal
19:38:47 [Ashok]
RESOLUTION: Issue 11776 resolved as proposed in bugzilla
19:39:30 [Ashok]
Topic: Issue-12093 MEX: PutMetadata and dialects/types that don't define identifiers
19:39:33 [dug]
proposal:
19:39:51 [Ashok]
Dug has proposal in bugzilla -- comment 3
19:43:41 [Ashok]
No objections to proposal
19:43:52 [li]
li has joined #ws-ra
19:44:10 [Ashok]
RESOLUTION: Issue 120093 resolved as proposed in bugzilla
19:44:29 [Ashok]
... comment 3 by Doug Davis
19:44:43 [Ashok]
Bob: We are at zero issues
19:45:21 [Ashok]
Bob: Talks about CR
19:45:46 [dug]
ouch
19:45:53 [Ashok]
... I am unavailable for the next 2 weeks
19:46:09 [Ashok]
... can have a call next week if people wish
19:46:46 [dug]
LOL
19:47:30 [Ashok]
Bob: We need docs with the final changes so folks can look at them
19:47:51 [Ashok]
... minimum CR period is 3 weeks
19:48:00 [Ashok]
... sound right to people?
19:48:15 [Ashok]
Yves: Depends on how long it takes to gather results
19:50:02 [Ashok]
Bob: Discusses testing required
19:50:09 [dug]
19:50:36 [Ashok]
That's the status page for tests
19:51:18 [dug]
+q
19:51:52 [dug]
+q
19:52:40 [Ashok]
Dug: Ram sent mail about one of the tests having to change based on ENUM updates
19:52:43 [dug]
19:53:41 [Zakim]
+Gilbert_Pilz
19:54:05 [Ashok]
Doug: I can update the tests
19:54:09 [dug]
-q
19:54:23 [asoldano]
asoldano has joined #ws-ra
19:54:33 [Ram]
q+
19:55:23 [Bob_]
ack ram
19:57:04 [Zakim]
+ +39.331.574.aabb
19:57:15 [dug]
is that dave?
19:57:24 [asoldano]
Zakim, aabb is asoldano
19:57:24 [Zakim]
+asoldano; got it
19:58:07 [asoldano]
sorry, cheated again by EU and US chaning time in different weeks
19:58:34 [Ashok]
Discussion of checking test coverage
20:00:02 [Ashok]
Bob: We will do nxn tests pairwise and remotely
20:00:21 [Ashok]
Li: Update your address in the table
20:01:03 [Ashok]
Gil: I will have the Oracle endpoint public by end of the week
20:02:01 [Ashok]
... Oracle client implementation will take a couple of weeks
20:03:16 [Zakim]
+ +1.831.713.aacc
20:03:27 [Zakim]
-Gilbert_Pilz
20:05:05 [gpilz]
gpilz has joined #ws-ra
20:05:46 [Ashok]
Li: I can do that in a couple of days
20:05:47 [Ashok]
Bob: Li can you check test coverage?
20:05:59 [Ashok]
Li: There may be one change needed
20:06:24 [Ashok]
Doug: I will send out note when the docs are updated ... 3/24
20:06:55 [Ashok]
Bob: Target publication date the week of April 4
20:07:08 [Ashok]
Yves: Will there be a change in the namespaces?
20:07:56 [Ashok]
Bob: Change namespace date to 3/29
20:08:43 [Ashok]
... estimated publication date April 15 or so
20:09:08 [Ashok]
... minimum CR period brings us out to May 6
20:09:20 [Ashok]
... there are no features at risk
20:10:53 [Ashok]
Bob: Is this a good target?
20:10:59 [Ashok]
No objections
20:11:29 [Ashok]
Bob: We will meet next week and tighten up some of the dates
20:11:30 [Dug1]
Dug1 has joined #ws-ra
20:11:37 [dug]
testing
20:12:10 [Ashok]
... we need a meeting next week to have a vote on the docs
20:12:45 [Ashok]
Ram: I will not be able to attend
20:12:55 [Ashok]
Bob: Shall we do an email ballot
20:13:19 [Ashok]
... if we get the docs on 3/24 can we have a ballot on 3/29?
20:14:29 [dug]
test
20:15:03 [Bob_]
action: Yves set up wbs for cr ballot
20:15:03 [trackbot]
Created ACTION-178 - Set up wbs for cr ballot [on Yves Lafon - due 2011-03-29].
20:15:04 [Ashok]
Bob: Yves, could you set up a ballot ... yes/no on each document
20:15:30 [Bob_]
conclude by March 19 end of business
20:16:24 [asoldano]
bye
20:16:25 [Zakim]
-[Microsoft]
20:16:26 [Zakim]
- +1.908.696.aaaa
20:16:27 [Zakim]
-Yves
20:16:29 [Zakim]
-Bob_Freund
20:16:29 [Zakim]
- +1.831.713.aacc
20:16:29 [Ashok]
Adjourned
20:16:30 [Zakim]
-asoldano
20:16:33 [Bob_]
rrsagent, generate minutes
20:16:33 [RRSAgent]
I have made the request to generate
Bob_
20:16:34 [Zakim]
-Doug_Davis
20:16:51 [Zakim]
-Ashok_Malhotra
20:16:53 [Zakim]
WS_WSRA()3:30PM has ended
20:16:54 [Zakim]
Attendees were Bob_Freund, [Microsoft], Doug_Davis, Ashok_Malhotra, Yves, +1.908.696.aaaa, Gilbert_Pilz, +39.331.574.aabb, asoldano, +1.831.713.aacc
20:36:47 [gpilz]
gpilz has left #ws-ra
22:18:57 [Zakim]
Zakim has left #ws-ra
Source: http://www.w3.org/2011/03/22-ws-ra-irc
|
F.J.K. wrote:
> Boris wrote:
>> I had created a small test case but unfortunately it worked. The
>> project I port is too large - it would take some time to track this
>> down which I don't have currently.
>
> Is the bug present in GCC 4.x? in mainline?

I don't know, I didn't try yet. For now I'm happy if I can work around the bug and finally release the software on Linux.

> Is there any reason you can't just do? (Real question, not just
> rhetorical :-)
>
> #ifdef __GNUC__ && __GNU__<4
> #define dynamic_cast static_cast
> #endif

Yes, I use something like this now:

#if defined(__GNUC__)
const level1 &l1 = *static_cast<const level1*>(&base);
#else
const level1 &l1 = dynamic_cast<const level1&>(base);
#endif

> From what I take from the discussion so far, you have assured your
> code is semantically correct (See Pete Beckers guru-posting).

At this I'm as much sure as you can be with C++ code. ;-) What's 100% for sure though is that everything works when linked statically.

Boris
Source: http://lists.gnu.org/archive/html/help-gplusplus/2006-10/msg00033.html
|
How to connect to DB in another subdomain
Source: http://www.thecodingforums.com/threads/how-to-connect-to-db-in-another-subdomain.803022/
|
More content on Excel, MySQL, and Python is shared on the official account "The beauty of data analysis and statistics", where you can also obtain four original handbooks: a Python office-automation manual, a complete Excel PivotTable manual, a Python basics reference, and a MySQL basics reference.
Yesterday I published an article that was very well received: "67 pandas functions to perfectly solve data processing — use them immediately!"
Striking while the iron is hot, Mr. Huang will explain 16 more pandas functions today. They are really easy to use!
Do you ever feel that the data in your hands is always messy?
As a data analyst, data cleaning is an essential step. Because the data is often so messy, it can take a great deal of our time to deal with. Mastering more data-cleaning methods will therefore multiply your ability many times over.
Based on this, this article describes the super easy-to-use vectorized string functions under the str accessor in pandas. After learning them, you will instantly feel your data-cleaning ability improve.
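As a quick taste of what "vectorized" means here, a single `.str` call cleans every row at once, with no explicit Python loop (a minimal sketch using made-up names):

```python
import pandas as pd

# A tiny series with the kind of mess this article is about
s = pd.Series([" Classmate Huang", "Huang Zhizun ", "Huang Laoxie"])

# One vectorized call strips every element at once -- no loop needed
cleaned = s.str.strip()

print(cleaned.tolist())  # ['Classmate Huang', 'Huang Zhizun', 'Huang Laoxie']
```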
1 data set, 16 Pandas functions
The data set was made up by Mr. Huang specifically to help you learn. It looks like this:
import pandas as pd

df = {'full name': [' Classmate Huang', 'Huang Zhizun', 'Huang Laoxie ', 'Da Mei Chen', 'Sun Shangxiang'],
      'English name': ['Huang tong_xue', 'huang zhi_zun', 'Huang Lao_xie', 'Chen Da_mei', 'sun shang_xiang'],
      'Gender': ['male', 'women', 'men', 'female', 'male'],
      'ID': ['463895200003128433', '429475199912122345', '420934199110102311', '431085200005230122', '420953199509082345'],
      'height': ['mid:175_good', 'low:165_bad', 'low:159_bad', 'high:180_verygood', 'low:172_bad'],
      'Home address': ['Guangshui, Hubei', 'Xinyang, Henan', 'Guangxi Guilin', 'Hubei Xiaogan', 'Guangzhou, Guangdong'],
      'Telephone number': ['13434813546', '19748672895', '16728613064', '14561586431', '19384683910'],
      'income': ['1.1 ten thousand', '8.5 thousand', '0.9 ten thousand', '6.5 thousand', '2.0 ten thousand']}
df = pd.DataFrame(df)
df
The results are as follows:
Observing the data above, you can see the data set is messy. Next, we use 16 pandas functions to clean it up.
① cat function: used for string splicing
df["full name"].str.cat(df["Home address"],sep='-'*3)
The results are as follows:
② contains: determines whether a string contains a given substring
df["Home address"].str.contains("wide")
The results are as follows:
③ startswith / endswith: judges whether a string starts / ends with a given character
# "Huang Wei" in the first line begins with a space df["full name"].str.startswith("yellow") df["English name"].str.endswith("e")
The results are as follows:
④ count: counts the number of occurrences of a given character in the string
df["Telephone number"].str.count("3")
The results are as follows:
⑤ get: gets the element at the specified position
df["full name"].str.get(-1) df["height"].str.split(":") df["height"].str.split(":").str.get(0)
The results are as follows:
⑥ len: calculate string length
df["Gender"].str.len()
The results are as follows:
⑦ upper/lower: English case conversion
df["English name"].str.upper() df["English name"].str.lower()
The results are as follows:
⑧ pad+side parameter / center: adds the given character to the left, right or left and right sides of the string
df["Home address"].str.pad(10,fillchar="*") # Equivalent to ljust() df["Home address"].str.pad(10,side="right",fillchar="*") # Equivalent to rjust() df["Home address"].str.center(10,fillchar="*")
The results are as follows:
⑨ Repeat: repeat the string several times
df["Gender"].str.repeat(3)
The results are as follows:
⑩ slice_replace: replaces the character at the specified position with the given string
df["Telephone number"].str.slice_replace(4,8,"*"*4)
The results are as follows:
⑪ replace: replaces a given substring with another string
df["height"].str.replace(":","-")
The results are as follows:
⑫ replace: can also accept a regular expression as the pattern to replace
- Passing a regular expression into replace makes it very convenient to use;
- Don't worry about whether the following example is useful in itself. You just need to see how easy regex-based data cleaning is;
df["income"].str.replace(r"\d+\.\d+", "regular", regex=True)
The results are as follows:
⑬ split method + expand parameter: with join method, the function is very powerful
# Common usage
df["height"].str.split(":")
# split method with the expand parameter
df[["Height description", "final height"]] = df["height"].str.split(":", expand=True)
df
# split method combined with the join method
df["height"].str.split(":").str.join("?"*5)
The results are as follows:
⑭ strip/rstrip/lstrip: remove blank characters and line breaks
df["full name"].str.len() df["full name"] = df["full name"].str.strip() df["full name"].str.len()
The results are as follows:
⑮ findall: use regular expressions to match strings and return a list of search results
- findall uses regular expressions to clean data. It's really fragrant!
df["height"] df["height"].str.findall("[a-zA-Z]+")
The results are as follows:
⑯ extract/extractall: accept a regular expression and extract the matching strings (be sure to wrap the part you want in a capture group, i.e., parentheses)
df["height"].str.extract("([a-zA-Z]+)") # Extract the composite index from extractall df["height"].str.extractall("([a-zA-Z]+)") # extract with expand parameter df["height"].str.extract("([a-zA-Z]+).*?([a-zA-Z]+)",expand=True)
The results are as follows:
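Putting several of these functions together, a short cleaning pipeline over a data set shaped like the one above might look as follows (a sketch using a two-row subset of the made-up data):

```python
import pandas as pd

df = pd.DataFrame({
    "full name": [" Classmate Huang", "Huang Zhizun"],
    "Telephone number": ["13434813546", "19748672895"],
    "height": ["mid:175_good", "low:165_bad"],
})

# strip stray whitespace from names
df["full name"] = df["full name"].str.strip()

# mask the middle four digits of each phone number
df["Telephone number"] = df["Telephone number"].str.slice_replace(4, 8, "****")

# split the height column into two proper columns
df[["Height description", "final height"]] = df["height"].str.split(":", expand=True)

print(df["Telephone number"].tolist())  # ['1343****546', '1974****895']
```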
That's all for today's article from Mr. Huang. I hope it can be helpful to you.
https://programmer.help/blogs/619f2222b69fb.html
Building a Reverse Image Search System Based on Milvus and VGG
See how to build a reverse image search system based on Milvus and VGG.
Introduction
When you hear "search by image," do you first think of the reverse image search function of search engines such as Google and Baidu? In fact, you can build your own image search system: build your own picture library, select a picture to search for in that library, and get back several pictures similar to it.
As a similarity search engine for massive feature vectors, Milvus aims to help analyze increasingly large unstructured data and discover the great value behind it. In order to allow Milvus to be applied to the scene of similar image retrieval, we designed a reverse image search system based on Milvus and image feature extraction model VGG.
This article is divided into the following parts:
- Data preparation: introduces the data support of the system.
- System overview: presents the overall system architecture.
- VGG model: introduces the structure, features, block structure and weight parameters.
- API introduction: describes the system's five APIs and how each works.
- Image construction: explains how to build client and server docker images from source code.
- System deployment: shows how to set up the system in three steps.
- Interface display: display the system GUI.
1. Data Preparation
This article uses the PASCAL VOC image data set as an example to build an end-to-end solution for reverse image search. The data set contains 17,125 images covering 20 categories: Person; Animal (bird, cat, cow, dog, horse, sheep); Vehicle (airplane, bicycle, boat, bus, car, motorcycle, train); Indoor (bottle, chair, dining table, potted plant, sofa, TV). The data set size is approximately 2GB. You can download the training/validation data through this link:
Note: You can also use other image data set. The currently supported image formats are .jpg format and .png format.
2. System Overview
In order to allow users to interact on web pages, we have adopted a C/S architecture. The WebClient is responsible for receiving the user’s request and sending it to the webserver. The webserver, after receiving the HTTP request from the WebClient, performs the operation and returns the results to the WebClient.
The webserver is mainly composed of two parts, the image feature extraction model VGG and the vector search engine Milvus. The VGG model converts images into vectors, and Milvus is responsible for storing vectors and performing similar vector retrieval. The architecture of the webserver is shown below:
3. VGG Model
VGGNet was proposed by researchers from the University of Oxford's Visual Geometry Group and Google DeepMind. It is the winner of the localization task and the 1st runner-up of the classification task in ILSVRC-2014. Its outstanding contribution is to prove that using small (3×3) convolutions and increasing the network depth can effectively improve the model's performance. VGGNet is highly scalable and generalizes very well when migrated to other data sets. The VGG model outperforms GoogleNet on multiple transfer learning tasks, and it is the preferred algorithm for extracting features from images with a CNN. Therefore, VGG is selected as the deep learning model in this solution.
VGGNet explored the relationship between the depth of a CNN and its performance. By repeatedly stacking 3×3 small convolution kernels and 2×2 maximum pooling layers, VGGNet successfully constructed CNNs with a depth of 16-19 layers. The VGG16 model provided by Keras's application module (keras.applications) is used in this solution.
(1) VGG16 Structure
VGG16 contains 13 Convolutional Layers, 3 Fully Connected Layers, and 5 Pooling Layers. Among them, the convolutional layer and the fully connected layer have weight coefficients, so they are also called weighting layers. The total number of weighting layers is 13 + 3 = 16, which explains why the structure is called VGG16. (The pooling layer does not involve weights, so it does not belong to the weighting layer and is not counted).
(2) VGG16 Features
- The convolutional layers all use the same convolution kernel parameters.
- All pooling layers use the same pooling kernel parameters.
- The model is constructed by stacking several convolutional layers and pooling layers, which is relatively easy to form a deeper network structure.
(3) VGG16 Block Structure
The convolution layers and pooling layers of VGG16 can be divided into different blocks, numbered Block1 ~ Block5 from front to back. Each block contains several convolutional layers and one pooling layer. For example, Block2 contains 2 convolutional layers (conv3-128) and 1 pooling layer (maxpool). Within the same block, the number of channels of the convolutional layers is the same. According to the VGG16 structure diagram given below, the input image of VGG16 is 224x224x3. During the process, the number of channels doubles, from 64 to 128, then to 256, until finally reaching 512, where it no longer changes. The height and width of the feature map halve from 224 → 112 → 56 → 28 → 14 → 7.
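The halving/doubling pattern above can be checked with a few lines of arithmetic, no Keras required (a quick sketch):

```python
# Spatial size halves after each of VGG16's five blocks (2x2 max pool),
# while the channel count runs 64 -> 128 -> 256 -> 512 -> 512.
channels_per_block = [64, 128, 256, 512, 512]

size = 224
sizes = []
for _ in channels_per_block:
    size //= 2            # the max pool at the end of each block
    sizes.append(size)

print(sizes)  # [112, 56, 28, 14, 7]
```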
(4) Weight Parameter
VGG has a simple structure, but it contains a large number of weights, reaching 138,357,544 parameters. These parameters include convolution kernel weights and fully connected layer weights. Therefore, it has a high fitting ability.
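That total can be reproduced from the architecture alone: each 3×3 conv layer has (3·3·in + 1)·out weights (the +1 is the bias) and each fully connected layer has (in + 1)·out. A quick sanity check:

```python
# Conv layers of VGG16 as (in_channels, out_channels); 13 in total
convs = [(3, 64), (64, 64),
         (64, 128), (128, 128),
         (128, 256), (256, 256), (256, 256),
         (256, 512), (512, 512), (512, 512),
         (512, 512), (512, 512), (512, 512)]

# Fully connected layers as (in_features, out_features); 3 in total
fcs = [(7 * 7 * 512, 4096), (4096, 4096), (4096, 1000)]

conv_params = sum((3 * 3 * cin + 1) * cout for cin, cout in convs)
fc_params = sum((fin + 1) * fout for fin, fout in fcs)

print(conv_params + fc_params)  # 138357544
```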
4. API Introduction
The webserver of the entire system provides five APIs that correspond to train, process, count, search, and delete operations. Users can perform image loading, load progress query, Milvus vector number query, image retrieval, and Milvus table deletion. These five APIs cover all the basic functions of the reverse image search system. The rest of the article will explain in detail each of these functions.
(1) Train
The parameters of the train API are shown in the following table:
Method | Name | Type
POST | File | String
Before performing similar image retrieval, you need to load the image library into Milvus, and then call the train API to pass the path of the image to the system. Because Milvus only supports the retrieval of vector data, it is necessary to convert the images to feature vectors. The conversion process is mainly achieved by using Python to call the VGG model:
from preprocessor.vggnet import VGGNet
norm_feat = model.vgg_extract_feat(img_path)
After obtaining the feature vectors of the images, import these vectors into Milvus using Milvus’s insert_vectors interface:
from indexer.index import milvus_client, insert_vectors
status, ids = insert_vectors(index_client, table_name, vectors)
After importing these feature vectors into Milvus, Milvus will assign a unique id to each vector. In order to better find the image based on the vector id during subsequent retrieval, you need to save the relationship between the vector ids and the corresponding images:
from diskcache import Cache
for i in range(len(names)):
    cache[ids[i]] = names[i]
(2) Process
The method of the process API is GET, and no other parameters need to be passed in the call. The process API can be called to view the progress of image loading, for example, the number of converted images that have been loaded and the total number of images in the incoming path.
(3) Count
The count API’s method is POST, and no other parameters need to be passed in the call. The count API can be called to view the total number of vectors in the current Milvus. Each vector is converted from an image.
(4) Search
The parameters of the search API are shown in the following table:
Method | Num | File
POST | Topk (int) | Image File
Before importing the image you want to query into the system, call the VGG model to convert the images to vectors first:
from preprocessor.vggnet import VGGNet
norm_feat = model.vgg_extract_feat(img_path)
After obtaining the query vectors, call Milvus’s search_vectors interface for similar vector search:
from milvus import Milvus, IndexType, MetricType, Status
status, results = client.search_vectors(table_name=table_name, query_records=vectors, top_k=top_k, nprobe=16)
After getting the result vector id, the corresponding image name can be retrieved according to the correspondence between the previously stored vector ids and the image names:
from diskcache import Cache

def query_name_from_ids(vids):
    res = []
    cache = Cache(default_cache_dir)
    for i in vids:
        if i in cache:
            res.append(cache[i])
    return res
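The two cache snippets form a simple round trip: store a name under each vector id at load time, then look the names back up at search time. A minimal stand-in using a plain dict instead of diskcache (the ids and file names here are made up) illustrates the flow:

```python
# Plain-dict stand-in for the diskcache-based id -> image-name mapping
cache = {}

# At load time: suppose Milvus returned these ids for the inserted vectors
ids = [101, 102, 103]
names = ["cat.jpg", "dog.jpg", "bus.jpg"]
for i in range(len(names)):
    cache[ids[i]] = names[i]

# At search time: map result ids back to image names, skipping unknown ids
def query_name_from_ids(vids):
    return [cache[i] for i in vids if i in cache]

print(query_name_from_ids([103, 101, 999]))  # ['bus.jpg', 'cat.jpg']
```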
(5) Delete
The delete API’s method is POST, and no additional parameters need to be passed in the call. The delete API can be used to delete the tables in Milvus and clean up the previously imported vector data.
5. Docker Image Build
(1) Build pic-search-webserver Image
First, pull the Milvus bootcamp code, then use the Dockerfile we provided to build the image of the webserver:
$ git clone
$ cd bootcamp/solutions/pic_search/webserver
# Build image
$ docker build -t pic-search-webserver .
# View the generated image
$ docker images | grep pic-search-webserver
Of course, you can also directly use the image we uploaded to dockerhub:
$ docker pull milvusbootcamp/pic-search-webserver:0.1.0
(2) Build pic-search-webclient Image
First pull the Milvus bootcamp code, then use the Dockerfile we provided to build the image of webclient:
$ git clone
$ cd bootcamp/solutions/pic_search/webclient
# Build image
$ docker build -t pic-search-webclient .
# View the generated image
$ docker images | grep pic-search-webclient
Of course, you can also directly use the image we uploaded to dockerhub:
$ docker pull milvusbootcamp/pic-search-webclient:0.1.0
6. System Deployment
We provide GPU deployment schemes and CPU deployment schemes, and users can choose for themselves. The detailed deployment process is available through this link:
Step 1- Start Milvus Docker
For detailed steps, please refer to the link:
Step 2- Start pic-search-webserver Docker
$ docker run -d --name zilliz_search_images_demo \
    -v IMAGE_PATH1:/tmp/pic1 \
    -v IMAGE_PATH2:/tmp/pic2 \
    -p 35000:5000 \
    -e "DATA_PATH=/tmp/images-data" \
    -e "MILVUS_HOST=192.168.1.123" \
    milvusbootcamp/pic-search-webserver:0.1.0
Step 3- Start pic-search-webclient Docker
$ docker run --name zilliz_search_images_demo_web \
    -d --rm -p 8001:80 \
    -e API_URL=http://192.168.1.123:35000 \
    milvusbootcamp/pic-search-webclient:0.1.0
7. Interface Display
When the above deployment procedures are completed, enter "localhost:8001" in the browser to access the reverse image search interface.
Fill in the path of the images, and wait until all the images are converted to vectors. Load the vectors into Milvus, and you can get started with your image search:
Conclusion
This article demonstrates how to use Milvus and VGG to build a reverse image search system. Milvus is compatible with various deep learning platforms, and searches over billions of vectors take only milliseconds. You can explore more AI applications with Milvus!
If you have any suggestions or comments, you can raise an issue on our GitHub repo or contact us on the Slack community.
- Milvus source code:
- Milvus official website:
- Milvus Bootcamp:
- Milvus Slack community:
For more information about the VGG model, please visit:
- VGG official website:
- VGG GitHub:
https://dzone.com/articles/building-a-reverse-image-search-system-based-on-mi
Up to [cvs.NetBSD.org] / pkgsrc / devel / p5-Test-CleanNamespaces
Request diff between arbitrary revisions
Default branch: MAIN
Revision 1.9 / (download) - annotate - [select for diffs], Wed Jan 16 01:02:39 2019 UTC (13 months ago) by wen
Branch: MAIN
Changes since 1.8: +5 -5 lines
Diff to previous 1.8 (colored)
Update to 0.24 Upstream changes: 0.24 2018-12-09 20:20:01Z - fix detection of constant subs on some platforms for perls [5.10,5.20)
Revision 1.8 / (download) - annotate - [select for diffs], Tue Jul 10 08:16:58 2018 UTC (19 months, 1 week ago) by wen
Branch: MAIN
CVS Tags: pkgsrc-2018Q4-base, pkgsrc-2018Q4, pkgsrc-2018Q3-base, pkgsrc-2018Q3
Changes since 1.7: +5 -5 lines
Diff to previous 1.7 (colored)
Update to 0.23 Upstream changes: 0.23 2018-06-26 00:00:13Z - properly skip potentially-problematic tests when needed, due to circular dependencies between Moose and Test::CleanNamespaces (RT#125678)
Revision 1.7 / (download) - annotate - [select for diffs], Fri Aug 19 06:37:56 2016 UTC (3 years, 6 months ago) by wen
Branch: MAIN
Changes since 1.6: +5 -5 lines
Diff to previous 1.6 (colored)
Update to 0.22 Add missing BUILD_DEPENDS Upstream changes: 0.22 2016-08-19 03:45:32Z - properly find the list of modules to test (regression since 0.19)
Revision 1.6 / (download) - annotate - [select for diffs], Fri Aug 19 04:27:49 2016 UTC (3 years, 5 months ago) by wen
Branch: MAIN
Changes since 1.5: +5 -5 lines
Diff to previous 1.5 (colored)
Update to 0.21 Upstream changes: 0.21 2016-08-16 01:31:28Z - no changes since 0.20 0.20 2016-06-19 02:41:02Z (TRIAL RELEASE) - switch to plain old Exporter, removing build_* subs from the API,
Revision 1.5 / (download) - annotate - [select for diffs], Mon Jul 25 12:45:48 2016 UTC (3 years, 6 months ago) by wen
Branch: MAIN
Changes since 1.4: +5 -5 lines
Diff to previous 1.4 (colored)
Update to 0.19 Update DEPENDS Upstream changes: 0.19 2016-06-17 05:00:35Z - removed dependencies on namespace::clean, Sub::Exporter, File::Find::Rule
Revision 1.4 / (download) - annotate - [select for diffs], Tue Nov 3 03:28:49 2015 UTC (4 years, 3 months ago) by wen
Branch: MAIN
Changes since 1.3: +2 -1 lines
Diff to previous 1.3 (colored).
Revision 1.3 / (download) - annotate - [select for diffs], Wed Jan 28 02:02:24 2015 UTC (5 years ago) by wen
Branch: MAIN
Changes since 1.2: +4 -4 lines
Diff to previous 1.2 (colored)
Update 0.16 to 0.18 ------------------- 0.18 2015-01-21 03:54:30Z - remove Moose test requires <-> Test::CleanNamespaces test recommends circular relationship (softened to suggests) 0.17 2015-01-20 03:46:46Z - skip Mouse tests if some required interfaces are not available
Revision 1.2 / (download) - annotate - [select for diffs], Mon Nov 24 14:04:52 2014 UTC (5 years, 2 months ago) by wen
Branch: MAIN
CVS Tags: pkgsrc-2014Q4-base, pkgsrc-2014Q4
Changes since 1.1: +4 -4 lines
Diff to previous 1.1 (colored)
Update to 0.16 Upstream changes: 0.16 2014-08-28 23:55:47Z - skip Moose-related tests for normal installs, to get out of circularity hell if Moose is installed but broken and needing an upgrade
Revision 1.1 / (download) - annotate - [select for diffs], Sun Aug 17 07:14:41 2014 UTC (5 years, 6 months ago) by wen
Branch: MAIN
CVS Tags: pkgsrc-2014Q3-base, pkgsrc-2014Q3
Import Test-CleanNamespaces-0.15 as devel/p5-Test-CleanNamespaces. This module lets you check your module's namespaces for imported functions you might have forgotten to remove with namespace::autoclean or namespace::clean and which are therefore available to be called as methods, which usually isn't what you want.
http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/devel/p5-Test-CleanNamespaces/distinfo
0,2
T(n,k) = number of leaves at level k+1 in all ordered trees with n+1 edges. - Emeric Deutsch, Jan 15 2005
Riordan array ((1-2x-sqrt(1-4x))/(2x^2),(1-2x-sqrt(1-4x))/(2x)). Inverse array is A053122. - Paul Barry, Mar 17 2005
T(n,k) = number of walks of n steps, each in direction N, S, W, or E, starting at the origin, remaining in the upper half-plane and ending at height k (see the R. K. Guy reference, p. 5). Example: T(3,2)=6 because we have ENN, WNN, NEN, NWN, NNE and NNW. - Emeric Deutsch, Apr 15 2005
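The walk counts in the preceding comment are easy to check by brute force for small n: enumerate all 4^n step sequences and keep those that never dip below the x-axis. A quick sketch (the function name is made up):

```python
from itertools import product

def walks(n, k):
    """Count n-step N/S/W/E walks from the origin that stay in the
    upper half-plane (y >= 0) and end at height k."""
    count = 0
    # N, S, W, E contribute +1, -1, 0, 0 to the height, respectively
    for seq in product((1, -1, 0, 0), repeat=n):
        y, ok = 0, True
        for dy in seq:
            y += dy
            if y < 0:
                ok = False
                break
        if ok and y == k:
            count += 1
    return count

print([walks(3, k) for k in range(4)])  # [14, 14, 6, 1] -- row 3 of the triangle
```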
Triangle T(n,k), 0<=k<=n, read by rows given by T(0,0)=1, T(n,k)=0 if k<0 or if k>n, T(n,0)=2*T(n-1,0)+T(n-1,1), T(n,k)=T(n-1,k-1)+2*T(n-1,k)+T(n-1,k+1) for k>=1. - Philippe Deléham, Mar 30 2007
Number of (2n+1)-step walks from (0,0) to (2n+1,2k+1) and consisting of steps u=(1,1) and d=(1,-1) in which the path stays in the nonnegative quadrant. Examples: T(2,0)=5 because we have uuudd, uudud, uuddu, uduud, ududu; T(2,1)=4 because we have uuuud, uuudu, uuduu, uduuu; T(2,2)=1 because we have uuuuu. - Philippe Deléham, Apr 16 2007, Apr 18 2007
Triangle read by rows: T(n,k)=number of lattice paths from (0,0) to (n,k) that do not go below the line y=0 and consist of steps U=(1,1), D=(1,-1) and two types of steps H=(1,0); example: T(3,1)=14 because we have UDU, UUD, 4 HHU paths, 4 HUH paths and 4 UHH paths. - Philippe Deléham, Sep
With offset [1,1] this is the (ordinary) convolution triangle a(n,m) with o.g.f. of column m given by (c(x)-1)^m, where c(x) is the o.g.f. for Catalan numbers A000108. See the Riordan comment by Paul Barry.
T(n, k) is also the number of order-preserving full transformations (of an n-chain) with exactly k fixed points. - Abdullahi Umar, Oct 02 2008
T(n,k)/2^(2n+1) = coefficients of the maximally flat lowpass digital differentiator of the order N=2n+3. - Pavel Holoborodko (pavel(AT)holoborodko.com), Dec 19 2008
The signed triangle S(n,k):=(-1)^(n-k)*T(n,k) provides the transformation matrix between f(n,l) := L(2*l)*5^n* F(2*l)^(2*n+1) (F=Fibonacci numbers A000045, L=Lucas numbers A000032) and F(4*l*(k+1)), k = 0, ..., n, for each l>=0: f(n,l) = sum(S(n,k)*F(4*l*(k+1)),k=0..n), n>=0, l>=0. Proof: the o.g.f. of the l.h.s., G(l;x) := sum(f(n,l)*x^n, n=0..infty) = F(4*l)/(1 - 5*F(2*l)^2*x) is shown to match the o.g.f. of the r.h.s.: after an interchange of the n- and k-summation, the Riordan property of S = (C(x)/x,C(x)) (compare with the above comments by Paul Barry), with C(x) := 1 - c(-x), with the o.g.f. c(x) of A000108 (Catalan numbers), is used, to obtain, after an index shift, first sum(F(4*l*(k))*GS(k;x), k= 0 .. infty), with the o.g.f of column k of triangle S which is GS(k;x) := sum(S(n,k)*x^n,n=k..infty) = C(x)^{k+1}/x. The result is GF(l;C(x))/x with the o.g.f. GF(l,x):= sum(F(4*l*k)*x^k, k=0..infty) = x*F(4*l)/(1-L(4*l)*x+x^2) (see a comment on A049670, and A028412). If one uses then the identity L(4*n) - 5*F(2*n)^2 = 2 (in Koshy's book [reference under A065563] this is No. 15, p. 88, attributed to Lucas, 1876), the proof that one recovers the o.g.f. of the l.h.s. from above boils down to a trivial identity on the Catalan o.g.f., namely 1/c^2(-x) = 1 + 2*x - (x*c(-x))^2. - Wolfdieter Lang, Aug 27 2012
O.g.f. for row polynomials R(x):=sum(a(n,k)*x^k,k=0..n):
((1+x) - C(z))/(x - (1+x)^2*z) with C the o.g.f. of A000108 (Catalan numbers). From Riordan ((C(x)-1)/x,C(x)-1), compare with a Paul Barry comment above. This coincides with the o.g.f. given by Emeric Deutsch in the formula section. - Wolfdieter Lang, Nov 13 2012
The A-sequence for this Riordan triangle is [1,2,1] and the Z-sequence is [2,1]. See a W. Lang link under A006232 with details and references. - Wolfdieter Lang, Nov 13 2012
From Wolfdieter Lang, Sep 20 2013: (Start)
T(n, k) = A053121(2*n+1, 2*k+1). T(n, k) appears in the formula for the (2*n+1)-th power of the algebraic number rho(N):= 2*cos(Pi/N) = R(N, 2) in terms of the even indexed diagonal/side length ratios R(N, 2*(k+1)) = S(2*k+1, rho(N)) in the regular N-gon inscribed in the unit circle (length unit 1). S(n, x) are Chebyshev's S polynomials (see A049310): rho(N)^(2*n+1) = sum(T(n, k)*R(N, 2*(k+1)), k = 0..n), n >= 0, identical in N >= 1. For a proof see the Sep 21 2013 comment under A053121. Note that this is the unreduced version if R(N, j) with j > delta(N), the degree of the algebraic number rho(N) (see A055034), appears. For the even powers of rho(n) see A039599. (End)
The tridiagonal Toeplitz production matrix P in the Example section corresponds to the unsigned Cartan matrix for the simple Lie algebra A_n as n tends to infinity (cf. Damianou ref. in A053122). - Tom Copeland, Dec 11 2015 (revised Dec 28 2015)
T(n,k) = the number of pairs of non-intersecting walks of n steps, each in direction N or E, starting at the origin, and such that the end points of the two paths are separated by a horizontal distance of k. See Shapiro 1976. - Peter Bala, Apr 12 2017
M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, National Bureau of Standards Applied Math. Series 55, 1964 (and various reprintings), p. 796.
B. A. Bondarenko, Generalized Pascal Triangles and Pyramids (in Russian), FAN, Tashkent, 1990, ISBN 5-648-00738-8.
Yang, Sheng-Liang, Yan-Ni Dong, and Tian-Xiao He. "Some matrix identities on colored Motzkin paths." Discrete Mathematics 340.12 (2017): 3081-3091.
G. C. Greubel, Table of n, a(n) for the first 50 rows, flattened
M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, National Bureau of Standards, Applied Math. Series 55, Tenth Printing, 1972 [alternative scanned copy].
José Agapito, Ângela Mestre, Maria M. Torres, and Pasquale Petrullo, On One-Parameter Catalan Arrays, Journal of Integer Sequences, Vol. 18 (2015), Article 15.5.1.
M. Aigner, Enumeration via ballot numbers, Discrete Math., 308 (2008), 2544-2563.
Quang T. Bach, Jeffrey B. Remmel, Generating functions for descents over permutations which avoid sets of consecutive patterns, arXiv:1510.04319 [math.CO], 2015 (see p. 25).
P. Bala, Notes on logarithmic differentiation, the binomial transform and series reversion
Paul Barry, On the Hurwitz Transform of Sequences, Journal of Integer Sequences, Vol. 15 (2012), #12.8.7.
B. A. Bondarenko, Generalized Pascal Triangles and Pyramids, English translation published by Fibonacci Association, Santa Clara Univ., Santa Clara, CA, 1993; see p. 29.
Eduardo H. M. Brietzke, Generalization of an identity of Andrews, Fibonacci Quart. 44 (2006), no. 2, 166-171.
F. Cai, Q.-H. Hou, Y. Sun, A. L. B. Yang, Combinatorial identities related to 2x2 submatrices of recursive matrices, arXiv:1808.05736 Table 1.1.
Naiomi T. Cameron and Asamoah Nkwanta, On Some (Pseudo) Involutions in the Riordan Group, Journal of Integer Sequences, Vol. 8 (2005), Article 05.3.7.
Xi Chen, H. Liang, Y. Wang, Total positivity of recursive matrices, arXiv:1601.05645 [math.CO], 2016.
Xi Chen, H. Liang, Y. Wang, Total positivity of recursive matrices, Linear Algebra and its Applications, Volume 471, Apr 15 2015, Pages 383-393.
Johann Cigler, Some elementary observations on Narayana polynomials and related topics, arXiv:1611.05252 [math.CO], 2016. See p. 7.
R. K. Guy, Catwalks, sandsteps and Pascal pyramids, J. Integer Sequences, Vol. 3 (2000), Article #00.1.6.
T.-X. He, L. W. Shapiro, Fuss-Catalan matrices, their weighted sums, and stabilizer subgroups of the Riordan group, Lin. Alg. Applic. 532 (2017) 25-41, example page 32.
Peter M. Higgins, Combinatorial results for semigroups of order-preserving mappings, Math. Proc. Camb. Phil. Soc. 113 (1993), 281-296.
A. Laradji, and A. Umar, Combinatorial results for semigroups of order-preserving full transformations, Semigroup Forum 72 (2006), 51-62.
Donatella Merlini and Renzo Sprugnoli, Arithmetic into geometric progressions through Riordan arrays, Discrete Mathematics 340.2 (2017): 160-174. See (1.1).
Pedro J. Miana, Hideyuki Ohtsuka, Natalia Romero, Sums of powers of Catalan triangle numbers, arXiv:1602.04347 [math.NT], 2016 (see 2.4).
A. Nkwanta, A. Tefera, Curious Relations and Identities Involving the Catalan Generating Function and Numbers, Journal of Integer Sequences, 16 (2013), #13.9.5.
L. W. Shapiro, W.-J. Woan and S. Getu, Runs, slides and moments, SIAM J. Alg. Discrete Methods, 4 (1983), 459-466.
L. W. Shapiro, A Catalan triangle, Discrete Math., 14, 83-90, 1976.
L. W. Shapiro, A Catalan triangle, Discrete Math. 14 (1976), no. 1, 83-90. [Annotated scanned copy]
Yidong Sun and Fei Ma, Minors of a Class of Riordan Arrays Related to Weighted Partial Motzkin Paths, arXiv preprint arXiv:1305.2015 [math.CO], 2013.
Yidong Sun and Fei Ma, Four transformations on the Catalan triangle, arXiv preprint arXiv:1305.2017 [math.CO], 2013.
Yidong Sun and Fei Ma, Some new binomial sums related to the Catalan triangle, Electronic Journal of Combinatorics 21(1) (2014), #P1.33
Charles Zhao-Chen Wang, Yi Wang, Total positivity of Catalan triangle, Discrete Math. 338 (2015), no. 4, 566--568. MR3300743.
W.-J. Woan, L. Shapiro and D. G. Rogers, The Catalan numbers, the Lebesgue integral and 4^{n-2}, Amer. Math. Monthly, 104 (1997), 926-931.
Row n: C(2n, n-k)-C(2n, n-k-2).
a(n, k) = C(2n+1, n-k)*2*(k+1)/(n+k+2) = A050166(n, n-k) = a(n-1, k-1)+2*a(n-1, k)+a(n-1, k+1) [with a(0, 0) = 1 and a(n, k) = 0 if n<0 or n<k]. - Henry Bottomley, Sep 24 2001
T(n, 0) = A000108(n+1), T(n, k) = 0 if n<k; for k>0, T(n, k) = Sum_{j=1..n} T(n-j, k-1)*A000108(j). G.f. for column k: Sum_{n>=0} T(n, k)*x^n = x^k*C(x)^(2*k+2) where C(x) = Sum_{n>=0} A000108(n)*x^n is g.f. for Catalan numbers, A000108. Sum_{k>=0} T(m, k)*T(n, k) = A000108(m+n+1). - Philippe Deléham, Feb 14 2004
T(n, k) = A009766(n+k+1, n-k) = A033184(n+k+2, 2k+2). - Philippe Deléham, Feb 14 2004
Sum_{j>=0} T(k, j)*A039599(n-k, j) = A028364(n, k). - Philippe Deléham, Mar 04 2004
Antidiagonal sum_{k=0..n} T(n-k, k) = A000957(n+3). - Gerald McGarvey, Jun 05 2005
The triangle may also be generated from M^n * [1,0,0,0...], where M = an infinite tridiagonal matrix with 1's in the super and subdiagonals and [2,2,2...] in the main diagonal. - Gary W. Adamson, Dec 17 2006
G.f.: G(t,x)=C^2/(1-txC^2), where C=[1-sqrt(1-4x)]/(2x) is the Catalan function. From here G(-1,x)=C, i.e., the alternating row sums are the Catalan numbers (A000108). - Emeric Deutsch, Jan 20 2007
Sum_{k, 0<=k<=n}T(n,k)*x^k = A000957(n+1), A000108(n), A000108(n+1), A001700(n), A049027(n+1), A076025(n+1), A076026(n+1) for x=-2,-1,0,1,2,3,4 respectively (see square array in A067345). - Philippe Deléham, Mar 21 2007, Nov 04 2011
Sum_{k, 0<=k<=n}T(n,k)*(k+1) = 4^n. - Philippe Deléham, Mar 30 2007
Sum_{j, j>=0}T(n,j)*binomial(j,k)=A035324(n,k), A035324 with offset 0 (0<=k<=n). - Philippe Deléham, Mar 30 2007
T(n,k) = A053121(2*n+1,2*k+1). - Philippe Deléham, Apr 16 2007, Apr 18 2007
T(n,k) = A039599(n,k)+A039599(n,k+1). - Philippe Deléham, Sep 11 2007
Sum_{k, 0<=k<=n+1}T(n+1,k)*k^2 = A029760(n). - Philippe Deléham, Dec 16 2007
Sum_{k, 0<=k<=n}T(n,k)*A059841(k)= A000984(n). - Philippe Deléham, Nov 12 2008
G.f.: 1/(1-xy-2x-x^2/(1-2x-x^2/(1-2x-x^2/(1-2x-x^2/(1-2x-x^2/(1-.... (continued fraction).
Sum_{k, 0<=k<=n} T(n,k)*x^(n-k) = A000012(n), A001700(n), A194723(n+1), A194724(n+1), A194725(n+1), A194726(n+1), A194727(n+1), A194728(n+1), A194729(n+1), A194730(n+1) for x = 0,1,2,3,4,5,6,7,8,9 respectively. - Philippe Deléham, Nov 03 2011
From Peter Bala, Dec 21 2014: (Start)
This triangle factorizes in the Riordan group as ( C(x), x*C(x) ) * ( 1/(1 - x), x/(1 - x) ) = A033184 * A007318, where C(x) = (1 - sqrt(1 - 4*x))/(2*x) is the o.g.f. for the Catalan numbers A000108.
Let U denote the lower unit triangular array with 1's on or below the main diagonal and zeros elsewhere. For k = 0,1,2,... define U(k) to be the lower unit triangular block array
/I_k 0\
\ 0 U/ having the k X k identity matrix I_k as the upper left block; in particular, U(0) = U. Then this array equals the bi-infinite product (...*U(2)*U(1)*U(0))*(U(0)*U(1)*U(2)*...). (End)
From Peter Bala, Jul 21 2015: (Start)
O.g.f. G(x,t) = 1/x * series reversion of ( x/f(x,t) ), where f(x,t) = ( 1 + (1 + t)*x )^2/( 1 + t*x ).
1 + x*d/dx(G(x,t))/G(x,t) = 1 + (2 + t)*x + (6 + 4*t + t^2)*x^2 + ... is the o.g.f for A094527. (End)
Conjecture: Sum_{k=0..n} T(n,k)/(k+1)^2 = H(n+1)*A000108(n)*(2*n+1)/(n+1), where H(n+1) = Sum_{k=0..n} 1/(k+1). - Werner Schulte, Jul 23 2015
From Werner Schulte, Jul 25 2015: (Start)
Sum_{k=0..n} T(n,k)*(k+1)^2 = (2*n+1)*binomial(2*n,n). (A002457)
Sum_{k=0..n} T(n,k)*(k+1)^3 = 4^n*(3*n+2)/2.
Sum_{k=0..n} T(n,k)*(k+1)^4 = (2*n+1)^2*binomial(2*n,n).
Sum_{k=0..n} T(n,k)*(k+1)^5 = 4^n*(15*n^2+15*n+4)/4. (End)
The o.g.f. G(x,t) is such that G(x,t+1) is the o.g.f. for A035324, but with an offset of 0, and G(x,t-1) is the o.g.f. for A033184, again with an offset of 0. - Peter Bala, Sep 20 2015
Triangle T(n,k) starts:
n\k 0 1 2 3 4 5 6 7 8 9 10
0: 1
1: 2 1
2: 5 4 1
3: 14 14 6 1
4: 42 48 27 8 1
5: 132 165 110 44 10 1
6: 429 572 429 208 65 12 1
7: 1430 2002 1638 910 350 90 14 1
8: 4862 7072 6188 3808 1700 544 119 16 1
9: 16796 25194 23256 15504 7752 2907 798 152 18 1
10: 58786 90440 87210 62016 33915 14364 4655 1120 189 20 1
... Reformatted and extended by Wolfdieter Lang, Nov 13 2012.
Production matrix begins:
2, 1
1, 2, 1
0, 1, 2, 1
0, 0, 1, 2, 1
0, 0, 0, 1, 2, 1
0, 0, 0, 0, 1, 2, 1
0, 0, 0, 0, 0, 1, 2, 1
0, 0, 0, 0, 0, 0, 1, 2, 1
- Philippe Deléham, Nov 07 2011
From Wolfdieter Lang, Nov 13 2012: (Start)
Recurrence: T(5,1) = 165 = 1*42 + 2*48 +1*27. The Riordan A-sequence is [1,2,1].
Recurrence from Riordan Z-sequence [2,1]: T(5,0) = 132 = 2*42 + 1*48. (End)
Example for rho(N) = 2*cos(Pi/N) powers:
n=2: rho(N)^5 = 5*R(N, 2) + 4*R(N, 4) + 1*R(N, 6) = 5*S(1, rho(N)) + 4*S(3, rho(N)) + 1*S(5, rho(N)), identical in N >= 1. For N=5 (the pentagon with only one distinct diagonal) the degree delta(5) = 2, hence R(5, 4) and R(5, 6) can be reduced, namely to R(5, 1) = 1 and R(5, 6) = -R(5, 1) = -1, respectively. Thus rho(5)^5 = 5*R(5, 2) + 4*1 + 1*(-1) = 3 + 5*R(5, 2) = 3 + 5*rho(5), with the golden section rho(5). (End)
T:=(n, k)->binomial(2*n, n-k) - binomial(2*n, n-k-2); # N. J. A. Sloane, Aug 26 2013
Flatten[Table[Binomial[2n, n-k] - Binomial[2n, n-k-2], {n, 0, 9}, {k, 0, n}]] (* Jean-François Alcover, May 03 2011 *)
(Sage) # Algorithm of L. Seidel (1877)
# Prints the first n rows of the triangle.
def A039598_triangle(n) :
D = [0]*(n+2); D[1] = 1
b = True; h = 1
for i in range(2*n) :
if b :
for k in range(h, 0, -1) : D[k] += D[k-1]
h += 1
else :
for k in range(1, h, 1) : D[k] += D[k+1]
b = not b
if b : print([D[z] for z in (1..h-1)])
A039598_triangle(10) # Peter Luschny, May 01 2012
(MAGMA) /* As triangle: */ [[Binomial(2*n, n-k) - Binomial(2*n, n-k-2): k in [0..n]]: n in [0.. 15]]; // Vincenzo Librandi, Jul 22 2015
(PARI) T(n, k)=binomial(2*n, n-k) - binomial(2*n, n-k-2) \\ Charles R Greathouse IV, Nov 07 2016
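(Python) As a quick cross-check (not part of the entry), a standard-library sketch that rebuilds the triangle from the closed form T(n,k) = C(2n, n-k) - C(2n, n-k-2) and verifies the Catalan column, the A001700 row sums, and the Riordan A-sequence recurrence [1,2,1]:

```python
from math import comb

def binom(n, k):
    # Binomial coefficient extended by 0 outside 0 <= k <= n,
    # matching the convention used in the closed form.
    return comb(n, k) if 0 <= k <= n else 0

def T(n, k):
    # Closed form: T(n,k) = C(2n, n-k) - C(2n, n-k-2)
    return binom(2 * n, n - k) - binom(2 * n, n - k - 2)

rows = [[T(n, k) for k in range(n + 1)] for n in range(6)]

# Row 5 of the triangle
assert rows[5] == [132, 165, 110, 44, 10, 1]
# Column k = 0 gives the Catalan numbers A000108(n+1)
assert [r[0] for r in rows] == [1, 2, 5, 14, 42, 132]
# Row sums give A001700: 1, 3, 10, 35, 126, 462
assert [sum(r) for r in rows] == [1, 3, 10, 35, 126, 462]
# Riordan A-sequence [1,2,1]: T(n,k) = T(n-1,k-1) + 2*T(n-1,k) + T(n-1,k+1)
assert T(5, 1) == T(4, 0) + 2 * T(4, 1) + T(4, 2)
print(rows[5])
```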
Mirror image of A050166. Row sums are A001700.
Cf. A008313, A039599, A183134, A094527, A033184, A035324, A053122.
Sequence in context: A171488 A171651 A104710 * A128738 A193673 A126181
Adjacent sequences: A039595 A039596 A039597 * A039599 A039600 A039601
nonn,tabl,easy,nice
N. J. A. Sloane
Typo in one entry corrected by Philippe Deléham, Dec 16 2007
approved
Cross-Platform Mobile dev with Scala and Capacitor
These last years, the trend has been to export web technologies out of the browser, onto the desktop (e.g., Electron) and more and more into mobile applications (e.g., Ionic). Whether or not we like this trend, it is in any case a bargain for any technology that can produce HTML, CSS and JavaScript. One such technology is Scala, via its Scala-to-JavaScript compiler, Scala.js.
In this blog post, we will start from scratch and arrive at a convenient setup for building cross-platform mobile applications, using Scala and Capacitor. However, if you prefer the TL;DR, you can directly check the final result.
Step 1: The sbt project for Scala.js
The first thing we need to do is to setup the barebones project for using Scala.js. We will use sbt, and in that case the minimal project structure looks like this:
root
├── build.sbt
├── project
| ├── plugins.sbt
| └── build.properties
└── src
└── main
└── scala
└── main
└── Main.scala
Where the contents of
build.properties is
sbt.version=1.5.4 (latest version as of today) and the contents of the
build.sbt is
val theScalaVersion = "2.13.6"
lazy val root = project
.in(file("."))
.enablePlugins(ScalaJSPlugin)
.settings(
name := "ScalaJS-Capacitor",
version := "0.1.0",
scalaVersion := theScalaVersion,
scalaJSUseMainModuleInitializer := true,
scalaJSLinkerConfig ~= { _.withModuleKind(ModuleKind.ESModule) },
)
In the
plugins.sbt file, we need
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.6.0")
Finally, the contents of the
Main.scala file could be
package main

object Main {
  def main(args: Array[String]): Unit = {
    println("Hello!")
  }
}
With this setup, we can do
sbt run and we will see “Hello!” printed to the console. What we don’t see here is that it was actually a Node.js process that ran this line of code. Note: you need to have Node.js installed on your machine. The generated JavaScript code will be available at
target/scala-2.13/scalajs-capacitor-fastopt/main.js .
Step 2: Setup Snowpack (or any other packager)
Scala.js generates the JavaScript that corresponds to the Scala code. However, in order to integrate with the JavaScript ecosystem, it’s a good thing to have a dedicated node packager. Indeed, if we want to use some Capacitor plugin (and we do!), it’s handier to rely on such a tool.
The most celebrated one is probably Webpack, but one that is particularly suited for working with Scala-js is Snowpack. Snowpack’s setup is quite small. We need two additional files for Snowpack itself (a
package.json and a
snowpack.config.js ) with the following contents.
In
package.json :
{
"name": "snowpack-capacitor",
"devDependencies": {
"snowpack": "3.1.0"
}
}
And in
snowpack.config.js :
module.exports = {
buildOptions: {
out: "./target/build",
},
mount: {
public: "/",
"target/scala-2.13/scalajs-capacitor-fastopt": "/",
"src/main/resources": "/"
},
}
What we just did in the first file is to tell npm to use version 3.1.0 of Snowpack (only as a development dependency, since we don’t need it when we ship the code). We thus need to run
npm install in order to install Snowpack. In the second file, we instructed Snowpack to mount at the root directory all files inside the public directory, the directory where our Scala code gets compiled and, for the sake of it, also the resources folder of our project (so that it feels like a JVM Scala project in that regard).
The last thing we need before the Snowpack setup is complete is to create an HTML file, as
public/index.html and fill it with the following (minimal) content:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1, shrink-to-fit=no"
/>
<meta name="theme-color" content="#000000" />
<title>Test Capacitor</title>
</head>
<body>
<div id="root"></div>
<script src="/main.js" type="module"></script>
</body>
</html>
After that, we can do
sbt fastLinkJS and then
npx snowpack dev and the browser will open to a blank page, where “hello” has been printed out in the JS console.
Step 3: Adding a UI Framework
Web technologies are all about manipulating the HTML and CSS that the user will see on the screen. And for that, we need a UI framework (nowadays, no one manipulates the DOM by hand). The best in class in the Scala.js ecosystem is probably Laminar.
Adding Laminar to the project is done by adding the line
libraryDependencies += "com.raquo" %%% "laminar" % "0.13.0"
in the
build.sbt file, in the project settings. We can now change the
Main.scala file in order to actually see something on the screen. We can add the lines
import com.raquo.laminar.api.L._
import org.scalajs.dom
val app = h1("Hello world!")
render(dom.document.getElementById("root"), app)
If the Snowpack dev server is still running, we can issue
sbt fastLinkJS again and the browser page should refresh, showing the greeting message.
Step 4: Adding Capacitor
Up until now, we haven’t done any mobile at all. We have a nice setup for making an application in the browser, but we can’t use it for mobile. For that, we need to add the npm dependency on Capacitor. We do so by adding the following lines to the
package.json file
"dependencies": {
"@capacitor/cli": "3.0.0",
"@capacitor/core": "3.0.0"
},
and then running
npm install once again. After that, we can use the Capacitor CLI to configure the barebones of the app. Following the instructions here, we need to run
npx cap init (defaults are good, except for the “Web asset directory”, which needs to be
target/build since this is how we configured Snowpack). That command will merely create a file
capacitor.config.json with minimal contents in it.
Step 5: Adding Android support
We still didn’t achieve actual mobile development. This is what this step is about, by following instructions from here. In the
dependencies of the
package.json file, we add the line
"@capacitor/android": "3.0.0" and we run
npm install once again.
Finally, we can run our app on mobile. We need to run the three last commands
npx cap add android
npx snowpack build
npx cap run android
whose roles are to add an Android app to the project, build the application using Snowpack, and finally run the Android emulator (you will need to have Android Studio installed).
Note: Similar steps would be required to build the app for iOS, but I refer to the official doc for that.
This is great! However, we still did not gain anything with respect to simply making a responsive web page. In the last two steps, we will use one of Capacitor’s plugins to access native features of our device.
Step 6: Adding ScalablyTyped
Capacitor plugins are distributed as TypeScript modules. In order to use those, we need to tell Scala of their existence (and the classes/functions/stuff they contain). This can be done automatically for us by ScalablyTyped.
Note: you can also write this by hand if you prefer, but relying on ScalablyTyped has advantages such as ensured correctness, discoverability…
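To make the hand-written alternative concrete, a simplified Scala.js facade for the Geolocation plugin could look roughly like the following sketch. The member shown and its loose `js.Dynamic` return type are simplifying assumptions for illustration; the real plugin exposes richer option and result types, and ScalablyTyped generates the full, precise version for us.

```scala
import scala.scalajs.js
import scala.scalajs.js.annotation.JSImport

// Minimal, simplified facade for the "@capacitor/geolocation" module.
@js.native
@JSImport("@capacitor/geolocation", "Geolocation")
object Geolocation extends js.Object {
  // Resolves to a position object exposing `coords.latitude` / `coords.longitude`.
  def getCurrentPosition(options: js.Object = js.native): js.Promise[js.Dynamic] = js.native
}
```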
We need three things for that. First the plugin. In
project/plugins.sbt we add the line
addSbtPlugin("org.scalablytyped.converter" % "sbt-converter" % "1.0.0-beta32")
Then, we need TypeScript. We can add it as a
devDependencies in the
package.json file (of course, we also need to install it):
"typescript": "4.1.3"
and then we need to add the plugin to our project via
.enablePlugins(ScalablyTypedConverterExternalNpmPlugin)
together with a few lines in the settings of our project inside the
build.sbt
import scala.sys.process.Process
[...]
externalNpm := {
Process("npm", baseDirectory.value).!
baseDirectory.value
},
stIgnore ++= List(
"@capacitor/android",
"@capacitor/cli",
"@capacitor/core"
)
And we’re all set for this. You can verify that everything is in place by running
sbt compile .
Note: on MacOS, if you try to refresh the project inside IntelliJ, you will likely hit this, but the fix works fine.
Step 7: Using the Geolocation Plugin
As an example of using a Capacitor plugin, we will display the position of the device when the page loads. This will require the Geolocation Plugin.
Add the following to the
dependencies of the
package.json (and install it):
"@capacitor/geolocation": "1.0.0"
You can run
sbt compile once in order for ScalablyTyped to kick in, which will take a little while, but after that you will be able to use the plugin freely. As an example, let’s add the functionality to display the coordinates of the user when the page loads. For that, we can change our
Main.scala file with the following lines:
import typings.capacitorGeolocation.definitionsMod.PositionOptions
import typings.capacitorGeolocation.mod.Geolocation
[...]
val app = div(
  h1("Hello world!"),
  child <-- EventStream
    .fromJsPromise(
      Geolocation.getCurrentPosition(PositionOptions().setEnableHighAccuracy(true))
    )
    .map { position =>
      s"Your position: ${position.coords.latitude}, ${position.coords.longitude}"
    }
)
If you run
sbt fastLinkJS and
npx snowpack dev once more, you should see your position displayed in the browser (it will probably ask for your permission).
For the Android application, however, it’s not enough. We need to tell the
AndroidManifest.xml (inside
android/app/src/main ) that we will use the Geolocation capabilities, and that is done by adding the lines
<!-- Geolocation API -->
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-feature android:name="android.hardware.location.gps" />
Now we can finish with the following commands:
npx snowpack build
npx cap sync
npx cap run android
and voilà! We have a working Android application, written entirely in Scala and using native features of the device.
Other improvements
There are many things that we could improve to enhance this setup. Among others, here are a few ideas:
- Having hot reload (see, e.g., here)
- Currently the build uses the “fast optimisation” mode of Scala.js; we could/should switch to the “full optimisation” mode for production builds
- Handling capacitor commands directly within sbt
Closing words and related works
We are now in a good position to do mobile development in Scala! And this with a relatively straightforward setup involving minimal shenanigans. The only strong dependency we have is on Capacitor, which is an open source project independent from Google and Apple. For the rest, it is a usual Scala project, coming with all the goodies that we like.
Before closing, we would like to mention this blog post for an overview of the history of Scala for mobile development, presenting also an alternative using React Native.
The result of this blog post can be found in the accompanying repo.
- Getting Subversion
- Building Subversion
- Trac and specific Subversion versions
- Troubleshooting
- Known Issues
- Asking for More Support About Subversion
Trac and Subversion
Trac has supported the Subversion VersionControlSystem since day one. Actually, Trac was even named
svntrac back then!
However, things change and other version control systems gain in popularity… in Trac 1.0 we also support Git as an optional component below
tracopt.versioncontrol.git.*. So as not to make any first-class / second-class distinction, we also decided to move the Subversion support over there, in
tracopt.versioncontrol.svn.*.
In Trac 1.0 (trunk, starting from r11082), the components for Subversion support have been moved below
tracopt. so you need to explicitly enable them in your TracIni#components-section:
[components] tracopt.versioncontrol.svn.* = enabled
This can instead be enabled from the web administration interface, under Admin → General → Plugins (Manage Plugins), by selecting the component checkbox(es).
Note: For Trac < 1.2, the value must be
enabled or
on. Using a value of
true or
yes will not work for the
[components] section, whereas
true and
yes are acceptable in other sections of trac.ini. The inconsistency was fixed in Trac 1.2.
This page is intended to collect all the specific tips and tricks about Subversion support in Trac. This is not the place for general Subversion help. You can get more support options elsewhere.
Getting Subversion
From subversion.apache.org:
Debian Linux
Install the Subversion Bindings using
apt:
$ apt-get install python-subversion
Works for Subversion ≥ 1.4.
RedHat/Fedora/CentOS Linux
Install the Subversion bindings using
yum:
RedHat/CentOS 5, 6
$ sudo yum install subversion
RedHat 7 / CentOS 7 / Fedora 18 and later
$ sudo yum install subversion subversion-python
- RedHat: while the
subversion-python package is available through the
rhel-7-workstation-optional-rpms channel, for whatever reason(s) it is not available through the optional channel for
7Server.
- CentOS: package available in base repository
- Fedora: package available in fedora repository
You MUST have exactly the same version of
subversion and
subversion-python installed, otherwise the bindings won't work:
$ yum list subversion subversion-python Available Packages subversion.x86_64 1.7.14-6.el7 base subversion-python.x86_64 1.7.14-6.el7 base
Windows (x64)
There are fewer providers of the 64-bit Subversion bindings for Windows than for the 32-bit ones, but those provided by VisualSVN work just fine.
All of the bindings are precompiled as 64-bit DLLs (and PYDs) in VisualSVN Server 64-bit 3.5 for W2012, and they work fine together with Python 2.7.10 64-bit.
Windows (x86)
for Python 2.7
For Subversion 1.7.x there are bindings available from.
An alternative is to use the Subversion 1.7.0 bindings from CollabNet server edition: CollabNet Subversion Edge 2.1.0 (for Windows 32 or 64 bit).
You'll find there a full Python 2.7.1 installation (below the
Python25 top-level folder!) and the corresponding bindings below
lib/svn-python. It seems that even though that Python installation was built with VisualStudio 2010 (
msvcr100.dll), you can also use those svn bindings with the Python from python.org which was built with VisualStudio 2008 (
msvcr90.dll). I'm a bit disappointed, where's the DLL hell gone?
For the 1.6.x bindings, there's no "official" release to be found for Python 2.7, but thanks to dawuid, who contributed svn-win32-1.6.15_py_2.7.zip (md5
9dba3d11c4bbb91e29deb28f569e821b). I tested them, and they seem to work great. Simply unzip in your <python27install>\Lib\site-packages folder. Note that you must have the folder containing the matching Subversion libraries in your PATH (e.g. "C:\Program Files (x86)\Subversion\bin").
for Python 2.6
Get them from: you'll need the Windows installer
Setup-Subversion-1.x.y.msi (or the Windows binaries
svn-win32-1.x.y.zip) and the Python 2.6 bindings
svn-win32-1.x.y_py.zip.
The Alagazam installer updates the PATH automatically to point to the new binary directory.
The python-bindings zip file has a folder structure of
svn-win32-<ver> python libsvn svn
The
libsvn and
svn folders should be extracted into the
Python26\Lib\site-packages directory.
For Subversion 1.6 bindings then rename the binding DLLs: change
libsvn/_*.dll to
libsvn/_*.pyd (don't change the name of
libsvn_swig_py-1.dll), the 1.7.x bindings already have the correct names.
- Note: if CollabNet is providing SWIG bindings for Python 2.6, I can't find them.
- Also note: the Subversion directory structure of the Alagazam distro is slightly different than the CollabNet one: all the executables are in the
binsubdirectory.
If you end up with the infamous
ImportError: DLL load failed: ..., don't despair but have a look at the Windows troubleshooting section below.
for Python 2.5
One easy way to get Python 2.5 bindings is to install Collabnet Subversion Edge. The bindings can be found in the <install_directory>\csvn\lib\svn-python directory.
for ActivePython 2.5
One way to get the bindings is to install the CollabNet Subversion Server.
While installing ("Apache Configuration" page / "mod_dav_svn Configuration") you have to select the "Enable viewVC" option. The installer then goes to the "ViewVC Configuration" page and asks for the location of the "Active Python Directory".
Be sure to prepend the directory containing the Subversion libraries, from the server install (e.g.
C:\Program Files (x86)\CollabNet\Subversion Server) to the
PATH, or the bindings won't load.
BSDsfile that sits at top level of the Subversion source distribution
- Do
./configure ...; make; make install; if you intend to use Subversion together with Apache, be sure to configure Subversion so that it will use a compatible version of
apr and
apr-utils, ideally those of Apache. If not, you'll be able to build Subversion and the bindings, but you will most certainly have issues later on when using mod_python (e.g. #2920).
- Read
./subversion/bindings/swig/INSTALL
- Set your PYTHONPATH environment variable so that it contains the
svn-python folder (the one containing the
svn and
libsvn packages), e.g. if svn is installed in
/opt/subversion-1.4.4:
$ export PYTHONPATH=$PYTHONPATH:/opt/subversion-1.4.4/lib/svn-python
If you're using TracModPython, be sure that Apache will also see this environment variable, or alternatively use the PythonPath mod_python directive.
Note that the notes below about Subversion releases below 1.6 are mostly there for historical reasons. Everyone should be using at least 1.6.x these days. We make no strong guarantee about running an old version of Subversion with a recent version of Trac, though there are good chances that Subversion 1.4 still works with Trac 1.0dev.
Note that Trac always had issues with Subversion repositories using the Berkeley DB backend. If you happen to have such a repository, it would be a good idea to switch it to the FSFS backend if you intend to use it together with Trac. See google:svn+convert+bdb+to+fsfs.
Trac and Subversion 1.4
Trac used to work well with Subversion 1.4.
This is now the oldest supported Subversion version (older versions might work, see version 105 of this page, but we make no guarantee).
Issue #2611 was confirmed to still be present for 1.4 (and probably newer versions of svn as well).
See also: release notes for 1.4.0
Trac and Subversion 1.5
Trac works fine with Subversion 1.5. The svn:mergeinfo properties are supported since version 0.12, though if you have lots of branches and many many changesets, this could slow down the source browser (#8459). Non-inheritable mergeinfo is supported since 0.12.1 (#9622).
The new authz format is supported since 0.12.1 (#8289).
The new svn:externals format is still work in progress (#7687).
Trac and Subversion 1.6
Trac works best with Subversion 1.6. A couple of memory leaks were fixed in 1.6 and you might benefit from these fixes.
Trac and Subversion 1.7
Trac seems to work fine with 1.7.0 as well, however be sure to use Trac ≥ 0.12.3.
Trac and Subversion 1.8
Trac seems to work fine with 1.8.0 as well, however be sure to use Trac ≥ 0.12.3.
The
svn:keywords with custom keyword definitions is unsupported yet (#11364).
Trac and Subversion 1.9
Trac seems to work fine with 1.9.0 as well, however be sure to use Trac ≥ 0.12. Note that when running Trac under mod_wsgi, the Subversion bindings need to run in the main Python interpreter:
<Location> ... WSGIApplicationGroup %{GLOBAL} ... </Location>
For Trac 1.0 (trunk) above [11082], another common cause might simply be that the components for Subversion, are not enabled, since they're now optional. See #tracopt above for more details.
If you use Debian, try this: TracOnDebianSarge
If you use FreeBSD try this:
Check that the bindings can be imported:
$ python
Python ...
>>> from svn import core
If this succeeds, that's a good start.
If it doesn't, it usually means that your bindings are located in a place they can't be loaded from. So either move the
svn and
libsvn folders to a location where Python can find them, or point PYTHONPATH at the directory where they are.
For Collabnet 1.3 on Windows the solution was
set PYTHONPATH=C:\csvn\lib\svn-python
If you get the message
ImportError: libsvn_swig_py-1.so.0: cannot open shared object file: No such file or directory even though you can see the .so file in the correct place, then try
ldconfig -v as root.
Windows Users
According to the README.txt file for the Subversion bindings, if you are using Python 2.5+ you need to rename all the .dll files in the libsvn folder to .pyd files. Further research indicates you may need to have both the .pyd and .dll versions of the libsvn files available. This resolved both the '
ImportError: No module named _core' error (with only the DLL) and the '
ImportError: DLL load failed' error (with only the .pyd) when testing from the console and the browser. The same error ('
ImportError: DLL load failed') is reported when using Apache 2.2; see #6739 for details.
Note that the bindings don't come with all the necessary files; you also need to have the svn binaries (
libeay32.dll,
libsasl.dll, etc.) available on the path. If these files aren't available, you will receive the error
ImportError: DLL load failed:. Upon investigating with
depends.exe, I found that
core.pyd loaded, unloaded, then failed to load. So the Python bindings on their own are not sufficient: pick up the Subversion binaries as well and put them on your PATH.
Don't use 64bit version of Python. The Subversion project does not provide amd64 or ia64 setup executables, so if you want to use Subversion integration, you’ll need to either compile the bindings yourself, or use the x86 version of Python.
A good way to diagnose a DLL load failed error is to use the depends.exe tool: from the console in which you'd run python.exe, do
depends.exe absolute-path-to/python.exe instead. Then press
F7 (Start Profiling…; you need at least version 2.0 of depends.exe) and type
from svn import core at the Python prompt in the new cmd window. This will try to load the bindings, but this time you'll be able to see why it fails, by spotting the DLLs shown in red in the Module list (and there are really lots of options here ;-)).
When using depends.exe be sure to set the "Starting directory" to your Apache bin directory rather than the default python one. This will help when trying to figure out why python works fine, but Apache does not.
Mac OS X Users
The bindings get installed on top of
/System/Library/Frameworks/Python.framework/Versions/Current/Extras/lib/python/libsvn and
/System/Library/Frameworks/Python.framework/Versions/Current/Extras/lib/python/svn.
- Check the version: verify that the version given back matches your expectation.
>>> (core.SVN_VER_MAJOR, core.SVN_VER_MINOR, core.SVN_VER_MICRO, core.SVN_VER_PATCH)
(1, 4, 3, 3)
- Locate the imported core module file and see if its location seems to be consistent with both those of the other svn libraries (
.../lib/*.so) and the location of the Python code part of the bindings (
.../lib/svn-python/svn/core.py).
- Have you got SVN disabled in your trac.ini file?
Starting with Trac 1.0, the Subversion components need to be explicitly enabled. See #tracopt above, if you haven't yet.
Before Trac 1.0, the Subversion specific modules were always enabled, but even then it could happen that for some reason, people had explicitly disabled those and possibly forgot about it. If so, set it/them to enabled (or simply delete the offending lines, since I believe they are enabled by default):
[components]
trac.versioncontrol.api.repositorymanager = enabled
trac.versioncontrol.svn_authz.svnauthzoptions = enabled
trac.versioncontrol.svn_fs.subversionconnector = enabled
trac.versioncontrol.svn_prop.subversionmergepropertydiffrenderer = enabled
trac.versioncontrol.svn_prop.subversionmergepropertyrenderer = enabled
trac.versioncontrol.svn_prop.subversionpropertyrenderer = enabled
(so again, the above
svn_fs /
svn_prop module names are only valid before Trac 1.0; see #tracopt starting from 1.0)
- If you're using Apache / mod_python (Linux/Windows) (first tip)
Get a similar list of libraries, but this time for one of your httpd processes. Then compare the two, and pay attention to any difference between the
svn libraries and the
apr libraries.
- If you're using Apache - mod_python/mod_wsgi (Windows)
Try replacing the
libapr-1.dll in the Apache bin directory with the version that's in Python's libsvn or Subversion's bin; just substituting it seems to fix things. See #6739 for more details.
Known Issues
- #1445
- [ER] Revision Graph for the Version Control Browser
- #1947
- Nicer handling of bugtraq properties
- #2611
- Problem with SVN bindings (SVN 1.3.0, Trac r2771)
- #2880
- Ignore svn properties / trees in timeline
- #3470
- improve handling of scoped repositories with copy history
- #4474
- diffing two large trees results in a massive list with a lot of empty links
- #5246
- [PATCH] Use permission system to store groups for authz access control
- #6474
- svn:externals displayed as folder in listing
- #6615
- svn:externals not correctly displayed in browser
- #7687
- Add support for svn:externals "1.5" style
- #7744
- source:path@rev targeting a file below a copied dir may fail
- #7785
- Trac slow / unreachable objects / database locked
- #8477
- Support SVN 1.5 merge tracking in Annotate view
- #8813
- next_rev is slow, particularly in the direct-svnfs case
- #9208
- Support for SVN repository on a UNC path on Windows
- #10058
- Highlight locked files in repository browser
- #10079
- improper rendering of svn:externals when not configured
- #10129
- TypeError: argument number 2:
- #10421
- RuntimeError: instance.__dict__ not accessible in restricted mode
- #10547
- glitch with mergeinfo (recreation of merge source shown as eligible)
- #11205
- "repository sync" fails for Subversion repository with unicode path
- #12121
- Subversion copies could show peg revision in browser view
- #12442
- Support per-repository authz_file
- #12549
- Provide option for blame to use merged history
- #13129
- trac-admin resync leaks memory
Asking for More Support About Subversion
- ReadTheFineBook: and/or the FAQ
- There's also a
#svn channel on the
freenode IRC network
- If you think you've found a bug in Subversion, read these instructions
Hi all,
during my refactoring of our test suite I'm wondering if we can do some renaming and repackaging of our tests:
Any comment is more than welcome
S.
Remove .test package to have test classes in the same namespace of classes under test
Agreed. In order to better test package private methods.
Try to remove also unit namespace where it is there
Agreed.
We run as JUnit tests only the classes ending with TestCase.
As long as there is an easy way for developers to identify which classes are tests and which are support/mock classes. Using package names for this is a good idea. However, I don't see this as the highest priority for the new test suite.
At the moment I'll leave every test à la TCK
Yes, we should keep the tests separate, as described, so the specification requirements stay quite clear.
tcltest - Man Page
Test harness support code and utilities
Synopsis
package require tcltest ?2.5?
tcltest::test name description ?-option value ...?
tcltest::test name description ?constraints? body result
tcltest::loadTestedCommands
tcltest::makeDirectory name ?directory?
tcltest::removeDirectory name ?directory?
tcltest::makeFile contents name ?directory?
tcltest::removeFile name ?directory?
tcltest::viewFile name ?directory?
tcltest::cleanupTests ?runningMultipleTests?
tcltest::runAllTests
tcltest::configure
tcltest::configure -option
tcltest::configure -option value ?-option value ...?
tcltest::customMatch mode command
tcltest::testConstraint constraint ?value?
tcltest::outputChannel ?channelID?
tcltest::errorChannel ?channelID?
tcltest::interpreter ?interp?
tcltest::debug ?level?
tcltest::errorFile ?filename?
tcltest::limitConstraints ?boolean?
tcltest::loadFile ?filename?
tcltest::loadScript ?script?
tcltest::match ?patternList?
tcltest::matchDirectories ?patternList?
tcltest::matchFiles ?patternList?
tcltest::outputFile ?filename?
tcltest::preserveCore ?level?
tcltest::singleProcess ?boolean?
tcltest::skip ?patternList?
tcltest::skipDirectories ?patternList?
tcltest::skipFiles ?patternList?
tcltest::temporaryDirectory ?directory?
tcltest::testsDirectory ?directory?
tcltest::verbose ?level?
tcltest::test name description optionList
tcltest::bytestring string
tcltest::normalizeMsg msg
tcltest::normalizePath pathVar
tcltest::workingDirectory ?dir?
Description
Commands
- test name description ?-option value ...?
Defines and possibly runs a test with the name name and description description. The name and description of a test are used in messages reported by test during the test, as configured by the options of tcltest. The remaining option value arguments to test define the test, including the scripts to run, the conditions under which to run them, the expected result, and the means by which the expected and actual results should be compared. See Tests below for a complete description of the valid options and how they define a test. The test command returns an empty string.
- test name description ?constraints? body result
This form of test is provided to support test suites written for version 1 of the tcltest package, and also a simpler interface for a common usage. It is the same as “test name description -constraints constraints -body body -result result”. All other options to test take their default values. When constraints is omitted, this form of test can be distinguished from the first because all options begin with “-”.
- loadTestedCommands
Evaluates in the caller's context the script specified by configure -load or configure -loadfile. Returns the result of that script evaluation, including any error raised by the script. Use this command and the related configuration options to provide the commands to be tested to the interpreter running the test suite.
- makeFile contents name ?directory?
Creates a file named name relative to directory directory and writes contents to that file using the system encoding. If contents does not end with a newline, a newline will be appended so that the file named name does end with a newline. Because the system encoding is used, this command is only suitable for making text files. The file will be removed by the next evaluation of cleanupTests, unless it is removed by removeFile first. The default value of directory is the directory configure -tmpdir. Returns the full path of the file created. Use this command to create any text file required by a test with contents as needed.
- removeFile name ?directory?
Forces the file referenced by name to be removed. This file name should be relative to directory. The default value of directory is the directory configure -tmpdir. Returns an empty string. Use this command to delete files created by makeFile.
- makeDirectory name ?directory?
Creates a directory named name relative to directory directory. The directory will be removed by the next evaluation of cleanupTests, unless it is removed by removeDirectory first. The default value of directory is the directory configure -tmpdir. Returns the full path of the directory created. Use this command to create any directories that are required to exist by a test.
- removeDirectory name ?directory?
Forces the directory referenced by name to be removed. This directory should be relative to directory. The default value of directory is the directory configure -tmpdir. Returns an empty string. Use this command to delete any directories created by makeDirectory.
- viewFile file ?directory?
Returns the contents of file, except for any final newline, just as read -nonewline would return. This file name should be relative to directory. The default value of directory is the directory configure -tmpdir. Use this command as a convenient way to turn the contents of a file generated by a test into the result of that test for matching against an expected result. The contents of the file are read using the system encoding, so its usefulness is limited to text files.
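As an illustrative sketch (the test name and file name are hypothetical), makeFile, viewFile, and removeFile combine naturally within a single test:

```tcl
# Hypothetical example: create a temporary file, have the test body
# produce its contents, and use viewFile to turn those contents into
# the test result (viewFile strips the final newline).
test file-1.0 {write then view a temp file} -setup {
    set f [makeFile {} out.txt]
} -body {
    set ch [open $f w]
    puts $ch "hello"
    close $ch
    viewFile out.txt
} -cleanup {
    removeFile out.txt
} -result hello
```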
- cleanupTests
Intended to clean up and summarize after several tests have been run. Typically called once per test file, at the end of the file after all tests have been completed. For best effectiveness, be sure that cleanupTests is evaluated even if an error occurs earlier in the test file evaluation. Among other things, it reports warnings about any changes the test file made to Tcl global variables and to the global env array. Returns an empty string.
- runAllTests
This is a master command meant to run an entire suite of tests, spanning multiple files and/or directories, as governed by the configurable options of tcltest. See Running All Tests below for a complete description of the many variations possible with runAllTests.
Configuration Commands
- configure
Returns the list of configurable options supported by tcltest. See Configurable Options below for the full list of options, their valid values, and their effect on tcltest operations.
- configure option
Returns the current value of the supported configurable option option. Raises an error if option is not a supported configurable option.
- configure option value ?-option value ...?
Sets the value of each configurable option option to the corresponding value value, in order. Raises an error if an option is not a supported configurable option, if value is not a valid value for the corresponding option, or if a value is not provided. When an error is raised, the operation of configure is halted, and subsequent option value arguments are not processed.
- customMatch mode script
Registers mode as a new legal value of the -match option to test. When the -match mode option is passed to test, the script script will be evaluated to compare the actual result of evaluating the body of the test to the expected result. To perform the match, the script is completed with two additional words, the expected result, and the actual result, and the completed script is evaluated in the global namespace. The completed script is expected to return a boolean value indicating whether or not the results match. The built-in matching modes of test are exact, glob, and regexp.
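For instance (names are hypothetical), a matching mode that compares two lists regardless of element order could be registered like this:

```tcl
# Hypothetical custom match mode: compare two lists ignoring order.
# The registered script is completed with the expected and actual
# results as two extra words and must return a boolean.
proc orderlessMatch {expected actual} {
    expr {[lsort $expected] eq [lsort $actual]}
}
customMatch orderless orderlessMatch

test list-1.0 {order does not matter} -body {
    list b c a
} -match orderless -result {a b c}
```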
- testConstraint constraint ?boolean?
Sets or returns the boolean value associated with the named constraint. See Test Constraints below for more information.
- interpreter ?executableName?
Sets or returns the name of the executable to be execed by runAllTests to run each test file when configure -singleproc is false. The default value for interpreter is the name of the currently running program as returned by info nameofexecutable.
- outputChannel ?channelID?
Sets or returns the output channel ID. This defaults to stdout. Any test that prints test related output should send that output to outputChannel rather than letting that output default to stdout.
- errorChannel ?channelID?
Sets or returns the error channel ID. This defaults to stderr. Any test that prints error messages should send that output to errorChannel rather than printing directly to stderr.
Shortcut Configuration Commands
- debug ?level?
Same as “configure -debug ?level?”.
- errorFile ?filename?
Same as “configure -errfile ?filename?”.
- limitConstraints ?boolean?
Same as “configure -limitconstraints ?boolean?”.
- loadFile ?filename?
Same as “configure -loadfile ?filename?”.
- loadScript ?script?
Same as “configure -load ?script?”.
- match ?patternList?
Same as “configure -match ?patternList?”.
- matchDirectories ?patternList?
Same as “configure -relateddir ?patternList?”.
- matchFiles ?patternList?
Same as “configure -file ?patternList?”.
- outputFile ?filename?
Same as “configure -outfile ?filename?”.
- preserveCore ?level?
Same as “configure -preservecore ?level?”.
- singleProcess ?boolean?
Same as “configure -singleproc ?boolean?”.
- skip ?patternList?
Same as “configure -skip ?patternList?”.
- skipDirectories ?patternList?
Same as “configure -asidefromdir ?patternList?”.
- skipFiles ?patternList?
Same as “configure -notfile ?patternList?”.
- temporaryDirectory ?directory?
Same as “configure -tmpdir ?directory?”.
- testsDirectory ?directory?
Same as “configure -testdir ?directory?”.
- verbose ?level?
Same as “configure -verbose ?level?”.
Other Commands
The remaining commands provided by tcltest have better alternatives provided by tcltest or Tcl itself. They are retained to support existing test suites, but should be avoided in new code.
- test name description optionList
This form of test was provided to enable passing many options spanning several lines to test as a single argument quoted by braces, rather than needing to backslash quote the newlines between arguments to test. The optionList argument is expected to be a list with an even number of elements representing option and value arguments to pass to test. However, these values are not passed directly, as in the alternate forms of switch. Instead, this form makes an unfortunate attempt to overthrow Tcl's substitution rules by performing substitutions on some of the list elements as an attempt to implement a “do what I mean” interpretation of a brace-enclosed “block”. The result is nearly impossible to document clearly, and for that reason this form is not recommended. See the examples in Creating Test Suites with Tcltest below to see that this form is really not necessary to avoid backslash-quoted newlines. If you insist on using this form, examine the source code of tcltest if you want to know the substitution details, or just enclose the third through last argument to test in braces and hope for the best.
- workingDirectory ?directoryName?
Sets or returns the current working directory when the test suite is running. The default value for workingDirectory is the directory in which the test suite was launched. The Tcl commands cd and pwd are sufficient replacements.
- normalizeMsg msg
Returns the result of removing the “extra” newlines from msg, where “extra” is rather imprecise. Tcl offers plenty of string processing commands to modify strings as you wish, and customMatch allows flexible matching of actual and expected results.
- normalizePath pathVar
Resolves symlinks in a path, thus creating a path without internal redirection. It is assumed that pathVar is absolute. pathVar is modified in place. The Tcl command file normalize is a sufficient replacement.
- bytestring string
Construct a string that consists of the requested sequence of bytes, as opposed to a string of properly formed UTF-8 characters, using the value supplied in string. This allows the tester to create denormalized or improperly formed strings to pass to C procedures that are supposed to accept strings with embedded NULL bytes, and to confirm that a string result has a certain pattern of bytes. This is exactly equivalent to the Tcl command encoding convertfrom identity.
Tests
The test command is the heart of the tcltest package. Its essential function is to evaluate a Tcl script and compare the result with an expected result. The options of test define the test script, the environment in which to evaluate it, the expected result, and how to compare the actual result to the expected result. Some configuration options of tcltest also influence how test operates.
The valid options for test are summarized:
test name description ?-constraints keywordList|expression? ?-setup setupScript? ?-body testScript? ?-cleanup cleanupScript? ?-result expectedAnswer? ?-output expectedOutput? ?-errorOutput expectedError? ?-returnCodes codeList? ?-errorCode expectedErrorCode? ?-match mode?
The name may be any string. It is conventional to choose a name according to the pattern:
target-majorNum.minorNum
- -constraints keywordList|expression
The optional -constraints attribute can be list of one or more keywords or an expression. If the -constraints value is a list of keywords, each of these keywords should be the name of a constraint defined by a call to testConstraint. If any of the listed constraints is false or does not exist, the test is skipped. If the -constraints value is an expression, that expression is evaluated. If the expression evaluates to true, then the test is run. Note that the expression form of -constraints may interfere with the operation of configure -constraints and configure -limitconstraints, and is not recommended. Appropriate constraints should be added to any tests that should not always be run. That is, conditional evaluation of a test should be accomplished by the -constraints option, not by conditional evaluation of test. In that way, the same number of tests are always reported by the test suite, though the number skipped may change based on the testing environment. The default value is an empty list. See Test Constraints below for a list of built-in constraints and information on how to add your own constraints.
- -setup script
The optional -setup attribute indicates a script that will be run before the script indicated by the -body attribute. If evaluation of script raises an error, the test will fail. The default value is an empty script.
- -body script
The -body attribute indicates the script to run to carry out the test, which must return a result that can be checked for correctness. If evaluation of script raises an error, the test will fail (unless the -returnCodes option is used to state that an error is expected). The default value is an empty script.
- -cleanup script
The optional -cleanup attribute indicates a script that will be run after the script indicated by the -body attribute. If evaluation of script raises an error, the test will fail. The default value is an empty script.
- -match mode
The -match attribute determines how expected answers supplied by -result, -output, and -errorOutput are compared. Valid values for mode are regexp, glob, exact, and any value registered by a prior call to customMatch. The default value is exact.
- -result expectedValue
The -result attribute supplies the expectedValue against which the return value from script will be compared. The default value is an empty string.
- -output expectedValue
The -output attribute supplies the expectedValue against which any output sent to stdout or outputChannel during evaluation of the script(s) will be compared. Note that only output printed using the global puts command is used for comparison. If -output is not specified, output sent to stdout and outputChannel is not processed for comparison.
- -errorOutput expectedValue
The -errorOutput attribute supplies the expectedValue against which any output sent to stderr or errorChannel during evaluation of the script(s) will be compared. Note that only output printed using the global puts command is used for comparison. If -errorOutput is not specified, output sent to stderr and errorChannel is not processed for comparison.
- -returnCodes expectedCodeList
The optional -returnCodes attribute supplies expectedCodeList, a list of return codes that may be accepted from evaluation of the -body script. If evaluation of the -body script returns a code not in the expectedCodeList, the test fails. All return codes known to return, in both numeric and symbolic form, including extended return codes, are acceptable elements in the expectedCodeList. Default value is “ok return”.
- -errorCode expectedErrorCode
The optional -errorCode attribute supplies expectedErrorCode, a glob pattern that should match the error code reported from evaluation of the -body script. If evaluation of the -body script returns a code not matching expectedErrorCode, the test fails. Default value is “*”. If the value of -returnCodes does not include error, it is set to error.
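Putting several of these attributes together, the following sketch tests a script that is expected to raise an error (the error code shown is hypothetical, and -errorCode requires tcltest 2.5):

```tcl
# Hypothetical example combining -returnCodes and -errorCode.
# The body raises an error with a custom error code; the test passes
# only if the return code is error, the error code matches the glob
# pattern, and the error message matches -result.
test err-1.0 {expected failure} -body {
    error "boom" {} {MYAPP FAILURE 42}
} -returnCodes error -errorCode {MYAPP *} -result boom
```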
Test Constraints
Constraints are used to determine whether or not a test should be skipped. Each constraint has a name, which may be any string, and a boolean value. Each test has a -constraints value which is a list of constraint names. There are two modes of constraint control. Most frequently, the default mode is used, indicated by a setting of configure -limitconstraints to false. In this mode, the test will run only if all constraints in the list are true-valued. Thus, the -constraints option of test is a convenient, symbolic way to define any conditions required for the test to be possible or meaningful. For example, a test with -constraints unix will only be run if the constraint unix is true, which indicates the test suite is being run on a Unix platform. The following built-in constraints are pre-defined:
- singleTestInterp
This test can only be run if all test files are sourced into a single interpreter.
- unix
This test can only be run on any Unix platform.
- win
This test can only be run on any Windows platform.
- nt
This test can only be run on any Windows NT platform.
- mac
This test can only be run on any Mac platform.
- unixOrWin
This test can only be run on a Unix or Windows platform.
- macOrWin
This test can only be run on a Mac or Windows platform.
- macOrUnix
This test can only be run on a Mac or Unix platform.
- tempNotWin
This test can not be run on Windows. This flag is used to temporarily disable a test.
- tempNotMac
This test can not be run on a Mac. This flag is used to temporarily disable a test.
- unixCrash
This test crashes if it is run on Unix. This flag is used to temporarily disable a test.
- winCrash
This test crashes if it is run on Windows. This flag is used to temporarily disable a test.
- macCrash
This test crashes if it is run on a Mac. This flag is used to temporarily disable a test.
- emptyTest
This test is empty, and so not worth running, but it remains as a place-holder for a test to be written in the future. This constraint has value false to cause tests to be skipped unless the user specifies otherwise.
- knownBug
This test is known to fail and the bug is not yet fixed. This constraint has value false to cause tests to be skipped unless the user specifies otherwise.
- nonPortable
This test can only be run in some known development environment. Some tests are inherently non-portable because they depend on things like word length, file system configuration, window manager, etc. This constraint has value false to cause tests to be skipped unless the user specifies otherwise.
- userInteraction
This test requires interaction from the user. This constraint has value false to cause tests to be skipped unless the user specifies otherwise.
- interactive
This test can only be run if the interpreter is in interactive mode (when the global tcl_interactive variable is set to 1).
- nonBlockFiles
This test can only be run if platform supports setting files into nonblocking mode.
- asyncPipeClose
This test can only be run if platform supports async flush and async close on a pipe.
- unixExecs
This test can only be run if this machine has Unix-style commands cat, echo, sh, wc, rm, sleep, fgrep, ps, chmod, and mkdir available.
- hasIsoLocale
This test can only be run if can switch to an ISO locale.
- root
This test can only run if Unix user is root.
- notRoot
This test can only run if Unix user is not root.
- eformat
This test can only run if app has a working version of sprintf with respect to the “e” format of floating-point numbers.
- stdio
This test can only be run if the interpreter can be opened as a pipe.
The alternative mode of constraint control is enabled by setting configure -limitconstraints to true; in that mode, only tests whose constraints match those named by configure -constraints are run. For example, one might use it to run exactly those tests that exercise known bugs, and discover whether any of them pass, indicating the bug had been fixed.
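As a sketch, a user-defined constraint gating a single test might look like this (the constraint name and the requirement it checks are hypothetical):

```tcl
# Hypothetical constraint: true only on 64-bit builds of Tcl.
testConstraint is64bit [expr {$tcl_platform(pointerSize) == 8}]

# The test runs only when the constraint is true; otherwise it is
# counted as skipped rather than failed.
test wide-1.0 {needs a 64-bit build} -constraints is64bit -body {
    expr {1 << 40}
} -result 1099511627776
```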
Running All Tests
The single command runAllTests is evaluated to run an entire test suite, spanning many files and directories. The configuration options of tcltest control the precise operations. The runAllTests command begins by printing a summary of its configuration to outputChannel. Each test file is then either sourced into the current interpreter or run in a child process, as controlled by configure -singleproc.
Configurable Options
The configure command is used to set and query the configurable options of tcltest. The valid options are:
- -singleproc boolean
Controls whether or not runAllTests spawns a child process for each test file. No spawning when boolean is true. Default value is false.
- -debug level
Sets the debug level to level, an integer value indicating how much debugging information should be printed to stdout. Note that debug messages always go to stdout, independent of the value of configure -outfile. Default value is 0. Levels are defined as:
- 0
Do not display any debug information.
- 1
Display information regarding whether a test is skipped because it does not match any of the tests specified by configure -match (userSpecifiedNonMatch) or matches any of the tests specified by configure -skip (userSpecifiedSkip). Also print warnings about possible lack of cleanup or balance in test files, and warnings about any re-use of test names.
- 2
Display the flag array parsed by the command line processor, the contents of the global env array, and all user-defined variables that exist in the current namespace as they are used.
- 3
Display information regarding what individual procs in the test harness are doing.
- -verbose level
Sets the type of output verbosity desired to level, a list of zero or more of the elements body, pass, skip, start, error, line, msec and usec. Default value is “body error”. Levels are defined as:
- body (b)
Display the body of failed tests
- pass (p)
Print output when a test passes
- skip (s)
Print output when a test is skipped
- start (t)
Print output whenever a test starts
- error (e)
Print errorInfo and errorCode, if they exist, when a test return code does not match its expected return code
- line (l)
Print source file line information of failed tests
- msec (m)
Print each test's execution time in milliseconds
- usec (u)
Print each test's execution time in microseconds
Note that the msec and usec verbosity levels are provided as indicative measures only. They do not tackle the problem of repeatability, which should be considered in performance tests or benchmarks. To use these verbosity levels to thoroughly track performance degradations, consider wrapping your test bodies with time commands.
The single letter abbreviations noted above are also recognized so that “configure -verbose pt” is the same as “configure -verbose {pass start}”.
- -preservecore level
Sets the core preservation level to level. This level determines how stringent checks for core files are. Default value is 0. Levels are defined as:
- 0
No checking — do not check for core files at the end of each test command, but do check for them in runAllTests after all test files have been evaluated.
- 1
Also check for core files at the end of each test command.
- 2
Check for core files at all times described above, and save a copy of each core file produced in configure -tmpdir.
- -limitconstraints boolean
Sets the mode by which test honors constraints as described in Tests above. Default value is false.
- -constraints list
Sets all the constraints in list to true. Also used in combination with configure -limitconstraints true to control an alternative constraint mode as described in Tests above. Default value is an empty list.
- -tmpdir directory
Sets the temporary directory to be used by makeFile, makeDirectory, viewFile, removeFile, and removeDirectory as the default directory where temporary files and directories created by test files should be created. Default value is workingDirectory.
- -testdir directory
Sets the directory searched by runAllTests for test files and subdirectories. Default value is workingDirectory.
- -file patternList
Sets the list of patterns used by runAllTests to determine what test files to evaluate. Default value is “*.test”.
- -notfile patternList
Sets the list of patterns used by runAllTests to determine what test files to skip. Default value is “l.*.test”, so that any SCCS lock files are skipped.
- -relateddir patternList
Sets the list of patterns used by runAllTests to determine what subdirectories to search for an all.tcl file. Default value is “*”.
- -asidefromdir patternList
Sets the list of patterns used by runAllTests to determine what subdirectories to skip when searching for an all.tcl file. Default value is an empty list.
- -match patternList
Set the list of patterns used by test to determine whether a test should be run. Default value is “*”.
- -skip patternList
Set the list of patterns used by test to determine whether a test should be skipped. Default value is an empty list.
- -load script
Sets a script to be evaluated by loadTestedCommands. Default value is an empty script.
- -loadfile filename
Sets the filename from which to read a script to be evaluated by loadTestedCommands. This is an alternative to -load. They cannot be used together.
- -outfile filename
Sets the file to which all output produced by tcltest should be written. A file named filename will be opened for writing, and the resulting channel will be set as the value of outputChannel.
- -errfile filename
Sets the file to which all error output produced by tcltest should be written. A file named filename will be opened for writing, and the resulting channel will be set as the value of errorChannel.
Creating Test Suites with Tcltest
The fundamental element of a test suite is the individual test command. We begin with several examples.
- [1]
Test of a script that returns normally.
test example-1.0 {normal return} {
    format %s value
} value
- [2]
Test of a script that requires context setup and cleanup. Note the bracing and indenting style that avoids any need for line continuation.
test example-1.1 {test file existence} -setup {
    set file [makeFile {} test]
} -body {
    file exists $file
} -cleanup {
    removeFile test
} -result 1
- [3]
Test of a script that raises an error.
test example-1.2 {error return} -body {
    error message
} -returnCodes error -result message
- [4]
Test with a constraint.
- [5]
Recommended system for writing conditional tests, using constraints to guard:
testConstraint X [expr $myRequirement]
test goodConditionalTest {} X {
    # body
} result
- [6]
Discouraged system for writing conditional tests, using if to guard the test call. After all tests in a test file, the command cleanupTests should be called.
- [7]
Here is a sketch of a sample test file illustrating those points. The name all.tcl is the default name used by runAllTests when combining multiple test suites into one testing run.
- [8]
Here is a sketch of a sample test suite master script:
package require Tcl 8.4
package require tcltest 2.2
package require example
::tcltest::configure -testdir \
        [file dirname [file normalize [info script]]]
eval ::tcltest::configure $argv
::tcltest::runAllTests
Compatibility
A number of commands and variables in the ::tcltest namespace provided by earlier releases of tcltest have not been documented here. They are no longer part of the supported public interface of tcltest and should not be used in new test suites. However, to continue to support existing test suites written to the older interface specifications, many of those deprecated commands and variables still work as before. For example, in many circumstances, configure will be automatically called shortly after package require tcltest 2.1 succeeds with arguments from the variable ::argv. This is to support test suites that depend on the old behavior that tcltest was automatically configured from command line arguments. New test files should not depend on this, but should explicitly include
eval ::tcltest::configure $::argv
or
::tcltest::configure {*}$::argv
to establish a configuration from command line arguments.
Known Issues
There are two known issues related to nested evaluations of test. The first issue relates to the stack level in which test scripts are executed. Tests nested within other tests may be executed at the same stack level as the outermost test. For example, in the following code:
test level-1.1 {level 1} {
    -body {
        test level-2.1 {level 2} {
        }
    }
}
any script executed in level-2.1 may be executed at the same stack level as the script defined for level-1.1.
In addition, while two tests have been run, only one result may be reported, because test results are tracked in global state.
Keywords
test, test harness, test suite
How to: Add a Recurring Event to Lists on Multiple Sites
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
This programming task shows how to add a recurring event with a Meeting Workspace site to the Events list of every site in a collection of subsites.
To add a recurring event with a Meeting Workspace site to the Events list of every site in a collection of subsites
Create a console application in Microsoft Visual Studio 2005, as described in How to: Create a Console Application.
Add a using or Imports directive to the opening of the .cs or .vb file for the Microsoft.SharePoint and Microsoft.SharePoint.Meetings namespaces.
Use the SPSite constructor to instantiate a specified site collection. This example uses an indexer on the AllWebs property of the SPSite class to return a specific site, and the Webs property of the SPWeb class to return the collection of subsites beneath the site. Set up a foreach loop to iterate through all the subsites and obtain the Events list for each site and the collection of list items in each Events list, as follows:
Dim evtTitle As String = Console.ReadLine() Dim siteCollection As New SPSite("Absolute_Url") Dim site As SPWeb = siteCollection.AllWebs("Site_Name") Dim subSites As SPWebCollection = site.Webs Dim subSite As SPWeb For Each subSite In subSites Dim list As SPList = subSite.Lists("Events") Dim listItems As SPListItemCollection = list.Items
string evtTitle = Console.ReadLine(); SPSite siteCollection = new SPSite("Absolute_Url"); SPWeb site = siteCollection.AllWebs["Site_Name"]; SPWebCollection subSites = site.Webs; foreach (SPWeb subSite in subSites) { SPList list = subSite.Lists["Events"]; SPListItemCollection listItems = list.Items;
Create a list item. This example uses the Add method of the SPListItemCollection class to create an uninitialized list item, uses indexers to set various properties for the new item, and then uses the Update method to finish creating the item.
Dim recEvent As SPListItem = listItems.Add()
Dim recData As String = "..."
recEvent("RecurrenceData") = recData
recEvent.Update()

SPListItem recEvent = listItems.Add();
string recData = "...";
recEvent["RecurrenceData"] = recData;
recEvent.Update();
The recData variable contains an XML fragment that specifies properties for a recurring event taking place daily for five days, and the XMLTZone indexer assigns time zone information for the current site. The XML that defines the recurrence and specifies the time zone information is contained in the ntext3 and ntext4 columns of the UserData table in the content database.
The following table shows examples of the different kinds of recurrence that can be used.
To add a Meeting Workspace site to the recurring event, use one of the Add methods of the SPWebCollection class and the LinkWithEvent method of the SPMeeting class.
Dim mwsSites As SPWebCollection = subSite.Webs Dim path As String = recEvent("Title").ToString() Dim newSite As SPWeb = mwsSites.Add(path, "Workspace_Name", _ "Description", Convert.ToUInt32(1033), "MPS#0", False, False) Dim mwsSite As SPMeeting = SPMeeting.GetMeetingInformation(newSite) Dim guid As String = list.ID.ToString() Dim id As Integer = recEvent.ID Try mwsSite.LinkWithEvent(subSite, guid, id, "WorkspaceLink", "Workspace") Catch ex As System.Exception Console.WriteLine(ex.Message) End Try Next subSite
SPWebCollection mwsSites = subSite.Webs; string path = recEvent["Title"].ToString(); SPWeb newSite = mwsSites.Add(path, "Workspace_Name", "Description", 1033, "MPS#0", false, false); SPMeeting mwsSite = SPMeeting.GetMeetingInformation(newSite); string guid = list.ID.ToString(); int id = recEvent.ID; try { mwsSite.LinkWithEvent(subSite, guid, id, "WorkspaceLink", "Workspace"); } catch (System.Exception ex) { Console.WriteLine(ex.Message); } }
After the Meeting Workspace site is created, the GetMeetingInformation method returns an SPMeeting object representing the site.
Press F5 to start the Console Application.
At the command prompt, type a name for the Meeting Workspace site, and then press ENTER to add a recurring event with a Meeting Workspace site to the Events list in all the subsites beneath a site.
Ext.form.Labelable
Ext.form.field.Field
Ext.mixin.Accessible
Ext.mixin.Bindable
Ext.mixin.ComponentDelegation
Ext.mixin.Focusable
Ext.mixin.Identifiable
Ext.mixin.Inheritable
Ext.mixin.Keyboard
Ext.mixin.Observable
Ext.state.Stateful
Ext.util.Animate
Ext.util.ElementContainer
Ext.util.Floating
Ext.util.Observable
Ext.util.Positionable
Ext.util.Renderable
Ext.util.StoreHolder
A calendar picker component. Similar to Ext.calendar.List, the items in the picker will display the title for each source calendar along with a color swatch representing the default color of that calendar's events.
The cfg-store will be the same Ext.calendar.store.Calendars instance used by your target Ext.calendar.view.Base.
The data source to which the combo / tagfield is bound. Acceptable values for this property are:
an Array : Arrays will be converted to an Ext.data.Store internally, automatically generating field names to work with all data components.
1-dimensional array : (e.g.,
['Foo','Bar'])
A 1-dimensional array will automatically be expanded (each array item will be used for both the combo valueField and displayField)
2-dimensional array : (e.g.,
[['f','Foo'],['b','Bar']])
For a multi-dimensional array, the value in index 0 of each item will be assumed to be the combo valueField, while the value at index 1 is assumed to be the combo displayField.
an Ext.data.Store config object. When passing a config you can specify the store type by alias. Passing a config object with a store type will dynamically create a new store of that type when the combo / tagfield is instantiated.
Ext.define('MyApp.store.States', {
    extend: 'Ext.data.Store',
    alias: 'store.states',
    fields: ['name']
});

Ext.create({
    xtype: 'combobox',
    renderTo: document.body,
    store: {
        type: 'states',
        data: [{ name: 'California' }]
    },
    queryMode: 'local',
    displayField: 'name',
    valueField: 'name'
});
Gets the current store instance.
The store, null if one does not exist.
Sets the store to the specified store.
Available since: 5.0.0
store : Object
The underlying data value name to bind to this ComboBox.
Note: use of a
valueField requires the user to make a selection in order for a value
to be mapped. See also
displayField.
Defaults to match the value of the displayField config.
An incrementing numeric counter indicating activation index for use by the zIndexManager to sort its stack.
Defaults to:
0
Returns the value of activeCounter
Sets the value of activeCounter
If specified, then the component will be displayed with this value as its active error when first rendered. Use setActiveError or unsetActiveError to change it after component creation.
Gets the active error message for this component, if any. This does not trigger validation on its own, it merely returns any message that the component may already hold.
The active error message on the component; if there is no error, an empty string is returned.
Sets the active error message to the given string. This replaces the entire error message contents with the given string. Also see setActiveErrors which accepts an Array of messages and formats them according to the activeErrorsTpl. Note that this only updates the error message element's text and attributes, you'll have to call doComponentLayout to actually update the field's layout to match. If the field extends Ext.form.field.Base you should call markInvalid instead.
msg : String
The error message
The template used to format the Array of error messages passed to setActiveErrors into a single HTML string. if the msgTarget is title, it defaults to a list separated by new lines. Otherwise, it renders each message as an item in an unordered list.
Defaults to:
undefined
An optional string or
XTemplate configuration to insert in the field markup
at the end of the input containing element. If an
XTemplate is used, the component's
render data serves as the context.
An optional string or
XTemplate configuration to insert in the field markup
after the label text. If an
XTemplate is used, the component's
render data serves as the context.
An optional string or
XTemplate configuration to insert in the field markup
after the label element. If an
XTemplate is used, the component's
render data serves as the context.
An optional string or
XTemplate configuration to insert in the field markup
after the subTpl markup. If an
XTemplate is used, the
component's render data serves as the context.
Specify false to validate that the value's length must be > 0. If
true, then a blank value
is always taken to be valid regardless of any vtype validation that may be
applied.
If vtype validation must still be applied to blank values, configure
validateBlank as
true.
Defaults to:
true
Specify false to automatically trim the value before validating whether the value is blank. Setting this to false automatically sets allowBlank to false.
Defaults to:
true
The text query to send to the server to return all records for the list with no filtering
Defaults to:
''
Configure as
true to allow matching of the typed characters at any position in the
valueField's value.
Localized announcement text for validation errors. This text will be used by Assistive Technologies such as screen readers to alert the users when field validation fails.
This config is used with Ext.String#format. '{0}' will be replaced with the actual error message(s), and '{1}' will be replaced with the field label.
Defaults to:
'Input error. {0}.'
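Since the config above only describes the substitution informally, here is a minimal plain-JavaScript sketch of '{0}'/'{1}'-style positional templates. formatTemplate is a hypothetical stand-in for illustration, not Ext.String#format itself:

```javascript
// Minimal stand-in for Ext.String.format-style templates: '{0}', '{1}', ...
// are replaced by the corresponding positional arguments; unknown indexes
// are left untouched.
function formatTemplate(tpl) {
  var args = Array.prototype.slice.call(arguments, 1);
  return tpl.replace(/\{(\d+)\}/g, function (match, index) {
    var value = args[Number(index)];
    return value === undefined ? match : String(value);
  });
}

// Building an announcement the way ariaErrorText is described:
// {0} = the actual error message(s), {1} = the field label.
var announcement = formatTemplate(
  'Input error. {0}.',
  'The minimum length for this field is 2'
);
```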
Optional text description for this object. This text will be announced to Assistive Technology users when the object is focused.
Defaults to:
undefined
Defaults to:
{ role: 'presentation' }
Whether to adjust the component's body width to make room for 'side' error messages.
Defaults to:
true
When
true, the last selected record in the dropdown list will be re-selected
upon autoSelect. Set to
false to always select the first record in the
drop-down list. For accessible applications it is recommended to set this option
to
false.
Defaults to:
true
true to automatically show the component upon creation. This config option may only be used
for cfg-floating components or components that use autoRender.
Defaults to:
false
Available since: 2.3.0
The CSS class to be applied to the body content element.
Defaults to:
Ext.baseCSSPrefix + 'form-item-body'
An optional string or
XTemplate configuration to insert in the field markup
at the beginning of the input containing element. If an
XTemplate is used,
the component's render data serves as the context.
An optional string or
XTemplate configuration to insert in the field markup
before the label text. If an
XTemplate is used, the component's
render data serves as the context.
An optional string or
XTemplate configuration to insert in the field markup
before the label element. If an
XTemplate is used, the component's
render data serves as the context.
An optional string or
XTemplate configuration to insert in the field markup
before the subTpl markup. If an
XTemplate is used, the
component's render data serves as the context.
The error text to display if the allowBlank validation fails.
Defaults to:
'This field is required'.
Configure as
true to make the filtering match with exact case matching
Defaults to:
false
Defines a timeout in milliseconds for buffering checkChangeEvents that fire in rapid succession. Defaults to 50 milliseconds.
Defaults to:
50
A list of event names that will be listened for on the field's input element, which will cause the field's value to be checked for changes. If a change is detected, the change event will be fired, followed by validation if the validateOnChange option is enabled.
Defaults to ['change', 'propertychange', 'keyup'] in Internet Explorer, and ['change', 'input', 'textInput', 'keyup', 'dragdrop'] in other browsers. This catches all the ways that field values can be changed in most supported browsers; the only known exceptions at the time of writing are:
If you need to guarantee on-the-fly change notifications including these edge cases, you can call the checkChange method on a repeating interval, e.g. using Ext.TaskManager, or if the field is within a Ext.form.Panel, you can use the FormPanel's Ext.form.Panel#pollForChanges configuration to set up such a task automatically.
Defaults to:
Ext.isIE && (!document.documentMode || document.documentMode <= 9) ? [ 'change', 'propertychange', 'keyup' ] : [ 'change', 'input', 'textInput', 'keyup', 'dragdrop' ]
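The buffered change-checking described above can be sketched in plain JavaScript. This is an illustrative model only; the manual clock is an assumption made for testability, not how Ext schedules its internal timer:

```javascript
// Sketch of buffering change checks: events arriving within `bufferMs`
// of each other collapse into a single check, mimicking the
// checkChangeBuffer behaviour. Time is passed in explicitly so the
// logic can be exercised without real timeouts.
function createChangeBuffer(bufferMs, onCheck) {
  var lastEventAt = null;
  var pendingValue = null;
  return {
    // a change-like DOM event carrying the field's value at time `now`
    event: function (value, now) {
      lastEventAt = now;
      pendingValue = value;
    },
    // called as time advances; fires one check once the buffer elapses
    tick: function (now) {
      if (lastEventAt !== null && now - lastEventAt >= bufferMs) {
        onCheck(pendingValue);
        lastEventAt = null;
      }
    }
  };
}

var checks = [];
var buffer = createChangeBuffer(50, function (v) { checks.push(v); });
buffer.event('a', 0);
buffer.event('ab', 10);   // rapid succession: collapses with previous
buffer.tick(40);          // only 30ms after last event: no check yet
buffer.tick(60);          // 50ms elapsed: one check fires with latest value
```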
Defaults to:
{ 'hiddenDataEl': true }
Returns the value of childEls
Object / String[] / Object[]
Sets the value of childEls
childEls : Object / String[] / Object[]
When queryMode is
'local' only
As text is entered, the underlying store is filtered to match the value. When this option
is
true, any filtering applied by this field will be cleared when focus is removed
& reinstated on focus.
If
false, the filters will be left in place.
Defaults to:
Has no effect if multiSelect is
false
Configure as true to automatically collapse the pick list after a selection is made.
Defaults to:
false
Deprecated since version 5.1.0
For multiple selection use Ext.form.field.Tag or Ext.view.MultiSelector.
Set of options that will be used as defaults for the user-configured listConfig object.
Defaults
The character(s) used to separate the display values of multiple
selected items when
multiSelect = true.
Defaults to:
', '
Deprecated since version 5.1.0
For multiple selection use Ext.form.field.Tag or Ext.view.MultiSelector
Returns the value of delimiter
Sets the value of delimiter
The CSS class to use when the field value is dirty.
Defaults to:
Ext.baseCSSPrefix + 'form-dirty'
true to disable the component.
Defaults to:
false
Available since: 2.3.0
Enable or disable the component.
disabled : Boolean
true to disable.
CSS class to add when the Component is disabled.
Defaults to:
Ext.baseCSSPrefix + 'item-disabled'
Specify true to disable input keystroke filtering. This will ignore the maskRe field.
Defaults to:
false
The underlying data field name to bind to this ComboBox.
See also
valueField.
Defaults to:
'text'
Returns the value of displayField
Sets the value of displayField
The template to be used to display selected records inside the text field. An array of the selected records' data will be passed to the template. Defaults to:
'<tpl for=".">' +
    '{[typeof values === "string" ? values : values["' + me.displayField + '"]]}' +
    '<tpl if="xindex < xcount">' + me.delimiter + '</tpl>' +
'</tpl>'
By default only the immediate data of the record is passed (no associated data). The getRecordDisplayData can be overridden to extend this.
Defaults to:
null
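As an illustration of what the default displayTpl produces, here is a plain-JavaScript sketch. renderDisplay is a hypothetical helper mirroring the template's string/record handling and delimiter insertion, not the template engine itself:

```javascript
// Each selected record contributes its displayField value, separated by
// `delimiter`; bare strings pass through unchanged, matching the
// typeof-check in the default template.
function renderDisplay(records, displayField, delimiter) {
  return records.map(function (rec) {
    return typeof rec === 'string' ? rec : rec[displayField];
  }).join(delimiter);
}
```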
Returns the value of displayTpl
String / String[] / Ext.XTemplate
Sets the value of displayTpl
displayTpl : String / String[] / Ext.XTemplate
False to prevent the user from typing text directly into the field; the field can only have its value set via selecting a value from the picker. In this state, the picker can also be opened by clicking directly on the input field itself.
Defaults to:
true
Returns the value of editable
Sets the value of editable
The CSS class to apply to an empty field to style the emptyText. This class is automatically added and removed as needed depending on the current field value.
Defaults to:
Ext.baseCSSPrefix + 'form-empty-field'
The default text to place into an empty field.
Note that normally this value will be submitted to the server if this field is enabled; to prevent this you can set the submitEmptyText option of Ext.form.Basic#submit to false.
Also note that if you use inputType:'file', emptyText is not supported and should be avoided.
Note that for browsers that support it, setting this property will use the HTML 5 placeholder attribute, and for older browsers that don't support the HTML 5 placeholder attribute the value will be placed directly into the input element itself as the raw value. This means that older browsers will obfuscate the emptyText value for password input fields.
Defaults to:
''
Returns the value of this field's cfg-emptyText
The value of this field's emptyText
Sets the default text to place into an empty field
value : String
The cfg-emptyText value for this field
this
true to enable the proxying of key events for the HTML input field
Defaults to:
false
When queryMode is
'local' only
Set to
true to have the ComboBox use the typed value as a RegExp source to filter the store
to get possible matches. Invalid regex values will be ignored.
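The "invalid regex values will be ignored" behaviour can be sketched as follows. filterByTypedRegex is a hypothetical plain-JS stand-in, not the combo's actual store-filter plumbing:

```javascript
// enableRegEx-style local filtering: the typed value is used as a RegExp
// source; if it is not a valid pattern, the filter is skipped entirely.
function filterByTypedRegex(values, typed) {
  var re;
  try {
    re = new RegExp(typed);
  } catch (e) {
    return values;          // invalid pattern: leave the list unfiltered
  }
  return values.filter(function (v) { return re.test(v); });
}
```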
True to set the maxLength property on the underlying input field. Defaults to false
The CSS class to be applied to the error message element.
Defaults to:
Ext.baseCSSPrefix + 'form-error-msg'
An extra CSS class to be applied to the body content element in addition to baseBodyCls.
Defaults to:
Ext.baseCSSPrefix + 'form-text-field-body'
The default CSS class for the field input
Defaults to:
Ext.baseCSSPrefix + 'form-field'
The label for the field. It gets appended with the labelSeparator, and its position and sizing is determined by the labelAlign and labelWidth configs.
Defaults to:
undefined
Returns the label for the field. Defaults to simply returning the fieldLabel config. Can be overridden to provide a custom generated label.
The configured field label, or empty string if not defined
This is a template method, a hook into the functionality of this class. Feel free to override it in child classes.
Set the label of this field.
label : String
The new label. The labelSeparator will be automatically appended to the label string.
Optional CSS style(s) to be applied to the field input element. Should be a valid argument to Ext.dom.Element#applyStyles. Defaults to undefined. See also the setFieldStyle method for changing the style after initialization.
Set the CSS style of the field input element.
style : String/Object/Function
The style(s) to apply. Should be a valid argument to Ext.dom.Element#applyStyles.
The content of the field body is defined by this config option.
Defaults to:
[
    // note: {id} here is really {inputId}, but {cmpId} is available
    '<input id="{id}" data-',
    '{%if (values.maxLength !== undefined){%}',
    '<tpl foreach="ariaElAttributes"> {$}="{.}"</tpl>',
    '</tpl>',
    '<tpl foreach="inputElAriaAttributes"> {$}="{.}"</tpl>',
    ' class="{fieldCls} {typeCls} {typeCls}-{ui} {editableCls} {inputCls} {fixCls}" autocomplete="off"/>',
    { disableFormats: true }
]
The CSS class to use when the field receives focus
Defaults to:
'form-focus'
Specifies whether the floated component should be automatically focused when it is brought to the front.
Defaults to:
true
true to restrict the selected value to one of the values in the list,
false to allow
the user to set arbitrary text into the field.
Defaults to:
false
Helpful text describing acceptable format for field values. This text will be announced by Assistive Technologies such as screen readers when the field is focused.
This option is superseded by ariaHelp.
Deprecated since version 6.2.0
This config is deprecated.
When inside FormPanel, any component configured with
formBind: true will
be enabled/disabled depending on the validity state of the form.
See Ext.form.Panel for more information and example.
Defaults to:
false
A CSS class to be applied to the outermost element to denote that it is participating in the form field layout.
Defaults to:
Ext.baseCSSPrefix + 'form-item'.
true if this field should automatically grow and shrink to its content
Defaults to:
false
The maximum width to allow when
grow = true
Defaults to:
800
The minimum width to allow when
grow = true
Defaults to:
30
false to not allow the component to resize itself when its data changes
(and its grow property is
true)
Defaults to:
true
When set to true, the label element (fieldLabel and labelSeparator) will be automatically hidden if the fieldLabel is empty. Setting this to false will cause the empty label element to be rendered and space to be reserved for it; this is useful if you want a field without a label to line up with other labeled fields in the same form.
If you wish to unconditionally hide the label even if a non-empty fieldLabel is configured, then set the hideLabel config to true.
Defaults to:
true
Set to true to completely hide the label element (fieldLabel and labelSeparator). Also see hideEmptyLabel, which controls whether space will be reserved for an empty fieldLabel.
Defaults to:
false
true to hide all triggers
Defaults to:
false
Returns the value of hideTrigger
Sets the value of hideTrigger.
An optional string or
XTemplate configuration to insert in the field markup
inside the input element (as attributes). If an
XTemplate is used, the component's
subTpl data serves as the context.
The id that will be given to the generated input DOM element. Defaults to an automatically generated id. If you configure this manually, you must make sure it is unique in the document.
Returns the input id for this field. If none was specified via the inputId config, then an id will be automatically generated.

The type attribute for input fields -- e.g. radio, text, password, file. The extended types supported by HTML5 inputs (url, email, etc.) may also be used, though using them will cause older browsers to fall back to 'text'.
The type 'password' must be used to render that field type currently -- there is no separate Ext component for that. You can use Ext.form.field.File which creates a custom-rendered file upload field, but if you want a plain unstyled file input you can use a Base with inputType:'file'.
Defaults to:
'text'
The CSS class that is added to the element wrapping the input element
Defaults to:
Ext.baseCSSPrefix + 'form-text-wrap'
The CSS class to use when marking the component invalid.
Defaults to:
Ext.baseCSSPrefix + 'form-invalid'
The error text to use when marking a field invalid and no message is provided
Defaults to:
'The value in this field is invalid'
true if this field renders as a text input.
Defaults to:
true
Available since: 5.0
The rendering template for the field decorations. Component classes using this mixin should include logic to use this as their renderTpl, and implement the getSubTplMarkup method to generate the field body content.
Defaults to:
[
    '{beforeLabelTpl}',
    '<label id="{id}-labelEl" data-',
    '{beforeLabelTextTpl}',
    '<span id="{id}-labelTextEl" data-',
    '<tpl if="fieldLabel">{fieldLabel}',
    '<tpl if="labelSeparator">{labelSeparator}</tpl>',
    '</tpl>',
    '</span>',
    '{afterLabelTextTpl}',
    '</span>',
    '</label>',
    '{afterLabelTpl}',
    '<div id="{id}-bodyEl" data-',
    ' {fieldBodyCls} {fieldBodyCls}-{ui}</tpl> {growCls} {extraFieldBodyCls}"',
    '<tpl if="bodyStyle">',
    '<tpl if="ariaHelp">',
    '<span id="{id}-ariaHelpEl" data-',
    '{ariaHelp}',
    '</span>',
    '</tpl>',
    '<span id="{id}-ariaStatusEl" data-',
    '{ariaStatus}',
    '</span>',
    '<span id="{id}-ariaErrorEl" data-',
    '</span>',
    '</tpl>',
    '</div>',
    '<tpl if="renderError">',
    '<div id="{id}-errorWrapEl" data-',
    '<div role="presentation" id="{id}-errorEl" data-',
    '</div>',
    '</div>',
    '</tpl>',
    { disableFormats: true }
]
Controls the position and alignment of the fieldLabel. Valid values are:
Defaults to:
'left'
An optional string or
XTemplate configuration to insert in the field markup
inside the label element (as attributes). If an
XTemplate is used, the component's
render data serves as the context.
The CSS class to be applied to the label element. This (single) CSS class is used to formulate the renderSelector and drives the field layout where it is concatenated with a hyphen ('-') and labelAlign. To add additional classes, use labelClsExtra.
Defaults to:
Ext.baseCSSPrefix + 'form-item-label'
An optional string of one or more additional CSS classes to add to the label element. Defaults to empty.
The amount of space in pixels between the fieldLabel and the field body.
This defaults to
5 for compatibility with Ext JS 4, however, as of Ext JS 5
the space between the label and the body can optionally be determined by the theme
using the $form-label-horizontal-spacing (for side-aligned labels) and
$form-label-vertical-spacing (for top-aligned labels) SASS variables.
In order for the stylesheet values to take effect, you must use a labelPad value
of
null.
Defaults to:
5
Character(s) to be inserted at the end of the label text.
Set to empty string to hide the separator completely.
Defaults to:
':'
A CSS style specification string to apply directly to this field's label.
The width of the fieldLabel in pixels. Only applicable if labelAlign is set to "left" or "right".
Defaults to:
100
An optional set of configuration properties that will be passed to the Ext.view.BoundList's constructor. Any configuration that is valid for BoundList can be included. Some of the more useful ones are:
loadingText : defaults to 'Loading...'
minWidth : defaults to 70
maxWidth : defaults to undefined
maxHeight : defaults to 300
resizable : defaults to false
shadow : defaults to 'sides'
width : defaults to undefined (automatically set to the width of the ComboBox field if matchFieldWidth is true)
getInnerTpl A function which returns a template string which renders the ComboBox's displayField value in the dropdown. This defaults to just outputting the raw value, but may use any Ext.XTemplate methods to produce output.
The running template is configured with some extra properties that provide some context:
- field Ext.form.field.ComboBox This combobox - store Ext.data.Store This combobox's data store
An input mask regular expression that will be used to filter keystrokes (character being typed) that do not match. Note: It does not filter characters already in the input.
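A rough plain-JavaScript model of keystroke masking follows. acceptKeystroke is hypothetical; the real field wires this check into key events, and, per the note above, it gates new keystrokes without cleaning what is already in the input:

```javascript
// A typed character is accepted only if it matches the mask regex.
function acceptKeystroke(maskRe, ch) {
  return maskRe.test(ch);
}

var digitsOnly = /[0-9]/;

// Simulating a user typing '1', 'a', '2', '%': only digits get through.
var typed = ['1', 'a', '2', '%'].filter(function (ch) {
  return acceptKeystroke(digitsOnly, ch);
}).join('');
```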
Whether the picker dropdown's width should be explicitly set to match the width of the field. Defaults to true.
Defaults to:
true
The maximum value in pixels which this Component will set its height to.
Warning: This will override any size management applied by layout managers.
Defaults to:
null
Returns the value of maxHeight
Sets the value of maxHeight
Maximum input field length allowed by validation. This behavior is intended to provide instant feedback to the user by improving usability to allow pasting and editing or overtyping and back tracking. To restrict the maximum number of characters that can be entered into the field use the enforceMaxLength option.
Defaults to:
Number.MAX_VALUE
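To make the distinction concrete, here is a hedged plain-JS sketch contrasting validation-only maxLength with a hard enforceMaxLength cut-off. Both helpers are hypothetical illustrations, not Ext's implementation:

```javascript
// maxLength only produces a validation error -- the user can still
// paste, overtype, and back-track past the limit.
function validateLength(value, maxLength) {
  return value.length <= maxLength
    ? []
    : ['The maximum length for this field is ' + maxLength];
}

// enforceMaxLength, by contrast, truncates input at the limit.
function enforceLength(value, maxLength) {
  return value.slice(0, maxLength);
}
```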
Error text to display if the maximum length validation fails.
Defaults to:
'The maximum length for this field is {0}'
The maximum value in pixels which this Component will set its width to.
Warning: This will override any size management applied by layout managers.
Defaults to:
null
Returns the value of maxWidth
Sets the value of maxWidth
The minimum number of characters the user must type before autocomplete and typeAhead activate.
Defaults to
4 if
queryMode = 'remote' or
0 if
queryMode = 'local',
does not apply if
editable = false.
The minimum value in pixels which this Component will set its height to.
Warning: This will override any size management applied by layout managers.
Defaults to:
null
Returns the value of minHeight
Sets the value of minHeight
Minimum input field length required
Defaults to:
0
Error text to display if the minimum length validation fails.
Defaults to:
'The minimum length for this field is {0}'.
The location where the error message text should display. Must be one of the following values:
qtip Display a quick tip containing the message when the user hovers over the field.
This is the default.
Ext.tip.QuickTipManager#init must have been called for this setting to work.
title Display the message in a default browser title attribute popup.
under : Add a block div beneath the field containing the error message.
side : Add an error icon to the right of the field, displaying the message in a popup on hover.
none : Don't display any error message. This might be useful if you are implementing custom error display.
[element id] : Add the error message directly to the innerHTML of the specified element.
Defaults to:
'qtip'
If set to
true, allows the combo field to hold more than one value at a time, and allows
selecting multiple items from the dropdown list. The combo's text field will show all
selected values separated by the delimiter.
Defaults to:
false
Deprecated since version 5.1.0
Use Ext.form.field.Tag or Ext.view.MultiSelector
The name of the field. This is used as the parameter name when including the field value in a form submit(). If no name is configured, it falls back to the inputId. To prevent the field from being included in the form submit, set submitValue to false.
Returns the name attribute of the field. This is used as the parameter name when including the field value in a form submit().
Set to
true for this component's
name property to be tracked by its containing
nameHolder.
Defaults to:
false
A class to be added to the field's bodyEl element when the picker is opened.
Defaults to:
'x-pickerfield-open'
If greater than
0, an Ext.toolbar.Paging is displayed in the footer of the dropdown
list and the filter queries will execute with page start
and limit parameters.
Only applies when
queryMode = 'remote'.
Defaults to:
0
The alignment position with which to align the picker. Defaults to "tl-bl?"
Defaults to:
'tl-bl?'
An offset [x,y] to use in addition to the pickerAlign when positioning the picker. Defaults to undefined.
Has no effect if multiSelect is
false
Configure as
false to automatically collapse the pick list after a selection is made.
Defaults to:
true
Deprecated since version 5.1.0
For multiple selection use Ext.form.field.Tag or Ext.view.MultiSelector

true to disable displaying any error message set on this object.
When true, this prevents the combo from re-querying (either locally or remotely) when the current query is the same as the previous query.
Defaults to:
true
The length of time in milliseconds to delay between the start of typing and sending the query to filter the dropdown list.
Defaults to
500 if
queryMode = 'remote' or
10 if
queryMode = 'local'
The mode in which the ComboBox uses the configured Store. Acceptable values are:
'remote' :
In queryMode: 'remote', the ComboBox loads its Store dynamically from the server. As the user types, filters are automatically added to the Store and passed with every load request, allowing the server to further refine the returned dataset.
Typically, in an autocomplete situation, hideTrigger is configured
true
because it has no meaning for autocomplete.
'local' :
ComboBox loads local data
var combo = new Ext.form.field.ComboBox({
    renderTo: document.body,
    queryMode: 'local',
    store: new Ext.data.ArrayStore({
        id: 0,
        fields: [
            'myId',        // numeric value is the key
            'displayText'
        ],
        data: [[1, 'item1'], [2, 'item2']]  // data is local
    }),
    valueField: 'myId',
    displayField: 'displayText',
    triggerAction: 'all'
});
Defaults to:
'remote'
Name of the parameter used by the Store to pass the typed string when the ComboBox
is configured with
queryMode: 'remote'. If explicitly set to a falsy value
it will not be sent.
Defaults to:
'query'
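A sketch of how such a request could be assembled follows. The URL shape and the buildQueryUrl helper are assumptions for illustration only, not Ext's actual proxy encoding:

```javascript
// The typed string is sent under queryParam, plus start/limit paging
// parameters when pageSize > 0. A falsy queryParam is simply not sent.
function buildQueryUrl(base, queryParam, typed, pageSize, page) {
  var params = [];
  if (queryParam) {
    params.push(queryParam + '=' + encodeURIComponent(typed));
  }
  if (pageSize > 0) {
    params.push('start=' + (page - 1) * pageSize);
    params.push('limit=' + pageSize);
  }
  return params.length ? base + '?' + params.join('&') : base;
}
```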
true to prevent the user from changing the field, and hide all triggers.
Sets the read-only state of this field.
readOnly : Boolean
True to prevent the user changing the field and explicitly hide the trigger(s). Setting this to true will supersede settings editable and hideTrigger. Setting this to false will defer back to editable and hideTrigger.
The CSS class applied to the component's main element when it is readOnly.
Defaults to:
Ext.baseCSSPrefix + 'form-readonly'
A JavaScript RegExp object to be tested against the field value during validation. If the test fails, the field will be marked invalid using either regexText or invalidText.
The error text to display if regex is used and the test fails during validation
Defaults to:
''
true to attach a Ext.util.ClickRepeater to the trigger(s).
Click repeating behavior can also be configured on the individual trigger instances using the trigger's repeatClick config.
Defaults to:
false
The CSS class to apply to a required field, i.e. a field where allowBlank is false.
Defaults to:
Ext.baseCSSPrefix + 'form-required-field'
The selected model. Typically used with binding.
Defaults to:
null
Returns the combobox's selection.
The selected record
Sets the value of selection
selection : Ext.data.Model
true to automatically select any existing field text when the field receives input
focus. Only applies when editable = true
Defaults to:
false
Whether the Tab key should select the currently highlighted item.
Defaults to:
true
An initial value for the 'size' attribute on the text input element. This is only used if the field has no configured width and is not given a width by its container's layout. Defaults to 20.
Deprecated since version 6.5.0
Please use width instead.
Gets the current size of the component's underlying element.
contentSize : Boolean (optional)
true to get the width/size minus borders and padding
An object containing the element's size:
Sets the width and height of this Component. This method fires the resize event.
This method can accept either width and height as separate arguments, or you can pass
a size object like
{ width:10, height:20 }.
width : Number/String/Object
The new width to set. This may be one of:
{width: widthValue, height: heightValue}.
undefinedto leave the width unchanged.
height : Number/String
The new height to set (not required if a size object is passed as the first arg). This may be one of:
undefinedto leave the height unchanged.
this.
A JavaScript RegExp object used to strip unwanted content from the value
during input. If
stripCharsRe is specified,
every character sequence matching
stripCharsRe will be removed.
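The stripping behaviour is simple enough to sketch directly. stripChars is a hypothetical wrapper for illustration; note that the global flag on the pattern is what makes every matching sequence get removed:

```javascript
// Every character sequence matching stripCharsRe is removed from the
// value. Without the /g flag only the first match would be stripped.
function stripChars(value, stripCharsRe) {
  return value.replace(stripCharsRe, '');
}
```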
Setting this to false will prevent the field from being submitted even when it is not disabled.
Defaults to:
true
Returns the value that would be included in a standard form submit for this field. This will be combined with the field's name to form a name=value pair in the submitted parameters. If an empty string is returned then just the name= will be submitted; if null is returned then nothing will be submitted.
Note that the value returned will have been processed but may or may not have been successfully validated.
The value to be submitted, or null.
Sets a DOM tabIndex for this field. tabIndex may be set to
-1 in order to remove
the field from the tab rotation.
Note: tabIndex only applies to fields that are rendered. It does not affect fields built via applyTo
Return the actual tabIndex for this Focusable.
tabIndex attribute value
Set the tabIndex property for this Focusable. If the focusEl is available, set tabIndex attribute on it, too.
newTabIndex : Number
new tabIndex to set
The id, DOM node or Ext.dom.Element of an existing HTML
<select> element to convert
into a ComboBox. The target select's options will be used to build the options
in the ComboBox dropdown; a configured store will take precedence over this.
true to automatically render this combo box in place of the select element that is being
transformed. If
false, this combo will be rendered using the normal
rendering, either as part of a layout, or using renderTo or method-render.
Defaults to:
true
The action to execute when the trigger is clicked.
'all': run the query specified by the
allQuery
config option
'last': run the query using the
last query value.
'query': run the query using the
raw value.
See also
queryParam.
Defaults to:
'all'
An additional CSS class used to style the trigger button. The trigger will always get
the Ext.form.trigger.Trigger#baseCls by default and
triggerCls will be
appended if specified.
Defaults to:
Ext.baseCSSPrefix + 'form-arrow-trigger'
Ext.form.trigger.Trigger to use in this field. The keys in this object are unique identifiers for the triggers. The values in this object are Ext.form.trigger.Trigger configuration objects.
Ext.create('Ext.form.field.Text', {
    renderTo: Ext.getBody(),
    fieldLabel: 'My Custom Field',
    triggers: {
        foo: {
            cls: 'my-foo-trigger',
            handler: function() {
                console.log('foo trigger clicked');
            }
        },
        bar: {
            cls: 'my-bar-trigger',
            handler: function() {
                console.log('bar trigger clicked');
            }
        }
    }
});

The weight value may be a negative value to position custom triggers ahead of default triggers like that of ComboBox.
Ext.create('Ext.form.field.ComboBox', {
    renderTo: Ext.getBody(),
    fieldLabel: 'My Custom Field',
    triggers: {
        foo: {
            cls: 'my-foo-trigger',
            weight: -2, // negative to place before default triggers
            handler: function() {
                console.log('foo trigger clicked');
            }
        },
        bar: {
            cls: 'my-bar-trigger',
            weight: -1,
            handler: function() {
                console.log('bar trigger clicked');
            }
        }
    }
});
Defaults to:
undefined
Returns the value of triggers
Sets the value of triggers
The CSS class that is added to the div wrapping the input element and trigger button(s).
Defaults to:
Ext.baseCSSPrefix + 'form-trigger-wrap'
true to populate and autoselect the remainder of the text being typed after a configurable
delay (typeAheadDelay) if it matches a known value.
Defaults to:
false
The length of time in milliseconds to wait until the typeahead text is displayed if
typeAhead = true
Defaults to:
250
Specify as
true to modify the behaviour of allowBlank so that blank values
are not passed as valid, but are subject to any configured vtype validation.
Defaults to:
false
Whether the field should validate when it loses focus. This will cause fields to be validated as the user steps through the fields in the form regardless of whether they are making changes to those fields along the way. See also validateOnChange.
Set to
true to validate the field
when focus leaves the field's component hierarchy entirely.
The difference between validateOnBlur and this option is that the former will happen when field's input element blurs. In complex fields such as ComboBox or Date focus may leave the input element to the drop-down picker, which will cause validateOnBlur to happen prematurely.
Using this option is recommended for accessible applications. The default value
is
false for backwards compatibility; this option and validateOnBlur
are mutually exclusive.
Defaults to:
false
Available since: 6.5.3
value : Object
record : Object
A custom validation function to be called during field validation (getErrors). If specified, this function will be called first, allowing the developer to override the default validation process.
Ext.create('Ext.form.field.Text', {
    renderTo: document.body,
    name: 'phone',
    fieldLabel: 'Phone Number',
    validator: function(val) {
        // remove non-numeric characters
        var tn = val.replace(/[^0-9]/g, ''),
            errMsg = "Must be a 10 digit telephone number";
        // if the numeric value is not 10 digits return an error message
        return (tn.length === 10) ? true : errMsg;
    }
});
value : Object
The current field value
response
A value to initialize this field with.
Returns the current data value of the field. The type of value returned is particular to the type of the particular field (e.g. a Date object for Ext.form.field.Date), as the result of calling rawToValue on the field's processed String value. To return the raw String value, see getRawValue.
value The field value
Sets the specified value(s) into the field.
value : String/String[]
The value(s) to be set. Can be either a single String or Ext.data.Model, or an Array of Strings or Models.
this
When using a name/value combo, if the value passed to setValue is not found in the store, valueNotFoundText will be displayed as the field text if defined. If this default text is used, it means there is no value set and no validation will occur on this field.
Defaults to:
null
Returns the value of valueNotFoundText
Sets the value of valueNotFoundText
valueNotFoundText : String
The event name(s) to use to publish the value of this field for Ext.form.field.Base#bind.
Defaults to:
'change'
Available since: 5.0.1
A validation type name as defined in Ext.form.field.VTypes.
A custom error message to display in place of the default message provided for the
vtype currently set for this field.
Note: only applies if
vtype is set, else ignored.
'inputEl'
Instance specific ARIA attributes to render into Component's ariaEl. This object is only used during rendering, and is discarded afterwards.
ARIA role for this Component, defaults to no role. With no role, no other ARIA attributes are set.
Defaults to:
'combobox'
This property allows the object
to destroy bound stores that have Ext.data.AbstractStore#autoDestroy
option set to
true.
Defaults to:
true
true indicates an
id was auto-generated rather than provided by configuration.
Defaults to:
false
The div Element wrapping the component's contents. Only available after the component has been rendered.
Defaults to:
true
'value'
Defaults to:
false
This property is set to
true after the
destroy method is called.
Defaults to:
false
The dirty state of the field.
Defaults to:
false
The div Element that will contain the component's error message(s). Note that depending on the configured msgTarget, this element may be hidden in favor of some other form of presentation, but will always be present in the DOM for use by assistive technologies.
Initial suspended call count. Incremented when suspendEvents is called, decremented when resumeEvents is called.
Defaults to:
0
True if there are extra
filters applied to this component.
Defaults to:
false
Available since: 5.0
'inputEl'
A reference to the element that wraps the input element. Only set after the field has been rendered.
Defaults to:
me.inputWrap
Deprecated since version 5.0
use inputWrap instead
The input Element for this Field. Only available after the field has been rendered.
A reference to the element that wraps the input element. Only set after the field has been rendered.
true in this class to identify an object as an instantiated Component, or subclass thereof.
Defaults to:
true
This property is set to
true during the call to
initConfig.
Defaults to:
false
Available since: 5.0.0
True if the picker is currently expanded, false if not.
Defaults to:
false
Flag denoting that this object is labelable as a field. Always true.
Defaults to:
true
This property is set to
true if this instance is the first of its class.
Defaults to:
false
Available since: 5.0.0
Flag denoting that this component is a Field. Always true.
Defaults to:
true
true in this class to identify an object as an instantiated Picker Field, or subclass thereof.
Defaults to:
true
The label Element for this component. Only available after the component has been rendered.
The last key event processed is cached on the component for use in subsequent event handlers.
Available since: 6.6.0
The value of the match string used to filter the store. Delete this property to force a requery. Example use:
var combo = new Ext.form.field.ComboBox({
    ...
    queryMode: 'remote',
    listeners: {
        // delete the previous query in the beforequery event or set
        // combo.lastQuery = null (this will reload the store the next time it expands)
        beforequery: function(qe){
            delete qe.combo.lastQuery;
        }
    }
});
To make sure the filter in the store is not cleared the first time the ComboBox trigger
is used configure the combo with
lastQuery=''. Example use:
var combo = new Ext.form.field.ComboBox({
    ...
    queryMode: 'local',
    triggerAction: 'all',
    lastQuery: ''
});
Defaults to:
false
Map for msg target lookup; if a target is not in this map, it is assumed to be an element id.
Defaults to:
{ qtip: 1, title: 1, under: 1, side: 1, none: 1 }
Tells the layout system that the height can be measured immediately because the width does not need setting.
Defaults to:
true
The original value of the field as configured in the value configuration,
or as loaded by the last form load operation if the form's
trackResetOnLoad setting is
true.
Defaults to:
me.getValue()
The default CSS class for the placeholder label cover needed when the browser does not support a placeholder.
Defaults to:
Ext.baseCSSPrefix + 'placeholder-label'
Defaults to:
0
This property is
true if the component was created internally by the framework
and is not explicitly user-defined. This is set for such things as
Splitter
instances managed by
border and
box layouts.
Defaults to:
false
Defaults to:
Ext.baseCSSPrefix + 'form-item-label-top'
Deprecated since version 5.0
A composite of all the trigger button elements. Only set after the field has been rendered.
A reference to the element which encapsulates the input field and all trigger button(s). Only set after the field has been rendered.
Adds a value or values to the current value of the field.
A method called when the filtering caused by the doQuery call is complete
and the store has been either filtered locally (if queryMode is
"local"),
or has been loaded using the specified filtering.
queryPlan : Object
An object containing details about the query.
Aligns the picker to the input element
Automatically grows the field to accommodate the width of the text up to the maximum field width allowed. This only takes effect if grow = true, and fires the autosize event if the width changes.
A method which may modify aspects of how the store is to be filtered (if queryMode
is
"local") or loaded (if queryMode is
"remote").
This is called by the doQuery method, and may be overridden in subclasses to modify the default behaviour.
This method is passed an object containing information about the upcoming query operation which it may modify before returning.
queryPlan : Object
An object containing details about the query.
Binds a store to this instance.
store : Ext.data.AbstractStore/String (optional)
The store to bind or ID of the store.
When no store given (or when
null or
undefined passed), unbinds the existing store.
Center this Component in its container.
this
newValue : Object
oldValue : Object
constrainMethod : Object
styleName : Object
sizeName : Object
Checks whether the value of the field has changed since the last time it was checked. If the value has changed, it fires the change event.
Checks the isDirty state of the field and if it has changed since the last time it was checked, fires the dirtychange event.
Cleans up values initialized by this Field mixin on the current instance. Components using this mixin should call this method before being destroyed.
Clears any clipping applied to this component by method-clipTo.
Clears all listeners that were attached using the "delegate" event option. Users should not invoke this method directly. It is called automatically as part of normal clearListeners processing.
Clears any invalid styles/messages for this field.
Removes all listeners for this object including the managed listeners
Removes all managed listeners for this object.
Clears any value currently set in the ComboBox.
Collapses this field's picker dropdown.
Runs on touchstart of doc to check to see if we should collapse the picker.
e : Object
Called when focus leaves this input field. Used to postprocess raw values and perform conversion and validation.
Creates and returns the component to be used as this field's picker. Must be implemented by subclasses of Picker.
Creates an event handling function which re-fires the event from this object as the passed event name.
newName : String
The name under which to re-fire the passed parameters.
beginEnd : Array (optional)
The caller can specify on which indices to slice.
Checks if the value has changed. Allows subclasses to override for any more complex logic.
newVal : Object
oldVal : Object
Disable the component.
Available since: 1.1.0
silent : Boolean (optional)
Passing
true will suppress the
disable event
from being fired.
Defaults to: false
Performs the alignment on the picker using the class defaults
If the autoSelect config is true, and the picker is open, highlights the first item.
Executes a query to filter the dropdown list. Fires the beforequery event prior to performing the query allowing the query action to be canceled if needed.
queryString : String
The string to use to filter available items by matching against the configured valueField.
forceAll : Boolean (optional)
true to force the query to execute even if there are
currently fewer characters in the field than the minimum specified by the
minChars config option. It also clears any filter previously saved in the current
store.
Defaults to: false
rawQuery : Boolean (optional)
Pass as true if the raw typed value is being used as the query string. This causes the resulting store load to leave the raw value undisturbed.
Defaults to: false
true if the query was permitted to run, false if it was cancelled by a beforequery handler.
Execute the query with the raw contents within the textfield.
Sets or adds a value/values
value : Object
add : Object
Expands this field's picker dropdown.
e :
Finds the record by searching for a specific field/value combination.
field : String
The name of the field to test.
value : Object
The value to match the field against.
The matched record or false.
Finds the record by searching values in the displayField.
value : Object
The value to match the field against.
The matched record or
false.
Finds the record by searching values in the valueField.
value : Object
The value to match the field against.
The matched record or
false.
e : Object
eOpts : Object
Gets an Array of any active error messages currently applied to the field. This does not trigger validation on its own; it merely returns any messages that the component may already hold.
The active error messages on the component; if there are no errors, an empty Array is returned.
Generates the string value to be displayed in the text field for the currently stored value
tplData : Object
Retrieves the top level element representing this component.
Available since: 1.1.0
Validates a value according to the field's validation rules and returns an array of errors for any failing validations. Validation rules are processed in the following order:
Field specific validator
A validator offers a way to customize and reuse a validation specification.
If a field is configured with a
validator
function, it will be passed the current field value. The
validator
function is expected to return either:
true if the value is valid (validation continues).
A string error message if the value is invalid (validation halts).
Basic Validation
If the
validator has not halted validation,
basic validation proceeds as follows:
allowBlank : (Invalid message =
blankText)
Depending on the configuration of
allowBlank, a
blank field will cause validation to halt at this step and return
Boolean true or false accordingly.
minLength : (Invalid message =
minLengthText)
If the passed value does not satisfy the
minLength
specified, validation halts.
maxLength : (Invalid message =
maxLengthText)
If the passed value does not satisfy the
maxLength
specified, validation halts.
Preconfigured Validation Types (VTypes)
If none of the prior validation steps halts validation, a field
configured with a
vtype will utilize the
corresponding Ext.form.field.VTypes validation function.
If invalid, either the field's
vtypeText or
the VTypes vtype Text property will be used for the invalid message.
Keystrokes on the field will be filtered according to the VTypes
vtype Mask property.
Field specific regex test
If none of the prior validation steps halts validation, a field's
configured
regex test will be processed.
The invalid message for this test is configured with
regexText
value : Object
The value to validate. The processed raw value will be used if nothing is passed.
Array of any validation errors
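The ordering above can be illustrated with a small stand-alone sketch. This is plain JavaScript, not the actual Ext implementation: the config names mirror the options documented here, and the vtype is modelled as a simple RegExp for brevity.

```javascript
// Simplified model of the getErrors validation pipeline described above.
// Each step short-circuits: as soon as a step fails, validation halts.
function getErrors(value, cfg) {
  // 1. Field-specific validator: true continues, anything else halts
  if (cfg.validator) {
    var result = cfg.validator(value);
    if (result !== true) { return [result]; }
  }
  // 2. allowBlank: a blank field halts validation at this step
  if (!value) {
    return cfg.allowBlank ? [] : [cfg.blankText || 'This field is required'];
  }
  // 3. minLength / maxLength
  if (cfg.minLength && value.length < cfg.minLength) {
    return [cfg.minLengthText || 'Value is too short'];
  }
  if (cfg.maxLength && value.length > cfg.maxLength) {
    return [cfg.maxLengthText || 'Value is too long'];
  }
  // 4. vtype (modelled here as a RegExp), 5. field-specific regex
  if (cfg.vtype && !cfg.vtype.test(value)) {
    return [cfg.vtypeText || 'Value does not match the vtype'];
  }
  if (cfg.regex && !cfg.regex.test(value)) {
    return [cfg.regexText || 'Value does not match the required format'];
  }
  return [];
}
```

For example, getErrors('', { allowBlank: false }) yields the blankText message, while a value that passes every step yields an empty array.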
Returns the
Ext.util.FilterCollection. Unless
autoCreate is explicitly passed
as
false this collection will be automatically created if it does not yet exist.
autoCreate : Object (optional)
Pass
false to disable auto-creation of the collection.
Defaults to: true
The collection of filters.
Generates the arguments for the field decorations rendering template.
data : Object
optional object to use as the base data object. If provided, this method will add properties to the base object instead of creating a new one.
The template arguments
https://docs.sencha.com/extjs/7.0.0/classic/Ext.calendar.form.CalendarPicker.html
inet6_option_alloc()
Append IPv6 hop-by-hop or destination options into ancillary data object
Synopsis:
#include <netinet/in.h> u_int8_t * inet6_option_alloc(struct cmsghdr *cmsg, int datalen, int multx, int plusy);
Arguments:
- cmsg
- A pointer to the cmsghdr structure that must have been initialized by inet6_option_init().
- datalen
The length of the option, in bytes. This value is required as an argument to allow the function to determine if padding must be appended at the end of the option. (The inet6_option_append() function doesn't need a data length argument, since the option data length must already be stored by the caller.)
- multx
- The value x in the alignment term xn + y. It must have a value of 1, 2, 4, or 8.
- plusy
- Value y in the alignment term xn + y. It must have a value between 0 and 7, inclusive.
Library:
libsocket
Use the -l socket option to qcc to link against this library.
Description:
The inet6_option_alloc() function appends a hop-by-hop option or a destination option into an ancillary data object that has been initialized by inet6_option_init().
The difference between this function and inet6_option_append() is that the latter copies the contents of a previously built option into the ancillary data object. This function returns a pointer to the space in the data object where the option's type-length-value (TLV) must then be built by the caller.
Returns:
A pointer to the 8-bit option type field that starts the option, or NULL if an error has occurred.
http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/inet6_option_alloc.html
|
Hi there,
I’m trying to run a MAF tutorial in a jupyter notebook and I am getting this error when I try to start jupyter:
*******@sc:~/lsst_stack$ jupyter notebook
Traceback (most recent call last):
File "/home/nicholas/anaconda3/lib/python3.6/site-packages/notebook/nbextensions.py", line 18, in <module>
from urllib.request import urlretrieve
File "/home/nicholas/anaconda3/lib/python3.6/urllib/request.py", line 88, in <module>
import http.client
File "/home/nicholas/lsst_stack/Linux64/python_future/0.16.0/lib/python/future-0.16.0-py2.7.egg/http/_.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nicholas/anaconda3/bin/jupyter-notebook", line 4, in <module>
import notebook.notebookapp
File "/home/nicholas/anaconda3/lib/python3.6/site-packages/notebook/__init__.py", line 25, in <module>
from .nbextensions import install_nbextension
File "/home/nicholas/anaconda3/lib/python3.6/site-packages/notebook/nbextensions.py", line 20, in <module>
from urlparse import urlparse
ModuleNotFoundError: No module named 'urlparse'
Here are the steps I took to install the LSST stack and MAF:
mkdir -p $HOME/lsst_stack
cd $HOME/lsst_stack
unset LSST_HOME EUPS_PATH LSST_DEVEL EUPS_PKGROOT REPOSITORY_PATH
curl -OL
bash newinstall.sh
[yes to miniconda installation]
source loadLSST.bash
eups distrib install lsst_sims -t sims
setup sims_maf -t sims
eups list -v sims_maf
Where the output from that last command is:
2.3.6.sims /home/nicholas/lsst_stack /home/nicholas/lsst_stack/Linux64/sims_maf/2.3.6.sims sims current sims_2_3_6 setup
So I feel that I have installed the LSST stack properly, but I don’t know how to get jupyter notebook running without the above error.
I can run jupyter notebook fine from a new terminal, i.e. before I run
source loadLSST.bash
setup sims_maf -t sims
But if I try to run the python notebook tutorial, it breaks here:
# Import required dependencies from LSST stack
import lsst.sims.maf.db as db
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-5804c9d8b44d> in <module>()
5
6 # Import required dependencies from LSST stack
----> 7 import lsst.sims.maf.db as db
8 import lsst.sims.maf.metrics as metrics
9 import lsst.sims.maf.slicers as slicers
ModuleNotFoundError: No module named 'lsst'
I think there is some conflict arising from the installation of python3 which was on the system before I started the installation of the LSST stack. Is there a solution?
This is your python 3 installation.
This is your python 2.7 stack.
I assume the anaconda3 is the python running inside your notebook.
When you ran source loadLSST.bash from the shell that should have put your python2 into the front of $PATH and let you built your own stack. That all seems fine. The next question is what python your Jupyter notebook is using. If that is a python3 then there is no way it can load a stack you built for python2. If you are building your own code inside a notebook environment you have to ensure that you are using the same python that is running your notebook.
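One quick way to check this is to compare what a plain terminal and the notebook kernel each report about their interpreter. A small diagnostic sketch (the diagnose helper is my own, not part of the LSST tooling):

```python
import importlib.util
import sys

def diagnose(module="lsst"):
    """Report which interpreter is running and whether `module` resolves.

    Handy for spotting a python2 stack mixed with a python3 notebook:
    run this once in a plain terminal and once inside the notebook and
    compare the two reports.
    """
    return {
        "executable": sys.executable,
        "python_major": sys.version_info.major,
        "module_found": importlib.util.find_spec(module) is not None,
    }

info = diagnose()
print(info["executable"])
print("lsst importable:", info["module_found"])
```

If the two runs show different executables (or different major versions), the notebook cannot load the stack you built for the other Python.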
As an aside, is there a reason you are using a python2 stack build and not python3?
Thanks @timj, much appreciated. Your comments all make sense. I suspect that I built Jupyter using Python3 on the machine showing the problem.
I installed and built Jupyter using Python2 on my laptop following the same install sequence I gave in my question and all appears to be working fine (so far).
The machine I want to use is the one on which I've installed Jupyter using python3, rather than my laptop.
From your aside, it appears that there is a python3 stack available. How do I amend my LSST stack install commands (“Here are the steps I took to install the LSST stack and MAF:”) above, to pull down and install a python3 LSST stack?
Add -3 option to newinstall.sh (bash newinstall.sh -3)
https://community.lsst.org/t/running-jupyter-notebook-existing-python-3-5-interferes/2152
|
Hi guys,
I've written some code to merge two black and white bitmaps. The code crashes when writing to the output bitmap.
I've done some detective work but not found out what's wrong. The bad code is somewhere near the end this code :
I just skimmed over, but two things caught my eye:
lock3 = al_lock_bitmap( output, ALLEGRO_PIXEL_FORMAT_RGBA_8888, ALLEGRO_LOCK_WRITEONLY );
if you are reading and writing to the bitmap, shouldn't you use ALLEGRO_LOCK_READWRITE? Not sure if that's causing the crash.
Also, this at the end:
*((uint32_t *) ptr3) = 0xffffffff;
It seems like you're setting the alpha to the combined value of the colour. It should be more like:
*((uint32_t *) ptr3) = 0xff; //a
*((uint32_t *) ptr3+1) = 0xff; //b
*((uint32_t *) ptr3+2) = 0xff; //g
*((uint32_t *) ptr3+3) = 0xff; //r
What I think is happening is that the value you set for the pixel is out of range, causing the crash.
Also, this at the end:
It's perfectly ok to do that. That would set the pixel to white.
I'm not sure what the problem is William, did you try running it through a debugger?
Doesn't that set the value of the array at position ptr3 to 0xffffffff (4294967295)? Let's say I actually wanted to set unsigned char at that position to that value, that's how I would do it, right? (although it would become 0 or something)
Yeah that's what it does.
This:
unsigned char *ptr;
*((uint32_t *)ptr) = 0xaabbccdd;
Is identical to this:
unsigned char *ptr;
*(ptr+0) = 0xdd;
*(ptr+1) = 0xcc;
*(ptr+2) = 0xbb;
*(ptr+3) = 0xaa;
I wrote this test code:
unsigned char* test = new unsigned char[4];
test[0]=0;
test[1]=0;
test[2]=0;
test[3]=0;
*((unsigned char*)test) = 0xffffffff;
std::cout<<(int)test[0]<<", "<<(int)test[1]<<", "<<(int)test[2]<<", "<<(int)test[3]<<std::endl;
delete []test;
and the output was 255, 0, 0, 0
That's because you casted to unsigned char instead of uint32_t.
Oh, I see now...
Program received signal SIGSEGV, Segmentation fault.
0x004024fd in _fu63___ZSt4cout ()
(gdb)
I've rewritten the code a bit :
The symbol name suggests it's crashing in cout. Compile with debugging (use -g command line switch to gcc/g++). Then try again, it should tell you a line number. Also run "bt" in gdb after it crashes.
g++ colourer.cpp shade_coloured_picture.cpp merge_black_and_white.cpp text_output_management.cpp -o colourer.exe -lallegro-5.0.5-monolith-md
Where do I put the -g in this line please ?
Doesn't really matter, but let's just say "g++ -g ..." .
Thanks.
Got some better output :-
Program received signal SIGSEGV, Segmentation fault.
0x00402530 in _fu64___ZSt4cout () at merge_black_and_white.cpp:228
228         r2 = (int) *(ptr2 + 3);
(gdb)
So that means line 228 in merge_black_and_white.cpp is what's crashing. Is that what we're looking at? If so I don't think your code listing is up to date because that's not what line 228 looks like.
EDIT: make sure your code and executable are matching.
Okay. nm the code so far.
This code :
...gives this output :-
Which means the pitch of lock3 is changed to 255 at some point.
Any ideas what this could be ?
Memory corruption ?
EDIT :
Would this do anything :-
#include <stdint.h>
It's at the top of the source file. I tried doing
#include <cstdint>
..but the compiler complained.
Oop! I didn't notice this... instead of using ((unsigned char *)lock1) etc, use lock1->data.
Many thanks Trent!
https://www.allegro.cc/forums/thread/613082/988461
|
As well as being a programmer, I am a mad keen guitarist, and over the years, I have built up a sizeable collection of guitars of all types and models. One thing about guitars though (acoustic guitars in particular), is that they are quite sensitive to environmental conditions such as temperature and humidity.
Similar to people, guitars like to be kept at a relatively cool temperature and somewhere not too dry or damp. Seeing as I live in the tropics, this can be a challenge at times, which is why I try and keep my guitars in my home office, which is secure, as well as air conditioned most of the time.
However, air conditioning is not perfect, and sometimes things like a power failure or someone leaving a window ajar can affect the overall climate of the room. Because I often travel for work and am away from the home office for days at a time, I’d like to keep an eye on any anomalies, so I can advise another family member at home to check or rectify the situation.
What better way than to try and use my programming skills to (a) learn some new skills, and (b) do some experimenting with this whole IoT (internet of things) buzz. Please note that my normal programming work involves business and enterprise type databases and reporting tools, so programming hardware devices is a new thing for me.
The end result is that I wanted a web page that I could access from ANYWHERE in the world, which would give me real time stats as to the temperature and humidity variations in the guitar room throughout a 24 hour period.
Please bear in mind, I am going to try and document ALL the steps I took to build this system, so this blog post is VERY long, but hopefully will serve as a guide for someone else who wants to build something similar.
The steps I will be going through here are:
1. Setting up the Omega Onion to work with my PC
2. Hooking up the DHT22 temperature and humidity sensor to my Onion
3. Installing all the requisite software on the Onion to be able to do what I want
4. Setting up Amazon IoT so that the Onion can be a 'thing' on the Amazon IoT cloud
5. Setting up a DynamoDB database on Amazon AWS to store the temperature/humidity readings from the Onion
6. Setting up a web page to read the data from DynamoDB to present it as a chart
Here is what the final dashboard will look like:
Hat tip: I used this blog post as inspiration for designing the dashboard and pulling data from DynamoDB.
THE HARDWARE
Well, over a year ago I participated in the Onion Omega kickstarter project. I’d got one of these tiny little thumb sized Linux computers but didn’t quite know what to do with it so it sat in its box for a long while until I decided to dust it off this week.
Connecting the Onion up to it’s programming board, I hooked it up to a USB cable from my iMac. In order to get communications happening, I had to download and install a USB to UART driver from here:
Full instructions on connecting the Omega Onion to your Mac is on their Wiki page:
Once I had connected the two devices, I was able to issue the command
screen /dev/tty.SLAB_USBtoUART 115200
from a Terminal screen to connect to the device. Yay!
First thing I had to do was to set up the WiFi so that I could access the device using my local home office WIFi network. That was a simple case of issuing the command
wifisetup
It is a simple step by step program that asks you for your WiFi access point name and security key. Once again, the Wiki link above explains it in more detail.
Once the Wifi is setup on the Onion, you can then access it via its IP address using a web browser. My device ended up being 192.168.15.11, so it was a matter of entering that address in Chrome. Once logged in (the default username is ‘root’ and password ‘onioneer’), you get to see this:
First things first, because my device was so old, I had to go to ‘Settings’ and run a Firmware Update.
I also dug out an old HDT22 sensor unit which I played around with when I dabbled in Arduino projects a while back. I wondered if I could pair the HDT22 with the Onion device, and lo and behold, a quick search on the Onion forums showed that this had been done before, quite easily. Here is a blog post detailing how to hook up the HDT22 to the Onion:
The article shows you how to wire the two devices together using only 3 wires. In short, the wiring is as follows on my unit:
Pin 1 from the HDT22 goes to the 5.5V plug on the Omega Onion
Pin 2 from the HDT22 goes to GPIO port 6 on my Onion
Pin 3 is unused on the HDT22
Pin 4 from the HDT22 goes to the GND (Ground) plug on the Onion
THE SOFTWARE
Now we come to all the software that we will need to be able to collect the data, and send it along to Amazon. In short, we will be writing all our code in Node.js. But we will also be calling some command line utilities to (a) read the data from the HDT22 and (b) send it to the Amazon IoT cloud.
To collect the data, we will be using an app called ‘checkHumidity’ which is detailed on the page above about setting up the DHT22. To talk to the Amazon IoT cloud, we need to use the MQTT protocol. To do this, will be using an app called ‘mosquitto’ which is a nice, neat MQTT wrapper. We can use HTTPS, but MQTT just seemed more efficient and I wanted to experiment with it.
So lets go through these steps for installation. All the packages are fairly small, so it won’t take up much room on the 16MB storage on the Onion. I think my Onion still has about 2MB left after all installs. Here goes (from the Onion command line):
(1) Install the checkHumidity app and set the permissions for running it. checkHumidity is so much cleaner than trying to read the pins on the Onion in Node.js. Running it returns the temperature (in degrees Celsius) and the humidity (as a percentage) in a text response.
opkg update
opkg install wget
cd /root
wget <download URL for the checkHumidity package>
tar -zxvf 1450434316215-checkhumidity.tar.gz
chmod -R 755 /root/checkHumidity/bin/checkHumidity
If your HDT 22 is connected to pin 6 like my board, try it out:
/root/checkHumidity/bin/checkHumidity 6 HDT22
29.6
49.301
Showing me 29.6 degrees C with 49.301% humidity!
(2) Install Node.js on the Onion. From here on in, we will be using the opkg manager to install:
opkg install nodejs
(3) I also installed nano because it is my favourite editor on Linux. You can bypass this if you are happy with any other editor (Note: There is also an editor on the web interface, but I had some issues with saving on it):
opkg install nano
(4) Install the mosquitto app for MQTT conversations:
opkg install mosquitto
opkg install mosquitto-client
This installs the mosquitto broker and client. We won’t really be using the broker, mainly the client, but it is handy to have if you want to set up your Onion as an MQTT bridge later.
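For what it's worth, the collector glue can be sketched in Node.js roughly as follows. This is a simplified illustration, not the exact app: the helper names (parseReading, buildShadowUpdate) are made up, and the mosquitto_pub command in the comment assumes the certificates and endpoint set up later in this post.

```javascript
// Hypothetical sketch of the collector: parse checkHumidity's output and
// build an AWS IoT device shadow update document from it.
function parseReading(stdout) {
  // checkHumidity prints temperature (deg C) then humidity (%) on
  // separate lines, e.g. "29.6\n49.301"
  const [temperature, humidity] = stdout.trim().split(/\s+/).map(Number);
  return { temperature: temperature, humidity: humidity };
}

function buildShadowUpdate(reading) {
  // Device shadows expect a {"state": {"reported": {...}}} document
  return JSON.stringify({ state: { reported: reading } });
}

// On the Onion, this payload would then be published with mosquitto_pub,
// along the lines of:
//   mosquitto_pub --cafile root-CA.crt --cert cert.pem --key private.key \
//     -h <your-endpoint>.iot.<region>.amazonaws.com -p 8883 \
//     -t '$aws/things/Omega-XXXX/shadow/update' -m "$PAYLOAD"
const payload = buildShadowUpdate(parseReading('29.6\n49.301\n'));
console.log(payload);
```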
AMAZON IOT
Ok, now we have almost everything prepped on the device itself, we need to set up a ‘thing’ on Amazon’s IoT cloud to mimic the Onion. The ‘thing’ you set up on Amazon acts as a cloud repository for information you want to store on your IoT device. Amazon uses a concept of a ‘shadow’ for the ‘thing’ that can store the data. That way, even if your physical ‘thing’ is powered off or offline, You can still send MQTT packets of data to the ‘thing’, and the data will be stored on the ‘shadow’ copy of the ‘thing’ in the cloud until the device comes back online, at which point Amazon can copy the ‘shadow’ data back to the physical device.
You see, our Node.js app will be pushing temperature and humidity data to the shadow copy of the ‘thing’ in the cloud. From there, we can set up a rule on Amazon IoT to further push that data into a DynamoDB database.
Setting up the ‘thing’ on the cloud can be a little tricky. Mainly due to the security. Because the physical device will be working unattended and pretty much anonymously, authentication is carried out using security certificates. Lets step through the creation of a ‘thing’. (Note: This tutorial assumes you already have an AWS account set up).
From the Amazon Console, click on ‘Services’ on the top toolbar, then choose ‘AWS IoT’ under ‘Internet Of Things’.
On the left hand menu, click on ‘Registry’, then ‘Things’.
Your screen will probably be blank if you have never created a thing before. Click on ‘Create’ way over on the top right hand side of your screen.
You will need to give your thing a name. Call it anything you like. I just used the unique name for my Omega Onion, which looks like Omega-XXXX.
Great! Next, you will be taken to a screen showing all the information for your ‘thing’. Click on the ‘Security’ option on the left hand side.
Click on the ‘Create Certificate’ button.
You can now download all four certificates from this screen and store them in a safe place.
NOTE: DON’T FORGET to click on the link for ‘A root CA for AWS IoT Download’. This is the Root CA certificate that we will need later. Store all 4 certificates in a safe place for now on your local hard drive. Don’t lose them or you will have to recreate the certificates again and re-attach policies etc. Messy stuff.
Lastly, click on ‘Activate’ to activate your certificates and your thing.
Next, we have to attach a policy to this certificate. There is a button marked ‘Create Policy’ on this security screen. Click it, and you will see the next screen asking you to create a new policy.
We are going to create a simple policy that lets us perform any IoT action against any device. This is rather all encompassing, and in a production environment, you may want to restrict the policy down a little, but for the sake of this exercise, we will enable all actions to all devices under this policy:
In the ‘Action’ field, enter ‘iot:*’ for all IoT actions, and in the ‘Resource ARN’ field, enter ‘*’ for all devices and topics etc. Don’t forget to tick the ‘Allow’ button below, then click ‘Create’.
You now have a thing, a set of security certificates for the thing, and a policy to control the certificates against the thing. Hopefully the policy should be attached to the certificates that you just created. If not, you will have to manually attach the policy to the certificates. To do this, click on ‘Security’ on the left hand menu, then click on ‘Certificates’, then click on the certificate that you just created.
Click on the ‘Policies’ on the left hand side of the certificate screen.
If you see ‘There are no policies attached to this certificate’, then you need to attach it by clicking on the ‘Actions’ drop down on the top right, then choosing ‘Attach Policy’ from the drop down menu.
Simply tick the policy you want to attach to this certificate, then click ‘Attach’.
You may want to now click on ‘Things’ on the left hand menu to ensure that the thing you created is attached to the certificate as well.
To ensure all your ducks are in a row:
The ‘thing’ -> needs to have -> Security Certificate(s) -> needs to be attached to -> A Policy
Actually, there is one more factor that we want to note on here which is important for later. Go ahead and click on the ‘Registry’ then ‘Things’ on the IoT dashboard. Choose the thing you just created, and then click on the ‘Interact’ option on the left hand menu that pops up.
Notice under HTTPS, there is a REST API endpoint shown. Copy this information down and keep it aside for now, because we will need it in our Node.js code later to specify which host we want to talk to. This host address is unique for each Amazon IoT account, so keep it safe and under wraps.
Also note on this screen that there are some special Amazon IoT reserved topics that can be used to update or read the shadow copy of your IoT thing. We won’t really be using these in this project, but it is handy to know for more complex projects where you might have several devices talking to each other, and also devices that may go on and offline a lot. The ‘shadow’ feature allows you to still ‘talk’ to those devices even though they are offline or unavailable, and lets them sync up later. Very powerful stuff.
Next, we will take a break from the IoT section, and set up a DynamoDB table to collect the data from the Onion.
AMAZON DYNAMODB
Click on ‘Services’ then ‘Dynamo DB’ under ‘Databases’.
Click on ‘Create Table’.
Give the table a meaningful name. Important: Give the partition key the name of ‘id’ and set it to a ‘String’ type. Tick the box that says ‘Add sort key’ and give the key a name of ‘timestamp’ and set it to a ‘Number’ type. This is very important, and you cannot change it later, so please ensure your setup looks like above.
Tip: Once you have created your DynamoDB table, copy down the “Amazon Resource Name (ARN)” on the bottom of the table information screen (circled in red above). You will need this bit of information later when creating a security policy for reading data from this table to show on the web site chart.
Ok, now that you have a table being created, you can go back to the Amazon IoT Dashboard again for the next step (‘Services’ then ‘AWS IoT’ in your console top menu). What we will do now is create a ‘Rule’ in IoT which will handball any data coming in to a certain topic across to DynamoDB to store in a data file.
Tip: When you transmit data to an IoT thing using MQTT, you generally post the data to a ‘topic’. The topic can be anything you like. Amazon IoT has some reserved topic names that do certain things, but you can post MQTT packets to any topic name you make up on the spot. Your devices can also listen on a particular topic for data coming back from Amazon etc. MQTT is really quite a nice, powerful and simple way to interact with IoT devices and servers.
In the IoT dashboard, click on ‘Rules’ on the left hand side, then click the ‘Create’ button.
The ‘Name’ can be something distinctive that you make up. Add a ‘Description’ to help you remember what this rule does. For the ‘SQL Version’, just choose ‘2016–03–23’ which is the latest one at time of writing.
Below that, on ‘Attribute’, type in ‘*’ because we will be selecting ALL fields sent to us. In the ‘Topic Filter’, type in ‘temp-humidity/+’. This is the topic name that we will be listening out for. You can call it anything you like. We include a ‘/+’ at the end of the topic name because we can add extra data after this, and we want the query to treat this extra data as a ‘wildcard’ and still select it. (Note: We will be adding the device name to the end of the topic as an identifier (e.g. temp-humidity/Omega-XXXX). This way, if we later have multiple temperature/humidity sensors, we can identify each one via a different topic suffix, but still get all the data from all sensors sent to DynamoDB).
ERRATA: The screenshot above shows ‘temp-humidity’ in the ‘Topic Filter’ field, but it should actually be ‘temp-humidity/+’.
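To make the wildcard behaviour concrete, here is a small standalone sketch (my own illustration, not part of the tutorial code) of how a single-level ‘+’ wildcard in a topic filter matches incoming topic names:

```javascript
// Minimal illustration of MQTT single-level ('+') wildcard matching.
// Each '+' matches exactly one topic level, so 'temp-humidity/+'
// matches 'temp-humidity/Omega-XXXX' but not 'temp-humidity/a/b'.
function topicMatches(filter, topic) {
  var filterLevels = filter.split('/');
  var topicLevels = topic.split('/');
  if (filterLevels.length !== topicLevels.length) {
    return false;
  }
  return filterLevels.every(function(level, i) {
    return level === '+' || level === topicLevels[i];
  });
}

console.log(topicMatches('temp-humidity/+', 'temp-humidity/Omega-XXXX')); // true
console.log(topicMatches('temp-humidity/+', 'temp-humidity'));            // false
```

This is why ‘temp-humidity/Omega-XXXX’, ‘temp-humidity/Omega-YYYY’ and so on all land in the same rule, while the bare topic ‘temp-humidity’ does not.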
Leave the ‘Condition’ blank.
Now below this, you will see an ‘Add Action’ button. Click this, and choose ‘Insert a message into a DynamoDB table’.
As you can see, there is a myriad of other things you can do, including on forwarding the data to another IoT device. But for now, we will just focus on writing the data and finishing there. Click on the ‘Configure Action’ button at the bottom of the screen.
Choose the DynamoDB table we just created from the drop down ‘Table Name’. The ‘Hash Key’ should be ‘id’, of type ‘STRING’, and in the ‘Hash Key Value’, enter ‘${topic()}’. It means we will be storing the topic name as the main key.
The ‘Range Key’ should be ‘timestamp’ with a type of ‘NUMBER’. The ‘Range Key Value’ should be ‘${timestamp()}’. This will place the contents of the packet timestamp in this field.
Lastly, in the ‘Write Message Data To This Column’, I enter ‘payload’. This is the name of the data column that contains the object with the JSON data packet sent from the device. You can call this column anything you like, but I like to call it ‘payload’ or ‘iotdata’ or similar so that I know all the packet information is stored under here.
One more thing to do, for security purposes, we have to set up an IAM role which will allow us to add data to the DynamoDB table. This is actually quite easy to do from here. Click the ‘Create A New Role’ button.
Give the role a meaningful name, then click ‘Create A New Role’. A new button will show up with the text next to it saying ‘Give AWS IoT permission to send a message to the selected resource’. Click on the ‘Update Role’ button.
Important: You must click the ‘Update Role’ button to set the privileges properly. Once completed, click the ‘Update’ button.
That’s it! We are pretty much done as far as the Amazon IoT and DynamoDB setup goes. It was quite a rigmarole, wasn’t it? Lots of steps that have to be done in a certain order. But the good news is that once this is done, the rest of the project is quite easy, AND FUN!
INSTALLING CERTIFICATES
Oh, wait — one more slightly tedious step to do. Remember those 4 certificates we downloaded much earlier? Now is the time we need to put them to good use (well, 3 out of the 4 at least). We need to copy these certificates to the Onion. I found it easiest to copy and paste the text contents of the certificate over onto the ‘/home/certs’ folder on the Onion. I simply used the web interface editor to create the files in the ‘/home/certs’ folder and paste the contents of the certificate I downloaded. The three certificates I needed (and which I copied and renamed) are:
- VeriSign-Class3-Public-Primary-Certification-Authority-G5.pem -> /home/certs/rootCA.pem
- x1234abcd56ef-certificate.pem.crt -> /home/certs/certificate.pem
- x1234abcd56ef-private.pem.key -> /home/certs/private.key
As you can see, I shortened down the file name for ease of handling, and put them all into one folder for easy access from my Node.js app too. That’s it. Once done, you don’t have to muck about with certificates any more.
Exactly where you store the certificates or what you call them is not important, you just need to know the details later when writing the Node.js script.
WRITING CODE
Ok, back to the Omega Onion now, where we will write the code to grab information from the HDT22 and transmit it to Amazon IoT. This is where the rubber hits the road. Using nano, or the web editor on the Onion, create a file called ‘/home/app.js’ and enter the following:
var util = require('util');
var spawn = require('child_process').spawn;
var execFile = require('child_process').execFile;

var mosqparam = [
    '--cafile', '/home/certs/rootCA.pem',
    '--cert', '/home/certs/certificate.pem',
    '--key', '/home/certs/private.key',
    '-h', 'a1b2c3d4e5f6g7.iot.us-east-1.amazonaws.com',
    '-p', '8883'
];

setInterval(function() {
    execFile('/root/checkHumidity/bin/checkHumidity', ['6', 'DHT22'], function(error, stdout, stderr) {
        var dataArray = stdout.split("\n");
        var logDate = new Date();
        var postData = {
            datetime: logDate.toISOString(),
            temperature: parseFloat(dataArray[1]),
            humidity: parseFloat(dataArray[0])
        };
        // publish to main data queue (for DynamoDB)
        execFile('mosquitto_pub', mosqparam.concat('-t', 'temp-humidity/Omega-XXXX', '-m', JSON.stringify(postData)), function(error, stdout, stderr) {
            // published
        });
        // publish to device shadow
        var shadowPayload = {
            state: {
                desired: {
                    datetime: logDate.toISOString(),
                    temperature: parseFloat(dataArray[1]),
                    humidity: parseFloat(dataArray[0])
                }
            }
        };
        execFile('mosquitto_pub', mosqparam.concat('-t', '$aws/things/Omega-XXXX/shadow/update', '-m', JSON.stringify(shadowPayload)), function(error, stdout, stderr) {
            // shadow update done
        });
    });
}, 1000 * 60 * 5);
NOTE: I have obfuscated the name of the Omega device here, as well as the Amazon IoT host name for my own security. You will need to ensure that the host name and device name correspond to your own setups above.
Lets go through this code section by section. At the top are the ‘require’ statements for the Node.js modules we need. Luckily no NPM installs needed here, as the modules we want are part of the core Node.js install.
Then we define an array called ‘mosqparam’. These are the parameters that we need to pass to the mosquitto command line each time — mainly so it knows the MQTT host (-h) and port (-p) it will be talking to, and where to find the 3 certificates that we downloaded from Amazon IoT and copied across earlier.
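To see what mosquitto_pub actually receives, here is a sketch of how the shared array combines with the per-call arguments. The host name and message below are placeholders, not real values:

```javascript
// Illustration: how the shared mosqparam array combines with per-call
// arguments to form the full argument list passed to mosquitto_pub.
// The host name here is a placeholder, not a real endpoint.
var mosqparam = [
  '--cafile', '/home/certs/rootCA.pem',
  '--cert', '/home/certs/certificate.pem',
  '--key', '/home/certs/private.key',
  '-h', 'YOUR-ENDPOINT.iot.us-east-1.amazonaws.com',
  '-p', '8883'
];

var args = mosqparam.concat('-t', 'temp-humidity/Omega-XXXX', '-m', '{"temperature":29.6}');
console.log(args.length); // 10 shared parameters + 4 per-call arguments = 14
```

Array.concat() returns a new array each time, so the shared mosqparam is never mutated between the two publishes.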
Tip: If your application fails to run, it is almost certain that the certificate files either cannot be found, or have been corrupted during download or copying across to the Onion. The mosquitto error messages are cryptic at best, and a certificate error doesn’t always present itself obviously. Take care with this bit.
After this is the meat of the code. We are basically running a function within a javascript setInterval() function which fires once every five minutes.
What this function does is run an execFile() to execute the checkHumidity app that we downloaded and installed earlier. It then takes the two lines that the app returns and splits them on the newline character (\n) to form an array with two elements. We then create a postData object which contains the temperature, the humidity, and the log time as an ISO8601 string.
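The parsing step can be isolated into a small testable helper. This is my own sketch of the same logic, assuming checkHumidity prints the humidity on the first line and the temperature on the second, as the array indices in the code above imply:

```javascript
// Sketch of the parsing step: checkHumidity prints two lines, which
// split("\n") turns into an array. Per the code above, element 0 is
// treated as the humidity reading and element 1 as the temperature.
function buildPostData(stdout, logDate) {
  var dataArray = stdout.split("\n");
  return {
    datetime: logDate.toISOString(),
    temperature: parseFloat(dataArray[1]),
    humidity: parseFloat(dataArray[0])
  };
}

var sample = buildPostData("49.301\n29.6\n", new Date("2017-01-01T00:00:00Z"));
console.log(sample);
```

Note that parseFloat() also quietly strips the trailing newline on the last element, which is why no extra trimming is needed.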
Then we transmit that postData object to Amazon IoT by calling execFile() on the ‘mosquitto_pub’ command that we also installed earlier as part of the mosquitto package. mosquitto_pub basically stands for ‘MQTT Publish’, and it will send the message (-m) consisting of the postData object translated to JSON, to the topic (-t) ‘temp-humidity/Omega-XXXX’.
That is really all we need to do, however, in the code above, I’ve done something else. Straight after publishing the data packet to the ‘temp-humidity/Omega-XXXX’ topic, I did a second publish to the ‘$aws/things/Omega-XXXX/shadow/update’ topic as well, with essentially the same data, but with some extra object wrappers around it in shadowPayload.
Why did I do this? Well, the ‘$aws/things/Omega-XXXX/shadow/update’ topic is actually a special Amazon IoT topic which stores the data packet within the ‘shadow’ copy of the Omega-XXXX thing in the cloud. That means that later on, I can use another software system from anywhere in the world to interrogate the Omega-XXXX shadow in the cloud to see what the latest data readings are.
If for any reason the Onion goes offline or the home internet goes down, I can interrogate the shadow copy to see what and when the last reading was. I don’t need to set this up, but for future plans I have, I thought it would be a good idea.
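For clarity, the shadow envelope is just the same readings wrapped in the state/desired structure that the AWS IoT shadow service expects. A small restatement of the code above, with a hypothetical helper name:

```javascript
// The shadow update wraps the same readings in the
// { state: { desired: { ... } } } envelope expected by the
// AWS IoT device shadow service.
function wrapForShadow(postData) {
  return {
    state: {
      desired: {
        datetime: postData.datetime,
        temperature: postData.temperature,
        humidity: postData.humidity
      }
    }
  };
}

var payload = wrapForShadow({ datetime: '2017-01-01T00:00:00.000Z', temperature: 29.6, humidity: 49.301 });
console.log(JSON.stringify(payload));
```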
Enough talk — save the above file, and let’s run the code:
cd /home
node app.js
You won’t see anything on the screen, but in the background, every 5 minutes, the Omega Onion will read the sensor data and transmit it to Amazon IoT. Hopefully it is working.
If it doesn’t work — things to check are the location and validity of the certificate file. Also check that your home or work firewall isn’t blocking port 8883 which is the port MQTT uses to communicate with Amazon IoT.
Now ideally we want our Node.js app to run as a service on the Omega Onion. That way, if the device reboots or loses power and comes back online, the app will auto start and keep logging data regardless. Fortunately, this is easy as well.
Using nano, create a script file called /etc/init.d/iotapp and save the following in it:
#!/bin/sh /etc/rc.common
# Auto start iot app script

START=40

start() {
    echo start
    service_start /usr/bin/node /home/app.js &
}

stop() {
    echo stop
    service_stop /usr/bin/node /home/app.js
}

restart() {
    stop
    start
}
Save the file, then make it executable:
chmod +x /etc/init.d/iotapp
Now register it to auto-run:
/etc/init.d/iotapp enable
Done. The service should start at bootup, and you can start/stop it anytime from the command line via:
/etc/init.d/iotapp stop
or
/etc/init.d/iotapp start
If you go back to your DynamoDB dashboard and click on the table you created, you should be able to see the packet data being sent and updated every 5 minutes or so.
Also, if you go to the Amazon IoT dashboard and click on ‘Registry’ then ‘Things’ and then choose your IoT thing, then click on ‘Activity’, you should see a history of activity from the physical board to the online thing. You can click on each activity line to show the data being sent.
Hopefully everything is working out for you here. Feel free to adjust the setInterval() timing to one minute or so, just so you don’t have to wait so long to see if data is being streamed. In fact, tweak the interval setting to whatever you like to suit your own needs. 5 minutes may be too short a span for some, or it may be too long for others. The value is in the very last line of the Node.js code:
1000 (milliseconds) x 60 (seconds in a minute) x 5 (minutes)
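Spelled out as code (the values are examples only; substitute whatever interval suits you):

```javascript
// The publishing interval, in milliseconds.
var fiveMinutes = 1000 * 60 * 5; // 300000 ms
var oneMinute = 1000 * 60 * 1;   // 60000 ms, handy while testing
console.log(fiveMinutes, oneMinute);
```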
SET UP THE WEBSITE
Final stretch now. Funny to think that all that hard work we did above is essentially invisible. But this bit here is what we, as the end user, will see and interact with.
What we will do here is to set up a simple web site which will read the last 24 hours of data from our DynamoDB table we created above, and display it in a nice Chart.js line chart showing us the temperature and humidity plot over that time. The web site itself is a simple Bootstrap/jQuery based one, with a single HTML file and a single .js file with our script to create the charts.
Since I am using Amazon for nearly everything else, I decided to use Amazon S3 to host my website. You don’t have to do this, but it is an incredibly cheap and effective way to quickly throw up a static site.
A bigger problem would be how to read DynamoDB data within a javascript code block on a web page. Doing everything client side means that my Amazon credentials will have to be exposed on a publicly accessible platform — meaning anyone can grab it and use it in their own code.
Most knowledgebase articles I scanned suggested using ‘Identity Pools’ in Amazon’s Cognito service to set up authentication, but setting up identity pools is another long and painful process. I was fatigued after doing all the above setup by now, so I opted for the quick solution of setting up a ‘throwaway’ Amazon IAM user with read-only privileges on my DynamoDB data table. This is not ‘best practice’, but I figured that for a non-critical app like this (I don’t really care who can see the temperature in my guitar room — it’s not like a private video or security feed) it would do for what I needed.
Additionally, I have CloudWatch alarms set up on my DynamoDB tables so if I see excessively high read rates from nefarious users, I can easily revoke the IAM credentials or shut down the table access.
AMAZON IAM
To set up a throwaway IAM, go to the ‘Services’ menu in your AWS console and choose ‘IAM’ under ‘Security, Identity and Compliance’.
Click on the ‘Users’ option on the menu down the left, then click ‘Create’ to create a new IAM user:
Give the user any name you like, but ensure you tick the box saying ‘Programmatic Access’. Then click the ‘Next: Permissions’ button.
On the next screen, click on the third image at the top which says ‘Attach existing policies directly’. Then click on the button that says ‘Create Policy’.
Note: This will open the Create Policy screen on a new browser tab.
On the Create Policy screen, click the ‘Select’ button on the LAST option, i.e. ‘Create Your Own Policy’.
Enter in the policy details as below. Ensure that the ‘Resource’ line contains the ARN of your DynamoDB table like we found out above.
Here is the policy that you can cut and paste into the editor yourself (after substituting your DynamoDB ARN in it):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyIoTDataTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "<insert your DynamoDB ARN here>"
    }
  ]
}
Once done, click on ‘Validate Policy’ to ensure everything is OK, then click ‘Create Policy’.
Now go back to the previous browser tab where you were creating the user, and click the ‘Refresh’ button. You should now see the policy you just created in the list. (Hint: You can do a search on the policy name). Tick it.
Click ‘Next’ to go to the review screen, then click ‘Create User’.
Copy down the key and click on ‘Show’ to show the secret. Copy both of these and keep them safely aside. We will need them in our web site script below.
Ok, now lets set up the Amazon S3 bucket to host our website.
AMAZON S3
Click on ‘Service’ on your AWS Console, then choose ‘S3’ under ‘Storage’. You should see a list of buckets if you have used S3 before. Click on ‘Create Bucket’ on the top left to create a new bucket to host your website.
Give your bucket a meaningful name.
Tip: The bucket name will be part of your website name that you will need to type in your browser, so it helps to make it easy to remember and if it gives a hint as to what it does.
Once the bucket is created, select it from the list of buckets by clicking on the name. Your bucket is obviously empty for now.
Click on the ‘Properties’ button on the top right, then expand the ‘Permissions’ section. You will see your own username as a full access user.
Click on the ‘Add more permissions’ button here, and choose ‘Everyone’ from the drop down, and tick the ‘List’ checkbox. This will give all public users the ability to see the contents of this bucket (i.e. your web page). Click on ‘Save’ to save these permissions.
Next, expand the section below that says ‘Static Website Hosting’.
Click on the radio button which says ‘Enable website hosting’, and enter in ‘index.html’ in the ‘Index Document’ field.
Click ‘Save’.
That is about it — this is the minimum required to set up a website on S3. You can come back later to include an error page filename and set up logging etc., but this is all we need for now.
NOTE: Copy down the ‘Endpoint’ link on this page (circled in red). This will be the website address you need to type into your browser bar later to get access to the web page we will be setting up.
Tip: You can use Amazon Route53 to set up a more user friendly name for your website, but we won’t go into that in this already lengthy tutorial. There are plenty of resources on Google which go into that in detail.
THE CODE
Now for the web site code itself. Use your favourite editor to create this index.html file:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">

<title>Home Monitoring App</title>

<!-- Bootstrap core CSS -->
<link rel="stylesheet" href="">

<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src=""></script>
<script src=""></script>
<![endif]-->

</head>
<body>
<div class="container">

<br />
<div class="jumbotron text-center">
<h1>Temperature & Humidity Dashboard</h1>
<p class="lead">Guitar Storage Room</p>
</div>

<div class="row">

<div class="col-md-6">
<canvas id="temperaturegraph" class="inner cover" width="500" height="320"></canvas>
<br />
<div class="panel panel-default">
<div class="panel-body">
<div class="row">
<div class="col-sm-3 text-right"><span class="label label-danger">High</span> </div>
<div class="col-sm-9"><span id="t-high" class="text-muted">(n/a)</span></div>
</div>
<div class="row">
<div class="col-sm-3 text-right"><span class="label label-success">Low</span> </div>
<div class="col-sm-9"><span id="t-low" class="text-muted">(n/a)</span></div>
</div>
</div>
</div>
</div>

<div class="col-md-6">
<canvas id="humiditygraph" class="inner cover" width="500" height="320"></canvas>
<br />
<div class="panel panel-default">
<div class="panel-body">
<div class="row">
<div class="col-sm-3 text-right"><span class="label label-danger">High</span> </div>
<div class="col-sm-9"><span id="h-high" class="text-muted">(n/a)</span></div>
</div>
<div class="row">
<div class="col-sm-3 text-right"><span class="label label-success">Low</span> </div>
<div class="col-sm-9"><span id="h-low" class="text-muted">(n/a)</span></div>
</div>
</div>
</div>
</div>

</div>

<div class="row">
<div class="col-md-12">
<p class="text-center">5 minute feed from home sensors for the past 24 hours.</p>
</div>
</div>

<footer class="footer">
<p class="text-center">Copyright © Devan Sabaratnam - Blaze Business Software Pty Ltd</p>
</footer>

</div> <!-- /container -->

<script src=""></script>
<script src=""></script>
<script src=""></script>
<script src="refresh.js"></script>
</body>
</html>
Nothing magical here — just a simple HTML page using bootstrap constructs to place the chart canvas elements on the page in two columns. We are loading all script and css goodies using external CDN links for Bootstrap, jQuery, Amazon SDK and Chart.js etc. so we don’t have to clutter up our web server with extra .js and .css files.
Next we code up the script, in a file called refresh.js:
AWS.config.region = 'us-east-1'; // Region
AWS.config.credentials = new AWS.Credentials('AKIZBYNOTREALPQCRTVQ', 'FYu9Jksl/aThIsNoT/ArEaL+K3yTR8fjpLkKg');

var dynamodb = new AWS.DynamoDB();
var datumVal = new Date() - 86400000;
var params = {
    TableName: 'iot-temperature-humidity',
    KeyConditionExpression: '#id = :iottopic and #ts >= :datum',
    ExpressionAttributeNames: {
        "#id": "id",
        "#ts": "timestamp"
    },
    ExpressionAttributeValues: {
        ":iottopic": { "S" : "temp-humidity/Omega-XXXX" },
        ":datum": { "N" : datumVal.toString() }
    }
};

/* Create the context for applying the chart to the HTML canvas */
var tctx = $("#temperaturegraph").get(0).getContext("2d");
var hctx = $("#humiditygraph").get(0).getContext("2d");

/* Set the options for our chart */
var options = {
    responsive: true,
    showLines: true,
    scales: {
        xAxes: [{
            display: false
        }],
        yAxes: [{
            ticks: {
                beginAtZero: true
            }
        }]
    }
};

/* Set the initial data */
var tinit = {
    labels: [],
    datasets: [{
        label: "Temperature °C",
        backgroundColor: 'rgba(204,229,255,0.5)',
        borderColor: 'rgba(153,204,255,0.75)',
        data: []
    }]
};

var hinit = {
    labels: [],
    datasets: [{
        label: "Humidity %",
        backgroundColor: 'rgba(229,204,255,0.5)',
        borderColor: 'rgba(204,153,255,0.75)',
        data: []
    }]
};

var temperaturegraph = new Chart.Line(tctx, { data: tinit, options: options });
var humiditygraph = new Chart.Line(hctx, { data: hinit, options: options });

$(function() {
    getData();
    $.ajaxSetup({ cache: false });
    setInterval(getData, 300000);
});
/* Queries the DynamoDB table to build the data objects for the charts */
function getData() {
    dynamodb.query(params, function(err, data) {
        if (err) {
            console.log(err);
            return null;
        } else {

            // placeholders for the data arrays
            var temperatureValues = [];
            var humidityValues = [];
            var labelValues = [];

            // placeholders for the data read
            var temperatureRead = 0.0;
            var humidityRead = 0.0;
            var timeRead = "";

            // placeholders for the high/low markers
            var temperatureHigh = -999.0;
            var humidityHigh = -999.0;
            var temperatureLow = 999.0;
            var humidityLow = 999.0;
            var temperatureHighTime = "";
            var temperatureLowTime = "";
            var humidityHighTime = "";
            var humidityLowTime = "";

            for (var i in data['Items']) {
                // read the values from the dynamodb JSON packet
                temperatureRead = parseFloat(data['Items'][i]['payload']['M']['temperature']['N']);
                humidityRead = parseFloat(data['Items'][i]['payload']['M']['humidity']['N']);
                timeRead = new Date(data['Items'][i]['payload']['M']['datetime']['S']);

                // check the read values for high/low watermarks
                if (temperatureRead < temperatureLow) {
                    temperatureLow = temperatureRead;
                    temperatureLowTime = timeRead;
                }
                if (temperatureRead > temperatureHigh) {
                    temperatureHigh = temperatureRead;
                    temperatureHighTime = timeRead;
                }
                if (humidityRead < humidityLow) {
                    humidityLow = humidityRead;
                    humidityLowTime = timeRead;
                }
                if (humidityRead > humidityHigh) {
                    humidityHigh = humidityRead;
                    humidityHighTime = timeRead;
                }

                // append the read data to the data arrays
                temperatureValues.push(temperatureRead);
                humidityValues.push(humidityRead);
                labelValues.push(timeRead);
            }

            // set the chart object data and label arrays
            temperaturegraph.data.labels = labelValues;
            temperaturegraph.data.datasets[0].data = temperatureValues;

            humiditygraph.data.labels = labelValues;
            humiditygraph.data.datasets[0].data = humidityValues;

            // redraw the graph canvas
            temperaturegraph.update();
            humiditygraph.update();

            // update the high/low watermark sections
            $('#t-high').text(Number(temperatureHigh).toFixed(2).toString() + '°C at ' + temperatureHighTime);
            $('#t-low').text(Number(temperatureLow).toFixed(2).toString() + '°C at ' + temperatureLowTime);
            $('#h-high').text(Number(humidityHigh).toFixed(2).toString() + '% at ' + humidityHighTime);
            $('#h-low').text(Number(humidityLow).toFixed(2).toString() + '% at ' + humidityLowTime);
        }
    });
}
Let’s go through this script in detail.
The first two lines set up the Amazon AWS SDK. We need to specify the AWS region, then we need to specify the credentials we will be using for interrogating the DynamoDB table. Copy and paste in the Key and Secret that you created in the previous section here.
The next bit is initialising the AWS DynamoDB object in ‘dynamodb’. The ‘datumVal’ variable contains a timestamp that is 24 hours before the current date/time. This will be used in the DynamoDB query to only select data rows in the prior 24 hour period.
The ‘params’ object contains the parameters that will be sent to the dynamodb object to select the table, and run a query upon it. I am not a fan of NoSQL, mainly because querying data is a huge pain, and this proves it. The next 10 lines are purely setting up an expression to look at the ID and the Timestamp columns in the DynamoDB table, and pull out all IDs which contain ‘temp-humidity/Omega-XXXX’ (remember, the ID is actually the topic, including the thing identifier), and a timestamp that is greater than, or equal to, the ‘datum’ that we set before.
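Wrapped up as a helper, the query construction looks like this (a sketch with a hypothetical function name; the table and topic are the ones used earlier). The datum is simply the current time minus 24 hours, i.e. 86,400,000 milliseconds:

```javascript
// Sketch: build the DynamoDB query parameters for the last 24 hours
// of readings on a given topic. buildQueryParams is a made-up helper
// name; the shapes match the params object in refresh.js.
function buildQueryParams(tableName, topic, nowMs) {
  var datumVal = nowMs - 86400000; // 24 hours ago, in milliseconds
  return {
    TableName: tableName,
    KeyConditionExpression: '#id = :iottopic and #ts >= :datum',
    ExpressionAttributeNames: { "#id": "id", "#ts": "timestamp" },
    ExpressionAttributeValues: {
      ":iottopic": { "S": topic },
      ":datum": { "N": datumVal.toString() }
    }
  };
}

var p = buildQueryParams('iot-temperature-humidity', 'temp-humidity/Omega-XXXX', Date.now());
console.log(p.KeyConditionExpression);
```

Note that ‘timestamp’ is a DynamoDB reserved word, which is exactly why the query has to go through the #ts ExpressionAttributeNames alias.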
Next, on lines 20 and 21, we set up the context placeholders for the two charts. Simple Chart.js stuff here.
In lines 23 to 62 we are simply setting up some default placeholders for the charts, including the colours of the lines and shading etc. I am also using some xAxes and yAxes properties to turn off the X-axis labels and to ensure the Y-axis starts at a zero base. You can omit these if you want the graph to look more dynamic (or cluttered! :)).
Lines 64 and 65 just initialise the Chart.js objects with the above options and context.
Next comes a generic function that calls the getData() function every five minutes. You can change the setInterval() parameter from 300000 (1000 milliseconds per second x 60 seconds per minute x 5 minutes) to whatever you like. But seeing as we are only pushing temperature and humidity data from our Onion to Amazon IoT every 5 minutes as well, anything less than a 5 minute check is just overkill. Feel free to tailor these numbers to suit your own purposes though.
Line 70 to the end is just the getData() function itself. All this does is run a query against the ‘dynamodb’ object using the ‘params’ we supplied for the query parameters etc.
The results are returned in the data[‘Items’] array.
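Note that each element of data['Items'] arrives in DynamoDB’s typed attribute-value format, so numbers come back as strings inside an 'N' wrapper and must be parsed. A minimal sketch with a made-up item:

```javascript
// Each returned item is in DynamoDB's typed attribute-value format:
// maps are wrapped in 'M', numbers in 'N' (as strings), strings in 'S'.
// The values below are sample data, not real readings.
var item = {
  payload: {
    M: {
      temperature: { N: "29.6" },
      humidity: { N: "49.301" },
      datetime: { S: "2017-01-01T00:00:00.000Z" }
    }
  }
};

var temperatureRead = parseFloat(item['payload']['M']['temperature']['N']);
var humidityRead = parseFloat(item['payload']['M']['humidity']['N']);
console.log(temperatureRead, humidityRead); // 29.6 49.301
```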
Lines 81 to 84 just set up the placeholder arrays for the values and labels to be used on the charts.
Lines 86 to 99 are purely for checking the highest and lowest readings for temperature and humidity. You can elect not to do this, but I wanted to show on the main page the highs/lows for the preceding 24 hour period. I am simply initialising some empty variables here to use in the following loop.
Lines 101 to 129 are just a simple loop that runs through the returned data['Items'] array and parses the keys into the variables and arrays I defined above. I am also comparing the read values against the highs and lows. For every array element I read, I check to see if the highs are higher than the last highest value, and the lows lower than the last lowest value, and update the highs/lows accordingly.
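The high/low logic can be isolated into a small function for testing. This is my own restatement of the loop above, using a simplified readings array:

```javascript
// Restatement of the high/low watermark logic: scan the readings once,
// remembering the extreme values and when they occurred. The sentinel
// values (-999 / 999) are safely outside any plausible sensor reading.
function highLow(readings) {
  var high = -999.0, low = 999.0, highTime = "", lowTime = "";
  readings.forEach(function(r) {
    if (r.value < low) { low = r.value; lowTime = r.time; }
    if (r.value > high) { high = r.value; highTime = r.time; }
  });
  return { high: high, highTime: highTime, low: low, lowTime: lowTime };
}

var t = highLow([
  { value: 28.1, time: "01:00" },
  { value: 31.4, time: "14:00" },
  { value: 26.9, time: "04:00" }
]);
console.log(t); // { high: 31.4, highTime: '14:00', low: 26.9, lowTime: '04:00' }
```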
Then, after the loop, lines 132 to 136 update the Chart.js chart data and labels with what we have read in the loop.
Lines 139 and 140 force the charts to redraw themselves. Lines 143 to 146 use jQuery AJAX calls to update the High and Low sections on the main web page with the readings and times.
That is it!
Save these two files, then upload them to your bucket by going back to your Amazon S3 Bucket screen and clicking on the ‘Actions’ button and choosing ‘Upload Files’.
Drag and drop the two files onto the upload screen, but don’t start it yet! Click on the ‘Set Details >’ button at the bottom, then immediately click on ‘Set Permissions >’.
Make sure you tick the box that says ‘Make everything public’, otherwise nobody can see your index.html file!
Now click ‘Start Upload’ to begin uploading the two files.
You are DONE! Can you believe it?? We are done. Finished. Completed.
If you type in the website address we noted down earlier into your browser, you should be able to see a beautiful dashboard showing the collected data from your Onion Omega device.
CONCLUSION
If you made it this far, then congratulations on achieving this marathon. It took me several days to nut the above settings out, and many false starts and frustrations along with it. I am hoping that by documenting what eventually worked for me, I can reduce your stress and wasted time and set you on the path to IoT development a lot quicker and easier.
Next steps for me are to set up a battery power source for my Omega Onion, so it doesn’t have to be connected to my computer, and can sit on a shelf somewhere in my guitar storage room and still report to me.
Let me know if you find this tutorial useful, and please also let me know what you guys have built with IoT — it is a fascinating field!
Note: Reposted from my personal and development blog. If you liked this article, please click on the ‘heart’ icon below. :)
|
https://hackernoon.com/building-an-iot-dashboard-using-the-onion-omega-and-amazon-aws-a3520f850c9
|
CC-MAIN-2022-40
|
refinedweb
| 7,275
| 62.68
|
Details
- Type:
Bug
- Status:
Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: X10 2.1.2
-
- Component/s: X10 Compiler: Front-end Constraints
- Labels:None
- Environment:SVN Revision: 20648
- Testcase included:
- Number of attachments :
Description
Compiling
import x10.util.Ordered;
public class LessIsMore {
    static def doIt[T](a: T{T <: Ordered[T]}) {
        return a < a;
    }
}
yields
LessIsMore.x10:4: No valid method call found for call in given type. Call: operator<(T, T) Type: x10.lang.Any
Can you explain what this error message means? I thought that T<:Ordered[T] meant "T implements Ordered[T]"
and therefore (since "<" is in that interface) I ought to be able to use it.
Activity
Thanks. It does work in my "real" code. It occurred to me that there is a situation where you really might want to attach the constraint to the argument.
def doIt[T](a: T{T <: Ordered[T]}, b: T) { /* need "a" ordered, but not b */ }
Not all subtypes of T need implement Ordered[T], but it may be important that the first argument should. So it does make sense to put some effort into fixing the constraint propagation.
Sorry, it doesn't work that way. You are not constraining a and b, you are constraining T. And the constraint is in effect for the whole method. So either both a and b are ordered, or neither is (because they are the same type, T). Since T is defined by doIt(), you may as well put the constraint into the method guard.
If you want two potentially different types, you have to use two different type variables.
The meaning you tried to assign to this idiom above is actually captured by XTENLANG-683.
Again thanks: I'll make sure I explain this clearly in the Guide.
Defer to 2.2.1.
bulk defer of open issues to 2.2.2.
bulk defer of issues to 2.2.3.
bulk defer of 2.3.0 open issues to 2.3.1.
bulk defer to 2.3.2
bulk defer to 2.4.1.
bulk defer to 2.4.2
bulk defer to 2.4.3
bulk defer to 2.4.4
bulk defer to 2.5.2
This is a bug in propagating type constraints from method arguments into the environment. A workaround for now is to move the type constraint into the method guard, namely:
|
http://jira.codehaus.org/browse/XTENLANG-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
|
CC-MAIN-2014-52
|
refinedweb
| 396
| 77.43
|
How To Pretty-Print a Python ElementTree Structure
ElementTree doesn’t support pretty-printing XML. lxml does, but isn’t installed on our system. minidom's toprettyxml() is seriously fucked up. What to do? Turned out PyXML was installed, so I took some advice from here and came up with this function, which takes an ET node and returns a pretty-printed string:
import xml.etree.ElementTree as ET
from xml.dom.ext.reader import Sax2
from xml.dom.ext import PrettyPrint
from StringIO import StringIO

def prettyPrintET(etNode):
    reader = Sax2.Reader()
    docNode = reader.fromString(ET.tostring(etNode))
    tmpStream = StringIO()
    PrettyPrint(docNode, stream=tmpStream)
    return tmpStream.getvalue()
rajbot 5:37 pm on July 29, 2009 Permalink | Log in to Reply
xml.dom.ext seems completely undocumented on the web. The linked page gives a 404.
Here is what pydoc xml.dom.ext says about PrettyPrint():
Dan 9:31 am on July 30, 2009 Permalink | Log in to Reply
This is where I like the lxml module for all my Python XML handling.
Of coure, if you just have an XML file the xmllint binary on unix systems is even better.
Dan
Anand Chitipothu 9:45 pm on January 1, 2010 Permalink | Log in to Reply
Looks like xml.dom.ext is not part of Python Standard Library. It is added by installing PyXML.
source:
Found a prettyprint utility in “Element Library Functions”.
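For readers landing here on a modern interpreter: PyXML is long unmaintained, and since Python 3.9 the standard library's ElementTree has its own indent() helper, which makes the PyXML workaround above unnecessary. A minimal sketch:

```python
import xml.etree.ElementTree as ET

def pretty_print_et(node):
    """Return a pretty-printed string for an ElementTree node (Python 3.9+)."""
    ET.indent(node, space="  ")  # mutates the tree in place, adding whitespace
    return ET.tostring(node, encoding="unicode")

root = ET.Element("root")
ET.SubElement(root, "child").text = "hello"
print(pretty_print_et(root))
```

This prints `<root>` and `<child>hello</child>` on separate lines with two-space indentation.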
|
http://www.tikirobot.net/wp/2009/07/29/how-to-pretty-print-a-python-elementtree-structure/
|
CC-MAIN-2014-41
|
refinedweb
| 229
| 69.38
|
ISINFF(3) BSD Programmer's Manual ISINFF(3)
isinff, isnanf - test for infinity or not-a-number
libm
#include <math.h>

int isinff(float);
int isnanf(float);
The isinff() function returns 1 if the number is "Infinity", otherwise 0. The isnanf() function returns 1 if the number is "not-a-number", otherwise 0.
isinf(3), isnan(3), math(3) IEEE Standard for Binary Floating-Point Arithmetic, Std 754-1985, ANSI.
Neither the VAX nor the Tahoe floating point have distinguished values for either infinity or not-a-number. These routines always return 0 on those architectures. MirOS BSD #10-current August.
|
http://mirbsd.mirsolutions.de/htman/sparc/man3/isnanf.htm
|
crawl-003
|
refinedweb
| 102
| 50.84
|
Important: Please read the Qt Code of Conduct -
What parts of ECMAScript 7 are supported in Qt 5.12?
In the Qt 5.12 What's New documentation it states that:
"The JavaScript engine now supports ECMAScript 7. This includes an upgrade to ECMAScript 6"
However, it seems that there is only partial support for ECMAScript 7. Promises work (but are buggy) but arrow functions don't. Is there something I can reference other than this closed JIRA issue to give an official list of supported features? Or can someone tell me what's wrong with this code?
import QtQuick 2.12
import QtQuick.Window 2.12

Window {
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello World")

    Component.onCompleted: {
        const test = [1,2,3,4,5,6,7,8,9,];
        test.forEach(t => console.log(t));
    }
}
- SGaist Lifetime Qt Champion last edited by
Hi,
For that kind of question you should go to the interest mailing list. You'll find there Qt's developers/maintainers. This forum is more user oriented.
Thanks @SGaist. From this StackOverflow post I found that the problem was that the version of Qt Creator that ships with 5.12 doesn't support the new ECMAScript 7 syntax. I installed Qt Creator 4.9.0-beta2 and it's working.
|
https://forum.qt.io/topic/100526/what-parts-of-ecmascript-7-are-supported-in-qt-5-12/3
|
CC-MAIN-2022-05
|
refinedweb
| 215
| 69.58
|
Problem Statement
In this problem, we are given two linked lists and are asked to find whether they are identical or not, i.e. whether the two linked lists have the same arrangement of the values of the nodes in them.
Linked list 1:
Linked List 2:
Output: Identical
These two LinkedList are identical.
Problem Statement Understanding
Let's first understand the problem statement with the help of an example:
If Linked list 1 = 3→5→7 and Linked list 2 = 3→5→7.
The term 'arrangement of the values of the nodes in a linked list' which we have referred to in the problem statement means that if our linked list is 5→6→7→8→9, then the arrangement of the values of the nodes in the linked list is 5,6,7,8,9.
- From the Linked list 1 we can see that its arrangement of the values of the nodes is 3,5,7.
- From the Linked list 2 we can see that its arrangement of the values of the nodes is 3,5,7.
Since both the linked list have the same arrangement of the values of the nodes, so the above linked list 1 and 2 are Identical.
If Linked list 1 = 3→5→6 and Linked list 2 = 3→6→5.
- From the Linked list 1 we can see that its arrangement of the values of the nodes is 3,5,6.
- From the Linked list 2 we can see that its arrangement of the values of the nodes is 3,6,5.
Since both the linked list have a different arrangement of the values of the nodes, so the above linked list 1 and 2 are Not Identical.
Now, I think from the above examples, it is clear what we are trying to find in this problem. So next we will try to think about how we can approach this problem.
Before jumping to the next section of the blog, try to think about how you can solve this problem?
Approach
The basic approach which comes to mind is to traverse both the linked lists simultaneously and check at each iteration:
- If the values of the lists are different, then we return false.
We return true only if we reach the end of both the lists at the same time while traversing.
Note: If we reach the end in only one list that means their size is different and hence also not identical.
Algorithm
- Start traversing the linked lists x and y.
- If at any point while traversing, the data is different in the two lists (x->data != y->data), then we return false.
- If we reach the end of both the linked list at the same time, then we return true.
- If we reach the end of any one of the lists then they are not identical and return false.
Dry Run
Code Implementation
#include <iostream>
using namespace std;

struct Node {
    int data;
    struct Node *next;
};

bool areIdentical(struct Node *x, struct Node *y)
{
    while (x != NULL && y != NULL) {
        if (x->data != y->data)
            return false;
        x = x->next;
        y = y->next;
    }
    return (x == NULL && y == NULL);
}

void push(struct Node** head_ref, int new_data)
{
    struct Node* new_node = new Node();
    new_node->data = new_data;
    new_node->next = (*head_ref);
    (*head_ref) = new_node;
}

int main()
{
    struct Node *x = NULL;
    struct Node *y = NULL;
    push(&x, 7); push(&x, 5); push(&x, 3);
    push(&y, 7); push(&y, 5); push(&y, 3);
    if (areIdentical(x, y))
        cout << "Identical";
    else
        cout << "Not identical";
    return 0;
}
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

/* Structure for a linked list node */
struct Node {
    int data;
    struct Node *next;
};

/* Returns true if linked lists a and b are identical, otherwise false */
bool areIdentical(struct Node *a, struct Node *b)
{
    while (a != NULL && b != NULL) {
        if (a->data != b->data)
            return false;
        a = a->next;
        b = b->next;
    }
    /* identical only if both ends are reached together */
    return (a == NULL && b == NULL);
}

/* UTILITY FUNCTIONS TO TEST fun1() and fun2() */
/* Given a reference (pointer to pointer) to the head of a list and an int,
   push a new node on the front of the list. */
void push(struct Node** head_ref, int new_data)
{
    /* allocate node */
    struct Node* new_node = (struct Node*) malloc(sizeof(struct Node));
    /* put in the data */
    new_node->data = new_data;
    /* link the old list off the new node */
    new_node->next = (*head_ref);
    /* move the head to point to the new node */
    (*head_ref) = new_node;
}

/* Driver program to test above function */
int main()
{
    /* The constructed linked lists are :
       a: 3->2->1
       b: 3->2->1 */
    struct Node *a = NULL;
    struct Node *b = NULL;
    push(&a, 1); push(&a, 2); push(&a, 3);
    push(&b, 1); push(&b, 2); push(&b, 3);
    areIdentical(a, b) ? printf("Identical") : printf("Not identical");
    return 0;
}
class Identical {
    Node head;

    class Node {
        int data;
        Node next;
        Node(int d) { data = d; next = null; }
    }

    /* Returns true if linked lists a and b are identical, otherwise false */
    boolean areIdentical(Identical llist2) {
        Node a = this.head, b = llist2.head;
        while (a != null && b != null) {
            if (a.data != b.data)
                return false;
            a = a.next;
            b = b.next;
        }
        return (a == null && b == null);
    }

    void push(int new_data) {
        Node new_node = new Node(new_data);
        new_node.next = head;
        head = new_node;
    }

    /* Driver program to test above functions */
    public static void main(String args[]) {
        Identical llist1 = new Identical();
        Identical llist2 = new Identical();
        /* The constructed linked lists are :
           llist1: 3->2->1
           llist2: 3->2->1 */
        llist1.push(1); llist1.push(2); llist1.push(3);
        llist2.push(1); llist2.push(2); llist2.push(3);
        if (llist1.areIdentical(llist2) == true)
            System.out.println("Identical ");
        else
            System.out.println("Not identical ");
    }
}
class Node:
    def __init__(self, d):
        self.data = d
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def areIdentical(self, listb):
        a = self.head
        b = listb.head
        while (a != None and b != None):
            if (a.data != b.data):
                return False
            a = a.next
            b = b.next
        return (a == None and b == None)

    def push(self, new_data):
        new_node = Node(new_data)
        new_node.next = self.head
        self.head = new_node

llist1 = LinkedList()
llist2 = LinkedList()
llist1.push(7)
llist1.push(5)
llist1.push(3)
llist2.push(7)
llist2.push(5)
llist2.push(3)

if (llist1.areIdentical(llist2) == True):
    print("Identical ")
else:
    print("Not identical ")
Output
Identical
Time Complexity: O(min(m,n)), where m,n are the size of the linked lists.
Space Complexity: O(1), no extra space is used.
This blog tried to discuss if the two linked lists are identical or not using simple traversal. This is a basic question and if you want to practice more such questions on linked lists, feel free to solve them at Linked List.
|
https://www.prepbytes.com/blog/linked-list/identical-linked-lists/
|
CC-MAIN-2022-21
|
refinedweb
| 1,057
| 70.02
|
A Sneak Peek of New Features with Microsoft Graph Toolkit v2.0.0
Beth
It’s hard to believe that just 1 year ago the Microsoft developer community didn’t have [Microsoft Graph Toolkit (MGT)][1] 😱. Fast forward to today, Microsoft Graph now has millions of calls a month through the Microsoft Graph Toolkit and continue to grow month over month. We could not have done it without this creative, collaborative, and amazing community and our partners’ help!
We first introduced Microsoft Graph Toolkit v1 back in September 2019. Since then, we took many feature requests and inspiration from our fans. Today, we are excited to share with you some wonderful new features that are coming to Microsoft Graph Toolkit v2, just a year later. If you would like to try it out, we have a [preview version][2] now. You can install today by running `npm i @microsoft/mgt@next` in your terminal.
## Some exciting new features!
### 🆕 mgt-person-card [issue #528][3]
Our redesign of Person Card presents many new sections of information centered around the person you are interested in connecting with. We kept the functionalities that allows you to email or send a Microsoft Teams chat directly from the card. We added new sections to show organization information, existing email threads that you both participated in, files you both have access to, and even skills and experiences the person has from the new [Profile API][4]! We are now on to the next step to make the Person Card extensible. So that you can add your own sections and additional information you would like to present.
You can add the `mgt-person` and `mgt-person-card` components by including the following in your HTML: `<mgt-person person-query="me" view="twoLines" person-card="hover"></mgt-person>`
### 🌚 Dark theme
What if your application is built with a dark theme? With v1 you would need to tweak each CSS custom property available for each component. Now you can simply apply `class="mgt-dark"` to the desired section of your HTML page. We will handle switching between light theme and dark theme for you! We have the `mgt-people-picker` component with dark theme available in the preview package. But I will provide some examples below of how you can configure theming with different options.
**Example – regional theme**
<div class="mgt-light">
  <header class="mgt-dark">
    <!-- login will have dark theme -->
    <mgt-login></mgt-login>
  </header>
  <article>
    <!-- agenda will have light theme -->
    <mgt-agenda></mgt-agenda>
  </article>
</div>
**Example – global theme**
<body class="mgt-light">
  <!-- everything will be light themed -->
  <header><mgt-login></mgt-login></header>
  <article><mgt-agenda></mgt-agenda></article>
  <footer></footer>
</body>
**Example – per component theme**
<mgt-login class="mgt-dark"></mgt-login>
**Example – customized theming**

.custom {
  --input-background-color: $custom-background-color;
  --input-border: $custom-input-border;
}

**Example – modify specific colors in a theme**

.custom {
  --input-background-color: $custom-background-color;
}
### 🗃 Caching of Microsoft Graph calls [issue #221][6]
Prior to v2, each component calls Microsoft Graph even if sometimes the information retrieved is the same. Think of getting a list of people to display in `mgt-people` and then showing subsets of these people in `mgt-agenda` for different meetings. The caching feature creates an optional and configurable service to cache common graph requests. This will improve the loading performance to help you create a pleasant user experience!
To configure cache, you will need to understand the static class `CacheService.config` object:
let config = {
  defaultInvalidationPeriod: number,
  isEnabled: boolean,
  people: { invalidationPeriod: number, isEnabled: boolean },
  photos: { invalidationPeriod: number, isEnabled: boolean },
  users: { invalidationPeriod: number, isEnabled: boolean }
};
You can change the cache config as below:
import { CacheService } from '@microsoft/mgt';

CacheService.config.users.isEnabled = false;
CacheService.config.photos.invalidationPeriod = 3600000; // 1 hour in milliseconds
And here is a comparison of load time in the same view of an app before and after the cache implementation:
### mgt-react

The overall goal of the Microsoft Graph Toolkit team has always been providing the best developer experience possible! A lot of you asked us to make it easier to include MGT components in your React projects. So we created the `@microsoft/mgt-react` package! The library wraps all MGT components and exports them as React components. Try it out today with `npm install @microsoft/mgt-react`! You can learn more about how to use `@microsoft/mgt-react` with the [README][8].
## What else is coming?
I started this article thinking it will only be 1 or 2 pages because I wanted to simply highlight a few of these new things. But as I went through the list, I realized that they are all exciting and unique in their own ways. I can’t run through all of them in this post and will have to save the rest of them for the official release announcement. But you can find out more about them by trying out the preview package yourselves 😉. So, here is the rest of the running list of v2 features:
- Package split: mgt-element, mgt, mgt-react
- New mgt-todo component
- Localization helper
- RTL support
- Person Card templating and configuration
## What’s next?
We know there are some big changes in v2, and we want to make sure to take extra care to not break the existing applications that use Microsoft Graph Toolkit, especially in production. For this reason, we really need your help today! Give us feedback on the preview package. Log bugs you find and create issues in our repo. We ❤️LOVE❤ community contributions. So much we will send you some interesting Microsoft Graph swags if you help us with any issues with tag ‘help wanted’ or ‘good first issue’. You can find a lot of these issues in this project board – [Community Love][9].
If you are not tired of reading blog articles yet, here is an entire series of [A Lap around Microsoft Graph Toolkit][10]. Our team worked with some of our fabulous Microsoft MVPs to walk you through all the moving parts of Microsoft Graph Toolkit step by step.
If you are interested in learning how to get started with Microsoft Graph Toolkit with a specific type of platform (Teams tab / SharePoint Web Part) or web technology framework (React / Angular) in mind, you can check out our [brand new get-started guides][11]!
## Stay tuned!
Let us know what topics are of interest to you. And stay tuned with more news and releases coming from our team! 🥰
|
https://devblogs.microsoft.com/ifdef-windows/a-sneak-peek-of-new-features-with-microsoft-graph-toolkit-v2-0-0/
|
CC-MAIN-2021-10
|
refinedweb
| 1,086
| 53.21
|
A client context structure, which holds client specific callbacks, batons, serves as a cache for configuration options, and other various and sundry things. More...
#include <svn_client.h>
Definition at line 920 of file svn_client.h.
main authentication baton.
Definition at line 923 of file svn_client.h.
a baton to pass to the cancellation callback.
Definition at line 959 of file svn_client.h.
a callback to be used to see if the client wishes to cancel the running operation.
Definition at line 956 of file svn_client.h.
Check-tunnel callback.
If not NULL, and open_tunnel_func is also not NULL, this callback will be invoked to check if open_tunnel_func should be used to create a specific tunnel, or if the default tunnel implementation (either built-in or configured in the client configuration file) should be used instead.
Definition at line 1032 of file svn_client.h.
Custom client name string, or NULL.
Definition at line 1008 of file svn_client.h.
a hash mapping of const char * configuration file names to svn_config_t *'s. For example, the '~/.subversion/config' file's contents should have the key "config". May be left unset (or set to NULL) to use the built-in default settings and not use any configuration.
Definition at line 952 of file svn_client.h.
Conflict resolution callback and baton, if available.
Definition at line 1003 of file svn_client.h.
Conflict resolution callback and baton, if available. NULL means that Subversion should try conflict_func.
Definition at line 1013 of file svn_client.h.
log message callback baton. Deprecated: use log_msg_baton2 instead.
Definition at line 945 of file svn_client.h.
callback baton for log_msg_func2
Definition at line 977 of file svn_client.h.
The callback baton for log_msg_func3.
Definition at line 995 of file svn_client.h.
Log message callback function. NULL means that Subversion should not attempt to fetch a log message. Deprecated: use log_msg_func2 instead.
Definition at line 940 of file svn_client.h.
Log message callback function. NULL means that Subversion should try log_msg_func.
Definition at line 973 of file svn_client.h.
Log message callback function. NULL means that Subversion should try log_msg_func2, then log_msg_func.
Definition at line 991 of file svn_client.h.
MIME types map.
Definition at line 999 of file svn_client.h.
notification callback baton for notify_func(). Deprecated: use notify_baton2 instead.
Definition at line 934 of file svn_client.h.
notification baton for notify_func2().
Definition at line 968 of file svn_client.h.
notification callback function. This will be called by notify_func2() by default. Deprecated: use notify_func2 instead.
Definition at line 929 of file svn_client.h.
notification function, defaulting to a function that forwards to notify_func(). If NULL, it will not be invoked.
Definition at line 964 of file svn_client.h.
Open-tunnel callback. If not NULL, this callback will be invoked to create a tunnel for a ra_svn connection that needs one, overriding any tunnel definitions in the client config file. This callback is used only for ra_svn and ignored by the other RA modules.
Definition at line 1042 of file svn_client.h.
Callback baton for progress_func.
Definition at line 986 of file svn_client.h.
Notification callback for network progress information.
May be NULL if not used.
Definition at line 982 of file svn_client.h.
The baton used with check_tunnel_func and open_tunnel_func.
Definition at line 1047 of file svn_client.h.
A working copy context for the client operation to use. This is initialized by svn_client_create_context() and should never be NULL.
Definition at line 1021 of file svn_client.h.
|
https://subversion.apache.org/docs/api/latest/structsvn__client__ctx__t.html
|
CC-MAIN-2017-47
|
refinedweb
| 561
| 54.08
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
Hello everybody,
in my first tutorial I described how you can build your own MongoDB and use a Java program to mine Twitter either via the search function and a loop or via the Streaming API. But until now you just have your tweets stored in a database, and we couldn't get any insight into them.
So in this tutorial we will take a look at how to connect to the MongoDB with R and analyze our tweets.
Start the MongoDB
To access the MongoDB I use the REST interface. This is the easiest way for accessing the database with R when just have started with it. If you are a more advanced user, you can also use the rmongodb package and the code provided by the user abhishek. You can find the code below.
So we have to start the MongoDB daemon. It is located in the folder “bin” and has the name “mongod”. So navigate to this folder and type in:
./mongod --rest
This way we start the server and enable the access via the REST interface.
R
Let´s take a look at our R code and connect to the Database.
First we need the two packages RCurl and rjson. So type in:
library(RCurl) library(rjson)
Normally the MongoDB server is running on the port 28017. So make sure that there is no firewall or other program blocking it.
So we have to define the path to the data base with:
database = "tweetDB"
collection = "Apple"
limit = "100"
db <- paste("", database, "/", collection, "/?limit=", limit, sep = "")
tweetDB – name of your database
Apple – name of your collection
limit=100 – number of tweets you want to get
Ok now we can get our tweets with
tweets <- fromJSON(getURL(db))
And with that you have saved the tweets you received. You can now analyze them like I explained in other tutorials about working with R and Twitter.
You can for example extract the text of your tweets and store it in a dataframe with:
tweet_df = data.frame(text = 1:limit)
for (i in 1:limit) {
  tweet_df$text[i] = tweets$rows[[i]]$tweet_text
}
tweet_df
If you have any questions feel free to ask or follow me on Twitter to get the newest updates about analytics with R and analytics of Social Data.
# install package to connect through mongodb
install.packages("rmongodb")
library(rmongodb)

# connect to MongoDB
mongo = mongo.create(host = "localhost")
mongo.is.connected(mongo)
mongo.get.databases(mongo)
mongo.get.database.collections(mongo, db = "tweetDB2") # "tweetDB2" is where twitter data is stored

library(plyr)

## create the empty data frame
df1 = data.frame(stringsAsFactors = FALSE)

## create the namespace
DBNS = "tweetDB2.#analytic"

## create the cursor we will iterate over, basically a select * in SQL
cursor = mongo.find(mongo, DBNS)

## create the counter
i = 1

## iterate over the cursor
while (mongo.cursor.next(cursor)) {
  # iterate and grab the next record
  tmp = mongo.bson.to.list(mongo.cursor.value(cursor))
  # make it a dataframe
  tmp.df = as.data.frame(t(unlist(tmp)), stringsAsFactors = F)
  # bind to the master dataframe
  df1 = rbind.fill(df1, tmp.df)
}
dim(df1)
|
https://www.r-bloggers.com/build-your-own-twitter-archive-and-analyzing-infrastructure-with-mongodb-java-and-r-part-2-update/
|
CC-MAIN-2020-05
|
refinedweb
| 534
| 65.01
|
I have created a order dictionary and could not get the index out of it.
I have gone through the below url but not working.
Accessing dictionary value by index in python
line_1 = OrderedDict((('A1', "Miyapur"), ('A2', "JNTU College"), ('A3', "KPHB Colony"),
('A4', "Kukatpally"), ('A5', "Balanagar"), ('A6', "Moosapet"),
('A7', "Bharat Nagar"), ('A8', "Erragadda"), ('A9', "ESI Hospital"),
('A10', "S R Nagar"), ('X1', "Ameerpet"), ('A12', "Punjagutta"),
('A13', "Irrum Manzil"), ('A14', "Khairatabad"), ('A15', "Lakdikapul"),
('A16', "Assembly"), ('A17', "Nampally"), ('A18', "Gandhi Bhavan"),
('A19', "Osmania Medical College"), ('X2', "MG Bus station"), ('A21', "Malakpet"),
('A22', "New Market"), ('A23', "Musarambagh"), ('A24', "Dilsukhnagar"),
('A25', "Chaitanyapuri"), ('A26', "Victoria Memorial"), ('A27', "L B Nagar")))
print(line_1.values()[1])
print(line_1[1])
print(line_1.keys()[1])
TypeError: 'odict_values' object does not support indexing
KeyError: 1
TypeError: 'odict_keys' object does not support indexing
In Python 3, dictionaries (including OrderedDict) return "view" objects from their keys() and values() methods. Those are iterable, but don't support indexing. The answer you linked appears to have been written for Python 2, where keys() and values() returned lists.
There are a few ways you could make the code work in Python 3. One simple (but perhaps slow) option would be to pass the view object to list() and then index it:
print(list(line_1.values())[1])
Another option would be to use itertools.islice to iterate over the view object to the desired index:
import itertools

print(next(itertools.islice(line_1.values(), 1, 2)))
But all of these solutions are pretty ugly. It may be that a dictionary is not the best data structure for you to use in this situation. If your data was in a simple list, it would be trivial to lookup any item by index (but lookup up by key would be harder).
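Putting the two working approaches side by side, with a shortened version of the dictionary from the question:

```python
from collections import OrderedDict
from itertools import islice

line_1 = OrderedDict([('A1', "Miyapur"), ('A2', "JNTU College"), ('A3', "KPHB Colony")])

# Option 1: materialize the view into a list, then index it.
second_value = list(line_1.values())[1]

# Option 2: islice walks the view lazily, without building a full list.
second_value_lazy = next(islice(line_1.values(), 1, 2))

print(second_value, second_value_lazy)  # JNTU College JNTU College
```

Both print the second station name; the keys() view can be indexed the same way.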
|
https://codedump.io/share/Vw4wDCnIhQhW/1/order-dictionary-index-in-python
|
CC-MAIN-2017-34
|
refinedweb
| 291
| 51.78
|
Expert Reviewed
How to Annualize a Quarterly Return
Three Methods: Locating the Information | Calculating the Annual Rate of Return | Annualizing Daily Returns | Community Q&A
Investment companies provide their clients with regular updates regarding their return on investment (ROI). If you have investments, you probably have received a quarterly return report that shows how well each of your investments has fared over the preceding three months. It is easier to comprehend the strength of the investment (as well as to compare it with other investments) if you can translate the quarterly return into an equivalent annual return. You can do this with a calculator or even pencil and paper.
Steps
1
Locating the Information
- 1Obtain the investment's quarterly report. You will receive this in the mail or you can look it up online under your account. You can also find this information on the company's website.
- 2Find the quarterly rate of return. There will likely be a number of figures within the report that show how the investment rose or fell during that time. What you want to annualize is the percentage figure, called the rate of return (ROR), which shows the percentage of growth (or shrinkage) you received during the previous three months.
- For example, at the bottom of the page of numbers it may show that your quarterly return is 1.5 percent. The annual return would be larger, because your money could be expected to have grown each quarter. The annualized return would be the percentage of growth if the investment grew at the same rate all year.
- 3Calculate how many time periods there are in a year. In order to annualize, you first consider the time period being featured. In this case it's three months since it's a quarterly report. Then calculate how many such periods are contained in a year. Thus, there are four three-month periods (quarters) in a year. You would then use the number 4 when called for in the annualizing formula.
- If you were trying to annualize a monthly return, you would use the number 12.
2
Calculating the Annual Rate of Return
- 1Calculate the annual rate of return. For a quarterly investment, the formula to calculate the annual rate of return is: Annual Rate of Return = [(1 + Quarterly Rate of Return)^4] - 1. The number 4 is an exponent. In other words, the quantity "1 + quarterly rate of return" is raised to the fourth power, and then 1 is subtracted from the result.[1]
- 2Convert the quarterly rate of return to a decimal by dividing the percentage by 100. In this example, 1.5 percent becomes 0.015.
- 3Plug in your numbers. Continuing this example, use 0.015 as the quarterly ROR. Thus, the annual rate of return = (1 + 0.015) raised to the fourth power.
- Add 1 to 0.015 and you get 1.015.
- 4Use a calculator to bring that number to the fourth power. If you do not have a calculator that works with exponents, you can search for one on the Internet or buy one at your local office supplies store. 1.015 to the fourth power is 1.061364.
- You can always multiply 1.015 x 1.015 x 1.015 x 1.015 if you don't have a calculator.
- The example formula now looks like this: Annual Rate Of Return = 1.061364 - 1.
5. Subtract 1 from the result to get the annual rate of return in decimal form: 1.061364 - 1 = 0.061364. Multiply by 100 to express it as a percentage: about 6.14 percent.
Part 3: Annualizing Daily Returns
1. Calculate the Annual Rate of Return using days. You may have a new investment and want to know the Annual Rate of Return based on a number of days, not months. Let's say you have held the investment for 17 days and earned 2.13%.
2. Plug the numbers into the formula. In this case, to calculate the exponent to use, you will divide 17 (the number of days you held the investment) by 365 (the number of days in a year). The answer is .0465753.
- Convert 2.13% interest rate to a decimal by dividing 2.13 by 100 = .0213.
- Your formula will look like this: ((1 + 0.0213)^(1/.0465753)) - 1 = Annual Rate of Return. That works out to ((1.0213)^21.4706078) - 1 = 1.5722717 - 1 = .5722717. Convert this to a percentage by multiplying by 100 = 57.23% annual rate of return.
Quick Summary
To annualize a quarterly return, start by going online to your investment account to find the quarterly rate of return (ROR) figure. Then divide that percentage by 100 to convert it into a decimal. Add 1 to your decimal. You probably can do that sum in your head, but grab your calculator for the next step. Use the exponent function to take that sum to the 4th power. Now subtract 1 from what you get, and you’ll have your annual ROR in decimal form. Want the percentage? Just multiply by 100!
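The arithmetic in the summary can be checked with a few lines of Python, using the article's example figures (the function name here is just for illustration):

```python
def annualize(rate, periods_per_year):
    """Compound a per-period rate of return over a full year."""
    return (1 + rate) ** periods_per_year - 1

# quarterly example: 1.5% per quarter, 4 quarters per year
quarterly = annualize(0.015, 4)
print(round(quarterly * 100, 2))   # 6.14 (percent)

# daily example: 2.13% earned over 17 days -> 365/17 periods per year
daily = annualize(0.0213, 365 / 17)
print(round(daily * 100, 2))       # 57.23 (percent)
```

Both results match the figures worked out by hand above.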
Tips
- "Quarterly return" is also the term used for tax returns that must be filed every three months by some employers, self-employed people and people who receive unemployment benefits.
Things You'll Need
- Quarterly return
- Calculator
- Pen
- Paper
Source: https://www.wikihow.com/Annualize-a-Quarterly-Return
As you may be aware, the primary way that data is represented within WebSphere Process Server is as a Business Object (BO). This is essentially a container that houses all the values of your data and can be manipulated or inspected within the various components that exist in WebSphere Process Server. This blog entry breaks down what needs to be understood about these Business Objects and how they are handled by WebSphere Process Server specifically. Please note, this entry is only directly applicable to EMF-style BOs, not lazy-parsing-style BOs (which are new in version 7 of WebSphere Process Server).
The first thing to know is that Business Objects are represented as XML Schemas in WebSphere Process Server. For instance, if you are using the WebSphere Integration Developer (WID) to create your object, you can observe that once you have finished modeling your BO in the BO Editor, it has created an xsd file for you within the module. This is nothing more than an XML schema containing element and attribute declarations. These schemas can also exist directly in a wsdl file. Regardless of where the schemas are defined (wsdls and xsds), the important aspect that differentiates the elements is their name as well as the namespace in which they are defined. What this means in the WebSphere Process Server world is that if there are multiple elements defined that have the same name and are defined in the same namespace, this has the potential to cause problems, even if the elements are defined in totally different files.
The reasoning for this is that when all these elements are loaded into the EMF system, which is how the BO is modeled at runtime, these two things are what tell them apart. You can essentially think of a namespace as a big bucket in EMF. Every element and attribute that is defined in a namespace is tossed into a common bucket and is known from then on as a "feature" (more on this in a moment). The only caveat is that there can only be one element or attribute with a specific name existing in this namespace bucket. So what essentially happens is that if you have defined two different elements with the same name in the same namespace, at runtime, only one of these will be usable. When a component attempts to use the other one, it will instead load the unexpected element, and this is where problems can arise, since the elements may actually have distinct structures. It then becomes a guessing game as to which element will actually be loaded and used (and this ultimately will depend on the classloading policy in place).
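As an illustration of such a collision (the element name and namespace here are invented, not from any real project), consider two schema files that declare the same element in the same targetNamespace with different structures:

```xml
<!-- CustomerA.xsd -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/bo">
  <xsd:element name="Customer">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="id" type="xsd:string"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

<!-- CustomerB.xsd: same name "Customer", same targetNamespace,
     different structure. Only one of these definitions ends up in
     the namespace "bucket" at runtime, so components expecting the
     other structure will load an unexpected element. -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/bo">
  <xsd:element name="Customer">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="accountNumber" type="xsd:int"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```

Renaming one of the elements, or moving it into its own targetNamespace, avoids the clash.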
Whether the root cause is something as mentioned above or some other problem related to your schemas, the exceptions in WebSphere Process Server can be confusing, since the data is stored in EMF and the terminology is slightly different. In EMF, the elements and attributes you would see in your XML schema are referred to as "features". So when you receive an exception of FeatureNotFound, this correlates to the system being unable to find a particular element or attribute within the object it is searching. The first place to check, of course, is whether such an element even exists, and if so, you can move on to ensuring that no duplicates exist.
Source: https://www.ibm.com/developerworks/mydeveloperworks/blogs/WebSphere_Process_Server/entry/i_know_what_business_is_i_know_what_an_object_is_but_what_is_a_business_object?lang=en
FIGURE 1-3 The installation process for Windows Store apps; the exact sequence is unimportant.

In fact, licensing terms are integrated into the Store; acquisition of an app implies acceptance of those terms. (However, it is perfectly allowable for apps to show their own license acceptance page on startup, as well as require an initial login to a service if applicable.) But here's an interesting point: do you remember the real purpose of all those lengthy, annoyingly all-caps licensing agreements that we pretend to read? Almost all of them basically say that you can install the software on only one machine. Well, that changes with Windows Store apps: instead of being licensed to a machine, they are licensed to the user, giving that user the right to install the app on up to five different devices. In this way Store apps are a much more personal thing than desktop apps have traditionally been. They are less general-purpose tools that multiple users share and more like music tracks or other media that really personalize the overall Windows experience. So it makes sense that users can replicate their customized experiences across multiple devices, something that Windows supports through automatic roaming of app data and settings between those devices. (More on that later.)

In any case, the end result of all this is that the app and its necessary structures are wholly ready to awaken on a device, as soon as the user taps a tile on the Start page or launches it through features like Search and Share. And because the system knows about everything that happened during installation, it can also completely reverse the process for a 100% clean uninstall—completely blowing away the appdata folders, for example, and cleaning up anything and everything that was put in the registry. This keeps the rest of the system entirely clean over time, even though the user may be installing and uninstalling hundreds or thousands of apps.
We like to describe this like the difference between having guests in your house and guests in a hotel. In your house, guests might eat your food, rearrange the furniture, break a vase or two, feed leftovers to the pets, stash odds and ends in the backs of drawers, and otherwise leave any number of irreversible changes in their wake (and you know desktop apps that do this, I'm sure!). In a hotel, on the other hand, guests have access only to a very small part of the whole structure, and even if they trash their room, the hotel can clean it out and reset everything as if the guest was never there.

Sidebar: What Is the Windows Library for JavaScript?

The HTML, CSS, and JavaScript code in a Windows Store app is only parsed, compiled, and rendered at run time. (See the "Playing in Your Own Room: The App Container" section below.) As a result, a number of system-level features for apps written in JavaScript, like controls, resource management, and default styling, are supplied through the Windows Library for JavaScript, or WinJS, rather than through the Windows Runtime API. This way, JavaScript developers see a natural integration of those features into the environment they already understand, rather than being forced to use different kinds of constructs.

WinJS, for example, provides an HTML implementation of a number of controls such that they appear as part of the DOM and can be styled with CSS like other intrinsic HTML controls. This is much more natural for developers than having to create an instance of some WinRT class, bind it to an HTML element, and style it through code or some other proprietary markup scheme. Similarly, WinJS provides an animations library built on CSS that embodies the Windows 8 user experience so that apps don't have to figure out how to re-create that experience themselves.

Generally speaking, WinJS is a toolkit that contains a number of independent capabilities that can be used together or separately. So WinJS also provides helpers for common JavaScript coding patterns, simplifying the definition of namespaces and object classes, handling of asynchronous operations (that are all over WinRT) through promises, and providing structural models for apps, data binding, and page navigation.
At the same time, it doesn't attempt to wrap WinRT unless there is a compelling scenario where WinJS can provide real value. After all, the mechanism through which WinRT is projected into JavaScript already translates WinRT structures into those familiar to JavaScript developers. All in all, WinJS is essential for and shared between every Store app written in JavaScript, and it's automatically downloaded and updated as needed when dependent apps are installed. We'll see many of its features throughout this book, though some won't cross our path. In any case, you can always explore what's available through the WinJS section of the Windows API reference.

Sidebar: Third-Party Libraries

WinJS is an example of a special shared library package that is automatically downloaded from the Windows Store for apps that depend on it. Microsoft maintains a few of these in the Store so that the package need be downloaded only once and then shared between apps. Shared third-party libraries are not currently supported. However, apps can freely use third-party libraries by bringing them into their own app package, provided of course that the libraries use only the APIs available to Windows Store apps.
Source: https://www.yumpu.com/en/document/view/59794928/microsoft-press-ebook-programming-windows-8-apps-with-html-css-and-javascript-pdf/33
Code:
import android
droid = android.Android()
message = raw_input("search:")
droid.webViewShow("",message)
Is it anywhere near being right?
Also, if anyone could point me to a good tutorial for making apps with Python 2.x, that would be much appreciated, as that is what I would like to learn to do (I've tried searching, but all I get is pygame tutorials; I want to do apps and be able to do it on my Android). Kivy, maybe? I don't know what it is, but I've heard it's something to do with apps.
So basically, is that code anywhere near being right?
Is Kivy a different language? If not, a tutorial on using it?
I think that's it. Thanks to anyone who answers.
Source: http://www.python-forum.org/viewtopic.php?p=4131
An Introduction to Elixir Applications
In my previous articles we have discussed various Elixir terms and written a hefty amount of code. What we have not discussed, however, is how to structure and organize your code so that it is easy to maintain and release.
Applications.
In this article you will learn what applications are, how they can be created, how to specify and install dependencies, and how to provide environment values. At the end of the article we will do some practice and create a web-based calculator.
I will be using Elixir 1.5 in this article (it was released a couple of months ago), but all the explained concepts should apply to version 1.4 as well.
Applications?
So let's see how we can create a new Elixir application!
New Application
To create a new application, all you need to do is run the following command:
mix new app_name
We can also provide the --sup flag to create an empty supervisor for us. Let's create a new application called Sample this way:
mix new sample --sup
This command will create a sample directory for you with a handful of files and folders inside. Let me quickly guide you through them:
- config folder contains a sole file config.exs that, as you can guess, provides configuration for the application. Initially it has some useful comments, but no configuration. Note, by the way, that the configuration provided in this file is only restricted to the application itself. If you are loading the application as a dependency, its config.exs will be effectively ignored.
- lib is the primary folder of the application that contains a sample.ex file and a sample folder with an application.ex file. application.ex defines a callback module with a start/2 function that creates an empty supervisor.
- test is the folder containing automated tests for the application. We won't discuss automated tests in this article.
- mix.exs is the file that contains all the necessary information about the application. There are multiple functions here. Inside the project function, you provide the app's name (as an atom), version, and environment. The application function contains information about the application module callback and runtime dependencies. In our case, Sample.Application is set as the application module callback (that can be treated as the main entry point), and it has to define a start/2 function. As already mentioned above, this function was already created for us by the mix tool. Lastly, the deps function lists build-time dependencies.
Dependencies
It is quite important to distinguish between runtime and build-time dependencies. Build-time dependencies are loaded by the mix tool during the compilation and are basically compiled into your application.
They can be fetched from a service like GitHub, for example, or from the hex.pm website, an external package manager that stores thousands of components for Elixir and Erlang. Runtime dependencies are started before the application starts. They are already compiled and available for us.
There are a couple of ways to specify build-time dependencies in a mix.exs file. If you'd like to use an application from the hex.pm website, simply say:
{:dependency_name, "~> 0.0.1"}
The first argument is always an atom representing the application's name. The second one is the requirement, a version that you desire to use; it is parsed by the Version module. In this example, ~> means that we wish to download version 0.0.1 or higher but less than 0.1.0. If we say ~> 1.0, it means we'd like to use a version greater than or equal to 1.0 but less than 2.0. There are also operators like ==, >, <, >=, and <= available.
It is also possible to directly specify a :git or a :path option:
{:gettext, git: "", tag: "0.1"}
{:local_dependency, path: "path/to/local_dependency"}
There is also a :github shortcut that allows us to provide only the owner's and a repo's name:
{:gettext, github: "elixir-lang/gettext"}
To download and compile all dependencies, run:
mix deps.get
This will install a Hex client if you don't have one and then check if any of the dependencies needs to be updated. For instance, you can specify Poison—a solution to parse JSON—as a dependency like this:
defp deps do
  [
    {:poison, "~> 3.1"}
  ]
end
Then run:
mix deps.get
You will see a similar output:
Running dependency resolution...
Dependency resolution completed:
  poison 3.1.0
* Getting poison (Hex package)
  Checking package ()
  Fetched package
Poison is now compiled and available on your PC. What's more, a mix.lock file will be created automatically. This file provides the exact versions of the dependencies to use when the application is booted.
To learn more about dependencies, run the following command:
mix help deps
Behaviour Again
Applications are behaviours, just like GenServer and supervisors, which we talked about in the previous articles. As I already mentioned above, we provide a callback module inside the mix.exs file in the following way:
def application do
  [
    mod: {Sample.Application, []}
  ]
end
Sample.Application is the module's name, whereas [] may contain a list of arguments to pass to the start/2 function. The start/2 function must be implemented in order for the application to boot properly.
The application.ex contains the callback module that looks like this:
defmodule Sample.Application do
  use Application

  def start(_type, _args) do
    children = []

    opts = [strategy: :one_for_one, name: Sample.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
The start/2 function must either return {:ok, pid} (with an optional state as the third item) or {:error, reason}.
Another thing worth mentioning is that applications do not really require the callback module at all. It means that the application function inside the mix.exs file may become really minimalistic:
def application do
  []
end
Such applications are called library applications. They do not have any supervision tree but can still be used as dependencies by other applications. One example of a library application would be Poison, which we specified as a dependency in the previous section.
Starting an Application
The easiest way to start your application is to run the following command:
iex -S mix
You will see an output similar to this one:
Compiling 2 files (.ex)
Generated sample app
A _build directory will be created inside the sample folder. It will contain .beam files as well as some other files and folders.
If you don't want to start an Elixir shell, another option is to run:
mix run
The problem, though, is that the application will stop as soon as the start function finishes its job. Therefore, you may provide the --no-halt key to keep the application running for as long as needed:
mix run --no-halt
The same can be achieved using the elixir command:
elixir -S mix run --no-halt
Note, however, that the application will stop as soon as you close the terminal where this command was executed. This can be avoided by starting your application in a detached mode:
elixir -S mix run --no-halt --detached
Application Environment
In order to read some parameter, use the fetch_env/2 function that accepts an app and a key:
Application.fetch_env(:sample, :some_key)
If the key cannot be found, an :error atom is returned. There is also a fetch_env!/2 function that raises an error instead, and get_env/3 that may provide a default value.
To store a parameter, use put_env/4:
Application.put_env(:sample, :key, :value)
The fourth value contains options and is not required to be set.
Lastly, to delete a key, employ the delete_env/3 function:
Application.delete_env(:sample, :key)
How do we provide a value for the environment when starting an app? Well, such parameters are set using the --erl key in the following way:
iex --erl "-sample key value" -S mix
You can then easily fetch the value:
Application.get_env :sample, :key # => :value
What if a user forgets to specify a parameter when starting the application? Well, most likely we need to provide a default value for such cases. There are two possible places where you can do this: inside the config.exs or inside the mix.exs file.
The first option is the preferred one because config.exs is the file that is actually meant to store various configuration options. If your application has lots of environment parameters, you should definitely stick with config.exs:
use Mix.Config

config :sample, key: :value
For a smaller application, however, it is quite okay to provide environment values right inside mix.exs by tweaking the application function:
def application do
  [
    extra_applications: [:logger],
    mod: {Sample.Application, []},
    env: [
      key: :value  # <== default environment values go here
    ]
  ]
end
Example: Creating a Web-Based CalcServer
Okay, in order to see applications in action, let's modify the example that was already discussed in my GenServer and Supervisors articles. This is a simple calculator that allows users to perform various mathematical operations and fetch the result quite easily.
What I want to do is make this calculator web-based, so that we can send POST requests to perform calculations and a GET request to grab the result.
Create a new lib/calc_server.ex file with the following contents:
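The original listing did not survive here; a minimal GenServer sketch consistent with the surrounding description (an add/1 operation, a result/0 query, and immediate termination when the initial value is not a number) might look like this:

```elixir
defmodule Sample.CalcServer do
  use GenServer

  # child_spec/1 is generated by `use GenServer`, so {Sample.CalcServer, 0}
  # in the supervisor's child list will call start_link(0).
  def start_link(initial) do
    GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  end

  def add(n), do: GenServer.cast(__MODULE__, {:add, n})

  def result, do: GenServer.call(__MODULE__, :result)

  # Refuse to start unless the initial result is a number.
  def init(initial) when is_number(initial), do: {:ok, initial}
  def init(_), do: {:stop, :not_a_number}

  def handle_cast({:add, n}, result), do: {:noreply, result + n}

  def handle_call(:result, _from, result), do: {:reply, result, result}
end
```

The module name and function names match how the router below calls into the server.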
We will only add support for the add operation. All other mathematical operations can be introduced in the same way, so I won't list them here to make the code more compact.
The CalcServer utilizes GenServer, so we get child_spec automatically and can start it from the callback function like this:
def start(_type, _args) do
  children = [
    {Sample.CalcServer, 0}
  ]

  opts = [strategy: :one_for_one, name: Sample.Supervisor]
  Supervisor.start_link(children, opts)
end
0 here is the initial result. It must be a number, otherwise CalcServer will immediately terminate.
Now the question is how do we add web support? To do that, we'll need two third-party dependencies: Plug, which will act as an abstraction library, and Cowboy, which will act as an actual web server. Of course, we need to specify these dependencies inside the mix.exs file:
defp deps do
  [
    {:cowboy, "~> 1.1"},
    {:plug, "~> 1.4"}
  ]
end
Now we can start the Plug application under our own supervision tree. Tweak the start function like this:
def start(_type, _args) do
  children = [
    Plug.Adapters.Cowboy.child_spec(
      :http,
      Sample.Router,
      [],
      [port: Application.fetch_env!(:sample, :port)]
    ),
    {Sample.CalcServer, 0}
  ]
  # ...
end
Here we are providing child_spec and setting Sample.Router:

Plug.Adapters.Cowboy.child_spec(
  :http,
  Sample.Router,
  [],
  [port: Application.fetch_env!(:sample, :port)]
)
Now provide the default port value inside the config.exs file:
config :sample, port: 8088
Great!
What about the router? Create a new lib/router.ex file with the following contents:
defmodule Sample.Router do
  use Plug.Router

  plug :match
  plug :dispatch
end
Now we need to define a couple of routes to perform addition and fetch the result:
get "/result" do
  conn |> ok(to_string(Sample.CalcServer.result))
end

post "/add" do
  fetch_number(conn) |> Sample.CalcServer.add
  conn |> ok
end
We are using get and post macros to define the /result and /add routes. Those macros will set the conn object for us. ok and fetch_number are private functions defined in the following way:
defp fetch_number(conn) do
  Plug.Conn.fetch_query_params(conn).params["number"]
  |> String.to_integer
end

defp ok(conn, data \\ "OK") do
  send_resp conn, 200, data
end
fetch_query_params/2 returns an object with all the query parameters. We are only interested in the number that the user sends to us. All parameters initially are strings, so we need to convert the number to an integer. send_resp/3 sends a response to the client with the provided status code and a body. We won't perform any error-checking here, so the code will always be 200, meaning everything is okay.
And, this is it! Now you may start the application in any of the ways listed above (for example, by typing iex -S mix) and use the curl tool to perform the requests:

curl  # => 0
curl -X POST  # => OK
curl  # => 1
Conclusion
In this article we have discussed Elixir applications and their purpose. You have learned how to create applications, provide various types of information, and list dependencies inside the mix.exs file. You've also seen how to store the configuration inside the app's environment and learned a couple of ways to start your application. Lastly, we have seen applications in action and created a simple web-based calculator.
Don't forget that the hex.pm website lists many hundreds of third-party applications ready for use in your projects, so be sure to browse the catalog and pick the solution that suits you!
Hopefully, you found this article useful and interesting. I thank you for staying with me and until the next time.
Source: Tuts Plus
Source: http://designncode.in/an-introduction-to-elixir-applications/
Welcome to a Matplotlib with Python 3+ tutorial series. In this series, we're going to be covering most aspects to the Matplotlib data visualization module. Matplotlib is capable of creating most kinds of charts, like line graphs, scatter plots, bar charts, pie charts, stack plots, 3D graphs, and geographic map graphs.
First, in order to actually use Matplotlib, we're going to need it!
If you have a later version of Python installed, you should be able to open cmd.exe/terminal and then run:
pip install matplotlib
Note: You may need to do C:/Python34/Scripts/pip install matplotlib if the above shorter version doesn't work.
If, when importing matplotlib, you get an error something like "no module named" and a module name, it means you need to also install that module. A common issue is that people do not have the module named "six." This means you need to pip install six.
Alternatively, you can head to Matplotlib.org and install by heading to the downloads section and downloading your appropriate version. Keep in mind that, just because you have a 64 bit operating system, you do not necessarily have a 64 bit version of Python. Chances are, you have 32 bit unless you tried to get 64 bit. Open IDLE and read the top. If it says you have 64 bit, you have 64 bit, if it says 32, then you have 32 bit. Once you have Python installed, you're ready to rumble. You can code this logic however you wish. I prefer to code using IDLE, but feel free to use whatever you prefer.
import matplotlib.pyplot as plt
This line imports the integral pyplot, which we're going to use throughout this entire series. We import pyplot as plt, and this is a traditional standard for Python programs using pyplot.
plt.plot([1,2,3],[5,7,4])
Next, we invoke the .plot method of pyplot to plot some coordinates. This .plot takes many parameters, but the first two here are 'x' and 'y' coordinates, which we've placed lists into. This means we have 3 coordinates according to these lists: (1,5), (2,7), and (3,4).
The plt.plot will "draw" this plot in the background, but we need to bring it to the screen when we're ready, after graphing everything we intend to.
plt.show()
With that, the graph should pop up. If not, sometimes it can pop under, or you may have gotten an error. Your graph should look like:
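Putting the pieces together, the whole first example is just:

```python
import matplotlib.pyplot as plt

# plot three points: (1, 5), (2, 7), (3, 4)
plt.plot([1, 2, 3], [5, 7, 4])

# bring the drawn plot to the screen
plt.show()
```

If the window does not appear, check the console for errors; on a headless system you could call plt.savefig("figure.png") instead of plt.show() to write the chart to a file.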
This window is a matplotlib window, which allows us to see our graph, as well as interact with it and navigate it. You can hover the graph and see the coordinates in the bottom right typically. You can also utilize the buttons. These may be in various locations, but, in the picture above, these are the buttons that are in the lower left corner.
Home Button
The home button will help you once you have begun navigating your chart. If you ever want to return back to the original view, you can click on this. Clicking this before you have navigated your graph will do nothing.
Forward/Back buttons
These buttons can be used like the forward and back buttons in your browser. You can click these to move back to the previous point you were at, or forward again.
Pan Axis
This cross-looking button allows you to click it, and then click and drag your graph around.
Zoom
The zoom button lets you click on it, then click and drag a square that you would like to zoom into specifically. Zooming in will require a left click and drag. You can alternatively zoom out with a right click and drag.
Configure Subplots
This button allows you to configure various spacing options with your figure and plot. Clicking it will bring up:
Each of those blue bars is a slider, which allows you to adjust the padding. Some of these won't do anything right now, because there aren't any other subplots. Left, bottom, right, and top adjust the padding of the figure from the edge of the window. Then wspace and hspace correspond to when you have multiple subplots, and this will act like "spacing" or "padding" between them.
Save Figure
This button will allow you to save your figure in various forms.
So there's a quick introduction to matplotlib, we have much more to cover!
Source: https://pythonprogramming.net/matplotlib-intro-tutorial/
In 1968 industry luminary Edsger Dijkstra wrote a now famous article entitled "GOTO Statement
Considered Harmful"[1] in which he made a case for programming without branching constructs. He and
many others have commented over the years that you can express algorithms more clearly without them,
and educators and language designers have labored to usher in a
goto-less programming world.
Have they succeeded? It depends. Their efforts have certainly raised the bar of structured programming,
with
goto-filled languages like FORTRAN and BASIC giving way to better structured languages such as
Fortran-77, Pascal, Modula, C, and Visual BASIC. More programmers certainly think structured nowadays.
When is the last time you saw a
goto in a technical article (besides this one, of course :-)?. Yet all popular
languages have always had
goto as a trap door, just in case you "needed" it.
Until now, that is. Java has no
goto and does very well, thank you very much. In this article I'll explain
why, as well as look at all the issues pertaining to controlling program flow, including exceptions.
So what is wrong with goto? Like anything else in life, the problem is not in the construct itself, but rather in how it is used/abused. My first language was FORTRAN-IV, which had no else nor the notion of a compound statement. Here's a sample:
      IF (X .LT. 0) GOTO 10
      IF (X .EQ. 0) GOTO 20
      N = 1
      Y = H(N)
      GOTO 30
10    N = -1
      Y = F(N)
      GOTO 30
20    N = 0
      Y = G(N)
30    CONTINUE

Okay, quick! What does this do? Can't you see the logic at a glance? If you can, I don't know if that's a good thing or not! Here's how you might write it in C or Java:
if (x < 0) {
    n = -1;
    y = f(n);
}
else if (x == 0) {
    n = 0;
    y = g(n);
}
else {
    n = 1;
    y = h(n);
}

Ah, much better! Of course seasoned C hackers might get carried away and do the following:
n = (x < 0) ? (f(-1), -1) : (x == 0) ? (g(0), 0) : (h(1), 1);

which is not pretty, I'll admit, but even this atrocity is easier to follow than the FORTRAN version because you don't have to jump all over the place. Don't try the line above in Java, though: it doesn't have a comma operator.
I tend to liken moving from branching to structured logic to the jump from assembly language to a high
level language. You can do anything in assembler, but programming in C is clearer and less error prone.
Likewise, you can express any logic by littering a sequence of statements with
gotos, but higher-level
constructs make your code more readable and easier to get right the first time.
I realize that in 1999 I might be preaching to the proverbial choir, but let's look at one more example to prove the point. What does the following BASIC program do?
140 lo = 1
150 hi = 100
160 if lo > hi then print "You cheated!" : goto 240
170 g = int((lo + hi) / 2)
180 print "Is it";g;" (L/H/Y)?"
190 input r$
200 if r$ = "L" then lo = g+1 : goto 160
210 if r$ = "H" then hi = g-1 : goto 160
220 if r$ <> "Y" then print "What? Try again..." : goto 190
230 print "What fun!"
240 print "Wanna play again?"
250 input r$
260 if r$ = "Y" then 140

Since I used reasonably named variables, you probably guessed that this program plays the game of "Hi-Lo": it uses binary search to guess a number between 1 and 100. The user responds to each guess by telling whether it is too high or too low. If the variables lo and hi ever cross (i.e., lo > hi), then the user gave erroneous input. But again, it is difficult to infer the logic without careful study. Can you readily see how many loops there are, and where they begin and end?
Listing 1 shows an equivalent Java program. Notice the boolean loop-control variables: done for the outer loop, and found for the inner. When it's time to terminate a loop, I just change the state of its control variable. This is the type of programming style Bohm and Jacopini had in mind.
But what if you need to terminate a loop from within, i.e., before the last statement of its body? Somehow
you need to skip the statements that follow. Following the rules of structured programming you'd need to
nest the remainder of the loop body in an
if statement, like this:
    boolean done = false;
    while (!done) {
        // <a bunch of statements here>
        if (<you DON'T need to exit the loop now>) {
            // <the rest of the loop body goes here>
        }
        else
            done = true;
    }

If you need to exit the loop in more than one spot, you have a whole lot of nesting going on! To reflect the logic more directly, Java, like C, has the break, continue, and return statements, which are just a restricted form of goto. The break statement exits the immediately enclosing loop or switch, whereas continue iterates on the enclosing loop. Using break in the loop above obviates the need for the control variable and makes the logic more self-evident:
    for (;;) {
        // <a bunch of statements here>
        if (<you need to exit the loop now>)
            break;
        // <the rest of the loop body goes here>
    }

So a little bit of goto ain't so bad. This is especially true with nested loops. The structured program in Listing 2 has three nested loops, and it wants to break out of all three loops when k becomes 1 in the innermost loop. To make this happen, it needs to set all loop control variables false. Java provides a better way via the labeled break, which allows you to say, in effect, "I want to break out of the loop at such and such a level of nesting." As the program in Listing 3 illustrates, you place a label (an identifier followed by a colon, as in C) immediately before the loop(s) you want to directly break out of, and then make that label the target of a break statement. Isn't it nice not to have to use extraneous boolean flags that have no direct bearing on the meaning of your program? Listing 4 has a version of Hi-Lo that uses a labeled break to allow the user to quit the game prematurely by typing the letter 'Q'. (In case you're wondering what a BufferedReader is, I'm not going to explain the I/O in this article. Trust me and stay tuned.)
Java also supports a labeled
continue, which breaks out of any intermediate loops to iterate on the loop
specified by the label. For example, if you replace
break with
continue in Listing 3, the output is
    0,0,0
    0,0,1
    1,0,0
    1,0,1

The branching constructs break, continue, and return, along with labeled break and continue, make an unbridled goto capability unnecessary, so Java doesn't support it.
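To make the labeled continue concrete, here is a small self-contained variant of Listing 3 with break replaced by continue. The class name Nested3 and the helper method visit are mine, added so the visited triples are easy to inspect; the loop logic is exactly the one described above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Nested3 {
    // Collects each visited (i,j,k) triple so the effect of the
    // labeled continue is easy to see.
    static List<String> visit() {
        List<String> out = new ArrayList<>();
        loop1:
        for (int i = 0; i < 2; ++i) {
            for (int j = 0; j < 2; ++j) {
                for (int k = 0; k < 2; ++k) {
                    out.add(i + "," + j + "," + k);
                    if (k == 1)
                        continue loop1; // resume with the next value of i
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        for (String s : visit())
            System.out.println(s);
        // Prints: 0,0,0  0,0,1  1,0,0  1,0,1 (one triple per line)
    }
}
```

When k hits 1, control jumps straight to the next iteration of the outermost loop, skipping the remaining j and k iterations, which is exactly the output quoted above.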
As an illustration, suppose you have functions
f,
g, and
h, which execute in a nested fashion (see
Listing
5). These functions produce side effects and do not need to return any value. Suppose further that during
the execution of
h a particular error might occur, and you want to return to the main program and start over.
The return-value technique requires
h to return a code to
g, then
g to
f and
f to
main. In this case you
have to alter your functions' signatures to accommodate the error handling, and error handling code is
scattered throughout your program (see Listing 6). To simulate errors in this example, I use Java's random
number generator (class Random) in
h to return a 1 to indicate an error condition, 0 otherwise. The static
variable
seed holds a number you type on the command line to seed the random number generator.
What a mess! Why not just "jump" from
h to
main? With exceptions you can. In Listing 7 I restored
f and
g to their original form. Now if an error occurs in
h, I throw an exception of type MyError, which is
caught in
main. As you can see, exception-handling syntax in Java is virtually identical to C++: you wrap
code to be exception-tested in a
try block at whatever level suits you, followed by one or more exception
handlers that catch objects of a specified class. To raise an exception you use the
throw keyword. When
an exception is thrown, execution retraces its way back up the stack until it finds a handler that takes a
parameter of the same type (or of a supertype). The key differences between Java and C++ exception
handling are:
- A finally clause that facilitates program cleanup in the presence of exceptions (see below).
Since readLine can throw the checked IOException, you can wrap a call to readLine as follows:

    char r;
    try {
        r = in.readLine().toUpperCase().charAt(0);
    }
    catch (IOException x) {
        // Abort after read error:
        System.out.println("read error " + x);
        System.exit(-1);
    }

Unchecked exceptions include things that are difficult to detect at compile-time, such as an array index out of bounds. These exceptions can occur almost anywhere, and it would be ridiculous to force the developer to specify all such exceptions in all method specifications. Unchecked exceptions derive from either RuntimeException or Error.
    static void copy(String file) {
        FileReader r = new FileReader(file);
        int c;
        while ((c = r.read()) != -1)
            System.out.write(c);
        r.close();
    }

This won't compile because the FileReader constructor, as well as read, write, and close, throws checked exceptions. The easy way to make the compiler happy is to add IOException to copy's specification:
    static void copy(String file) throws IOException {
        FileReader r = new FileReader(file);
        ...

This way the caller will get an exception so s/he knows something went wrong. So the compiler is happy, but if read or write throws an exception the file doesn't get closed. One solution is to catch the exception and close the file, but you have to rethrow the exception so the caller still gets it, like this:

    static void copy(String file) throws IOException {
        FileReader r = new FileReader(file);
        int c;
        try {
            while ((c = r.read()) != -1)
                System.out.write(c);
        }
        catch (IOException x) {
            r.close();
            throw x; // rethrow
        }
        r.close();
    }

It's a pain to have two calls to close, and in a complicated program where many exceptions can be thrown this technique is too tedious and error-prone to be acceptable. The C++ solution is to wrap the lifetime of the file in an object and have the destructor close the file. Well, Java doesn't have destructors, but it does have the finally clause, which is an even better solution in this case:
    static void copy(String file) throws IOException {
        FileReader r = new FileReader(file);
        int c;
        try {
            while ((c = r.read()) != -1)
                System.out.write(c);
        }
        finally {
            r.close();
        }
    }

Any code in a finally clause is executed no matter what, whether an exception occurred or not, or even if a return statement occurs within the try block or any of its handlers. As the example above shows, you don't need to have a catch clause to use finally. Since all I want to do in this case is to close the file and let the exception pass back to the caller, I didn't need one. A complete program that uses the copy method to print a file you specify on the command line is in Listing 8. Note that when you print an exception object with System.out.println, it gives the type of the exception with an explanatory message.
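The claim that finally runs even when the try block returns is easy to demonstrate. In this tiny sketch (class and field names are mine), f returns from inside the try, yet the finally clause still appends to the log before control leaves the method:

```java
public class FinallyDemo {
    static StringBuilder log = new StringBuilder();

    static int f() {
        try {
            log.append("try;");
            return 1; // return from inside try...
        } finally {
            // ...but this still runs before f() actually returns
            log.append("finally;");
        }
    }

    public static void main(String[] args) {
        int r = f();
        System.out.println(r + " " + log); // prints: 1 try;finally;
    }
}
```

The same ordering holds when the try block throws: the finally clause executes before the exception propagates to the caller, which is what makes it suitable for the close call in copy above.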
So Java does away with the unrestricted goto, but gives the programmer enough flexibility to write readable and convenient structured code. Java's enforced exception specifications finesse surprise exceptions, and the finally clause helps guarantee proper resource management with minimum hassle. Nice.
[LISTING 1 - Hi-Lo in Java]

    import java.io.*;

    public class Hilo {
        public static void main(String[] args) throws IOException {
            BufferedReader in =
                new BufferedReader(new InputStreamReader(System.in));
            boolean done = false;
            while (!done) {
                boolean found = false;
                int lo = 1, hi = 100, guess = 0;
                while (!found && lo <= hi) {
                    guess = (lo + hi) / 2;
                    System.out.println("Is it " + guess + "?");
                    char r = in.readLine().toUpperCase().charAt(0);
                    if (r == 'L')
                        lo = guess + 1;
                    else if (r == 'H')
                        hi = guess - 1;
                    else if (r != 'Y')
                        System.out.println("Try again...");
                    else
                        found = true;
                }
                if (lo > hi)
                    System.out.println("You cheated!");
                else
                    System.out.println("Your number was " + guess);
                System.out.println("Want to play again?");
                done = in.readLine().toUpperCase().charAt(0) != 'Y';
            }
        }
    }
[LISTING 2 - Exits a nested loop via boolean flags]

    public class Nested {
        public static void main(String[] args) {
            boolean done1 = false;
            for (int i = 0; !done1 && i < 2; ++i) {
                boolean done2 = false;
                for (int j = 0; !done2 && j < 2; ++j) {
                    boolean done3 = false;
                    for (int k = 0; !done3 && k < 2; ++k) {
                        System.out.println(i + "," + j + "," + k);
                        if (k == 1)
                            done3 = done2 = done1 = true;
                    }
                }
            }
        }
    }
    /* Output:
    0,0,0
    0,0,1
    */
[LISTING 3 - Exits a nested loop with labeled break]

    public class Nested2 {
        public static void main(String[] args) {
            loop1:
            for (int i = 0; i < 2; ++i) {
                for (int j = 0; j < 2; ++j) {
                    for (int k = 0; k < 2; ++k) {
                        System.out.println(i + "," + j + "," + k);
                        if (k == 1)
                            break loop1;
                    }
                }
            }
        }
    }
[LISTING 4 - Hi-Lo with breaks]

    import java.io.*;

    public class Hilo2 {
        public static void main(String[] args) throws IOException {
            BufferedReader in =
                new BufferedReader(new InputStreamReader(System.in));
            outer:
            for (;;) {
                int lo = 1, hi = 100, guess = 0;
                while (lo <= hi) {
                    guess = (lo + hi) / 2;
                    System.out.println("Is it " + guess + "?");
                    char r = in.readLine().toUpperCase().charAt(0);
                    if (r == 'L')
                        lo = guess + 1;
                    else if (r == 'H')
                        hi = guess - 1;
                    else if (r == 'Q')
                        break outer;
                    else if (r != 'Y')
                        System.out.println("Try again...");
                    else
                        break;
                }
                if (lo > hi)
                    System.out.println("You cheated!");
                else
                    System.out.println("Your number was " + guess);
                System.out.println("Want to play again?");
                if (in.readLine().toUpperCase().charAt(0) != 'Y')
                    break;
            }
        }
    }
[LISTING 5 - Illustrates nested method calls]

    public class Deep {
        static void f() {
            System.out.println("doing f...");
            g();
        }
        static void g() {
            System.out.println("doing g...");
            h();
        }
        static void h() {
            System.out.println("doing h...");
        }
        public static void main(String[] args) {
            f();
            System.out.println("back in main");
        }
    }
    /* Output:
    doing f...
    doing g...
    doing h...
    back in main
    */
[LISTING 6 - Uses return codes for errors]

    import java.util.*; // For class Random

    public class Deep2 {
        static long seed;
        static int f() {
            System.out.println("doing f...");
            return g();
        }
        static int g() {
            System.out.println("doing g...");
            return h();
        }
        static int h() {
            Random r = new Random(seed);
            int code = r.nextInt(2);
            if (code == 0)
                System.out.println("doing h...");
            return code;
        }
        public static void main(String[] args) {
            seed = Long.parseLong(args[0]);
            int code = f();
            if (code != 0)
                System.out.println("f() returned " + code);
            System.out.println("back in main");
        }
    }
    /* Output of "java Deep2 0":
    doing f...
    doing g...
    doing h...
    back in main

    Output of "java Deep2 1":
    doing f...
    doing g...
    f() returned 1
    back in main
    */
[LISTING 7 - Illustrates exceptions]

    import java.util.*;

    class MyError extends Exception {}

    public class Deep3 {
        static long seed;
        static void f() throws MyError {
            System.out.println("doing f...");
            g();
        }
        static void g() throws MyError {
            System.out.println("doing g...");
            h();
        }
        static void h() throws MyError {
            Random r = new Random(seed);
            if (r.nextInt(2) != 0)
                throw new MyError();
            System.out.println("doing h...");
        }
        public static void main(String[] args) {
            seed = Long.parseLong(args[0]);
            try {
                f();
            }
            catch (MyError x) {
                System.out.println("MyError occurred");
            }
            System.out.println("back in main");
        }
    }
    /* Output of "java Deep3 1":
    doing f...
    doing g...
    MyError occurred
    back in main
    */
[LISTING 8 - Illustrates the finally clause]

    import java.io.*;

    public class Copy {
        static void copy(String file) throws IOException {
            FileReader r = new FileReader(file);
            int c;
            try {
                while ((c = r.read()) != -1)
                    System.out.write(c);
            }
            finally {
                r.close();
            }
        }
        public static void main(String[] args) {
            try {
                copy(args[0]);
            }
            catch (IOException x) {
                System.out.println(x);
            }
        }
    }
    /* Output with a non-existent file:
    java.io.FileNotFoundException: foo (The system cannot find the file specified)
    */
http://www.freshsources.com/may99.html
6.4: Relational Operators
    #include <iostream>
    using namespace std;

    // Main function
    int main() {
        int num1, num2;
        num1 = 33;
        num2 = 99;

        if (num1 != num2)
            cout << num1 << " is not equal to " << num2 << endl;

        if (num1 >= num2)
            cout << num1 << " is greater than " << num2 << endl;
        else
            cout << num1 << " is smaller than " << num2 << endl;

        return 0;
    }
Boolean expressions in the C++ programming language evaluate to a value of either 1 for true or 0 for false. So, when we look at the first condition, it compares the two values using the relational operator and then yields true (1) or false (0). This determines whether the if statement's branch is taken. If we were using string variables, the comparisons would be based on the numerical values of the letters.

Be careful. In math you are familiar with using the symbol = to mean equal and ≠ to mean not equal. In the C++ programming language, ≠ is not used, and the = symbol means assignment.
Adapted from: "Boolean Data Type" by Kenneth Busbee, Download for free at is licensed under CC BY 4.0
https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/06%3A_Conditional_Execution/6.04%3A_Relational_Operators
SubcontractorServices implementation in thick client.
Cesar Cardozo
Greenhorn
Joined: Jun 30, 2013
Posts: 14
posted
Jul 13, 2013 00:27:36
Hi guys.
As I had commented in my previous post, I have almost finished my project, but I found another "possible" weakness. I'm using a thick client because of the use of cookies. So on the client side reside the GUI, a controller and the services. The server holds the database connection.
A big issue that most of the candidates had found with this thick client is that the DB interface doesn't throw the
RemoteException
in its methods when we decide to use RMI. After searching approaches in this forum and reading alternatives in other sources I opted for implementing the Proxy
pattern
to have the DB interface in the client and the server side.
What I did concretely was to develop a remote adapter interface that has the same methods as the DB interface, but with a RemoteException added to each one. The implementation of this remote interface is essentially a wrapper of my Data class. Then I simply register this object in the RMI registry. Of course, this is on the server side.
In the client side I created a Proxy class that implements the DB interface and in the constructor I look up the remote adapter object registered in the RMI. I found this approach pretty simple.
Now the "dark" side of this solution is: the proxy class is a wrapper of a remote object but still somehow has to deal with the RemoteExceptions. I don't remember well, but in one thread in this forum I read that someone wrote "and the proxy class eliminates the RemoteExceptions". My mind was blown because I had learned that I should not swallow these exceptions just because the DB interface doesn't throw them!!!
Then my solution was to create a NetworkException that extends a
RuntimeException
and declare it as part of the API of the DBProxy class.
    public class DBProxy implements DB {
        ...
        // NetworkException is a RuntimeException.
        public String[] read(int recNo) throws RecordNotFoundException, NetworkException {
            try {
                // dbAdapter is a remote object registered in the RMI registry.
                return dbAdapter.read(recNo);
            } catch (RemoteException ex) {
                throw new NetworkException(ex.getMessage());
            }
        }
        ....
    }
The compiler is happy and so am I. Everything is still simple.

Now, which class is going to use this DBProxy that implements the DB interface? Of course, a SubcontractorServices class. This class takes a DB interface in its constructor. Also, this class implements a bookSubcontractor method and a searchSubcontractors method. Then a programmer knows that these two methods depend only on the structure of the DB interface methods.

However, things get gloomy when you reason this: what if I pass a DBProxy class in the constructor? This is valid because the DBProxy class implements the DB interface, which is what the SubcontractorServices class needs. But what if, in the middle of one operation, for example, the DBProxy read method throws a NetworkException? How does the programmer know that he has to catch this exception if it is not documented in the DB interface?
I as the developer of these classes know that I have to catch this NetworkException(
RuntimeException
).
    public class SubcontractorServicesImpl {

        public SubcontractorServicesImpl(DB database) {
            ...
        }

        public List<Subcontractor> searchSubcontractors(String[] criteria)
                throws ServicesException {
            try {
                // I use the find and read methods within the logic.
                ...
            } catch (RecordNotFoundException ex) {
                throw new ServicesException(ex.toString());
            } catch (NetworkException ex) {
                throw new ServicesException(ex.toString());
            }
        }
The solution is simple again: I just have to catch the NetworkException in a catch clause that accepts a NetworkException. But again, I am the developer and know that someone could pass either a Data class (standalone mode) or a Proxy class (client mode), and thus I am aware that I have to catch this unchecked NetworkException.
But.....someone that is not the developer will not have a clue that the read method could throw the NetworkException simply because is not documented in the DB interface. If a junior programmer reads the code he may think, where the hell does this NetworkException come from???.
My idea is to simply document in the SubcontractorServices class that a DBProxy class could be passed and this class throws the unchecked NetworkException in each one of its methods.
What do you think, it is enough to document this "ghost" exception in the SubcontractorServices class? The logic of the application is simple at least for me. In the standalone mode you first create your Data class and then pass it to the SubcontractorServices class. In the client mode you first create your DBProxy class and then pass it to the SubcontractorServices class.
Regards!!.
Cesar Cardozo
Greenhorn
Joined: Jun 30, 2013
Posts: 14
posted
Jul 13, 2013 01:22:40
I just checked Andrew Monkhouse's approach. He did something similar but in the controller class.
Inside the controller he requests a specific DBClient implementation depending on the application mode. In his solution he uses a factory instead of passing the DBClient to the constructor of the controller class.
The DBClient class throws
IOException
in its methods and that is how he handles the
RemoteException
issue.
Now the way he deals with the RemoteExceptions in the controller class (which in my case would be the SubcontractorServices class) is to catch them using a general Exception. However, in his case another programmer who looked at his code would know why that exception is there, to treat RemoteExceptions, simply because it is declared in the DBClient class as a checked exception using the IOException.
In my project I could use the same technique and have a catch with a general Exception to indicate: "... and any other exception (including the unchecked NetworkException) will be treated here." Of course I will document it and every other programmer would be aware of what is going on.
K. Tsang
Bartender
Joined: Sep 13, 2007
Posts: 2757
posted
Jul 13, 2013 01:23:05
0
After reading this kinda long post, I personally don't recommend using a
RuntimeException
.
My understanding is that your basic problem is how the client knows which service to call (local or network). Right?
I figured you used RMI. At least here is my version from when I did it. The remote service implementation delegates to the local service implementation. I have an interface that extends the Oracle-provided interface and that throws DatabaseException and some other exception I don't remember. So when, say, a RemoteException or something like a RecordNotFoundException is thrown in the services, I rethrow it as a DatabaseException, for example, and inform the user.
K. Tsang
JavaRanch
SCJP5
SCJD
OCPJP7
OCPWCD5
OCPBCD5
OCPWSD5
OCMJEA5
part 1
Cesar Cardozo
Greenhorn
Joined: Jun 30, 2013
Posts: 14
posted
Jul 13, 2013 01:54:32
Hi, thanks for your reply.
I have an interface that extends the Oracle provided interface that throws DatabaseException
I also have an extended interface to add my custom methods, but the original DB interface doesn't throw a DatabaseException. I am only allowed to throw a DuplicateKeyException or a RecordNotFoundException (which extend DatabaseException).

So if a RemoteException happens, I can't rethrow a DatabaseException, because it is a more general class and is not declared in Oracle's interface. The other alternative was to rethrow one of the allowed exceptions, but it would not make sense to rethrow a RecordNotFoundException, for example, if I get a RemoteException.
Roel De Nijs
Bartender
Joined: Jul 19, 2004
Posts: 6293
posted
Jul 13, 2013 03:38:18
Pfew, just read the whole thread
What K. Tsang obviously missed (I guess) in your explanation is that you opted for a thick client, so you have to stick with the provided interface (and that's why DBProxy implements DB). He (and I too, by the way) opted for a thin client approach. So we created a business service interface (with search and book methods) and the signatures of the methods were completely up to us to decide. So the RemoteException was not a problem, and throwing a checked DatabaseException from these methods was also not a problem.

Regarding Andrew's book: he was very lucky with the required-to-implement interface he got (from himself): all the methods throw an IOException. So handling the RemoteException isn't a problem, because it's a subclass of IOException. And I'm pretty sure that's done on purpose, otherwise someone could copy/paste his source code, alter some variable names, submit and pass this certification. So now you (as a reader and potential certified Java developer) have to do quite a lot of thinking. I think Andrew did a great job in writing a book that covers all aspects of this certification, but you can only use the concepts described in the book; you can't copy/paste the code from the book for your own assignment. I guess that was the very tricky part of writing this book.
I think your problem isn't a really big issue and can be easily solved. I would simply add a javadoc comment to the SubcontractorServicesImpl class (not the interface). Something like: "Important note: This class can be used to perform both local and network calls. When a network call fails a NetworkException will be thrown, so don't forget to handle it appropriately!". Any other programmer has to read and use the API correctly when developing new features or changing existing ones. And in your choices.txt you simply add an explanation of why you introduced this NetworkException and why it must be a RuntimeException (because you can't change the required interface; changing it is a violation of a must requirement), and you are completely ready for submission!
Good luck!
PS. Pfew, this was a long reply too
SCJA
, SCJP (
1.4
|
5.0
|
6.0
),
SCJD
OCAJP 7
Cesar Cardozo
Greenhorn
Joined: Jun 30, 2013
Posts: 14
posted
Jul 13, 2013 10:29:17
Well, ..... done!!! I needed your approval. Yes, I have a SubcontractorServices interface and a SubcontractorServicesImpl, which is the implementation class. I hope that someone else who reads this thread doesn't get confused because I said that SubcontractorServices was a class.
Thanks again K. Tsang and Roel for your help!! (Next friday is my deadline, so just one more week ..... just one more week....
)
http://www.coderanch.com/t/615694/java-developer-SCJD/certification/SubcontractorServices-implementation-thick-client
Complex.Reciprocal Method (Complex)

Returns the multiplicative inverse (the reciprocal) of a complex number; multiplying a number by its reciprocal yields Complex.One.
    using System;
    using System.Numerics;

    public class Example {
        public static void Main() {
            Complex[] values = { new Complex(1, 1),
                                 new Complex(-1, 1),
                                 new Complex(10, -1),
                                 new Complex(3, 5) };
            foreach (Complex value in values) {
                Complex r1 = Complex.Reciprocal(value);
                Console.WriteLine("{0:N0} x {1:N2} = {2:N2}", value, r1, value * r1);
            }
        }
    }
    // The example displays the following output:
    //    (1, 1) x (0.50, -0.50) = (1.00, 0.00)
    //    (-1, 1) x (-0.50, -0.50) = (1.00, 0.00)
    //    (10, -1) x (0.10, 0.01) = (1.00, 0.00)
    //    (3, 5) x (0.09, -0.15) = (1.00, 0.00)
Available since 8
.NET Framework: Available since 4.0
Portable Class Library: Supported in portable .NET platforms
Silverlight: Available since 4.0
Windows Phone: Available since 8.1
https://msdn.microsoft.com/en-us/library/system.numerics.complex.reciprocal.aspx
Hi all,
I just started using the Hadoop DFS last night, and it has already
solved a big performance problem we were having with throughput from
our shared NFS storage. Thanks for everyone who has contributed to
that project.
I wrote my own MapReduce implementation, because I needed two
features that Hadoop didn't have: Grid Engine integration and easy
record I/O (described below). I'm writing this message to see if
you're interested in these ideas for Hadoop, and to see what ideas I
might learn from you.
Grid Engine: All the machines available to me run Sun's Grid Engine
for job submission. Grid Engine is important for us, because it
makes sure that all of the users of a cluster get their fair share of
resources--as far as I can tell, the JobTracker assumes that one user
owns the machines. Is this shared scenario something you're interested in supporting? Would you consider supporting job submission systems
like Grid Engine or Condor?
Record I/O: My implementation is something like
org.apache.hadoop.record implementation, but with a couple of
twists. In my implementation, you give the system a simple Java
class, like this:
public class WordCount {
public String word;
public long count;
}
and my TypeBuilder class generates code for all possible orderings of
this class (order by word, order by count, order by word then count,
order by count then word). Each ordering has its own hash function
and comparator.
In addition, each ordering has its own serialization/deserialization
code. For example, if we order by count, the serialization code
stores only differences between adjacent counts to help with
compression.
All this code is grouped into an Order object, which is accessed like
this:
String[] fields = { "word" };
Order<WordCount> order = (new WordCountType()).getOrder( fields );
This order object contains a hash function, a comparator, and
serialization logic for ordering WordCount objects by word.
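To make the idea concrete, here is a minimal sketch of what one such generated Order might look like for the "word" field. All names here (WordOrder, the hash method, the WordCount constructor) are my own illustrative assumptions about the described design, not Trevor's actual generated code:

```java
import java.util.Comparator;

// Hypothetical sketch of an Order for the "word" field: it bundles a
// comparator and a matching hash function for that one ordering.
public class WordOrder implements Comparator<WordCount> {
    @Override
    public int compare(WordCount a, WordCount b) {
        return a.word.compareTo(b.word); // order by the "word" field only
    }

    public int hash(WordCount w) {
        return w.word.hashCode(); // hash must agree with the ordering field
    }
}

class WordCount {
    public String word;
    public long count;

    WordCount(String w, long c) {
        word = w;
        count = c;
    }
}
```

A full generated Order would presumably also carry the delta-encoding serialization logic for that field ordering, as described above.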
Is this code you'd be interested in?
Thanks,
Trevor
(by the way, Doug, you may remember me from a panel at the OSIR
workshop this year on open source search)
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/200610.mbox/%3C2BB66C30-2DD8-48F8-BFC6-A3A7D02CFC70@cs.umass.edu%3E
|
Inheritance is one of the key features of Object-oriented programming in C++. It allows us to create a new class (derived class) from an existing class (base class).
The derived class inherits the features from the base class and can have additional features of its own. For example,
    class Animal {
        // eat() function
        // sleep() function
    };

    class Dog : public Animal {
        // bark() function
    };
Here, the Dog class is derived from the Animal class. Since Dog is derived from Animal, members of Animal are accessible to Dog.

Notice the use of the keyword public while inheriting Dog from Animal:

    class Dog : public Animal {...};

We can also use the keywords private and protected instead of public. We will learn about the differences between using private, public and protected later in this tutorial.
is-a relationship
Inheritance is an is-a relationship. We use inheritance only if an is-a relationship is present between the two classes.
Here are some examples:
- A car is a vehicle.
- Orange is a fruit.
- A surgeon is a doctor.
- A dog is an animal.
Example 1: Simple Example of C++ Inheritance
    // C++ program to demonstrate inheritance
    #include <iostream>
    using namespace std;

    // base class
    class Animal {
       public:
        void eat() {
            cout << "I can eat!" << endl;
        }

        void sleep() {
            cout << "I can sleep!" << endl;
        }
    };

    // derived class
    class Dog : public Animal {
       public:
        void bark() {
            cout << "I can bark! Woof woof!!" << endl;
        }
    };

    int main() {
        // Create object of the Dog class
        Dog dog1;

        // Calling members of the base class
        dog1.eat();
        dog1.sleep();

        // Calling member of the derived class
        dog1.bark();

        return 0;
    }
Output
    I can eat!
    I can sleep!
    I can bark! Woof woof!!
Here, dog1 (the object of derived class Dog) can access members of the base class Animal. It's because Dog is inherited from Animal.

    // Calling members of the Animal class
    dog1.eat();
    dog1.sleep();
C++ protected Members
The access modifier
protected is especially relevant when it comes to C++ inheritance.
Like
private members,
protected members are inaccessible outside of the class. However, they can be accessed by derived classes and friend classes/functions.
We need
protected members if we want to hide the data of a class, but still want that data to be inherited by its derived classes.
To learn more about protected, refer to our C++ Access Modifiers tutorial.
Example 2: C++ protected Members

    // C++ program to demonstrate protected members
    #include <iostream>
    #include <string>
    using namespace std;

    // base class
    class Animal {
       private:
        string color;

       protected:
        string type;

       public:
        void eat() {
            cout << "I can eat!" << endl;
        }

        void sleep() {
            cout << "I can sleep!" << endl;
        }

        void setColor(string clr) {
            color = clr;
        }

        string getColor() {
            return color;
        }
    };

    // derived class
    class Dog : public Animal {
       public:
        void setType(string tp) {
            type = tp;
        }

        void displayInfo(string c) {
            cout << "I am a " << type << endl;
            cout << "My color is " << c << endl;
        }

        void bark() {
            cout << "I can bark! Woof woof!!" << endl;
        }
    };

    int main() {
        // Create object of the Dog class
        Dog dog1;

        // Calling members of the base class
        dog1.eat();
        dog1.sleep();
        dog1.setColor("black");

        // Calling member of the derived class
        dog1.bark();
        dog1.setType("mammal");

        // Using getColor() of dog1 as argument
        // getColor() returns string data
        dog1.displayInfo(dog1.getColor());

        return 0;
    }
Output
    I can eat!
    I can sleep!
    I can bark! Woof woof!!
    I am a mammal
    My color is black
Here, the variable type is protected and is thus accessible from the derived class Dog. We can see this as we have initialized type in the Dog class using the function setType().

On the other hand, the private variable color cannot be initialized in Dog:

    class Dog : public Animal {
       public:
        void setColor(string clr) {
            // Error: member "Animal::color" is inaccessible
            color = clr;
        }
    };
Also, since the protected keyword hides data, we cannot access type directly from an object of the Dog or Animal class:

    // Error: member "Animal::type" is inaccessible
    dog1.type = "mammal";
Access Modes in C++ Inheritance
In our previous tutorials, we have learned about C++ access specifiers such as public, private, and protected.
So far, we have used the
public keyword in order to inherit a class from a previously-existing base class. However, we can also use the
private and
protected keywords to inherit classes. For example,
class Animal { // code }; class Dog : private Animal { // code };
class Cat : protected Animal { // code };
The various ways we can derive classes are known as access modes. These access modes have the following effect:
- public: If a derived class is declared in public mode, then the members of the base class are inherited by the derived class just as they are.
- private: In this case, all the members of the base class become private members in the derived class.
- protected: The public members of the base class become protected members in the derived class.

The private members of the base class are always private in the derived class.
To learn more, visit our C++ public, private, protected inheritance tutorial.
Member Function Overriding in Inheritance
Suppose the base class and the derived class have member functions with the same name and arguments. If we create an object of the derived class and try to access that member function, the member function in the derived class is invoked instead of the one in the base class. In other words, the member function of the derived class overrides the member function of the base class.
Learn more about Function overriding in C++.
Recommended Reading: C++ Multiple Inheritance
Call an Action Server from Python code
Hi,

I have two models:

- calculation_function: inherits the "ir.actions.server" model
- indicator: has the calculation_function model as an attribute

I want to call this server action (calculation_function) from a method that I wrote in the Indicator model. I tried to call the run method (indicator.calculation_function.run()), but it doesn't work.
I want to run a server action from my Python code.
You do not need 'run' for this. In order for the method to be executed, initiate it in your own method; since we are speaking about ir.actions.server, you may return it from your method. Important:

* the decorator should be @api.multi. With @api.one the method would be executed, but the window would not show up.
* you can return a window action only from another action (e.g. a button click). It is impossible to return a window from 'create', 'write', 'inverse', 'compute', etc. using standard Odoo tools. Again, the method would be executed, but no window would appear.
E.g.:
@api.multi
def your_method(self):
    self.ensure_one()
    return self.indicator.calculation_function()
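Outside a running Odoo instance, the general shape of what such a method returns can be sketched in plain Python. The dict keys follow Odoo's ir.actions.act_window convention; the model name below is hypothetical and only for illustration:

```python
# Plain-Python sketch: a method that triggers a window action in Odoo
# conventionally returns a dictionary describing that action.
def open_indicator_form(indicator_id):
    """Return an ir.actions.act_window-style dict for one indicator record."""
    return {
        "type": "ir.actions.act_window",
        "res_model": "my.indicator",   # hypothetical model name
        "res_id": indicator_id,
        "view_mode": "form",
        "target": "current",
    }

action = open_indicator_form(42)
print(action["type"])  # ir.actions.act_window
```

In a real Odoo method, this dictionary (or the record of an ir.actions.server, as shown above) is what you return so the client opens the window.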
create vendor/plugins/acts_as_blabbermouth/tasks
create vendor/plugins/acts_as_blabbermouth/test
create vendor/plugins/acts_as_blabbermouth/README
create vendor/plugins/acts_as_blabbermouth/MIT-LICENSE
create vendor/plugins/acts_as_blabbermouth/Rakefile
create vendor/plugins/acts_as_blabbermouth/init.rb
create vendor/plugins/acts_as_blabbermouth/install.rb
create vendor/plugins/acts_as_blabbermouth/uninstall.rb
create vendor/plugins/acts_as_blabbermouth/lib/acts_as_blabbermouth.rb
create vendor/plugins/acts_as_blabbermouth/tasks/acts_as_blabbermouth_tasks.rake
create vendor/plugins/acts_as_blabbermouth/test/acts_as_blabbermouth_test.rb
As you can see, all of the boilerplate code has been created in the vendor/plugins/acts_as_blabbermouth directory.
The README and MIT-LICENSE are just generic text files that you should fill out - generally, the README file is the first place a new user will look for instructions on the plug-in.
The lib directory will hold the guts of your plugin. The tasks directory is a place where you can store any rake tasks that your plug-in might need. The test directory is, well, pretty self-explanatory: just as you can test your Rails app, you can test your plug-in too. The install.rb and uninstall.rb files are called when installing and uninstalling your plug-in; you can place code here to initialize your environment and to clean up after yourself. Finally, init.rb is the file that Rails actually calls to load your plug-in.
The main file we have to worry about is /lib/acts_as_blabbermouth.rb - this is where our main code will go.
Firstly, we need to include the acts_as_blabbermouth.rb file into the environment. This is done by adding the following line to the init.rb file:
require File.dirname(__FILE__) + '/lib/acts_as_blabbermouth'
Next, we add the following code to lib/acts_as_blabbermouth.rb
# ActsAsBlabbermouth
module ActiveRecord #:nodoc:
  module Acts #:nodoc:
    module Blabbermouth #:nodoc:
      def self.included(base)
        base.extend(ClassMethods)
      end

      module ClassMethods
        def acts_as_blabbermouth
          include ActiveRecord::Acts::Blabbermouth::InstanceMethods
          extend ActiveRecord::Acts::Blabbermouth::SingletonMethods
        end
      end

      module SingletonMethods
        def quote_me
          quotes = [
            "When you come to a fork in the road, take it. -Yogi Berra",
            "Every child is an artist. The problem is how to remain an artist once he grows up. -Pablo Picasso",
            "What we anticipate seldom occurs; what we least expected generally happens. -Benjamin Disraeli",
            "Drive-in banks were established so most of the cars today could see their real owners. -E. Joseph Cossman",
            "The greatest pleasure in life is doing what people say you cannot do. -Walter Bagehot"
          ]
          quotes[rand(quotes.size)]
        end
      end

      module InstanceMethods
        def quote_me
          self.class.quote_me
        end
      end
    end
  end
end

ActiveRecord::Base.send(:include, ActiveRecord::Acts::Blabbermouth)
Now, create a model file called quote.rb by running
script/generate model quote
and add the following line after the class declaration and before the end declaration
acts_as_blabbermouth
Fire up the script/console and type:
Quote.quote_me
Voila! If all went to plan, you should see one of the random quotes. Congratulations! You just wrote a plug-in!
So how did we do it?
- We drilled down into the ActiveRecord::Acts module and mixed in a module called Blabbermouth. This acts like a namespace in other languages, so we can create our own set of classes and methods without stomping on other people's plug-ins.
- We override the included class method, which gets called when the plug-in gets mixed into another module. Here, we include the ClassMethods module, which exposes the acts_as_blabbermouth method to the model class.
- We define the acts_as_blabbermouth method. All this method does is include the InstanceMethods and SingletonMethods modules. The InstanceMethods module contains all of the methods available on an instantiated object and SingletonMethods contains all the methods available to the un-instantiated class.
- We create a SingletonMethod called quote_me, which returns the random quote. This can be called by calling Quote.quote_me. We also create a method called quote_me in the InstanceMethods module, which calls the SingletonMethod - this way both the class and the object can call the quote_me method.
- Finally, we call ActiveRecord::Base.send(:include, ActiveRecord::Acts::Blabbermouth) which tells the ActiveRecord::Base module to include the code we have written.
For those of you playing at home, I’ve attached the plug-in source in a tarball, so you can get a better idea of how it all fits together. So off you go, go and create a plug-in yourself!
January 16th, 2008 at 4:08 pm
really a knowledeful blog
February 2nd, 2008 at 6:09 am
How do I test this with local rails app? In other words, how do I use script/plugin install with a local plugin instead of a url?
Thanks
February 2nd, 2008 at 10:41 am
Hi Jeff,
Using script/plugin install doesn’t really make much sense on a local filesystem. Just copying the plugin into the vendors/plugin directory will be sufficient.
Of course, if your plugin is under source control, you can easily point script/plugin at your own SVN repository.
February 17th, 2009 at 10:59 pm
Great work… Was helpful! Thanks
Chirantan
March 10th, 2009 at 9:59 pm
Really worthfull ! Thanks a lot :)
March 24th, 2009 at 4:57 am
This is wrong information. Installing a plugin will cause install.rb to run in a Rails environment.

Why would you want to install a plugin from your local filesystem? So you can test install.rb.
The quickest ways for me to get this done were:

- serve the plugin from a web server
- make a git branch and push changes there
Hope this helps.
About the Authors

Vincent Massol has been an active participant in the Maven community as both a committer and a member of the Project Management Committee (PMC) since Maven's early days in 2002. Vincent has directly contributed to Maven's core, as well as to various Maven plugins. In addition to his work on Maven, he founded the Jakarta Cactus project (a simple testing framework for server-side Java code) and the Cargo project (a J2EE container manipulation framework). Vincent lives and works in Paris, where he is the technical director of Pivolis, a company which specializes in collaborative offshore software development using Agile methodologies. This is Vincent's third book: he is a co-author of JUnit in Action, published by Manning in 2003 (ISBN 1-930-11099-5), and Maven: A Developer's Notebook, published by O'Reilly in 2005 (ISBN 0-596-00750-7).

Jason van Zyl: Jason van Zyl focuses on improving the Software Development Infrastructure associated with medium to large scale projects, which has led to the founding of the Apache Maven project. He continues to work directly on Maven and serves as the Chair of the Apache Maven Project Management Committee.

Brett Porter has been involved in the Apache Maven project since early 2003, when he began looking for something to make his job as Ant "buildmeister" simpler. Immediately hooked, Brett became increasingly involved in the project's development, joining the Maven Project Management Committee (PMC) and directing traffic for both the 1.0 and 2.0 major releases. Since 2004, his focus in the Maven project has been the development of Maven 2. Additionally, Brett has become involved in a variety of other open source projects, and is a Member of the Apache Software Foundation. Brett is a co-founder and the Vice President of Engineering at Exist Global. He is grateful to work and live in the suburbs of Sydney, Australia.

John Casey became involved in the Maven community in early 2002, discovering Maven while searching for a simpler way to define a common build process across projects. Build management and open source involvement have been common threads throughout his professional career. He was invited to become a Maven committer in 2004, and in 2005 John was elected to the Maven Project Management Committee (PMC); today a large part of John's job focus is to continue the advancement of Maven as a premier software development tool. John lives in Gainesville, Florida with his wife, Emily. When he's not working on Maven, John enjoys amateur astrophotography, roasting coffee, and working on his house.

Carlos Sanchez received his Computer Engineering degree in the University of Coruña, Spain, and started early in the open source technology world. He created his own company, CSSC, specializing in open source consulting, supporting both European and American companies to deliver pragmatic solutions for a variety of business problems in areas like e-commerce, financial, telecommunications and, of course, software development. He enjoys cycling and raced competitively when he was younger. He contributes to Maven, where he hopes to be able to make the lives of other developers easier.
Table of Contents

Preface

1. Introducing Maven
   1.1. Maven Overview
        1.1.1. What is Maven?
        1.1.2. Maven's Origins
        1.1.3. What Does Maven Provide?
   1.2. Maven's Principles
        1.2.1. Convention Over Configuration (standard directory layout for projects; one primary output per project; standard naming conventions)
        1.2.2. Reuse of Build Logic
        1.2.3. Declarative Execution (Maven's project object model (POM); Maven's build life cycle)
        1.2.4. Coherent Organization of Dependencies (local Maven repository; locating dependency artifacts)
   1.3. Maven's Benefits

2. Getting Started with Maven
   2.1. Preparing to Use Maven
   2.2. Creating Your First Maven Project
   2.3. Compiling Application Sources
   2.4. Compiling Test Sources and Running Unit Tests
   2.5. Packaging and Installation to Your Local Repository
   2.6. Handling Classpath Resources (handling test classpath resources; filtering classpath resources; preventing filtering of binary resources)
   2.7. Using Maven Plugins
   2.8. Summary

3. Creating Applications with Maven
   3.1. Introduction
   3.2. Setting Up an Application Directory Structure
   3.3. Using Project Inheritance
   3.4. Managing Dependencies
   3.5. Using Snapshots
   3.6. Resolving Dependency Conflicts and Using Version Ranges
   3.7. Utilizing the Build Life Cycle
   3.8. Using Profiles
   3.9. Deploying your Application (to the file system; with SSH2; with SFTP; with an external SSH; with FTP)
   3.10. Creating a Web Site for your Application
   3.11. Summary

4. Building J2EE Applications
   4.1. Introduction
   4.2. Introducing the DayTrader Application
   4.3. Organizing the DayTrader Directory Structure
   4.4. Building a Web Services Client Project
   4.5. Building an EJB Project
   4.6. Building an EJB Module With Xdoclet
   4.7. Deploying EJBs
   4.8. Building a Web Application Project
   4.9. Improving Web Development Productivity
   4.10. Deploying Web Applications
   4.11. Building an EAR Project
   4.12. Deploying a J2EE Application
   4.13. Testing J2EE Applications
   4.14. Summary

5. Developing Custom Maven Plugins
   5.1. Introduction
   5.2. A Review of Plugin Terminology
   5.3. Bootstrapping into Plugin Development (the plugin framework: participation in the build life cycle, accessing build information, the plugin descriptor; plugin development tools: choose your mojo implementation language; a note on the examples in this chapter)
   5.4. Developing Your First Mojo (BuildInfo example: capturing information with a Java mojo; BuildInfo example: notifying other developers with an Ant mojo)
   5.5. Advanced Mojo Development (gaining access to Maven APIs; accessing project dependencies; accessing project sources and resources; attaching artifacts for installation and deployment)
   5.6. Summary

6. Assessing Project Health with Maven
   6.1. What Does Maven Have to Do with Project Health?
   6.2. Adding Reports to the Project Web site
   6.3. Configuration of Reports
   6.4. Separating Developer Reports From User Documentation
   6.5. Choosing Which Reports to Include
   6.6. Creating Reference Material
   6.7. Monitoring and Improving the Health of Your Source Code
   6.8. Monitoring and Improving the Health of Your Tests
   6.9. Monitoring and Improving the Health of Your Dependencies
   6.10. Monitoring and Improving the Health of Your Releases
   6.11. Viewing Overall Project Health
   6.12. Summary

7. Team Collaboration with Maven
   7.1. The Issues Facing Teams
   7.2. How to Set up a Consistent Developer Environment
   7.3. Creating a Shared Repository
   7.4. Creating an Organization POM
   7.5. Continuous Integration with Maestro
   7.6. Team Dependency Management Using Snapshots
   7.7. Creating a Standard Project Archetype
   7.8. Cutting a Release
   7.9. Summary

8. Migrating to Maven
   8.1. Introduction (introducing the Spring Framework)
   8.2. Where to Begin?
   8.3. Creating POM files
   8.4. Compiling
   8.5. Testing (compiling tests; running tests)
   8.6. Other Modules (avoiding duplication; referring to test classes from other modules; building Java 5 classes; using Ant tasks from inside Maven; non-redistributable JARs; some special cases)
   8.7. Restructuring the Code
   8.8. Summary

Appendix A: Resources for Plugin Developers
   A.1. Maven's Life Cycles (the default life cycle; the clean life cycle; the site life cycle; life-cycle phases and default bindings, including bindings for the jar and maven-plugin packagings)
   A.2. Mojo Parameter Expressions (simple expressions; complex expression roots; the expression resolution algorithm; plugin metadata and plugin descriptor syntax; Java mojo metadata: supported Javadoc annotations, class-level and field-level; Ant metadata syntax)

Appendix B: Standard Conventions
   B.1. Standard Directory Structure
   B.2. Maven's Super POM
   B.3. Maven's Default Build Life Cycle

Bibliography
Index
List of Figures

Figure 1-1: Artifact movement from remote to local repository
Figure 1-2: General pattern for the repository layout
Figure 1-3: Sample directory structure
Figure 2-1: Directory structure after archetype generation
Figure 2-2: Directory structure after adding the resources directory
Figure 2-3: Directory structure of the JAR file created by Maven
Figure 2-4: Directory structure after adding test resources
Figure 3-1: Proficio directory structure
Figure 3-2: Proficio-stores directory
Figure 3-3: Version parsing
Figure 3-4: Version parsing
Figure 3-5: The site directory structure
Figure 3-6: The target directory
Figure 3-7: The sample generated site
Figure 4-1: Architecture of the DayTrader application
Figure 4-2: Module names and a simple flat directory structure
Figure 4-3: Modules split according to a server-side vs client-side directory organization
Figure 4-4: Nested directory structure for the EAR, EJB and Web modules
Figure 4-5: Directory structure of the wsappclient module
Figure 4-6: Directory structure for the DayTrader ejb module
Figure 4-7: Directory structure for the DayTrader ejb module when using Xdoclet
Figure 4-8: Directory structure for the DayTrader web module showing some Web application resources
Figure 4-9: DayTrader JSP registration page served by the Jetty plugin
Figure 4-10: Modified registration page automatically reflecting our source change
Figure 4-11: Directory structure of the ear module
Figure 4-12: Directory structure of the ear module showing the Geronimo deployment plan
Figure 4-13: The new functional-tests module amongst the other DayTrader modules
Figure 4-14: Directory structure for the functional-tests module
Figure 6-1: The reports generated by Maven
Figure 6-2: The Surefire report
Figure 6-3: The initial setup
Figure 6-4: The directory layout with a user guide
Figure 6-5: The new Web site
Figure 6-6: An example source code cross reference
Figure 6-7: An example PMD report
Figure 6-8: An example CPD report
Figure 6-9: An example Checkstyle report
Figure 6-10: An example Cobertura report
Figure 6-11: An example dependency report
Figure 6-12: The dependency convergence report
Figure 6-13: An example Clirr report
Figure 7-1: The Administrator account screen
Figure 7-2: Build Management general configuration screen
Figure 7-3: Add project screen shot
Figure 7-4: Summary page after projects have built
Figure 7-5: Schedule configuration
Figure 7-6: Adding a build definition for site deployment
Figure 7-7: Build Management configuration
Figure 7-8: Archetype directory layout
Figure 8-1: Dependency relationship between Spring modules
Figure 8-2: A sample spring module directory
Figure 8-3: A tiger module directory
Figure 8-4: The final directory structure
Figure 8-5: Dependency relationship, with all modules
Preface

Welcome to Better Builds with Maven, an indispensable guide to understand and use Maven 2.0.

Maven 2 is a product that offers immediate value to many users and organizations. As you will soon find, it does not take long to realize these benefits. Maven works equally well for small and large projects, but Maven shines in helping teams operate more effectively by allowing team members to focus on what the stakeholders of a project require, leaving the build infrastructure to Maven!

This guide is not meant to be an in-depth and comprehensive resource, but rather an introduction, which provides a wide range of topics from understanding Maven's build platform to programming nuances. This guide is intended for Java developers who wish to implement the project management and comprehension capabilities of Maven 2 and use it to make their day-to-day work easier and to get help with the comprehension of any Java-based project. We hope that this book will be useful for Java project managers as well.

For first time users, it is recommended that you step through the material in a sequential fashion; perhaps reading this book will take you longer. For users more familiar with Maven (including Maven 1.x), this guide is written to provide a quick solution for the need at hand.
Organization

The first two chapters of the book are geared toward a new user of Maven 2; they discuss what Maven is and get you started with your first Maven project. From there, Chapter 3 builds on that and shows you how to build a real-world project, Chapter 4 shows you how to build and deploy a J2EE application, Chapter 5 focuses on developing plugins for Maven, Chapter 6 discusses project monitoring issues and reporting, Chapter 7 discusses using Maven in a team development environment, and Chapter 8 shows you how to migrate Ant builds to Maven.

Chapter 1, Introducing Maven, goes through the background and philosophy behind Maven and defines what Maven is.

Chapter 2, Getting Started with Maven, gives detailed instructions on creating, compiling and packaging your first project. It starts by describing fundamentals. After reading this second chapter, you should be up and running with Maven.

Chapter 3, Creating Applications with Maven, illustrates Maven's best practices and advanced uses by working on a real-world example application. In this chapter you will learn to set up the directory structure for a typical application and the basics of managing an application's development with Maven.

Chapter 4, Building J2EE Applications, shows how to create the build for a full-fledged J2EE application. You will learn how to use Maven to build J2EE archives (JAR, WAR, EAR, EJB, Web Services), and how to use Maven to deploy J2EE archives to a container.

Chapter 5, Developing Custom Maven Plugins, focuses on the task of writing custom plugins, including a review of plugin terminology and the basic mechanics of the Maven plugin framework. It discusses the various ways that a plugin can interact with the Maven build environment and explores some examples. From there, the chapter covers the tools available to simplify the life of the plugin developer. At this stage you'll pretty much become an expert Maven user.

Chapter 6, Assessing Project Health with Maven, discusses Maven's monitoring tools, reporting tools, and how to use Maven to generate a Web site for your project. In this chapter, you will be revisiting the Proficio application that was developed in Chapter 3, and learning more about the health of the project.

Chapter 7, Team Collaboration with Maven, looks at Maven as a set of practices and tools that enable effective team communication and collaboration. These tools aid the team to organize, visualize, and document for reuse the artifacts that result from a software project. You will learn how to use Maven to ensure successful team development.

Chapter 8, Migrating to Maven, explains a migration path from an existing build in Ant to Maven. After reading this chapter, you will be able to take an existing Ant-based build, split it into modular components if needed, compile and test the code, create JARs, and install those JARs in your local repository using Maven. At the same time, you will be able to keep your current build working.
Errata

We have made every effort to ensure that there are no errors in the text or in the code. However, we are human, so occasionally something will come up that none of us caught prior to publication. To send an errata for this book, go to www.exist.com/?q=node/151 and locate the Submit Errata link to notify us of any errors that you might have found.

How to Download the Source Code

All of the source code used in this book is available for download at www.exist.com/?q=node/151. Once at the site, click the chapter link to obtain the source code for the book. We offer source code for download, errata, and technical support from the Exist Global Web site. So if you have Maven 2 installed, then you're ready to go.

Visit the Exist Global Forums for information about the latest happenings in the Apache Maven community, Q for Eclipse, and other activities at Exist Global, starting with the new Maestro Support Forum and With Maven Support Forum for additional content on Better Builds with Maven. Maestro users will find additional content here for them.
1.Albert Einstein 21 . but not any simpler. ..
1.1. Maven Overview

Maven provides a comprehensive approach to managing software projects. From compilation, to distribution, to documentation, to team collaboration, Maven provides the necessary abstractions that encourage reuse and take much of the work out of project builds.

1.1.1. What is Maven?

When someone wants to know what Maven is, they expect a short, sound-bite answer: "Well, it is a build tool or a scripting framework." Maven is more than three boring, uninspiring words. It is a combination of ideas, standards, and software, and it is impossible to distill the definition of Maven to simply digested sound-bites. Revolutionary ideas are often difficult to convey with words. Don't worry: if you are interested in a fuller, richer definition of Maven, read this introduction; it will prime you for the concepts that are to follow.

So, what exactly is Maven? Maven encompasses a set of build standards, an artifact repository model, and a software engine that manages and describes projects. It defines a standard life cycle for building, testing, and deploying project artifacts. It provides a framework that enables easy reuse of common build logic for all projects following Maven's standards. The Maven project at the Apache Software Foundation is an open source community which produces software tools that understand a common declarative Project Object Model (POM). This book focuses on the core tool produced by the Maven project: Maven 2, a framework that greatly simplifies the process of managing a software project.

Perhaps you picked up this book because someone told you that Maven is a build tool. Maven can be the build tool you need, and many developers who have approached Maven as another build tool have come away with a finely tuned build system. While you are free to use Maven as "just another build tool", to view it in such limited terms is akin to saying that a web browser is nothing more than a tool that reads hypertext. Maven, and the technologies related to the Maven project, are beginning to have a transformative effect on the Java community.

You may have been expecting a more straightforward answer. Too often technologists rely on abstract phrases to capture complex topics in three or four words, and with repetition phrases such as project management and enterprise software start to lose concrete meaning. You could call Maven a project management framework, but the term project management framework is a meaningless abstraction that doesn't do justice to the richness and complexity of Maven. If you are reading this introduction just to find something to tell your manager (1), you can stop reading now and skip to Chapter 2.

In addition to solving straightforward, first-order problems such as simplifying builds, documentation, distribution, and the deployment process, Maven also brings with it some compelling second-order benefits.

1 You can tell your manager: "Maven is a declarative project management tool that decreases your overall time to market by effectively leveraging cross-project intelligence. It simultaneously reduces your duplication effort and leads to higher code quality." It's the most obvious three-word definition of Maven the authors could come up with.
Maven is not just a build tool, and not necessarily a replacement for Ant. Many people come to Maven familiar with Ant, so it's a natural association, but Maven is an entirely different creature from Ant. Whereas Ant provides a toolbox for scripting builds, Maven provides standards and a set of patterns in order to facilitate project management through reusable, common build strategies.

So, to answer the original question: Maven is many things to many people. It is a set of standards and an approach to project development, as much as it is a piece of software. Maven is a way of approaching a set of software as a collection of highly-interdependent components, which can be described in a common format. It is the next step in the evolution of how individuals and organizations collaborate to create software systems. As more and more projects and products adopt Maven as a foundation for project management, it becomes easier to understand the relationships between projects and to establish a system that navigates and reports on these relationships. Maven's standards and centralized repository model offer an easy-to-use naming system for projects, and Maven's standard formats enable a sort of "Semantic Web" for programming projects. Once you get up to speed on the fundamentals of Maven, you will wonder how you ever developed without it.

1.1.2. Maven's Origins

Maven was borne of the practical desire to make several projects at the Apache Software Foundation (ASF) work in the same, predictable way. Prior to Maven, every project at the ASF had a different approach to compilation, distribution, and Web site generation. The build process for Tomcat was different than the build process for Struts, and the Turbine developers had a different site generation process than the Jakarta Commons developers. This lack of a common approach to building software meant that every new project tended to copy and paste another project's build system. The ASF was effectively a series of isolated islands of innovation: while there were some common themes across the separate builds, each community was creating its own build systems and there was no reuse of build logic across projects. Instead of focusing on creating good component libraries or MVC frameworks, developers were building yet another build system. Ultimately, this copy and paste approach to build reuse reached a critical tipping point, at which the amount of work required to maintain the collection of build systems was distracting from the central task of developing high-quality software. In addition, for a project with a difficult build system, the barrier to entry was extremely high; projects such as Jakarta Taglibs had (and continue to have) a tough time attracting developer interest because it could take an hour to configure everything in just the right way.

Maven entered the scene by way of the Turbine project, and it immediately sparked interest as a sort of Rosetta Stone for software project management. Developers within the Turbine project could freely move between subcomponents, knowing clearly how they all worked just by understanding how one of the components worked. Developers at the ASF stopped figuring out creative ways to compile, test, and package software, and instead started focusing on component development. If you followed the Maven Build Life Cycle, your project gained a build by default. The same standards extended to testing, generating documentation, generating metrics and reports, and deploying. Once developers spent time learning how one project was built, they did not have to go through the process again when they moved on to the next project. Soon after the creation of Maven, other projects, such as Jakarta Commons and the Codehaus community, started to adopt Maven 1 as a foundation for project management. Using Maven has made it easier to add external dependencies and publish your own project components.
1.1.3. What Does Maven Provide?

Maven provides a useful abstraction for building software in the same way an automobile provides an abstraction for driving. When you purchase a new car, the car provides a known interface: if you've learned how to drive a Jeep, you can easily drive a Camry. Maven takes a similar approach to software projects: if you can build one Maven project you can build them all, and if you can apply a testing plugin to one project, you can apply it to all projects. You describe your project using Maven's model, and you gain access to expertise and best-practices of an entire industry.

An individual Maven project's structure and contents are declared in a Project Object Model (POM), which forms the basis of the entire Maven system. The model uses a common project "language", and the software tool (named Maven) is just a supporting element within this model. The key value to developers from Maven is that it takes a declarative approach rather than requiring developers to create the build process themselves. Maven allows developers to declare life-cycle goals and project dependencies that rely on Maven's default structures and plugin capabilities in order to perform the build. Much of the project management and build orchestration (compile, test, assemble, install) is effectively delegated to the POM and the appropriate plugins. Developers can build any given project without having to understand how the individual plugins work (scripts in the Ant world), a chore referred to as "building the build".

Maven's ability to standardize locations for source files, documentation, and output, to provide a common layout for project documentation, and to retrieve project dependencies from a shared storage area makes the building process much less time consuming, and much more transparent. Projects and systems that use Maven's standard, declarative build approach tend to be more transparent, more reusable, more maintainable, and easier to comprehend.

However, if your project currently relies on an existing Ant build script that must be maintained, existing Ant scripts (or Make files) can be complementary to Maven and used through Maven's plugin architecture. Plugins allow developers to call existing Ant scripts and Make files and incorporate those existing functions into the Maven build life cycle.

Maven provides you with:
• A comprehensive model for software projects
• Tools that interact with this declarative model

Maven provides a comprehensive model that can be applied to all software projects.
Organizations and projects that adopt Maven benefit from:

• Coherence - Maven allows organizations to standardize on a set of best practices. Because Maven projects adhere to a standard model, they are less opaque. The definition of this term from the American Heritage dictionary captures the meaning perfectly: "Marked by an orderly, logical, and aesthetically consistent relation of parts."
• Reusability - Maven is built upon a foundation of reuse. When you adopt Maven you are effectively reusing the best practices of an entire industry.
• Maintainability - Organizations that adopt Maven can stop "building the build", and focus on building the application. Maven projects are more maintainable because they follow a common, publicly-defined model.
• Agility - Maven lowers the barrier to reuse not only for build logic, but also for software components. Maven makes it easier to create a component and then integrate it into a multi-project build. Developers can jump between different projects without the steep learning curve that accompanies custom, home-grown build systems.

Without these advantages, it is improbable that multiple individuals can work productively together on a project. Without visibility it is unlikely one individual will know what another has accomplished, and it is likely that useful code will not be reused. When everyone is constantly searching to find all the different bits and pieces that make up a project, there is little chance anyone is going to comprehend the project as a whole. This is a natural effect when processes don't work the same way for everyone. As a result you end up with a lack of shared knowledge, along with a commensurate degree of frustration among team members. Further, when code is not reused it is very hard to create a maintainable system.

1.2. Maven's Principles

According to Christopher Alexander, "patterns help create a shared language for communicating insight and experience about problems and their solutions". The following Maven principles were inspired by Christopher Alexander's idea of creating a shared language:

• Convention over configuration
• Declarative execution
• Reuse of build logic
• Coherent organization of dependencies

As mentioned earlier, Maven provides a shared language for software development projects. Each of the principles above enables developers to describe their projects at a higher level of abstraction, allowing more effective communication and freeing team members to get on with the important work of creating value at the application level. Maven provides a structured build life cycle so that problems can be approached in terms of this structure. This chapter will examine each of these principles in detail, and you will see these principles in action in the following chapter, when you create your first Maven project.
1.2.1. Convention Over Configuration

One of the central tenets of Maven is to provide sensible default strategies for the most common tasks, so that you don't have to think about the mundane details. You don't want to spend time fiddling with building, generating documentation, or deploying; all of these things should simply work, and this is what Maven provides. This "convention over configuration" tenet has been popularized by the Ruby on Rails (ROR) community and specifically encouraged by ROR's creator David Heinemeier Hansson, who summarizes the notion as follows:

"Rails is opinionated software. It eschews placing the old ideals of software in a primary position. One of those ideals is flexibility, the notion that we should try to accommodate as many approaches as possible, that we shouldn't pass judgment on one form of development over another. Well, Rails does, and I believe that's why it works. If you are happy to work along the golden path that I've embedded in Rails, you gain an immense reward in terms of productivity that allows you to do more, sooner, and better at the application level. One characteristic of opinionated software is the notion of 'convention over configuration'. If you follow basic conventions, such as classes are singular and tables are plural (a person class relates to a people table), you're rewarded by not having to configure that link. The class automatically knows which table to use for persistence. We have a ton of examples like that, which all add up to make a huge difference in daily use." (2)

David Heinemeier Hansson articulates very well what Maven has aimed to accomplish since its inception (note that David Heinemeier Hansson in no way endorses the use of Maven; he probably doesn't even know what Maven is and wouldn't like it if he did, because it's not written in Ruby yet!): that is, that you shouldn't need to spend a lot of time getting your development infrastructure functioning. Using standard conventions saves time, makes it easier to communicate to others, and allows you to create value in your applications faster with less effort. With Maven, you slot the various pieces in where it asks and Maven will take care of almost all of the mundane aspects for you. You trade flexibility at the infrastructure level to gain flexibility at the application level. This is not to say that you can't override Maven's defaults, but the use of sensible default strategies is highly encouraged, so stray from these defaults only when absolutely necessary.

2 O'Reilly interview with DHH
Standard directory layout for projects

The first convention used by Maven is a standard directory layout for project sources, project resources, configuration files, generated output, and documentation. These components are generally referred to as project content. Maven encourages a common arrangement of project content so that once you are familiar with these standard, default locations, you will be able to navigate within any Maven project you build in the future. You will be able to look at other projects and immediately understand the project layout. It is a very simple idea, but it can save you a lot of time: if this saves you 30 minutes for each new project you look at, even if you only look at a few new projects a year, that's time better spent on your application.

First time users often complain about Maven forcing you to do things a certain way, and the formalization of the directory structure is the source of most of the complaints. You can override any of Maven's defaults to create a directory layout of your choosing, but when you do this, you need to ask yourself if the extra configuration that comes with customization is really worth it. If you have no choice in the matter, due to organizational policy or integration issues with existing systems, you might be forced to use a directory structure that diverges from Maven's defaults. In this scenario, you will be able to adapt your project to your customized layout, at a cost: increased complexity of your project's POM. If you do have a choice, then why not harness the collective knowledge that has built up as a result of using this convention? Follow the standard directory layout, and you will make it easier to communicate about your project. You will see clear examples of the standard directory structure in the next chapter, but you can also take a look in Appendix B for a full listing of the standard conventions.

One primary output per project

The second convention used by Maven is the concept that a single Maven project produces only one primary output. The separation of concerns (SoC) principle states that a given problem involves different kinds of concerns, which should be identified and separated to cope with complexity and to achieve the required engineering quality factors such as adaptability, maintainability, extendibility and reusability.

To illustrate, consider a set of sources for a client/server-based application that contains client code, server code, and shared utility code. You could produce a single JAR file which includes all the compiled classes, but Maven would encourage you to have three separate projects: a project for the client portion of the application, a project for the server portion of the application, and a project for the shared utility code portion. In this case, the code contained in each project has a different concern (role to play) and they should be separated. If you have placed all the sources together in a single project, the boundaries between our three separate concerns can easily become blurred, and the ability to reuse the utility code could prove to be difficult. Having the utility code in a separate project (a separate JAR file) makes it much easier to reuse. Maven pushes you to think clearly about the separation of concerns when setting up your projects, because modularity leads to reuse.
Standard naming conventions

The third convention in Maven, a set of conventions really, is the use of a standard naming convention for directories and for the primary output of each project. A simple example of a standard naming convention might be commons-logging-1.2.jar. The naming conventions provide clarity and immediate comprehension: it is immediately obvious that this is version 1.2 of Commons Logging. If the JAR were named commonslogging.jar, you would not really have any idea of the version of Commons Logging, and, in a lot of cases, you would not even be able to get the information from the jar's manifest. The intent behind the standard naming conventions employed by Maven is that it lets you understand exactly what you are looking at by, well, looking at it. This is important if there are multiple sub-projects involved in a build process, because the naming convention keeps each one separate in a logical, easily comprehensible manner.

Systems that cannot cope with information-rich artifacts like commons-logging-1.2.jar are inherently flawed because eventually, when something is misplaced, you'll track it down to a ClassNotFound exception, which results because the wrong version of a JAR file was used. It's happened to all of us, but with Maven, it doesn't have to happen again. It doesn't make much sense to exclude pertinent information when you can have it at hand to use.

1.2.2. Reuse of Build Logic

As you have already learned, Maven promotes reuse by encouraging a separation of concerns (SoC). Maven puts this SoC principle into practice by encapsulating build logic into coherent modules called plugins. In Maven there is a plugin for compiling source code, a plugin for running tests, a plugin for creating JARs, a plugin for creating Javadocs, and many other functions. Even from this short list of examples you can see that a plugin in Maven has a very specific role to play in the grand scheme of things. One important concept to keep in mind is that everything accomplished in Maven is the result of a plugin executing. Plugins are the key building blocks for everything in Maven.

1.2.3. Declarative Execution

Everything in Maven is driven in a declarative fashion using Maven's Project Object Model (POM) and, specifically, the plugin configurations contained in the POM. The execution of Maven's plugins is coordinated by Maven's build life cycle in a declarative fashion, with instructions from Maven's POM. Maven can be thought of as a framework that coordinates the execution of plugins in a well defined way. This is illustrated in the Coherent Organization of Dependencies section, later in this chapter.

Maven's project object model (POM)

Maven is project-centric by design, and the POM is Maven's description of a single project. Without the POM, Maven is useless: the POM is Maven's currency. It is the POM that drives execution in Maven, and this approach can be described as model-driven or declarative execution.
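Returning to the naming convention described earlier in this section, the <artifactId>-<version>.<extension> pattern is simple enough to sketch in a few lines of code. The class and method names below are hypothetical illustrations, not part of Maven's actual API:

```java
// Hypothetical helper, not part of Maven's API: splits an artifact file name
// such as "commons-logging-1.2.jar" into artifactId and version, relying on
// Maven's <artifactId>-<version>.<extension> naming convention.
public class ArtifactName {

    public static String[] parse(String fileName) {
        int dot = fileName.lastIndexOf('.');            // strip the extension
        String base = (dot >= 0) ? fileName.substring(0, dot) : fileName;
        // The version starts after the last '-' that is followed by a digit.
        int dash = base.length();
        while ((dash = base.lastIndexOf('-', dash - 1)) >= 0) {
            if (dash + 1 < base.length() && Character.isDigit(base.charAt(dash + 1))) {
                return new String[] { base.substring(0, dash),
                                      base.substring(dash + 1) };
            }
        }
        return new String[] { base, "" };               // no version in the name
    }

    public static void main(String[] args) {
        String[] parts = parse("commons-logging-1.2.jar");
        System.out.println(parts[0] + ", version " + parts[1]);
    }
}
```

Note how an information-poor name like commonslogging.jar yields an empty version, which is exactly the ClassNotFound trap described above.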
The POM below is an example of what you could use to build and test a project. The POM is an XML document and looks like the following (very) simplified example:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

This POM will allow you to compile, test, and generate basic documentation. You, being the observant reader, will ask "How is this possible using a 15 line file?". The answer lies in Maven's implicit use of its Super POM. In Java, all objects have the implicit parent of java.lang.Object; likewise, in Maven all POMs have an implicit parent in Maven's Super POM. Maven's Super POM carries with it all the default conventions that Maven encourages, and is the analog of the Java language's java.lang.Object class. The key feature to remember is that the Super POM contains important default information, so you don't have to repeat this information in the POMs you create. The Super POM can be rather intimidating at first glance, so if you wish to find out more about it you can refer to Appendix B.

The POM shown previously is a very simple POM, but it still displays the key elements that every POM contains. The POM contains every important piece of information about your project:

• project - This is the top-level element in all Maven pom.xml files.
• modelVersion - This required element indicates the version of the object model that the POM is using. The version of the model itself changes very infrequently, but it is mandatory in order to ensure stability when Maven introduces new features or other model changes.
• groupId - This element indicates the unique identifier of the organization or group that created the project. The groupId is one of the key identifiers of a project and is typically based on the fully qualified domain name of your organization. For example, org.apache.maven.plugins is the designated groupId for all Maven plugins.
• artifactId - This element indicates the unique base name of the primary artifact being generated by this project. A typical artifact produced by Maven would have the form <artifactId>-<version>.<extension> (for example, myapp-1.0.jar). Additional artifacts such as source bundles also use the artifactId as part of their file name.
• packaging - This element indicates the package type to be used by this artifact (JAR, WAR, EAR, etc.). This not only means that the artifact produced is a JAR, WAR, or EAR, but also indicates a specific life cycle to use as part of the build process. The default value for the packaging element is jar, so you do not have to specify this in most cases. The life cycle is a topic dealt with later in this chapter; for now, just keep in mind that the selected packaging of a project plays a part in customizing the build life cycle.
• version - This element indicates the version of the artifact generated by the project. Maven goes a long way to help you with version management, and you will often see the SNAPSHOT designator in a version, which indicates that a project is in a state of development.
• name - This element indicates the display name used for the project. This is often used in Maven's generated documentation, and during the build process for your project, or other projects that use it as a dependency.
• description - This element provides a basic description of your project.
• url - This element indicates where the project's site can be found.

For a complete reference of the elements available for use in the POM, please refer to the POM reference at http://maven.apache.org/maven-model/maven.html.

Maven's build life cycle

Software projects generally follow similar, well-trodden build paths: preparation, compilation, testing, packaging, installation, etc. The path that Maven moves along to accommodate an infinite variety of projects is called the build life cycle. In Maven, the build life cycle consists of a series of phases where each phase can perform one or more actions, or goals, related to that phase. For example, the compile phase invokes a certain set of goals to compile a set of classes.

In Maven you do day-to-day work by invoking particular phases in this standard build life cycle. The actions that have to be performed are stated at a high level, and Maven deals with the details behind the scenes. So, you tell Maven that you want to compile, or test, or package, or install. It is important to note that each phase in the life cycle will be executed up to and including the phase you specify. For example, if you tell Maven to compile, Maven will execute the validate, initialize, generate-sources, process-sources, generate-resources, and compile phases automatically.

The standard build life cycle consists of many phases, and these can be thought of as extension points. When you need to add some functionality to the build life cycle, you do so with a plugin. Maven plugins provide reusable build logic that can be slotted into the standard build life cycle. Any time you need to customize the way your project builds, you either use an existing plugin or create a custom plugin for the task at hand. See Chapter 2.7 Using Maven Plugins and Chapter 5 Developing Custom Maven Plugins for examples and details on how to customize the Maven build.
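The "up to and including" rule above can be sketched in a few lines. This is an illustration of the rule only, not Maven's actual implementation, and the phase list below is a subset of Maven's default life cycle:

```java
import java.util.Arrays;
import java.util.List;

// A sketch (not Maven's actual code) of the rule that invoking a life-cycle
// phase runs every phase up to and including the one requested.
public class LifeCycle {
    // A subset of the phases in Maven's default build life cycle, in order.
    static final List<String> PHASES = Arrays.asList(
            "validate", "initialize", "generate-sources", "process-sources",
            "generate-resources", "compile", "test", "package", "install", "deploy");

    // Returns the phases that would run for e.g. "mvn compile".
    public static List<String> phasesFor(String requested) {
        int end = PHASES.indexOf(requested);
        if (end < 0) {
            throw new IllegalArgumentException("unknown phase: " + requested);
        }
        return PHASES.subList(0, end + 1);
    }

    public static void main(String[] args) {
        // Asking for "compile" also runs validate, initialize, and the
        // source/resource generation phases that precede it.
        System.out.println(phasesFor("compile"));
    }
}
```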
1.2.4. Coherent Organization of Dependencies

We are now going to delve into how Maven resolves dependencies and discuss the intimately connected concepts of dependencies, artifacts, and repositories. Dependency management is one of the most powerful features in Maven.

If you recall, our example POM has a single dependency listed for JUnit:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

This POM states that your project has a dependency on JUnit, which is straightforward, but you may be asking yourself "Where does that dependency come from?" and "Where is the JAR?" The answers to those questions are not readily apparent without some explanation of how Maven's dependencies, artifacts and repositories work.

In "Maven-speak" an artifact is a specific piece of software. In Java, the most common artifact is a JAR file, but a Java artifact could also be a WAR, SAR, or EAR file. A dependency is a reference to a specific artifact that resides in a repository. A dependency is uniquely identified by the following identifiers: groupId, artifactId and version. In the POM you are not specifically telling Maven where the dependencies are physically located; you are simply telling Maven what a specific project expects. Your project doesn't require junit-3.8.1.jar; instead, it depends on version 3.8.1 of the junit artifact produced by the junit group. With Maven, you stop focusing on a collection of JAR files; instead you deal with logical dependencies.

At a basic level, we can describe the process of dependency management as Maven reaching out into the world, grabbing a dependency, and providing this dependency to your software project. There is more going on behind the scenes, but the key concept is that Maven dependencies are declarative. When a dependency is declared within the context of your project, Maven tries to satisfy that dependency by looking in all of the remote repositories to which it has access, in order to find the artifacts that most closely match the dependency request. In order for Maven to attempt to satisfy a dependency, Maven needs to know what repository to search, as well as the dependency's coordinates. Maven takes the dependency coordinates you provide in the POM, and it supplies these coordinates to its own internal dependency mechanisms. If a matching artifact is located, Maven transports it from that remote repository to your local repository for project use.
Maven has two types of repositories: local and remote. Maven usually interacts with your local repository, but when a declared dependency is not present in your local repository, Maven searches all the remote repositories to which it has access to find what's missing. Read the following sections for specific details regarding where Maven searches for these dependencies.

Local Maven repository

When you install and run Maven for the first time, it will create your local repository and populate it with artifacts as a result of dependency requests. You must have a local repository in order for Maven to work. By default, Maven creates your local repository in <user_home>/.m2/repository. The following folder structure shows the layout of a local Maven repository that has a few locally installed dependency artifacts, such as junit-3.8.1.jar.

[Figure 1-1: Artifact movement from remote to local repository]

So you understand how the layout works, take a closer look at one of the artifacts that appeared in your local repository. In theory, a repository is just an abstract storage mechanism, but in practice the repository is a directory structure in your file system. We'll stick with our JUnit example and examine the junit-3.8.1.jar artifact that is now in your local repository. Above you can see the directory structure that is created when the JUnit dependency is resolved. Below is the general pattern used to create the repository layout:
[Figure 1-2: General pattern for the repository layout]

If the groupId is a fully qualified domain name (something Maven encourages) such as x.y.z, then you will end up with a directory structure like the following:

[Figure 1-3: Sample directory structure]

In the first directory listing you can see that Maven artifacts are stored in a directory structure that corresponds to Maven's groupId of org.apache.maven.

Locating dependency artifacts

When satisfying dependencies, Maven attempts to locate a dependency's artifact using the following process: first, Maven will generate a path to the artifact in your local repository. For example, Maven will attempt to find the artifact with a groupId of "junit", an artifactId of "junit", and a version of "3.8.1" in <user_home>/.m2/repository/junit/junit/3.8.1/junit-3.8.1.jar. If this file is not present, Maven will fetch it from a remote repository.
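The path pattern just described (dots in the groupId become directory separators, followed by the artifactId, the version, and the artifact file name) can be sketched as follows. The class and method names are hypothetical, used here only to illustrate the layout rule:

```java
// A sketch of the local-repository path pattern described above:
// <groupId with dots as slashes>/<artifactId>/<version>/<artifactId>-<version>.<extension>
public class RepositoryLayout {

    public static String pathFor(String groupId, String artifactId,
                                 String version, String extension) {
        return groupId.replace('.', '/') + "/" + artifactId + "/" + version
                + "/" + artifactId + "-" + version + "." + extension;
    }

    public static void main(String[] args) {
        // The JUnit example from the text:
        System.out.println(pathFor("junit", "junit", "3.8.1", "jar"));
        // prints: junit/junit/3.8.1/junit-3.8.1.jar
    }
}
```

Prefixing the result with <user_home>/.m2/repository/ yields the exact local path Maven checks before consulting any remote repository.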
upon which your project relies. internal Maven repository. which can be managed by Exist Global Maestro. With Maven.maven.0 JARs to every project. the artifact is downloaded and installed in your local repository. it doesn't scale easily to support an application with a great number of small components. 4 The history of how Maven communicates to the central repository has changed over time based on the Maven client release version. While this approach works for a few projects. if your project has ten web applications.mergere. Once the dependency is satisfied.0 by changing your dependency declarations. there is no need to store the various spring JAR files in your project. and they shouldn't be versioned in an SCM. Each project relies upon a specific artifact via the dependencies listed in a POM.. 3 Alternatively. Your local repository is one-stop-shopping for all artifacts that you need regardless of how many projects you are building. Dependencies are not your project's code.com/maven2. the Maven Super POM sets the central repository to. 35 .ibiblio. modular project arrangements. into a lib directory.. Continuum and Archiva build platform. From Maven version 2. Maestro is an Apache License 2.0.0 through 2. Declare your dependencies and let Maven take care of details like compilation and testing classpaths.2.1.0. which all depend on version 1.8.org/maven2/ 2. every project with a POM that references the same dependency will use this single copy installed in your local repository.Introducing Maven By default. The following repositories have been the central/default repository in the Maven Super POM: 1.3 If your project's POM contains more than one remote repository.6 there have been three central repository URLs and a fourth URL is under discussion at this time.com/. you don’t store a copy of junit3. artifacts can be downloaded from a secure.4 From this point forward. Before Maven. In other words. 
Before Maven, the common pattern in most projects was to store JAR files in a project's subdirectory. If you were coding a web application, you would check the 10-20 JAR files your project relies upon into a lib directory, and you would add these dependencies to your classpath. Storing artifacts in your SCM along with your project may seem appealing, but it is incompatible with the concept of small, modular project arrangements. Instead of adding the Spring 2.0 JARs to every project, you simply change some configurations in Maven: if all ten web applications depend on version 1.2.6 of the Spring Framework, it is a trivial process to upgrade all ten web applications to Spring 2.0.

If your project's POM contains more than one remote repository, Maven will attempt to download an artifact from each remote repository in the order defined in your POM. The following repositories have been the central/default repository in the Maven Super POM:

1. http://www.ibiblio.org/maven2
2. http://www.ibiblio.org/pub/mirrors/maven2/
3. http://repo1.maven.org/maven2/

If you are using the Maestro Developer Client from Exist Global, the Maven Super POM sets the central repository to a Maestro-managed repository instead.
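A dependency declaration of the kind described here looks like the following sketch. The Spring coordinates shown are assumptions for illustration (the groupId and artifactId actually published vary by version); the point is that upgrading all ten web applications means editing only the version element in each POM, not copying JARs around.

```xml
<dependency>
  <groupId>org.springframework</groupId>  <!-- hypothetical coordinates -->
  <artifactId>spring</artifactId>
  <version>1.2.6</version>                <!-- change to 2.0 to upgrade -->
</dependency>
```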
1.3. Maven's Benefits

A successful technology takes away burden, rather than imposing it. Like the engine in your car or the processor in your laptop, a useful technology just works, shielding you from complexity and allowing you to focus on your specific task. You don't have to worry about whether or not it's going to work; you don't have to jump through hoops trying to get it to work; it should rarely, if ever, be a part of your thought process. Maven provides such a technology for project management, and, in doing so, simplifies the process of development.

To summarize: Maven is a set of standards, Maven is a repository, Maven is a framework, and Maven is software. Maven is also a vibrant, active open-source community that produces software focused on project management. Using Maven is more than just downloading another JAR file and a set of scripts; it is the adoption of a build life-cycle process that allows you to take your software development to the next level.

The terrible temptation to tweak should be resisted unless the payoff is really noticeable.
- Jon Bentley and Doug McIlroy
2.1. Preparing to Use Maven

In this chapter, it is assumed that you are a first-time Maven user and have already set up Maven on your local system. If you have not set up Maven yet, then please refer to Maven's Download and Installation Instructions before continuing.

Depending on where your machine is located, it may be necessary to make a few more preparations for Maven to function correctly. In its optimal mode, Maven requires network access. If you are behind a firewall, then you will have to set up Maven to understand that. To do this, create a <user_home>/.m2/settings.xml file with the following content:

<settings>
  <proxies>
    <proxy>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.mycompany.com</host>
      <port>8080</port>
      <username>your-username</username>
      <password>your-password</password>
    </proxy>
  </proxies>
</settings>

If Maven is already in use at your workplace, ask your administrator if there is an internal Maven proxy. If there is an active Maven proxy running, then note the URL and let Maven know you will be using a proxy. Create a <user_home>/.m2/settings.xml file with the following content:

<settings>
  <mirrors>
    <mirror>
      <id>maven.mycompany.com</id>
      <name>My Company's Maven Proxy</name>
      <url>http://maven.mycompany.com/maven2</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>

The settings.xml file will be explained in more detail in the following chapter, and you can refer to the Maven Web site for the complete details on the settings.xml file, so for now simply assume that the above settings will work. Now you can perform the following basic check to ensure Maven is working correctly:

mvn -version

If Maven's version is displayed, then you should be all set to create your first Maven project.
2.2. Creating Your First Maven Project

To create your first project, you will use Maven's Archetype mechanism. An archetype is defined as an original pattern or model from which all other things of the same kind are made. In Maven, an archetype is a template of a project, which is combined with some user input to produce a fully-functional Maven project. This chapter will show you how the archetype mechanism works, but if you would like more information about archetypes, please refer to the Introduction to Archetypes.

To create the Quick Start Maven project, execute the following:

C:\mvnbook> mvn archetype:create -DgroupId=com.mycompany.app \
    -DartifactId=my-app

You will notice a few things happened when you executed this command. First, you will notice that a directory named my-app has been created for the new project, and this directory contains your pom.xml file, which looks like the following:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

At the top level of every project is your pom.xml file. Whenever you see a directory structure which contains a pom.xml file, you know you are dealing with a Maven project. After the archetype generation has completed, you will notice that the following directory structure has been created, and that it in fact adheres to Maven's standard directory layout discussed in Chapter 1.
Figure 2-1: Directory structure after archetype generation

The src directory contains all of the inputs required for building, documenting, testing, and deploying the project (source files, configuration files, various descriptors such as assembly descriptors, the site, and so on). The <my-app> directory is the base directory, ${basedir}, for the my-app project. In this first stage you have Java source files only, but later in the chapter you will see how the standard directory layout is employed for other project content.

2.3. Compiling Application Sources

As mentioned in the introduction, with Maven you tell Maven what you need, in a declarative way, in order to accomplish the desired task. Now that you have a POM, some application sources, and some test sources, you are ready to build your project. Change to the <my-app> directory, then compile your application sources using the following command:

C:\mvnbook\my-app> mvn compile

At a very high level, note that this one simple command encompasses Maven's four foundational principles:

• Convention over configuration
• Reuse of build logic
• Declarative execution
• Coherent organization of dependencies

These principles are ingrained in all aspects of Maven, but the following analysis of the simple compile command shows you the four principles in action, and makes clear their fundamental importance in simplifying the development of a project.
After executing this command you should see output similar to the following:

[INFO]-------------------------------------------------------------------
[INFO] Building Maven Quick Start Archetype
[INFO]   task-segment: [compile]
[INFO]-------------------------------------------------------------------
[INFO] artifact org.apache.maven.plugins:maven-resources-plugin: checking for updates from central
...
[INFO] artifact org.apache.maven.plugins:maven-compiler-plugin: checking for updates from central
...

How did Maven know where to look for sources in order to compile them? And how did Maven know where to put the compiled classes? This is where Maven's principle of "convention over configuration" comes into play. By default, application sources are placed in src/main/java. This default value (though not visible in the POM above) was, in fact, inherited from the Super POM. Even the simplest of POMs knows the default location for application sources, which means you don't have to state this location at all in any of your POMs if you use the default. You can, of course, override this default location, but there is very little reason to do so. The same holds true for the location of the compiled classes which, by default, is target/classes.

What actually compiled the application sources? This is where Maven's second principle of "reusable build logic" comes into play. The standard compiler plugin, along with its default configuration, is the tool used to compile your application sources. The same build logic encapsulated in the compiler plugin will be executed consistently across any number of projects. The next question is, how was Maven able to decide to use the compiler plugin in the first place? You might be guessing that there is some background process that maps a simple command to a particular plugin. In fact, there is a form of mapping, and it is called Maven's default build life cycle.
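To make "convention over configuration" concrete, here is a sketch of what those inherited defaults would look like if you stated them explicitly in the build section of a POM. You would normally never write this, since the Super POM already supplies exactly these values; it is shown only to illustrate that the locations are ordinary, overridable POM settings.

```xml
<build>
  <!-- Inherited defaults, shown only for illustration -->
  <sourceDirectory>src/main/java</sourceDirectory>
  <outputDirectory>target/classes</outputDirectory>
</build>
```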
[INFO] [resources:resources]
...

So, how was Maven able to retrieve the compiler plugin? After all, if you poke around the standard Maven installation, you won't find the compiler plugin, since it is not shipped with the Maven distribution. Instead, Maven downloads plugins as they are needed. Although you now know that the compiler plugin is what Maven uses to compile the application sources, the larger point is this: now you know how Maven finds application sources, and how Maven invokes the compiler plugin.
The first time you execute this (or any other) command, Maven will download all the plugins and related dependencies it needs to fulfill the command. From a clean installation of Maven, this can take quite a while (in the output above, it took almost 4 minutes with a broadband connection).5 The next time you execute the same command, Maven won't download anything new, because Maven already has what it needs, so it will execute the command much quicker.

If you're a keen observer, you'll notice that using the standard conventions makes the POM above very small, and eliminates the requirement for you to explicitly tell Maven where any of your sources are, or where your output should go. By following the standard Maven conventions you can get a lot done with very little effort!

2.4. Compiling Test Sources and Running Unit Tests

Now that you're successfully compiling your application's sources, you probably have unit tests that you want to compile and execute as well (after all, programmers always write and execute their own unit tests *nudge nudge, wink wink*). Again, simply tell Maven you want to test your sources. This implies that all prerequisite phases in the life cycle will be performed to ensure that testing will be successful. Use the following simple command to test:

C:\mvnbook\my-app> mvn test

5 Alternatively, artifacts can be downloaded from a secure, high-performance Maven repository that is internal to your organization. This internal repository can be managed by Exist Global Maestro, an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform.
After executing this command you should see output similar to the following:

[INFO]-------------------------------------------------------------------
[INFO] Building Maven Quick Start Archetype
[INFO]   task-segment: [test]
[INFO]-------------------------------------------------------------------
[INFO] artifact org.apache.maven.plugins:maven-surefire-plugin: checking for updates from central
...
[INFO] [resources:resources]
[INFO] [compiler:compile]
[INFO] Nothing to compile - all classes are up to date
[INFO] [resources:testResources]
[INFO] [compiler:testCompile]
Compiling 1 source file to C:\Test\Maven2\test\my-app\target\test-classes
...
[INFO] [surefire:test]
[INFO] Setting reports dir: C:\Test\Maven2\test\my-app\target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
[surefire] Running com.mycompany.app.AppTest
[surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0 sec

Results :
[surefire] Tests run: 1, Failures: 0, Errors: 0

[INFO]-------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO]-------------------------------------------------------------------
[INFO] Total time: 15 seconds
[INFO] Finished at: Thu Oct 06 08:12:17 MDT 2005
[INFO] Final Memory: 2M/8M
[INFO]-------------------------------------------------------------------

Some things to notice about the output:

• Maven downloads more dependencies this time. These are the dependencies and plugins necessary for executing the tests (recall that it already has the dependencies it needs for compiling and won't download them again).
• Before compiling and executing the tests, Maven compiles the main code (all these classes are up-to-date, since we haven't changed anything since we compiled last).

If you simply want to compile your test sources (but not execute the tests), you can execute the following command:

C:\mvnbook\my-app> mvn test-compile

However, remember that it isn't necessary to run this every time; mvn test will always run the compile and test-compile phases first, as well as all the others defined before it. Now that you can compile the application sources, compile the tests, and execute the tests, you'll want to move on to the next logical step: how to package your application.
2.5. Packaging and Installation to Your Local Repository

Making a JAR file is straightforward, and can be accomplished by executing the following command:

C:\mvnbook\my-app> mvn package

If you take a look at the POM for your project, you will notice the packaging element is set to jar. This is how Maven knows to produce a JAR file from the above command (you'll read more about this later). Take a look in the target directory and you will see the generated JAR file.

Now you'll want to install the artifact (the JAR file) you've generated into your local repository, where it can then be used by other projects as a dependency. The directory <user_home>/.m2/repository is the default location of the repository. To install, execute the following command:

C:\mvnbook\my-app> mvn install

[surefire] Running com.mycompany.app.AppTest
[surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.001 sec

Results :
[surefire] Tests run: 1, Failures: 0, Errors: 0

[INFO] [jar:jar]
[INFO] Building jar: <dir>/my-app/target/my-app-1.0-SNAPSHOT.jar
[INFO] [install:install]
[INFO] Installing c:\mvnbook\my-app\target\my-app-1.0-SNAPSHOT.jar to
       <localrepository>\com\mycompany\app\my-app\1.0-SNAPSHOT\my-app-1.0-SNAPSHOT.jar
[INFO]-------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO]-------------------------------------------------------------------
[INFO] Total time: 5 seconds
[INFO] Finished at: Tue Oct 04 13:20:32 GMT-05:00 2005
[INFO] Final Memory: 3M/8M
[INFO]-------------------------------------------------------------------
Note that the Surefire plugin (which executes the tests) looks for tests contained in files with a particular naming convention. By default, the following tests are included:

• **/*Test.java
• **/Test*.java
• **/*TestCase.java

Conversely, the following tests are excluded:

• **/Abstract*Test.java
• **/Abstract*TestCase.java

You have now completed the process for setting up, building, testing, packaging, and installing a typical Maven project. For projects that are built with Maven, this covers the majority of tasks users perform, and if you've noticed, everything done up to this point has been driven by an 18-line POM. In contrast, to get any more functionality out of an Ant build script, you must keep making error-prone additions. With Maven, there is far more functionality available to you without requiring any additions to the POM, as it currently stands.

So, what other functionality can you leverage, given Maven's re-usable build logic? With even the simplest POM, there are a great number of Maven plugins that work out-of-the-box. This chapter will cover one in particular, as it is one of the highly-prized features in Maven: without any work on your part, this POM has enough information to generate a Web site for your project! Though you will typically want to customize your Maven site, if you're pressed for time and just need to create a basic Web site for your project, simply execute the following command:

C:\mvnbook\my-app> mvn site

There are plenty of other stand-alone goals that can be executed as well, for example:

C:\mvnbook\my-app> mvn clean

This will remove the target directory with the old build data before starting, so it is fresh. Perhaps you'd like to generate an IntelliJ IDEA descriptor for the project:

C:\mvnbook\my-app> mvn idea:idea

This can be run over the top of a previous IDEA project; it will update the settings rather than starting fresh. Or, alternatively, you might like to generate an Eclipse descriptor:

C:\mvnbook\my-app> mvn eclipse:eclipse
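The include/exclude patterns above behave like ordinary file-name globs. The following shell sketch, using hypothetical file names and dropping the **/ directory prefix for simplicity, mimics how the default patterns classify candidate files; it is an illustration of the patterns, not Surefire's actual implementation. The exclusion branch is checked first, just as the exclusions take precedence in Surefire.

```shell
# Mimic Surefire's default include/exclude patterns with shell globs
classify() {
  case "$1" in
    Abstract*Test.java|Abstract*TestCase.java) echo "excluded" ;;
    *Test.java|Test*.java|*TestCase.java)      echo "included" ;;
    *)                                         echo "not a test" ;;
  esac
}

classify AppTest.java          # included
classify TestApp.java          # included
classify AbstractAppTest.java  # excluded
classify Helper.java           # not a test
```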
2.6. Handling Classpath Resources

Another common use case, which requires no changes to the POM shown previously, is the packaging of resources into a JAR file. For this common task, Maven again uses the standard directory layout. This means that by adopting Maven's standard conventions, you can package resources within JARs, simply by placing those resources in a standard directory structure.

In the following example, you need to add the directory src/main/resources. That is where you place any resources you wish to package in the JAR. The rule employed by Maven is that all directories or files placed within the src/main/resources directory are packaged in your JAR with the exact same structure, starting at the base of the JAR.

Figure 2-2: Directory structure after adding the resources directory

You can see in the preceding example that there is a META-INF directory with an application.properties file within that directory. If you unpacked the JAR that Maven created you would see the following:
Figure 2-3: Directory structure of the JAR file created by Maven

The original contents of src/main/resources can be found starting at the base of the JAR, and the application.properties file is there in the META-INF directory. You will also notice some other files, like META-INF/MANIFEST.MF, as well as a pom.xml and a pom.properties file. These come standard with the creation of a JAR in Maven. You can create your own manifest if you choose, but Maven will generate one by default if you don't.

The pom.xml and pom.properties files are packaged up in the JAR so that each artifact produced by Maven is self-describing, and they also allow you to utilize the metadata in your own application, should the need arise. One simple use might be to retrieve the version of your application. Operating on the POM file would require you to use Maven utilities, but the properties can be utilized using the standard Java APIs.

If you would like to try this example, simply create the resources and META-INF directories and create an empty file called application.properties inside. Then run mvn install and examine the jar file in the target directory.
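Because pom.properties is a plain java.util.Properties file, even non-Java tooling can read the metadata. The sketch below fabricates a pom.properties by hand, using the my-app example's values written locally purely for illustration, and extracts the version with standard shell tools:

```shell
# Create a stand-in for the pom.properties Maven places under META-INF
# (written by hand here, for illustration only)
cat > pom.properties <<'EOF'
version=1.0-SNAPSHOT
groupId=com.mycompany.app
artifactId=my-app
EOF

# Pull the application version out of the metadata
version=$(grep '^version=' pom.properties | cut -d'=' -f2)
echo "$version"   # prints 1.0-SNAPSHOT
```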
2.6.1. Handling Test Classpath Resources

To add resources to the classpath for your unit tests, follow the same pattern as you do for adding resources to the JAR, except place the resources in the src/test/resources directory. At this point you have a project directory structure that should look like the following:

Figure 2-4: Directory structure after adding test resources

In a unit test, you could use a simple snippet of code like the following for access to the resource required for testing:

[...]
// Retrieve resource
InputStream is = getClass().getResourceAsStream( "/test.properties" );

// Do something with the resource
[...]
To override the manifest file yourself, you can use the following configuration for the maven-jar-plugin:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifestFile>META-INF/MANIFEST.MF</manifestFile>
    </archive>
  </configuration>
</plugin>

2.6.2. Filtering Classpath Resources

Sometimes a resource file will need to contain a value that can be supplied at build time only. To accomplish this in Maven, you can filter your resource files dynamically by putting a reference to the property that will contain the value into your resource file, using the syntax ${<property name>}. The property can be one of the values defined in your pom.xml, a value defined in the user's settings.xml, a property defined in an external properties file, or a system property.

To have Maven filter resources when copying, simply set filtering to true for the resource directory in your pom.xml:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
    </resources>
  </build>
</project>
You'll notice that the build, resources, and resource elements - which weren't there before - have been added. In addition, the POM has to explicitly state that the resources are located in the src/main/resources directory. All of this information was previously provided as default values, and now must be added to the pom.xml to override the default value for filtering and set it to true.

To reference a property defined in your pom.xml, the property name uses the names of the XML elements that define the value. So ${project.name} refers to the name of the project, ${project.version} refers to the version of the project, and ${project.build.finalName} refers to the final name of the file created when the built project is packaged. In fact, any element in your POM is available when filtering resources.

To continue the example, create an src/main/resources/META-INF/application.properties file, whose values will be supplied when the resource is filtered, as follows:

# application.properties
application.name=${project.name}
application.version=${project.version}

With that in place, you can execute the following command (process-resources is the build life cycle phase where the resources are copied and filtered):

mvn process-resources

The application.properties file under target/classes, which will eventually go into the JAR, looks like this:

# application.properties
application.name=Maven Quick Start Archetype
application.version=1.0-SNAPSHOT

To reference a property defined in an external file, all you need to do is add a reference to this external file in your pom.xml. First, create an external properties file and call it src/main/filters/filter.properties:

# filter.properties
my.filter.value=hello!

Next, add a reference to this new file in the pom.xml file:

<build>
  <filters>
    <filter>src/main/filters/filter.properties</filter>
  </filters>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
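What the resources plugin does during process-resources is, in essence, a token substitution over each filtered file. The one-liner below is only a sketch of that substitution using sed, with the project name hard-coded; Maven's real implementation resolves properties from the POM, settings, filter files, and system properties rather than from a fixed string.

```shell
# Sketch of resource filtering: replace ${project.name} with its value
out=$(echo 'application.name=${project.name}' \
  | sed 's/\${project\.name}/Maven Quick Start Archetype/')
echo "$out"   # prints application.name=Maven Quick Start Archetype
```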
Then, add a reference to this property in the application.properties file as follows:

# application.properties
application.name=${project.name}
application.version=${project.version}
message=${my.filter.value}

The next execution of the mvn process-resources command will put the new property value into application.properties. As an alternative to defining the my.filter.value property in an external file, you could have defined it in the properties section of your pom.xml, and you'd get the same effect (notice you don't need the references to src/main/filters/filter.properties either):

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
    </resources>
  </build>
  <properties>
    <my.filter.value>hello</my.filter.value>
  </properties>
</project>

Filtering resources can also retrieve values from system properties: either the system properties built into Java (like java.version or user.home), or properties defined on the command line using the standard Java -D parameter. To continue the example, change the application.properties file to look like the following:

# application.properties
java.version=${java.version}
command.line.prop=${command.line.prop}
Now, when you execute the following command (note the definition of the command.line.prop property on the command line), the application.properties file will contain the values from the system properties:

mvn process-resources "-Dcommand.line.prop=hello again"

2.6.3. Preventing Filtering of Binary Resources

Sometimes there are classpath resources that you want to include in your JAR, but you do not want them filtered. This is most often the case with binary resources, for example image files. If you had a src/main/resources/images directory that you didn't want to be filtered, then you would create a resource entry to handle the filtering of resources, with an exclusion for the resources you wanted unfiltered. In addition, you would add another resource entry, with filtering disabled, and an inclusion of your images directory. The build element would look like the following:

<project>
  [...]
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
        <excludes>
          <exclude>images/**</exclude>
        </excludes>
      </resource>
      <resource>
        <directory>src/main/resources</directory>
        <includes>
          <include>images/**</include>
        </includes>
      </resource>
    </resources>
  </build>
  [...]
</project>
2.7. Using Maven Plugins

As noted earlier in the chapter, to customize the build for a Maven project, you must include additional Maven plugins, or configure parameters for the plugins already included in the build. For example, you may want to configure the Java compiler to allow JDK 5.0 sources. This is as simple as adding the following to your POM:

<project>
  [...]
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0</version>
        <configuration>
          <source>1.5</source>
          <target>1.5</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
  [...]
</project>

You'll notice that all plugins in Maven 2 look very similar to a dependency, and in some ways they are. If the plugin is not present on your local system, it will be downloaded and installed automatically, in much the same way that a dependency would be handled.

To illustrate the similarity between plugins and dependencies, the groupId and version elements have been shown in the above case, but in most cases these elements are not required. If you do not specify a groupId, then Maven will default to looking for the plugin with the org.apache.maven.plugins or the org.codehaus.mojo groupId label. You can specify an additional groupId to search within your POM, or settings.xml. If you do not specify a version, then Maven will attempt to use the latest released version of the specified plugin, but you may want to specify the version of a plugin to ensure reproducibility. For the most part, plugin developers take care to ensure that new versions of plugins are backward compatible, so you are usually OK with the latest release; but if you find something has changed, you can lock down a specific version.

The configuration element applies the given parameters to every goal from the compiler plugin. In the above case, the compiler plugin is already used as part of the build process, and this just changes the configuration. This is often the most convenient way to use a plugin.
If you want to find out what a plugin's configuration options are, use the mvn help:describe command. If you want to see the options for the maven-compiler-plugin shown previously, use the following command:

mvn help:describe -DgroupId=org.apache.maven.plugins \
    -DartifactId=maven-compiler-plugin -Dfull=true

You can also find out what plugin configuration is available by using the Maven Plugin Reference section at http://maven.apache.org/plugins/ and navigating to the plugin and goal you are using.

2.8. Summary

After reading Chapter 2, you should be up and running with Maven. You've learned a new language and you've taken Maven for a test drive. In eighteen pages, you've seen how you can use Maven to build your project, and you'll know how to use the basic features of Maven: creating a project, compiling a project, testing a project, and packaging a project. You should also have some insight into how Maven handles dependencies and provides an avenue for customization using Maven plugins. If you were looking for just a build tool, you could stop reading this book now, although you might want to refer to the next chapter for more information about customizing your build to fit your project's unique needs.

By learning how to build a Maven project, you have gained access to every single project using Maven. If someone throws a Maven project at you, you will know how to build it. If you are interested in learning how Maven builds upon the concepts described in the Introduction, and in obtaining a deeper working knowledge of the tools introduced in Chapter 2, read on. The next few chapters provide you with the how-to guidelines to customize Maven's behavior and use Maven to manage interdependent software projects.

3. Creating Applications with Maven

- Edward V. Berard
3.1. Introduction

In the second chapter you stepped through the basics of setting up a simple project. Now you will delve in a little deeper, using a real-world example. In this chapter, you are going to learn about some of Maven's best practices and advanced uses by working on a small application to manage frequently asked questions (FAQ). The application that you are going to create is called Proficio, which is Latin for "help". In doing so, you will be guided through the specifics of setting up an application and managing that application's Maven structure.

3.2. Setting Up an Application Directory Structure

In setting up Proficio's directory structure, it is important to keep in mind that Maven emphasizes the practice of standardized and modular builds. The guiding principle in determining how best to decompose your application is called the Separation of Concerns (SoC). SoC refers to the ability to identify, encapsulate, and operate on the pieces of software that are relevant to a particular concept, goal, task, or purpose. Concerns are the primary motivation for organizing and decomposing software into smaller, more manageable and comprehensible parts, each of which addresses one or more specific concerns. The natural outcome of this practice is the generation of discrete and coherent components, which enable code reusability, a key goal for every software development project. So, let's start by discussing the ideal directory structure for Proficio.

A module is a reference to another Maven project, which really means a reference to another POM. This setup is typically referred to as a multi-module build.
you will see that the Proficio sample application is made up of several Maven modules: Proficio API: The application programming interface for Proficio.2. are also kept here. you can see in the modules element all the sub-modules that make up the Proficio application. each of which addresses one or more specific concerns. which is Latin for “help”. everyone on the team needs to clearly understand the convention. • Proficio Stores: The module which itself. The guiding principle in determining how best to decompose your application is called the Separation of Concerns (SoC). Introduction In the second chapter you stepped though the basics of setting up a simple project. • Proficio CLI: The code which provides a command line interface to Proficio. 3. Setting Up an Application Directory Structure In setting up Proficio's directory structure. In doing so. which enable code reusability. like the store. and operate on the pieces of software that are relevant to a particular concept.
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.exist.mvnbook.proficio</groupId>
  <artifactId>proficio</artifactId>
  <packaging>pom</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Proficio</name>
  <url>...</url>
  [...]
</project>

An important feature to note in the POM above is the value of the version element, which you can see is 1.0-SNAPSHOT. For an application that has multiple modules, it is very common to release all the sub-modules together, so it makes sense that all the modules have a common application version. It is recommended that you specify the application version in the top-level POM and use that version across all the modules that make up your application.

You should also take note of the packaging element, which in this case has a value of pom. For POMs that contain modules, the packaging must be set to the value pom. Currently there is some variance on the Maven Web site when referring to directory structures that contain more than one Maven project. In Maven 1.x these were commonly referred to as multi-project builds, and some of this vestigial terminology carried over to the Maven 2.x documentation, but the Maven team is trying to consistently refer to these setups as multi-module builds now.

If you were to look at Proficio's directory structure you would see the following:

Figure 3-1: Proficio directory structure

Each of the modules declares the top-level Proficio POM as its parent:

<parent>
  <groupId>com.exist.mvnbook.proficio</groupId>
  <artifactId>proficio</artifactId>
  <version>1.0-SNAPSHOT</version>
</parent>

Looking at the module names is how Maven steps into the right directory to process the respective POMs located there. The interesting thing here is that we have another project with a packaging type of pom, which is the proficio-stores module, which itself houses the store sub-modules.
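The [...] elisions in the top-level POM above hide, among other things, the modules element described earlier. A plausible sketch of that section follows; the exact module directory names are an assumption, inferred from the module descriptions:

```xml
<!-- Hypothetical reconstruction: directory names are assumed to
     match the module names used throughout this chapter -->
<modules>
  <module>proficio-model</module>
  <module>proficio-api</module>
  <module>proficio-core</module>
  <module>proficio-stores</module>
  <module>proficio-cli</module>
</modules>
```

Each module element names a directory, relative to this POM, that contains the child project's own pom.xml.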
Whenever Maven sees a POM with a packaging of type pom, Maven knows to look for a set of related sub-modules and then to process each of those modules. You can nest sets of projects like this to any level, organizing your projects in groups according to concern, just as has been done with Proficio's multiple storage mechanisms, which are all placed in one directory.

3.3. Using Project Inheritance

One of the most powerful features in Maven is project inheritance. Using project inheritance allows you to do things like state your organizational information, state your deployment information, or state your common dependencies, all in a single place. Let's examine a case where it makes sense to put a resource in the top-level POM, using our top-level POM for the sample Proficio application.

If you look at the top-level POM for Proficio, you will see that in the dependencies section there is a declaration for JUnit version 3.8.1. In this case the assumption being made is that JUnit will be used for testing in all our child projects. The dependency is stated as follows:

<project>
  [...]
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  [...]
</project>

Being the observant user, you have probably taken a peek at all the POMs in each of the projects that make up the Proficio project and noticed the following at the top of each of the POMs:

[...]
<parent>
  <groupId>com.exist.mvnbook.proficio</groupId>
  <artifactId>proficio</artifactId>
  <version>1.0-SNAPSHOT</version>
</parent>
[...]

This is the snippet in each of the POMs that lets you draw on the resources stated in the specified top-level POM, and from which you can inherit down to the level required, enabling you to add resources where it makes sense in the hierarchy of your projects. So, by stating the dependency in the top-level POM once, you never have to declare this dependency again, in any of your child POMs.
What specifically happens for each child POM is that each one inherits the dependencies section of the top-level POM. So, if you take a look at the POM for the proficio-core module, you will see the following (note: there is no visible dependency declaration for JUnit):

<project>
  <parent>
    <groupId>com.exist.mvnbook.proficio</groupId>
    <artifactId>proficio</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
  <artifactId>proficio-core</artifactId>
  <packaging>jar</packaging>
  <name>Maven Proficio Core</name>
  <dependencies>
    <dependency>
      <groupId>com.exist.mvnbook.proficio</groupId>
      <artifactId>proficio-api</artifactId>
    </dependency>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-container-default</artifactId>
    </dependency>
  </dependencies>
</project>

In order for you to see what happens during the inheritance process, you will need to use the handy mvn help:effective-pom command. This command will show you the final result for a target POM. After you move into the proficio-core module directory and run the command, take a look at the resulting POM: you will see the JUnit version 3.8.1 dependency:

[...]
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>3.8.1</version>
  <scope>test</scope>
</dependency>
[...]

You will have noticed that the POM displayed by mvn help:effective-pom is bigger than you expected. But remember from Chapter 2 that the Super POM sits at the top of the inheritance hierarchy: the proficio-core project inherits from the top-level Proficio project, which in turn inherits from the Super POM. Looking at the effective POM includes everything, and is useful to view when trying to figure out what is going on when you are having problems.

3.4. Managing Dependencies

When you are building applications, you typically have a number of dependencies to manage, and that number only increases over time. When you write applications which consist of multiple, individual projects, it is likely that some of those projects will share common dependencies. When this happens it is critical that the same version of a given dependency is used for all your projects, so that the final application works correctly. You don't want, for example, to end up with multiple versions of a dependency on the classpath when your application executes, as the results can be far from desirable. You want to make sure that the versions of all your dependencies, across all of your projects, are in alignment, so that your testing accurately reflects what you will deploy as your final result. Scattered versions make dependency management difficult, to say the least.

Maven's strategy for dealing with this problem is to combine the power of project inheritance with specific dependency management elements in the POM. In order to manage, or align, versions of dependencies across several projects, you use the dependency management section in the top-level POM of an application. To illustrate how this mechanism works, let's look at the dependency management section of the Proficio top-level POM:

<project>
  [...]
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-api</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-model</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-core</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-store-memory</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-store-xstream</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>org.codehaus.plexus</groupId>
        <artifactId>plexus-container-default</artifactId>
        <version>1.0-alpha-9</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
  [...]
</project>

Note that the ${project.version} specification is the version specified by the top-level POM's version element, which is the application version. As you can see within the dependency management section, we have several Proficio dependencies and a dependency for the Plexus IoC container. If you take a look at the POM for the proficio-api module, you will see a single dependency declaration, and that it does not specify a version:

<project>
  [...]
  <dependencies>
    <dependency>
      <groupId>com.exist.mvnbook.proficio</groupId>
      <artifactId>proficio-model</artifactId>
    </dependency>
  </dependencies>
</project>

The version for this dependency is derived from the dependencyManagement element, which is inherited from the Proficio top-level POM. The dependencyManagement declares a stated preference for the 1.0-SNAPSHOT version (stated as ${project.version}) of proficio-model, so that version is injected into the dependency above, to make it complete.

There is an important distinction to be made between the dependencies element contained within the dependencyManagement element and the top-level dependencies element in the POM. The dependencies element contained within the dependencyManagement element is used only to state the preference for a version, and by itself does not affect a project's dependency graph, whereas the top-level dependencies element does affect the dependency graph. The dependencies stated in the dependencyManagement only come into play when a dependency is declared without a version.
3.5. Using Snapshots

While you are developing an application with multiple modules, it is usually the case that each of the modules is in flux. Your APIs might be undergoing some change, or your implementations are undergoing change and are being fleshed out, or you may be doing some refactoring. Your build system needs to be able to deal easily with this real-time flux, and this is where Maven's concept of a snapshot comes into play. A snapshot in Maven is an artifact that has been prepared using the most recent sources available. If you look at the top-level POM for Proficio you will see a snapshot version specified:

<project>
  [...]
  <version>1.0-SNAPSHOT</version>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-api</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-model</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>org.codehaus.plexus</groupId>
        <artifactId>plexus-container-default</artifactId>
        <version>1.0-alpha-9</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
  [...]
</project>

Specifying a snapshot version for a dependency means that Maven will look for new versions of that dependency without you having to manually specify a new version. Snapshot dependencies are assumed to be changing, so Maven will attempt to update them. By default Maven will look for snapshots on a daily basis, but you can use the -U command line option to force the search for updates. When you specify a non-snapshot version of a dependency, Maven will download that dependency once and never attempt to retrieve it again. Controlling how snapshots work will be explained in detail in Chapter 7.
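Chapter 7 covers snapshot handling in detail, but as a taste, the re-check frequency can be tuned per repository in its snapshots section. A sketch, in which the repository id and URL are placeholders:

```xml
<repositories>
  <repository>
    <id>internal-snapshots</id>  <!-- placeholder id -->
    <url>http://repo.example.com/snapshots</url>  <!-- placeholder URL -->
    <snapshots>
      <!-- always | daily (the default) | interval:N (N in minutes) | never -->
      <updatePolicy>always</updatePolicy>
    </snapshots>
  </repository>
</repositories>
```

With always, every build re-checks the remote repository for a newer snapshot; never makes snapshot resolution behave like a fixed release.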
3.6. Resolving Dependency Conflicts and Using Version Ranges

With the introduction of transitive dependencies in Maven 2.0, it became possible to simplify a POM by including only the dependencies you need directly, and allowing Maven to calculate the full dependency graph. However, as the graph grows, it is inevitable that two or more artifacts will require different versions of a particular dependency, and Maven must choose which version to provide. Maven selects the version that requires the least number of dependencies to be traversed; that is, the version selected is the one declared "nearest" to the top of the tree. A dependency in the POM being built will be used over anything else.

However, this has limitations:

• The version chosen may not have all the features required by the other dependencies.
• If multiple versions are selected at the same depth, then the result is undefined.

While further dependency management features are scheduled for the next release of Maven at the time of writing, there are ways to manually resolve these conflicts as the end user of a dependency, and more importantly, ways to avoid it as the author of a reusable library. To manually resolve conflicts, you can remove the incorrect version from the tree, or you can override both with the correct version.

Removing the incorrect version requires identifying the source of the incorrect version by running Maven with the -X flag (for more information on how to do this, see section 6.9 in Chapter 6). For example, if you run mvn -X test on the proficio-core module, the output will contain something similar to:

proficio-core:1.0-SNAPSHOT
  junit:3.8.1 (selected for test)
  plexus-container-default:1.0-alpha-9 (selected for compile)
    plexus-utils:1.0.4 (selected for compile)
    classworlds:1.1-alpha-2 (selected for compile)
    junit:3.8.1 (not setting scope to compile, local scope test wins)
  proficio-api:1.0-SNAPSHOT (selected for compile)
    proficio-model:1.0-SNAPSHOT (selected for compile)
  plexus-utils:1.1 (selected for compile)

It should be noted that running mvn -X test depends on other parts of the build having been executed beforehand, so it is useful to run mvn install at the top level of the project (in the proficio directory) to ensure that the needed components are installed into the local repository.

In this example, plexus-utils occurs twice, and Proficio requires that version 1.1 be used. Once the path to the incorrect version has been identified, you can exclude the dependency from the graph by adding an exclusion to the dependency that introduced it. In this case, modify the plexus-container-default dependency in the proficio-core/pom.xml file as follows:

<dependency>
  <groupId>org.codehaus.plexus</groupId>
  <artifactId>plexus-container-default</artifactId>
  <version>1.0-alpha-9</version>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-utils</artifactId>
    </exclusion>
  </exclusions>
</dependency>

This ensures that Maven ignores the 1.0.4 version of plexus-utils in the dependency graph, so that the 1.1 version is used instead.

The alternate way to ensure that a particular version of a dependency is used, is to include it directly in the POM, as follows:

<dependencies>
  <dependency>
    <groupId>org.codehaus.plexus</groupId>
    <artifactId>plexus-utils</artifactId>
    <version>1.1</version>
    <scope>runtime</scope>
  </dependency>
</dependencies>

You'll notice that the runtime scope is used here. This is because, in this situation, the dependency is used only for packaging, not for compilation. If the dependency were required for compilation, for stability it would always be declared in the current POM as a dependency, regardless of whether another dependency introduces it. However, this approach is not recommended unless you are producing an artifact that is bundling its dependencies and is not used as a dependency itself (for example, a WAR file). The reason for this is that it distorts the true dependency graph, which will accumulate if this project is reused as a dependency itself.

Neither of these solutions is ideal, but it is possible to improve the quality of your own dependencies to reduce the risk of these issues occurring with your own build artifacts. This is extremely important if you are publishing a build, for a library or framework, that will be used widely by others. For instance, you may require a feature that was introduced in plexus-utils version 1.1. When a version is declared as 1.1, this indicates only that the preferred version of the dependency is 1.1, but that other versions may be acceptable. Maven has no knowledge regarding which versions will work, so it assumes that all versions are valid and uses the "nearest dependency" technique described previously in the case of a conflict with another dependency. To ensure that a version known to work is used, use version ranges instead. To accomplish this, the dependency should be specified as follows:

<dependency>
  <groupId>org.codehaus.plexus</groupId>
  <artifactId>plexus-utils</artifactId>
  <version>[1.1,)</version>
</dependency>
What this means is that, while the nearest dependency technique will still be used in the case of a conflict, the version that is used must fit the range given. If the nearest version does not match, then the next nearest will be tested, and so on; finally, if none of them match, or if two version ranges in a dependency graph do not intersect at all, the build will fail. In the example above, the version you are left with is [1.1,), which means any version greater than or equal to 1.1. The notation used is set notation, and table 3-2 shows some of the values that can be used.

Table 3-2: Examples of Version Ranges

Range            Meaning
(,1.0]           Less than or equal to 1.0
[1.2,1.3]        Between 1.2 and 1.3 (inclusive)
[1.0,2.0)        Greater than or equal to 1.0, but less than 2.0
[1.5,)           Greater than or equal to 1.5
(,1.1),(1.1,)    Any version, except 1.1

To understand how version ranges work, it is necessary to understand how versions are compared. In figure 3-3, you can see how a version is partitioned by Maven.

Figure 3-3: Version parsing

As you can see, a version is broken down into five parts: the major, minor and bug fix releases, then the qualifier, and finally a build number. In a regular version, you can provide only the qualifier, or only the build number. It is intended that the qualifier indicates a version prior to release (for example, alpha-1, beta-1, rc1), while the build number is an increment after release to indicate patched builds. The snapshot (as shown above) is a special case where the qualifier and build number are both allowed. For a qualifier to be a snapshot, the qualifier must be the text "snapshot" or a time stamp. The time stamp in figure 3-3 was generated on 11-02-2006 at 13:11:41.

By being more specific through the use of version ranges, it is possible to make the dependency mechanism more reliable for your builds, and to reduce the number of exception cases that will be required. However, you need to avoid being overly specific as well.
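As a concrete illustration of table 3-2, the plexus-utils dependency from the previous section could be held to the 1.x line with a bounded range; the 2.0 upper bound here is an illustrative choice, not something stated in the text:

```xml
<dependency>
  <groupId>org.codehaus.plexus</groupId>
  <artifactId>plexus-utils</artifactId>
  <!-- at least 1.1, but never 2.0 or later (upper bound is illustrative) -->
  <version>[1.1,2.0)</version>
</dependency>
```

The half-open bound excludes 2.0 itself, so a hypothetical incompatible 2.0 release would never be selected, while any 1.x release from 1.1 onward remains eligible.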
With regard to ordering, the elements are considered in sequence to determine which is newer: first by major version; second, if the major versions were equal, by minor version; third by bug fix version; fourth by qualifier (using string comparison); and finally, by build number. A version that contains a qualifier is older than a version without a qualifier; for example, version 1.2-beta is older than version 1.2. A version that also contains a build number is considered newer than a version without a build number. In some cases, however, the versions will not match this syntax; in those cases, the two versions are compared entirely as strings.

Based on Maven's version parsing rules you may also define your own version practices, although the use of version parsing in Maven as defined here is considered the best practice. Please see the figure below for more examples of the ordering of version parsing schemes.

Figure 3-4: Version Parsing

All of these elements are considered part of the version, and as such the ranges do not differentiate between them. If you use the range [1.1,), and the versions 1.1 and 1.2-beta-1 exist in a referenced repository, then 1.2-beta-1 will be selected, since 1.2-beta-1 is newer than 1.1. Often this is not desired, so to avoid such a situation you must structure your releases accordingly: either avoid the naming convention that would result in that behavior, or use a separate repository containing only the artifacts and versions you strictly desire. Whether you use snapshots until the final release, or release betas as milestones along the way, you should deploy them to a snapshot repository, as is discussed in Chapter 7 of this book. This will ensure that the beta versions are used in a range only if the project has declared the snapshot (or development) repository explicitly.

A final note relates to how version updates are determined when a range is in use. By default, the repository is checked once a day for updates to the versions of artifacts in use. However, this can be configured per-repository to be on a more regular interval, or forced from the command line using the -U option for a particular Maven execution. This mechanism is identical to that of the snapshots that you learned about in section 3.5.
If this is to be configured for a particular repository, the updatePolicy value (which, for the interval setting, is in minutes) is changed for releases. For example:

<repository>
  [...]
  <releases>
    <updatePolicy>interval:60</updatePolicy>
  </releases>
</repository>

3.7. Utilizing the Build Life Cycle

In Chapter 2, Maven was described as a framework that coordinates the execution of its plugins in a well-defined way or process, which is actually Maven's default build life cycle. Maven's default build life cycle will suffice for a great number of projects without any augmentation, but, of course, projects will have different requirements, and it is sometimes necessary to augment the default Maven life cycle to satisfy these requirements.

Proficio has a requirement to generate Java sources from a model. Maven accommodates this requirement by allowing the declaration of a plugin, which binds itself to a standard phase in Maven's default life cycle, the generate-sources phase. Plugins in Maven are created with a specific task in mind, which means the plugin is bound to a specific phase in the default life cycle. In Proficio, the Modello plugin is used to generate the Java sources for Proficio's data model. If you look at the POM for the proficio-model module, you will see the plugins element with a configuration for the Modello plugin:

<project>
  <parent>
    <groupId>com.exist.mvnbook.proficio</groupId>
    <artifactId>proficio</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
  <artifactId>proficio-model</artifactId>
  <packaging>jar</packaging>
  <name>Proficio Model</name>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.modello</groupId>
        <artifactId>modello-maven-plugin</artifactId>
        <version>1.0-alpha-5</version>
        <executions>
          <execution>
            <goals>
              <goal>java</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <version>1.0.0</version>
          <packageWithVersion>false</packageWithVersion>
          <model>src/main/mdo/proficio.mdo</model>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

This is very similar to the declaration for the maven-compiler-plugin that you saw in Chapter 2, but here you see an additional executions element. A plugin in Maven may have several goals, so you need to specify which goal in the plugin you wish to run, by specifying the goal in the executions element.
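As a side note, an execution can also name its phase explicitly, rather than relying on the goal's own default binding. A sketch of the execution above with the phase spelled out; the phase element is standard Maven, and generate-sources matches the binding the text describes:

```xml
<executions>
  <execution>
    <!-- explicit phase binding; without this element, the goal's own
         default phase (generate-sources in this case) is used -->
    <phase>generate-sources</phase>
    <goals>
      <goal>java</goal>
    </goals>
  </execution>
</executions>
```

Naming the phase explicitly is how you would attach a goal to a phase other than its default, for example running a code generator earlier or later in the life cycle.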
3.8. Using Profiles

Profiles are Maven's way of letting you create environmental variations in the build life cycle to accommodate things like building on different platforms, building with different JVMs, testing with different databases, or referencing the local file system. Typically, you try to encapsulate as much as possible in the POM to ensure that builds are portable, but sometimes you simply have to take into consideration variation across systems, and this is why profiles were introduced in Maven. Used properly, you can still preserve build portability with profiles.

Profiles modify the POM at build time, and are meant to be used in complementary sets to give equivalent-but-different parameters for a set of target environments (providing, for example, the path of the application server root in the development, testing, and production environments). Profiles are specified using a subset of the elements available in the POM itself (plus one extra section), and can be activated in several ways.

You can define profiles in one of the following three places:

• The Maven settings file (typically <user_home>/.m2/settings.xml)
• A file in the same directory as the POM, called profiles.xml
• The POM itself

In terms of which profile takes precedence, the local-most profile wins: POM-specified profiles override those in profiles.xml, and profiles.xml overrides those in settings.xml. This is a pattern that is repeated throughout Maven: local always wins, because it is assumed to be a modification of a more general case.

The POM-based profiles are preferred, since these profiles are portable (they will be distributed to the repository on deploy, and are available for subsequent builds originating from the repository or as transitive dependencies). Because of the portability implications, any files which are not distributed to the repository are NOT allowed to change the fundamental build in any way. As such, the profiles specified in settings.xml and profiles.xml are only allowed to define:

• repositories
• pluginRepositories
• properties

Everything else must be specified in a POM profile, or not at all. Using profiles.xml allows you to augment a single project's build without altering the POM, while settings.xml profiles have the potential to affect all builds, so they're sort of a "global" location for profiles.

Used improperly, profiles can easily lead to differing build results from different members of your team. For example, suppose you had a profile in settings.xml that was able to inject a new dependency, and the project you were working on actually did depend on that settings-injected dependency in order to run. Then, once that project was deployed to the repository, it would never fully resolve its dependencies transitively when asked to do so. That's because it left one of its dependencies sitting in a profile inside your settings.xml file.
Note: repositories, pluginRepositories, and properties can also be specified in profiles within the POM. You can define the following elements in a POM profile:

• repositories
• pluginRepositories
• dependencies
• plugins
• properties (not actually available in the main POM, but used behind the scenes)
• modules
• reporting
• dependencyManagement
• distributionManagement
• A subset of the build element

Profiles can be activated in several ways:

• Profiles can be specified explicitly using the -P command line option, which takes a comma-delimited list of profile-ids. For example:

  mvn -Pprofile1,profile2 install

When this option is specified, no profiles other than those specified in the option argument will be activated.

• Profiles can be activated in the Maven settings, via the activeProfiles section. This section takes a list of activeProfile elements, each containing a profile-id. For example:

<settings>
  [...]
  <profiles>
    <profile>
      <id>profile1</id>
      [...]
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>profile1</activeProfile>
  </activeProfiles>
  [...]
</settings>

Note that you must have defined the profiles in your settings.xml file as well; the profiles specified outside the POM are only allowed the small subset of the options available within the POM, described above.
• Profiles can be triggered automatically based on the detected state of the build environment. These activators are specified via an activation section in the profile itself. Currently, this detection is limited to prefix-matching of the JDK version, the presence of a system property, or the value of a system property. Here are some examples:

<profile>
  <id>profile1</id>
  [...]
  <activation>
    <jdk>1.4</jdk>
  </activation>
</profile>

This activator will trigger the profile when the JDK's version starts with "1.4" (e.g. "1.4.0_08", "1.4.2_07", "1.4").

<profile>
  <id>profile1</id>
  [...]
  <activation>
    <property>
      <name>debug</name>
    </property>
  </activation>
</profile>

This will activate the profile when the system property "debug" is specified with any value.

<profile>
  <id>profile1</id>
  [...]
  <activation>
    <property>
      <name>environment</name>
      <value>test</value>
    </property>
  </activation>
</profile>

This last example will activate the profile when the system property "environment" is specified with the value "test".

Now that you are familiar with profiles, you are going to use them to create tailored assemblies: an assembly of Proficio which uses the memory-based store, and an assembly of Proficio which uses the XStream-based store. These assemblies will be created in the proficio-cli module, and the profiles used to control the creation of our tailored assemblies are defined there as well.
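One more activation style worth noting as an aside, and an assumption about the broader profile mechanism rather than something covered in the examples above: a profile can be flagged as active by default, in which case it applies unless another profile is explicitly activated. A sketch:

```xml
<profile>
  <id>profile1</id>
  [...]
  <activation>
    <!-- applies whenever no other profile is activated -->
    <activeByDefault>true</activeByDefault>
  </activation>
</profile>
```

This is a convenient way to give a build sensible default behavior while still allowing -P or property-based activation to swap in an alternative environment.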
Creating Applications with Maven

If you take a look at the POM for the proficio-cli module you will see the following profile definitions:

<project>
  [...]
  <!-- Profiles for the two assemblies to create for deployment -->
  <profiles>
    <!-- Profile creating the assembly with the memory-based store -->
    <profile>
      <id>memory</id>
      <activation>
        <property>
          <name>memory</name>
        </property>
      </activation>
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
              <descriptors>
                <descriptor>src/main/assembly/memory.xml</descriptor>
              </descriptors>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
    <!-- The xstream profile is defined identically, pointing at the
         xstream assembly descriptor -->
    [...]
  </profiles>
  [...]
</project>

You can see there are two profiles: one with an id of memory and another with an id of xstream. In each of these profiles you are configuring the assembly plugin to point at the assembly descriptor that will create a tailored assembly. You will also notice that the profiles are activated using a system property.

If you wanted to create the assembly using the memory-based store, you would execute the following:

mvn -Dmemory clean assembly:assembly

If you wanted to create the assembly using the XStream-based store, you would execute the following:

mvn -Dxstream clean assembly:assembly

Both of the assemblies are created in the target directory, and if you use the jar tvf command on the resulting assemblies, you will see that the memory-based assembly contains the proficio-store-memory-1.0-SNAPSHOT.jar file only, while the XStream-based assembly contains the proficio-store-xstream-1.0-SNAPSHOT.jar file only. This is a very simple example, but it illustrates how you can customize the execution of the life cycle using profiles to suit any requirement you might have.

3.9. Deploying your Application

Now that you have an application assembly, you'll want to share it with as many people as possible! So, it is now time to deploy your application assembly. Currently Maven supports several methods of deployment, including simple file-based deployment, SSH2 deployment, SFTP deployment, FTP deployment, and external SSH deployment. In order to deploy, you need to correctly configure your distributionManagement element in your POM, which would typically be your top-level POM, so that all child POMs can inherit this information.

It should be noted that the examples below depend on other parts of the build having been executed beforehand, so it might be useful to run mvn install at the top level of the project to ensure that needed components are installed into the local repository. Here are some examples of how to configure your POM via the various deployment mechanisms.

3.9.1. Deploying to the File System

To deploy to the file system you would use something like the following:

<project>
  [...]
  <distributionManagement>
    <repository>
      <id>proficio-repository</id>
      <name>Proficio Repository</name>
      <url>file://${basedir}/target/deploy</url>
    </repository>
  </distributionManagement>
  [...]
</project>

Better Builds with Maven
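Deploying via SSH2 differs only in the repository URL scheme. A hedged sketch follows; the host name and path here are hypothetical placeholders, not taken from the Proficio project:

```xml
<distributionManagement>
  <repository>
    <id>proficio-repository</id>
    <name>Proficio Repository</name>
    <!-- hypothetical host and path; replace with your own server -->
    <url>scp://sshserver.yourcompany.com/deploy</url>
  </repository>
</distributionManagement>
```

With this in place, running mvn deploy copies the built artifacts over SSH instead of to the local file system.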
Even though the standard reports are useful, often you will want to customize the project's reports that are created and displayed in your Web site. The reports created and displayed are controlled in the build/reports element in the POM. You may want to be more selective about the reports that you generate, and to do so, you need to list each report that you want to include as part of the site generation. You do so by configuring the plugin as follows:

<project>
  [...]
  <reporting>
    [...]
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        [...]
      </plugin>
    </plugins>
    [...]
  </reporting>
  [...]
</project>

Now that you have a good grasp of what formats are supported, how the site descriptor works, and how to configure reports, it's time to generate your project's web site. You can do so by executing the following command:

mvn site
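As an illustration of selecting individual reports, here is a minimal sketch using the standard project info reports plugin; the two report names chosen (dependencies, license) are assumptions for illustration, not necessarily the ones used by Proficio:

```xml
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-project-info-reports-plugin</artifactId>
      <reportSets>
        <reportSet>
          <reports>
            <!-- only these two reports will be generated for the site -->
            <report>dependencies</report>
            <report>license</report>
          </reports>
        </reportSet>
      </reportSets>
    </plugin>
  </plugins>
</reporting>
```

Any report not listed in a reportSet is simply omitted from the generated site.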
After executing this command, you will end up with a directory structure (generated inside the target directory) with the generated content that looks like this:

Figure 3-6: The target directory
As you can see in the directory listing above, you will have noticed the src/site/resources directory, which contains an images directory; it is located within the images directory of the generated site. Keeping this simple rule in mind, you can add any resources you wish to your site.
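A page in src/site/xdoc could then reference such a resource with a plain relative path. A hedged sketch, with hypothetical file and section names:

```xml
<document>
  <body>
    <section name="About Proficio">
      <!-- resolves to the images directory copied from src/site/resources -->
      <img src="images/architecture.png" alt="Architecture diagram"/>
    </section>
  </body>
</document>
```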
3.11. Summary

In this chapter you have learned how to setup a directory structure for a typical application and learned the basics of managing the application's development with Maven. You should now have a grasp of how project inheritance works, how to manage your application's dependencies, how to make small modifications to Maven's build life cycle, how to deploy your application, and how to create a simple web site for your application. You are now prepared to move on and learn about more advanced application directory structures like the J2EE example you will see in Chapter 4, and more advanced uses of Maven, like creating your own plugins, augmenting your site to view quality metrics, and using Maven in a collaborative environment.

4. Building J2EE Applications

EJB, WAR, EAR [...]

[...] - Helen Keller
4.1. Introduction

J2EE (or Java EE as it is now called) applications are everywhere. Whether you are using the full J2EE stack with EJBs or only using Web applications with frameworks such as Spring or Hibernate, it's likely that you are using J2EE in some of your projects. As a consequence the Maven community has developed plugins to cover every aspect of building J2EE applications.

This chapter demonstrates how to use Maven on a real application to show how to address the complex issues related to automated builds. It will take you through the journey of creating the build for a full-fledged J2EE application called DayTrader. Through this example, you'll learn how to build EARs, EJBs, Web services, and Web applications. You'll learn not only how to create a J2EE build but also how to create a productive development environment (especially for Web application development) and how to deploy J2EE modules into your container. As importantly, you'll learn how to automate configuration and deployment of J2EE application servers.

4.2. Introducing the DayTrader Application

DayTrader is a real world application developed by IBM and then donated to the Apache Geronimo project. Its goal is to serve as both a functional example of a full-stack J2EE 1.4 application and as a test bed for running performance tests. The functional goal of the DayTrader application is to buy and sell stock, and its architecture is shown in Figure 4-1.

Figure 4-1: Architecture of the DayTrader application
There are 4 layers in the architecture:

• The Client layer offers 3 ways to access the application: using a browser, using Web services, and using the Quote Streamer. The Quote Streamer is a Swing GUI application that monitors quote information about stocks in real-time as the price changes.
• The Web layer offers a view of the application for both the Web client and the Web services client. It uses servlets and JSPs.
• The EJB layer is where the business logic is. The Trade Session is a stateless session bean that offers the business services such as login, logout, get a stock quote, buy or sell a stock, cancel an order, and so on. It uses container-managed persistence (CMP) entity beans for storing the business objects (Order, Holding, Account, Quote and AccountProfile), and Message-Driven Beans (MDB) to send purchase orders and get quote changes.
• The Data layer consists of a database used for storing the business objects and the status of each purchase, and a JMS Server for interacting with the outside world.

A typical “buy stock” use case consists of the following steps that were shown in Figure 4-1:

1. The user gives a buy order (by using the Web client or the Web services client). This request is handled by the Trade Session bean.
2. A new “open” order is saved in the database using the CMP Entity Beans.
3. The order is then queued for processing in the JMS Message Server.
4. The creation of the “open” order is confirmed for the user.
5. Asynchronously, the order that was placed on the queue is processed and the purchase completed. Once this happens the Trade Broker MDB is notified.
6. The Trade Broker calls the Trade Session bean which in turn calls the CMP entity beans to mark the order as “completed”. The user is notified of the completed order on a subsequent request.

4.3. Organizing the DayTrader Directory Structure

The first step to organizing the directory structure is deciding what build modules are required. The easy answer is to follow Maven's artifact guideline: one module = one main artifact. Thus you simply need to figure out what artifacts you need. Looking again at Figure 4-1, you can see that the following modules will be needed:

• A module producing an EJB which will contain all of the server-side EJBs.
• A module producing a WAR which will contain the Web application.
• A module producing a JAR that will contain the Quote Streamer client application.
• A module producing another JAR that will contain the Web services client application.

In addition you may need another module producing an EAR which will contain the EJB and WAR produced from the other modules. This EAR will be used to easily deploy the server code into a J2EE container.
The next step is to give these modules names and map them to a directory structure. As a general rule, it is better to find functional names for modules. However, when the modules map cleanly onto technologies, as they do here, it is usually easier to choose names that represent a technology instead. For the DayTrader application the following names were chosen:

• ejb - the module containing the EJBs
• web - the module containing the Web application
• streamer - the module containing the client side streamer application
• wsappclient - the module containing the Web services client application
• ear - the module producing the EAR which packages the EJBs and the Web application

Note that this is the minimal number of modules required. It is possible to come up with more. For example, you may want to split the WAR module into 2 WAR modules: one for the browser client and one for the Web services client, if you needed to physically locate the WARs in separate servlet containers to distribute the load. It is important to split the modules when it is appropriate for flexibility. On the other hand, best practices suggest to do this only when the need arises: if there isn't a strong need you may find that managing several modules can be more cumbersome than useful.

There are two possible layouts that you can use to organize these modules: a flat directory structure and a nested one. Let's discuss the pros and cons of each layout. Figure 4-2 shows these modules in a flat directory structure. It is flat because you're locating all the modules in the same directory.

Figure 4-2: Module names and a simple flat directory structure

The top-level daytrader/pom.xml file contains the POM elements that are shared between all of the modules. This file also contains the list of modules that Maven will build when executed from this directory (see Chapter 3, Creating Applications with Maven, for more details):

[...]
<modules>
  <module>ejb</module>
  <module>web</module>
  <module>streamer</module>
  <module>wsappclient</module>
  <module>ear</module>
</modules>
[...]
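Putting the pieces together, the top-level POM might look roughly like the following sketch; only the coordinates and module list shown here are taken from this chapter, everything else is elided:

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.apache.geronimo.samples.daytrader</groupId>
  <artifactId>daytrader</artifactId>
  <version>1.0</version>
  <!-- a POM that aggregates modules must use pom packaging -->
  <packaging>pom</packaging>
  <modules>
    <module>ejb</module>
    <module>web</module>
    <module>streamer</module>
    <module>wsappclient</module>
    <module>ear</module>
  </modules>
</project>
```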
This is the easiest and most flexible structure to use, and is the structure used in this chapter. The other alternative is to use a nested directory structure. If you have many modules in the same directory you may consider finding commonalities between them and create subdirectories to partition them. For example, you might separate the client side modules from the server side modules in the way shown in Figure 4-3.

Figure 4-3: Modules split according to a server-side vs client-side directory organization

As before, each directory level containing several modules contains a pom.xml file containing the shared POM elements and the list of modules underneath. Note that in this case the modules are still separate, not nested within each other. However, you could also nest them: for example, the ejb and web modules could be nested in the ear module, as shown in Figure 4-4. This makes sense as the EAR artifact is composed of the EJB and WAR artifacts produced by the ejb and web modules. Having this nested structure clearly shows how nested modules are linked to their parent.

Figure 4-4: Nested directory structure for the EAR, EJB and Web modules
However, even though the nested directory structure seems to work quite well here, it has several drawbacks:

• Eclipse users will have issues with this structure as Eclipse doesn't yet support nested projects. You'd need to consider the three modules as one project, but then you'll be restricted in several ways. For example, the three modules wouldn't be able to have different natures (Web application project, EJB project, EAR project).
• It doesn't allow flexible packaging. In addition, the nested strategy doesn't fit very well with the Assembler role as described in the J2EE specification. The Assembler has a pool of modules and its role is to package those modules for deployment. Depending on the target deployment environment the Assembler may package things differently: one EAR for one environment or two EARs for another environment where a different set of machines are used.
• There are times when there is not a clear parent for a module. For example, the ejb or web modules might depend on a utility JAR and this JAR may be also required for some other EAR. Or the ejb module might be producing a client EJB JAR which is not used by the EAR, but by some client-side application.

These examples show that in such cases using a nested directory structure should be avoided. A flat layout is more neutral with regard to assembly and should thus be preferred.

Now that you have decided on the directory structure for the DayTrader application, you're going to create the Maven build for each module, starting with the wsappclient module after we take care of one more matter of business. The modules we will work with from here on will each be referring to the parent pom.xml of the project, so before we move on to developing these sub-projects we need to install the parent POM into our local repository so it can be further built on:

[INFO] --------------------------------------------------------------------
[INFO] Building DayTrader :: Performance Benchmark Sample
[INFO]    task-segment: [install]
[INFO] --------------------------------------------------------------------
[INFO] [site:attach-descriptor]
[INFO] [install:install]
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\pom.xml to
C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\daytrader\1.0\daytrader-1.0.pom

We are now ready to continue on with developing the sub-projects!
4.4. Building a Web Services Client Project

We start our building process off by visiting the Web services portion of the build, since it is a dependency of later build stages. Web Services are a part of many J2EE applications, and Maven's ability to integrate toolkits can make them easier to add to the build process. For example, the Maven plugin called Axis Tools plugin takes WSDL files and generates the Java files needed to interact with the Web services it defines, and this will be used from DayTrader's wsappclient module. As the name suggests, the plugin uses the Axis framework (http://ws.apache.org/axis/; for a usage guide see http://ws.apache.org/axis/java/userguide.html).

Figure 4-5 shows the directory structure of the wsappclient module. As you may notice, the WSDL file is located in src/main/wsdl, which is the default used by the Axis Tools plugin:

Figure 4-5: Directory structure of the wsappclient module
In order to generate the Java source files from the TradeServices.wsdl file, the wsappclient/pom.xml file must declare and configure the Axis Tools plugin:

<project>
  [...]
  <build>
    <plugins>
      [...]
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>axistools-maven-plugin</artifactId>
        [...]
      </plugin>
    </plugins>
  </build>
  [...]
</project>

The location of the WSDL source can be customized using the sourceDirectory property. For example:

[...]
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>axistools-maven-plugin</artifactId>
  <configuration>
    <sourceDirectory>
      src/main/resources/META-INF/wsdl
    </sourceDirectory>
  </configuration>
[...]

If you were to run the build now, it would fail. This is because after the sources are generated, they need to be compiled, and for that you will require a dependency on Axis and Axis JAXRPC in your pom.xml. While you might expect the Axis Tools plugin to define this for you, it is required for two reasons: it allows you to control what version of the dependency to use regardless of what the Axis Tools plugin was built against, and more importantly, it allows users of your project to automatically get the dependency transitively. In addition, any tools that report on the POM will be able to recognize the dependency.
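For completeness, here is a hedged sketch of what the full plugin declaration could look like, with the wsdl2java goal bound into the build; the goal name comes from the build output later in this section, but the exact configuration of DayTrader's POM may differ:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>axistools-maven-plugin</artifactId>
  <configuration>
    <sourceDirectory>src/main/wsdl</sourceDirectory>
  </configuration>
  <executions>
    <execution>
      <goals>
        <!-- generates Java sources from the WSDL before compilation -->
        <goal>wsdl2java</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```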
As before, you need to add the J2EE specifications JAR to compile the project's Java sources. Thus the following three dependencies have been added to your POM:

<dependencies>
  <dependency>
    <groupId>axis</groupId>
    <artifactId>axis</artifactId>
    <version>1.2</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>axis</groupId>
    <artifactId>axis-jaxrpc</artifactId>
    <version>1.2</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.geronimo.specs</groupId>
    <artifactId>geronimo-j2ee_1.4_spec</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

The Axis JAR depends on the Mail and Activation Sun JARs, which cannot be redistributed by Maven. Thus, they are not present on ibiblio(6) and you'll need to install them manually. Run mvn install and Maven will fail and print the installation instructions.

(6) Artifacts can also be obtained from http://www.ibiblio.org/maven2/ and [...].com/maven2/.
After manually installing Mail and Activation, running the build with mvn install leads to:

C:\dev\m2book\code\j2ee\daytrader\wsappclient>mvn install
[...]
[INFO] [axistools:wsdl2java {execution: default}]
[INFO] about to add compile source root
[INFO] processing wsdl: C:\dev\m2book\code\j2ee\daytrader\wsappclient\
       src\main\wsdl\TradeServices.wsdl
[INFO] [resources:resources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [compiler:compile]
Compiling 13 source files to
C:\dev\m2book\code\j2ee\daytrader\wsappclient\target\classes
[INFO] [resources:testResources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [compiler:testCompile]
[INFO] No sources to compile
[INFO] [surefire:test]
[INFO] No tests to run.
[INFO] [jar:jar]
[INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\wsappclient\
       target\daytrader-wsappclient-1.0.jar
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\wsappclient\
       target\daytrader-wsappclient-1.0.jar to C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\
       daytrader-wsappclient\1.0\daytrader-wsappclient-1.0.jar
[...]

The generated sources are compiled in addition to the sources from the standard source directory. The Axis Tools plugin boasts several other goals, including java2wsdl, which is useful for generating the server-side WSDL file from handcrafted Java classes. The generated WSDL file could then be injected into the Web Services client module to generate client-side Java files. But that's another story. The Axis Tools reference documentation can be found at http://mojo.codehaus.org/axistools-maven-plugin/.

Now that we have discussed and built the Web services portion, let's visit EJBs next.
Building J2EE Applications

• Runtime classpath resources in src/main/resources. More specifically, the standard ejb-jar.xml deployment descriptor is in src/main/resources/META-INF/ejb-jar.xml. Any container-specific deployment descriptor should also be placed in this directory.
• Unit tests in src/test/java and classpath resources for the unit tests in src/test/resources. Unit tests are tests that execute in isolation from the container. Tests that require the container to run are called integration tests and are covered at the end of this chapter.
Now, take a look at the content of this project's pom.xml file:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>daytrader-ejb</artifactId>
  <name>Apache Geronimo DayTrader EJB Module</name>
  <packaging>ejb</packaging>
  <description>DayTrader EJBs</description>
  <dependencies>
    <dependency>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader-wsappclient</artifactId>
      <version>1.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-j2ee_1.4_spec</artifactId>
      <version>1.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.0.3</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-ejb-plugin</artifactId>
        <configuration>
          <generateClient>true</generateClient>
          <clientExcludes>
            <clientExclude>**/ejb/*Bean.class</clientExclude>
          </clientExcludes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

As you can see, you're extending a parent POM using the parent element. This is because the DayTrader build is a multi-module build and you are gathering common POM elements in a parent daytrader/pom.xml file. If you look through all the dependencies you should see that we are ready to continue with building and installing this portion of the build.
Let's look at the interesting parts of this pom.xml:

• You tell Maven that this is an EJB project. This is done by specifying:

<packaging>ejb</packaging>

• As you're compiling J2EE code you need to have the J2EE specifications JAR in the project's build classpath. This is achieved by specifying a dependency element on the J2EE JAR. You should note that you're using a provided scope instead of the default compile scope. The reason is that this dependency will already be present in the environment (being the J2EE application server) where your EJB will execute; you make this clear to Maven by using the provided scope. In addition, this prevents the EAR module from including the J2EE JAR when it is packaged. Even though this dependency is provided at runtime, it still needs to be listed in the POM so that the code can be compiled. You could instead specify a dependency on Sun's J2EE JAR. However, this JAR is not redistributable and as such cannot be found on ibiblio. Fortunately, the Geronimo project has made the J2EE JAR available under an Apache license and this JAR can be found on ibiblio.

• Lastly, the pom.xml contains a configuration to tell the Maven EJB plugin to generate a Client EJB JAR file when mvn install is called. The Client will be used in later examples when building the web module. By default the EJB plugin does not generate the client JAR, so you must explicitly tell it to do so:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-ejb-plugin</artifactId>
  <configuration>
    <generateClient>true</generateClient>
    <clientExcludes>
      <clientExclude>**/ejb/*Bean.class</clientExclude>
    </clientExcludes>
  </configuration>
</plugin>

The EJB plugin has a default set of files to exclude from the client EJB JAR: **/*Bean.class, **/*CMP.class, **/*Session.class and **/package.html. However, in this example, you need to override the defaults using a clientExclude element, because it happens that there are some required non-EJB files matching the default **/*Bean.class pattern which need to be present in the generated client EJB JAR. Thus you're specifying a pattern that only excludes from the generated client EJB JAR all EJB implementation classes located in the ejb package (**/ejb/*Bean.class). Note that it's also possible to specify a list of files to include using clientInclude elements.
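If an include list fits your layout better than an exclude list, the configuration might look like the following sketch; clientInclude is the element named above, but the patterns here are hypothetical:

```xml
<configuration>
  <generateClient>true</generateClient>
  <clientIncludes>
    <!-- hypothetical patterns: only ship interfaces and shared utilities -->
    <clientInclude>**/interfaces/*.class</clientInclude>
    <clientInclude>**/util/*.class</clientInclude>
  </clientIncludes>
</configuration>
```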
You're now ready to execute the build. Relax and type mvn install:

C:\dev\m2book\code\j2ee\daytrader\ejb>mvn install
[INFO] Scanning for projects...
[INFO] ----------------------------------------------------------
[INFO] Building DayTrader :: EJBs
[INFO]    task-segment: [install]
[INFO] ----------------------------------------------------------
[INFO] [resources:resources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [compiler:compile]
Compiling 49 source files to C:\dev\m2book\code\j2ee\daytrader\ejb\target\classes
[INFO] [resources:testResources]
[INFO] Using default encoding to copy filtered resources.
[...]
[surefire] org.apache.geronimo.samples.daytrader.FinancialUtilsTest
[surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.02 sec

Results :
[surefire] Tests run: 1, Failures: 0, Errors: 0

[INFO] [ejb:ejb]
[INFO] Building ejb daytrader-ejb-1.0
[INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\
       target\daytrader-ejb-1.0.jar
[INFO] Building ejb client daytrader-ejb-1.0-client
[INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\
       target\daytrader-ejb-1.0-client.jar
[INFO] [install:install]
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\
       target\daytrader-ejb-1.0.jar to C:\[...]\.m2\repository\org\apache\geronimo\samples\
       daytrader\daytrader-ejb\1.0\daytrader-ejb-1.0.jar
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\
       target\daytrader-ejb-1.0-client.jar to C:\[...]\.m2\repository\org\apache\geronimo\samples\
       daytrader\daytrader-ejb\1.0\daytrader-ejb-1.0-client.jar
Maven has created both the EJB JAR and the client EJB JAR and installed them in your local repository. The EJB plugin has several other configuration elements that you can use to suit your exact needs; please refer to the EJB plugin documentation on http://maven.apache.org.

Early adopters of EJB3 may be interested to know how Maven supports EJB3. At the time of writing, the EJB3 specification is still not final. There is a working prototype of an EJB3 Maven plugin; however, in the future it will be added to the main EJB plugin after the specification is finalized. Stay tuned!
4.6. Building an EJB Module With Xdoclet

Rather than maintaining the EJB interfaces and descriptors by hand, you can run the XDoclet processor to generate those files for you. When writing EJBs this means you simply have to write your EJB implementation class, and XDoclet will generate the Home interface, the Remote and Local interfaces, the container-specific deployment descriptors, and the ejb-jar.xml descriptor. Note that if you're an EJB3 user, you can safely skip this section – you won't need it!

Here's an extract of the TradeBean session EJB using Xdoclet:

/**
 * Trade Session EJB manages all Trading services
 *
 * @ejb.bean
 *      display-name="TradeEJB"
 *      name="TradeEJB"
 *      view-type="remote"
 *      impl-class-name=
 *          "org.apache.geronimo.samples.daytrader.ejb.TradeBean"
 * @ejb.home
 *      generate="remote"
 *      remote-class=
 *          "org.apache.geronimo.samples.daytrader.ejb.TradeHome"
 * @ejb.interface
 *      generate="remote"
 *      remote-class=
 *          "org.apache.geronimo.samples.daytrader.ejb.Trade"
 * [...]
 */
public class TradeBean implements SessionBean {
    [...]
    /**
     * Queue the Order identified by orderID to be processed in a
     * One Phase commit
     * [...]
     *
     * @ejb.interface-method
     *      view-type="remote"
     * @ejb.transaction
     *      type="RequiresNew"
     * [...]
     */
    public void queueOrderOnePhase(Integer orderID)
        throws javax.jms.JMSException, Exception
    [...]
To demonstrate XDoclet, keep only the *Bean.java and *MDB.java classes, and remove all of the Home, Local and Remote interfaces, as they'll also get generated. As you can see in Figure 4-7, the project's directory structure is otherwise the same as in Figure 4-6, but you don't need the ejb-jar.xml file anymore, as it's going to be generated by Xdoclet.

Now you need to tell Maven to run XDoclet on your project. This is achieved by using the Maven XDoclet plugin and binding it to the generate-sources life cycle phase. Since XDoclet generates source files, this has to be run before the compilation phase occurs. Here's the portion of the pom.xml that configures the plugin:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>xdoclet-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>xdoclet</goal>
      </goals>
      <configuration>
        <tasks>
          <ejbdoclet verbose="true" force="true" ejbSpec="2.1"
              destDir=
              "${project.build.directory}/generated-sources/xdoclet">
            <fileset dir="${project.build.sourceDirectory}">
              <include name="**/*Bean.java"></include>
              <include name="**/*MDB.java"></include>
            </fileset>
            <homeinterface/>
            <remoteinterface/>
            <localhomeinterface/>
            <localinterface/>
            <deploymentdescriptor
                destDir="${project.build.outputDirectory}/META-INF"/>
          </ejbdoclet>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>
codehaus.build.sourceforge. 2006 16:53:50 xdoclet.XDocletMain start INFO: Running <localinterface/> Generating Local interface for 'org. In addition.XDocletMain start INFO: Running <homeinterface/> Generating Home interface for 'org.directory}/generated-sources/xdoclet (you can configure this using the generatedSourcesDirectory configuration element).apache.daytrader.daytrader.ejb.Better Builds with Maven The XDoclet plugin is configured within an execution element.html). There’s also a Maven 2 plugin for XDoclet2 at. The plugin generates sources by default in ${project.net/xdoclet/ant/xdoclet/modules/ejb/EjbDocletTask.ejb.ejb.org/Maven2+Plugin. 2006 16:53:51 xdoclet.samples. However.XDocletMain start INFO: Running <localhomeinterface/> Generating Local Home interface for 'org. This is required by Maven to bind the xdoclet goal to a phase. Finally. […] 10 janv.TradeBean'.daytrader. In practice you can use any XDoclet task (or more generally any Ant task) within the tasks element. […] 10 janv.AccountBean'.XDocletMain start INFO: Running <deploymentdescriptor/> Generating EJB deployment descriptor (ejb-jar. […] INFO: Running <remoteinterface/> Generating Remote interface for 'org.TradeBean'. it should be noted that XDoclet2 is a work in progress and is not yet fully mature.samples.AccountBean'. 2006 16:53:50 xdoclet.apache.geronimo. the XDoclet plugin will also trigger Maven to download the XDoclet libraries from Maven’s remote repository and add them to the execution classpath.geronimo.. […] [INFO] [ejb:ejb] [INFO] Building ejb daytrader-ejb-1. 2006 16:53:51 xdoclet. but here the need is to use the ejbdoclet task to instrument the EJB class files.ejb.apache.apache. nor does it boast all the plugins that XDoclet1 has.samples. in the tasks element you use the ejbdoclet Ant task provided by the XDoclet project (for reference documentation see. 
It's based on a new architecture, but the tag syntax is backward-compatible in most cases.
4.7. Deploying EJBs

Now that you know how to build an EJB project, you will learn how to deploy it. Later, in the Testing J2EE Applications section of this chapter, you will also learn how to test it automatically. In order to do that, you will need to have Maven start the container automatically. To do so you're going to use the Maven plugin for Cargo. Cargo is a framework for manipulating containers. It offers generic APIs (Java, Ant, Maven 1, Maven 2, IntelliJ IDEA, Netbeans, etc.) for performing various actions on containers such as starting, stopping, configuring them and deploying modules to them.

In this example, the JBoss container will be used. Let's discover how you can automatically start a container and deploy your EJBs into it. The ejb/pom.xml file has been edited adding the following Cargo plugin configuration:

<build>
  <plugins>
    [...]
    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <configuration>
        <container>
          <containerId>jboss4x</containerId>
          <zipUrlInstaller>
            <url>http://[...].dl.sourceforge.net/sourceforge/jboss/jboss-4.0.2</url>
            <installDir>${installDir}</installDir>
          </zipUrlInstaller>
        </container>
      </configuration>
    </plugin>
  </plugins>
</build>

In the container element you tell the Cargo plugin that you want to use JBoss 4.x (containerId element) and that you want Cargo to download the JBoss 4.0.2 distribution from the specified URL and install it in ${installDir}. The location where Cargo should install JBoss is a user-dependent choice, and this is why the ${installDir} property was introduced. In order to build this project you need to create a Profile where you define the ${installDir} property's value.

If you want to debug Cargo's execution, you can use the log element to specify a file where Cargo logs will go, and you can also use the output element to specify a file where the container's output will be dumped. For example:

<container>
  <containerId>jboss4x</containerId>
  <output>${project.build.directory}/jboss4x.log</output>
  <log>${project.build.directory}/cargo.log</log>
  [...]
</container>

See http://cargo.codehaus.org/Debugging for full details.
As explained in Chapter 3, you can define a profile in the POM, in a profiles.xml file, or in a settings.xml file. Of course, as the content of the Profile is user-dependent you wouldn't want to define it in the POM. Nor should the content be shared with other Maven projects at large, so a settings.xml file isn't right either. Thus the best place is to create a profiles.xml file. In this case, the profiles.xml file defines a profile named vmassol, activated by default and in which the ${installDir} property points to c:/apps/cargo-installs.

It's also possible to tell Cargo that you already have JBoss installed locally. In that case replace the zipUrlInstaller element with a home element. For example:

<home>c:/apps/jboss-4.0.2</home>

That's all you need to have a working build and to deploy the EJB JAR into JBoss. Of course, the EJB JAR should first be created, so run mvn package to generate it. Then run mvn cargo:start:

[INFO] Scanning for projects...
[INFO] Searching repository for plugin with prefix: 'cargo'.
[INFO] ----------------------------------------------------------------------
[INFO] Building DayTrader :: EJBs
[INFO]    task-segment: [cargo:start]
[INFO] ----------------------------------------------------------------------
[INFO] [cargo:start]
[INFO] [talledLocalContainer] Parsed JBoss version = [4.0.2]
[INFO] [talledLocalContainer] JBoss 4.0.2 starting...
[INFO] [talledLocalContainer] JBoss 4.0.2 started on port [8080]
[INFO] Press Ctrl-C to stop the container...

That's it! JBoss is running, and the EJB JAR has been deployed. The Cargo plugin does all the work: it provides a default JBoss configuration (using port 8080 for example), it detects that the Maven project is producing an EJB from the packaging element, and it automatically deploys it when the container is started.
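Based on that description, the profiles.xml file could look roughly like the following sketch; the profile name and installation path come from the text, while the surrounding element names follow the Maven 2 profiles.xml format:

```xml
<profilesXml>
  <profiles>
    <profile>
      <id>vmassol</id>
      <activation>
        <!-- make the profile active without passing any flag -->
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <installDir>c:/apps/cargo-installs</installDir>
      </properties>
    </profile>
  </profiles>
</profilesXml>
```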
As you have told Cargo to download and install JBoss, the first time you execute cargo:start it will take some time, especially if you are on a slow connection. Subsequent calls will be fast, as Cargo will not download JBoss again. If the container was already started and you wanted to just deploy the EJB, you would run the cargo:deploy goal. Finally, to stop the container, call mvn cargo:stop.

Cargo has many other configuration options, such as the possibility of using an existing container installation, modifying various container parameters, deploying on a remote machine, and more. Check the documentation at cargo.codehaus.org/Maven2+plugin.

4.8. Building a Web Application Project

Now, let's focus on building the DayTrader web module. The layout is the same as for a JAR module (see the first two chapters of this book), except that there is an additional src/main/webapp directory for locating Web application resources such as HTML pages, JSPs, WEB-INF configuration files, etc. (see Figure 4-8).

Figure 4-8: Directory structure for the DayTrader web module showing some Web application resources
As usual, everything is specified in the pom.xml file:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>daytrader-web</artifactId>
  <name>DayTrader :: Web Application</name>
  <packaging>war</packaging>
  <description>DayTrader Web</description>
  <dependencies>
    <dependency>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader-ejb</artifactId>
      <version>1.0</version>
      <type>ejb-client</type>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-j2ee_1.4_spec</artifactId>
      <version>1.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>

You start by telling Maven that it's building a project generating a WAR:

<packaging>war</packaging>

Next, you specify the required dependencies. The reason you are building this web module after the ejb module is because the web module's servlets call the EJBs. Therefore, a dependency has been added on the ejb module in web/pom.xml:

<dependency>
  <groupId>org.apache.geronimo.samples.daytrader</groupId>
  <artifactId>daytrader-ejb</artifactId>
  <version>1.0</version>
  <type>ejb-client</type>
</dependency>

Note that you're specifying a type of ejb-client and not ejb. This is because the servlets are a client of the EJBs; therefore, the servlets only need the EJB client JAR in their classpath to be able to call the EJBs. This is why you told the EJB plugin to generate a client JAR earlier on in ejb/pom.xml. Depending on the main EJB JAR would also work, but it's not necessary and would increase the size of the WAR file. It's always cleaner to depend on the minimum set of required classes, for example to prevent coupling.
The final dependency listed is the J2EE JAR, as your web module uses servlets and calls EJBs. As seen previously when building the EJB, the Geronimo J2EE specifications JAR is used with a provided scope. As you know, Maven 2 supports transitive dependencies: when it generates your WAR, it recursively adds your module's dependencies, unless their scope is test or provided. This is why we defined the J2EE JAR using a provided scope in the web module's pom.xml. Otherwise, it would have surfaced in the WEB-INF/lib directory of the generated WAR.

If you add a dependency on a WAR, then the WAR you generate will be overlaid with the content of that dependent WAR: only files not in the existing Web application will be added, and files such as web.xml won't be merged. An alternative is to use the uberwar goal from the Cargo Maven Plugin (see codehaus.org/Merging+WAR+files), allowing the aggregation of multiple WAR files.

The configuration is very simple because the defaults from the WAR plugin are being used. Again, it's a good practice to use the default conventions as much as possible, as it reduces the size of the pom.xml file and reduces maintenance.

Running mvn install generates the WAR and installs it in your local repository:

C:\dev\m2book\code\j2ee\daytrader\web>mvn install
[...]
[INFO] [war:war]
[INFO] Exploding webapp...
[INFO] Copy webapp resources to C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Assembling webapp daytrader-web in C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Generating war C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] Building war: C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] [install:install]
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war to
C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\daytrader-web\1.0\daytrader-web-1.0.war
Table 4-2 lists some other parameters of the WAR plugin that you may wish to configure.

Table 4-2: WAR plugin configuration properties

warSourceDirectory (default: ${basedir}/src/main/webapp): Location of Web application resources to include in the WAR.
webXml (default: the web.xml file found in ${warSourceDirectory}/WEB-INF/web.xml): Specify where to find the web.xml file.
warSourceIncludes/warSourceExcludes (default: all files are included): Specify the files to include/exclude from the generated WAR.
warName (default: ${project.build.finalName}): Name of the generated WAR.

For the full list, see the reference documentation for the WAR plugin at maven.apache.org/plugins/maven-war-plugin/.

4.9. Improving Web Development Productivity

If you're doing Web development, you know how painful it is to have to package your code in a WAR and redeploy it every time you want to try out a change you made to your HTML, JSP or servlet code. Fortunately, Maven can help. There are two plugins that can alleviate this problem: the Cargo plugin and the Jetty plugin. You'll discover how to use the Jetty plugin in this section, as you've already seen how to use the Cargo plugin in a previous section.

The Jetty plugin creates a custom Jetty configuration that is wired to your source tree: the src/main/webapp tree, the web.xml file, the project dependencies, and the compiled classes and classpath resources in target/classes. The plugin monitors the source tree for changes, including the pom.xml file. If any change is detected, the plugin reloads the Web application in Jetty, providing an extremely fast turnaround time for development.

A typical usage for this plugin is to develop the source code in your IDE and have the IDE configured to compile classes in target/classes (this is the default when the Maven IDE plugins are used to set up your IDE project). Thus any recompilation in your IDE will trigger a redeploy of your Web application in Jetty. The plugin is configured by default to look for resource files in src/main/webapp, and it adds the compiled classes in target/classes to its execution classpath.
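As an illustration of Table 4-2, overriding a couple of those defaults might look like this (a sketch only; the element values here are hypothetical and the defaults are usually fine):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <!-- illustrative overrides of the Table 4-2 defaults -->
    <warName>daytrader</warName>
    <webXml>src/main/config/custom-web.xml</webXml>
  </configuration>
</plugin>
```

With this in place the generated artifact would be named daytrader.war instead of the ${project.build.finalName} default, and the deployment descriptor would be read from the custom location instead of src/main/webapp/WEB-INF/web.xml.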
Let's try the Jetty plugin on the DayTrader web module. The following has been added to the web/pom.xml file:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>maven-jetty-plugin</artifactId>
      <configuration>
        <scanIntervalSeconds>10</scanIntervalSeconds>
      </configuration>
      <dependencies>
        <dependency>
          <groupId>org.apache.geronimo.specs</groupId>
          <artifactId>geronimo-j2ee_1.4_spec</artifactId>
          <version>1.0</version>
          <scope>provided</scope>
        </dependency>
      </dependencies>
    </plugin>
[...]

The scanIntervalSeconds configuration property tells the plugin to monitor for changes every 10 seconds. The reason for the dependency on the J2EE specification JAR is that Jetty is a servlet engine and doesn't provide the EJB specification JAR. Since the Web application earlier declared that the specification must be provided through the provided scope, adding this dependency to the plugin adds it to the classpath for Jetty.
Running the plugin starts Jetty:

Logging to org.slf4j.impl.SimpleLogger@1242b11 via org.mortbay.log.Slf4jLog
[INFO] Context path = /daytrader-web
[INFO] Webapp directory = C:\dev\m2book\code\j2ee\daytrader\web\src\main\webapp
[INFO] Setting up classpath ...
[INFO] Finished setting up classpath
[INFO] Started configuring web.xml, resource base=
C:\dev\m2book\code\j2ee\daytrader\web\src\main\webapp
[INFO] Finished configuring web.xml file located at:
C:\dev\m2book\code\j2ee\daytrader\web\src\main\webapp\WEB-INF\web.xml
[INFO] No connectors configured, using defaults:
org.mortbay.jetty.nio.SelectChannelConnector listening on 8080 with maxIdleTime 30000
681 [main] INFO org.mortbay.log - Started SelectChannelConnector @ 0.0.0.0:8080
[INFO] Starting scanner at interval of 10 seconds.

As you can see, Maven pauses as Jetty is now started, and it may be stopped at any time by simply typing Ctrl-C. Your Web application has been deployed and the plugin is waiting, listening for changes. Open a browser at the Web application's register.jsp URL, as shown in Figure 4-9, to see the Web application running.
Figure 4-9: DayTrader JSP registration page served by the Jetty plugin

Note that the application will fail if you open a page that calls EJBs. The reason is that we have only deployed the Web application here; the EJBs and all the back-end code have not been deployed, so those examples won't work. In order to make them work you'd need to have your EJB container started with the DayTrader code deployed in it. In practice, it's easier to deploy a full EAR, as you'll see below.

Now let's try to modify the content of this JSP by changing the opening account balance. Edit web/src/main/webapp/register.jsp.
That's nifty, isn't it? What happened is that the Jetty plugin realized the page was changed and it redeployed the Web application automatically: the Jetty container automatically recompiled the JSP when the page was refreshed.

There are various configuration parameters available for the Jetty plugin, such as the ability to define Connectors and Security realms. For example, if you wanted to run Jetty on port 9090 with a user realm defined in etc/realm.properties, you would use:

<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <configuration>
    [...]
    <connectors>
      <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
        <port>9090</port>
        <maxIdleTime>60000</maxIdleTime>
      </connector>
    </connectors>
    <userRealms>
      <userRealm implementation="org.mortbay.jetty.security.HashUserRealm">
        <name>Test Realm</name>
        <config>etc/realm.properties</config>
      </userRealm>
    </userRealms>
  </configuration>
</plugin>

You can also configure the context under which your Web application is deployed by using the contextPath configuration element. By default, the plugin uses the module's artifactId from the POM.

Now imagine that you have an awfully complex Web application generation process: you have custom plugins that do all sorts of transformations to Web application resource files, possibly generating some files, and so on. The strategy above would not work, as the Jetty plugin would not know about the custom actions that need to be executed to generate a valid Web application. Fortunately, there's a solution. It's possible to pass in a jetty.xml configuration file using the jettyConfig configuration element. In that case, anything in the jetty.xml file will be applied first. For a reference of all configuration options, see the Jetty plugin documentation at mortbay.org/maven-plugin/index.html.
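Using the jettyConfig element mentioned above could look like this (a sketch; the file path is hypothetical, and the jetty.xml file itself is a standard Jetty server configuration file):

```xml
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <configuration>
    <!-- everything in this file is applied before the plugin's own configuration -->
    <jettyConfig>${basedir}/src/main/jetty/jetty.xml</jettyConfig>
  </configuration>
</plugin>
```

This lets a hand-maintained Jetty configuration (connectors, realms, handlers) coexist with the plugin's in-place deployment of your source tree.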
The WAR plugin has an exploded goal which produces an expanded Web application in the target directory. The Jetty plugin offers two goals that build on the package phase:

• jetty:run-war: The plugin first runs the package phase, which generates the WAR file, and then deploys that WAR file.
• jetty:run-exploded: The plugin runs the package phase as with the jetty:run-war goal, then deploys the unpacked Web application located in target/ (whereas the jetty:run-war goal deploys the WAR file). Calling this goal ensures that the generated Web application is the correct one.

The plugin then watches the following files: WEB-INF/lib, WEB-INF/classes, WEB-INF/web.xml and pom.xml; any change to those files (a recompiled class, for example) or a modification of the pom.xml file results in a hot redeployment.

To demonstrate, execute the mvn jetty:run-exploded goal on the web module:

C:\dev\m2book\code\j2ee\daytrader\web>mvn jetty:run-exploded
[...]
[INFO] [war:war]
[INFO] Exploding webapp...
[INFO] Copy webapp resources to C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Assembling webapp daytrader-web in C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Generating war C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] Building war: C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] [jetty:run-exploded]
[INFO] Configuring Jetty for project: DayTrader :: Web Application
[INFO] Starting Jetty Server ...
0 [main] INFO org.mortbay.log - Logging to org.slf4j.impl.SimpleLogger@78bc3b via org.mortbay.log.Slf4jLog
[INFO] Context path = /daytrader-web
2214 [main] INFO org.mortbay.log - Started SelectChannelConnector @ 0.0.0.0:8080
[INFO] Scanning ...
[INFO] Scan complete at Wed Feb 15 11:59:00 CET 2006
[INFO] Starting scanner at interval of 10 seconds.
.xml file has the following added for Cargo configuration: <plugin> <groupId>org. Deploying Web Applications You have already seen how to deploy a Web application for in-place Web development in the previous section. Restart completed.. Restarting. You're now ready for productive web development. Reconfiguring webapp .Better Builds with Maven As you can see the WAR is first assembled in the target directory and the Jetty plugin is now waiting for changes to happen.org/Containers).servlet.10.....port>8280</cargo. Listeners completed... so now the focus will be on deploying a packaged WAR to your target container. This is very useful when you're developing an application and you want to verify it works on several containers. The web module's pom. If you open another shell and run mvn package you'll see the following in the first shell's console: [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] [INFO] Scan complete at Wed Feb 15 12:02:31 CET 2006 Calling scanner listeners . This example uses the Cargo Maven plugin to deploy to any container supported by Cargo (see. Stopping webapp .codehaus. Scanning .port> </properties> </configuration> </configuration> </plugin> 114 . No more excuses! 4.codehaus.servlet..
As you can see, this is a configuration similar to the one you used to deploy your EJBs in the Deploying EJBs section of this chapter. There are two differences though:

• Two new properties have been introduced (containerId and url) in order to make this build snippet generic. Those properties will be defined in a Profile. However, the containerId and url properties should be shared with all users of the build, whereas, as seen in the Deploying EJBs section, the installDir property is user-dependent and should be defined in a profiles.xml file.
• A cargo.servlet.port element has been introduced to show how to configure the containers to start on port 8280 instead of the default 8080 port. This is very useful if you have containers already running on your machine and you don't want to interfere with them.

Thus the following profiles have been added to the web/pom.xml file:

[...]
</build>
<profiles>
  <profile>
    <id>jboss4x</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <containerId>jboss4x</containerId>
      <url>...dl.sourceforge.net/sourceforge/jboss/jboss-4.0.2.zip</url>
    </properties>
  </profile>
  <profile>
    <id>tomcat5x</id>
    <properties>
      <containerId>tomcat5x</containerId>
      <url>...apache.org/dist/jakarta/tomcat-5/v5.0.30/bin/jakarta-tomcat-5.0.30.zip</url>
    </properties>
  </profile>
</profiles>
</project>

You have defined two profiles: one for JBoss and one for Tomcat, and the JBoss profile is defined as active by default (using the activation element). You could add as many profiles as there are containers on which you want to execute your Web application.
Executing mvn install cargo:start generates the WAR, starts the JBoss container and deploys the WAR into it:

C:\dev\m2book\code\j2ee\daytrader\web>mvn install cargo:start
[...]
[INFO] [cargo:start]
[INFO] [talledLocalContainer] Parsed JBoss version = [4.0.2]
[INFO] [talledLocalContainer] JBoss 4.0.2 starting...
[INFO] [CopyingLocalDeployer] Deploying
[C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war] to
[C:\[...]\Temp\cargo\50866\webapps]...
[INFO] [talledLocalContainer] JBoss 4.0.2 started on port [8280]
[INFO] Press Ctrl-C to stop the container...

Activating the tomcat5x profile instead runs the same build on Tomcat:

[INFO] [talledLocalContainer] Tomcat 5.0.30 starting...
[INFO] [talledLocalContainer] Tomcat 5.0.30 started on port [8280]
[INFO] Press Ctrl-C to stop the container...

This is useful for development and to test that your code deploys and works. However, once this is verified, you'll want a solution to deploy your WAR onto an integration platform. One solution is to have your container running on that integration platform and to perform a remote deployment of your WAR to it. To deploy the DayTrader's WAR to a running JBoss server on machine remoteserver and executing on port 80, you would need the following Cargo plugin configuration in web/pom.xml:

<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>jboss4x</containerId>
      <type>remote</type>
    </container>
    <configuration>
      <type>runtime</type>
      <properties>
        <cargo.hostname>${remoteServer}</cargo.hostname>
        <cargo.servlet.port>${remotePort}</cargo.servlet.port>
        <cargo.remote.username>${remoteUsername}</cargo.remote.username>
        <cargo.remote.password>${remotePassword}</cargo.remote.password>
      </properties>
    </configuration>
  </configuration>
</plugin>
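The remote* properties referenced above would then be defined per user, for example in a settings.xml profile (a sketch; the profile id and all values are placeholders):

```xml
<!-- ~/.m2/settings.xml (illustrative values only) -->
<settings>
  <profiles>
    <profile>
      <id>remote-deploy</id>
      <properties>
        <!-- coordinates and credentials of the integration container -->
        <remoteServer>remoteserver</remoteServer>
        <remotePort>80</remotePort>
        <remoteUsername>deployer</remoteUsername>
        <remotePassword>secret</remotePassword>
      </properties>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>remote-deploy</activeProfile>
  </activeProfiles>
</settings>
```

Keeping credentials in settings.xml rather than the POM prevents them from being checked into the project's source control.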
When compared to the configuration for a local deployment above, the changes are:

• A remote container type and a runtime configuration type, to tell Cargo that the container is remote and not under Cargo's management.
• Several configuration properties (especially a user name and password allowed to deploy on the remote JBoss container) to specify all the details required to perform the remote deployment.

All the properties introduced need to be declared inside the POM for those shared with other users, and in the profiles.xml file (or the settings.xml file) for those that are user-dependent. Note that there was no need to specify a deployment URL, as it is computed automatically by Cargo. Check the Cargo reference documentation for all details on deployments at cargo.codehaus.org/Deploying+to+a+running+container.

4.11. Building an EAR Project

You have now built all the individual modules. It's time to package the server module artifacts (EJB and WAR) into an EAR for convenient deployment. The ear module's directory structure can't be any simpler: it solely consists of a pom.xml file (see Figure 4-11).

Figure 4-11: Directory structure of the ear module

As usual, the magic happens in the pom.xml file. The POM has defined that this is an EAR project by using the packaging element:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>daytrader-ear</artifactId>
  <name>DayTrader :: Enterprise Application</name>
  <packaging>ear</packaging>
  <description>DayTrader EAR</description>
Next, the pom.xml file defines all of the dependencies that need to be included in the generated EAR:

<dependencies>
  <dependency>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader-streamer</artifactId>
    <version>1.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader-wsappclient</artifactId>
    <version>1.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader-ejb</artifactId>
    <version>1.0</version>
    <type>ejb</type>
  </dependency>
  <dependency>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader-web</artifactId>
    <version>1.0</version>
    <type>war</type>
  </dependency>
</dependencies>

Finally, you need to configure the Maven EAR plugin by giving it the information it needs to automatically generate the application.xml deployment descriptor file. This includes the display name to use, the description to use, and the J2EE version to use. It is also necessary to tell the EAR plugin which of the dependencies are Java modules, Web modules, and EJB modules. At the time of writing, the EAR plugin supports the following module types: ejb, war, jar, ejb-client, rar, ejb3, par, sar and wsr.
By default, all dependencies are included, with the exception of those that are optional, or those with a scope of test or provided. However, it is often necessary to customize the inclusion of some dependencies, as shown in this example:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-ear-plugin</artifactId>
      <configuration>
        <displayName>Trade</displayName>
        <description>
          DayTrader Stock Trading Performance Benchmark Sample
        </description>
        <version>1.4</version>
        <modules>
          <javaModule>
            <groupId>org.apache.geronimo.samples.daytrader</groupId>
            <artifactId>daytrader-streamer</artifactId>
            <includeInApplicationXml>true</includeInApplicationXml>
          </javaModule>
          <javaModule>
            <groupId>org.apache.geronimo.samples.daytrader</groupId>
            <artifactId>daytrader-wsappclient</artifactId>
            <includeInApplicationXml>true</includeInApplicationXml>
          </javaModule>
          <webModule>
            <groupId>org.apache.geronimo.samples.daytrader</groupId>
            <artifactId>daytrader-web</artifactId>
            <contextRoot>/daytrader</contextRoot>
          </webModule>
        </modules>
      </configuration>
    </plugin>
  </plugins>
</build>
</project>

Here, the contextRoot element is used for the daytrader-web module definition to tell the EAR plugin to use that context root in the generated application.xml file. By default, only EJB client JARs are included when specified in the Java modules list. You should also notice that you have to specify the includeInApplicationXml element in order to include the streamer and wsappclient libraries in the EAR.
It is also possible to configure where the Java modules' JARs will be located inside the generated EAR. For example, if you wanted to put the libraries inside a lib subdirectory of the EAR, you would use the bundleDir element:

<javaModule>
  <groupId>org.apache.geronimo.samples.daytrader</groupId>
  <artifactId>daytrader-streamer</artifactId>
  <includeInApplicationXml>true</includeInApplicationXml>
  <bundleDir>lib</bundleDir>
</javaModule>

This location can also be set for all Java modules at once with the defaultBundleDir element:

[...]
<defaultBundleDir>lib</defaultBundleDir>
<modules>
  <javaModule>
    ...
  </javaModule>
[...]

For the full reference, see the EAR plugin documentation at maven.apache.org/plugins/maven-ear-plugin.

The streamer module's build is not described in this chapter because it's a standard build generating a JAR. Run mvn install in daytrader/streamer.
To generate the EAR, run mvn install:

C:\dev\m2book\code\j2ee\daytrader\ear>mvn install
[...]
[INFO] [ear:generate-application-xml]
[INFO] Generating application.xml
[INFO] [resources:resources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [ear:ear]
[INFO] Copying artifact [jar:org.apache.geronimo.samples.daytrader:
daytrader-streamer:1.0] to [daytrader-streamer-1.0.jar]
[INFO] Copying artifact [jar:org.apache.geronimo.samples.daytrader:
daytrader-wsappclient:1.0] to [daytrader-wsappclient-1.0.jar]
[INFO] Copying artifact [ejb:org.apache.geronimo.samples.daytrader:
daytrader-ejb:1.0] to [daytrader-ejb-1.0.jar]
[INFO] Copying artifact [ejb-client:org.apache.geronimo.samples.daytrader:
daytrader-ejb:1.0] to [daytrader-ejb-1.0-client.jar]
[INFO] Copying artifact [war:org.apache.geronimo.samples.daytrader:
daytrader-web:1.0] to [daytrader-web-1.0.war]
[INFO] Could not find manifest file:
C:\dev\m2book\code\j2ee\daytrader\ear\src\main\application\META-INF\MANIFEST.MF - Generating one
[INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ear\target\daytrader-ear-1.0.ear
[INFO] [install:install]
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ear\target\daytrader-ear-1.0.ear to
C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\daytrader-ear\1.0\daytrader-ear-1.0.ear
You should review the generated application.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<application xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
    http://java.sun.com/xml/ns/j2ee/application_1_4.xsd" version="1.4">
  <description>
    DayTrader Stock Trading Performance Benchmark Sample
  </description>
  <display-name>Trade</display-name>
  <module>
    <java>daytrader-streamer-1.0.jar</java>
  </module>
  <module>
    <java>daytrader-wsappclient-1.0.jar</java>
  </module>
  <module>
    <web>
      <web-uri>daytrader-web-1.0.war</web-uri>
      <context-root>/daytrader</context-root>
    </web>
  </module>
  <module>
    <ejb>daytrader-ejb-1.0.jar</ejb>
  </module>
</application>

This looks good. The next section will demonstrate how to deploy this EAR into a container.

4.12. Deploying a J2EE Application

You have already learned how to deploy EJBs and WARs into a container individually. Deploying EARs follows the same principle. The DayTrader application does not deploy correctly when using the JDK 5 or newer, so you'll need to use the JDK 1.4 for this section and the following.

Geronimo is somewhat special among J2EE containers in that deploying requires calling the Deployer tool with a deployment plan, an XML file that tells Geronimo, among other things, how to map J2EE resources in the container. Like any other container, Geronimo also supports having this deployment descriptor located within the J2EE archives you are deploying. However, it is recommended that you use an external plan file so that the deployment configuration is independent from the archives getting deployed, enabling the Geronimo plan to be modified to suit the deployment environment. In this example, an external plan file is used.
To get started, place the Geronimo deployment plan in the ear module, as shown in Figure 4-12.

Figure 4-12: Directory structure of the ear module showing the Geronimo deployment plan

How do you perform the deployment with Maven? One option would be to use Cargo, as demonstrated earlier in the chapter. You would need the following pom.xml configuration snippet:

<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>geronimo1x</containerId>
      <zipUrlInstaller>
        <url>...apache.org/dist/geronimo/1.0/geronimo-tomcat-j2ee-1.0.zip</url>
        [...]
      </zipUrlInstaller>
    </container>
    <deployer>
      <deployables>
        <deployable>
          <properties>
            <plan>${basedir}/src/main/deployment/geronimo/plan.xml</plan>
          </properties>
        </deployable>
      </deployables>
    </deployer>
  </configuration>
</plugin>
Even though it's recommended to use a specific plugin like the Cargo plugin (as described in 4.13 Testing J2EE Applications), in this section you'll learn how to use the Maven Exec plugin. Learning how to use the Exec plugin is useful in situations where you want to do something slightly different, or when Cargo doesn't support the container you want to deploy into. This plugin can execute any process. You'll use it to run the Geronimo Deployer tool to deploy your EAR into a running Geronimo container.

Modify the ear/pom.xml to configure the Exec plugin:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <configuration>
    <executable>java</executable>
    <arguments>
      <argument>-jar</argument>
      <argument>${geronimo.home}/bin/deployer.jar</argument>
      <argument>--user</argument>
      <argument>system</argument>
      <argument>--password</argument>
      <argument>manager</argument>
      <argument>deploy</argument>
      <argument>
        ${project.build.directory}/${project.build.finalName}.ear
      </argument>
      <argument>
        ${basedir}/src/main/deployment/geronimo/plan.xml
      </argument>
    </arguments>
  </configuration>
</plugin>

You may have noticed that you're using a geronimo.home property that has not been defined anywhere. As you've seen in the EJB and WAR deployment sections above, and in previous chapters, it's possible to create properties that are defined either in a properties section of the POM or in a Profile. As the location where Geronimo is installed varies depending on the user, put the following profile in a profiles.xml or settings.xml file:

<profiles>
  <profile>
    <id>vmassol</id>
    <properties>
      <geronimo.home>c:/apps/geronimo-1.0-tomcat</geronimo.home>
    </properties>
  </profile>
</profiles>

At execution time, the Exec plugin will transform the executable and arguments elements above into the following command line:

java -jar c:/apps/geronimo-1.0-tomcat/bin/deployer.jar --user system --password manager
  deploy C:\dev\m2book\code\j2ee\daytrader\ear\target/daytrader-ear-1.0.ear
  C:\dev\m2book\code\j2ee\daytrader\ear/src/main/deployment/geronimo/plan.xml
Since Geronimo 1.0 comes with the DayTrader application bundled, you will need to make sure that the DayTrader application is not already deployed before running the exec:exec goal, or it will fail. If the bundled version is deployed, you should first stop it, by creating a new execution of the Exec plugin or by running the following:

C:\apps\geronimo-1.0-tomcat\bin>deploy stop geronimo/daytrader-derby-tomcat/1.0/car

First, start your pre-installed version of Geronimo and run mvn exec:exec:

C:\dev\m2book\code\j2ee\daytrader\ear>mvn exec:exec
[...]
[INFO] [exec:exec]
[INFO] Deployed Trade
[INFO]
[INFO] `-> daytrader-web-1.0-SNAPSHOT.war
[INFO]
[INFO] `-> daytrader-ejb-1.0-SNAPSHOT.jar
[INFO]
[INFO] `-> daytrader-streamer-1.0-SNAPSHOT.jar
[INFO]
[INFO] `-> daytrader-wsappclient-1.0-SNAPSHOT.jar
[INFO]
[INFO] `-> TradeDataSource
[INFO]
[INFO] `-> TradeJMS

You can now access the DayTrader application by opening your browser to the application's URL. If you need to undeploy the DayTrader version that you've built above, you'll use the "Trade" identifier instead:

C:\apps\geronimo-1.0-tomcat\bin>deploy undeploy Trade
4.13. Testing J2EE Applications

In this last section, you'll learn how to automate functional testing of the EAR built previously. At the time of writing, Maven only supports integration and functional testing by creating a separate module. To achieve this, create a functional-tests module, as shown in Figure 4-13.

Figure 4-13: The new functional-tests module amongst the other DayTrader modules

This module has been added to the list of modules in the daytrader/pom.xml so that it's built along with the others. Functional tests can take a long time to execute, so you can define a profile to build the functional-tests module only on demand (for an example, see Chapter 7).
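Such an on-demand profile could be sketched in daytrader/pom.xml like this (the profile id is hypothetical; Maven profiles may contribute extra modules to the reactor):

```xml
<profiles>
  <profile>
    <id>functional-tests</id>
    <modules>
      <!-- built only when the profile is activated,
           e.g. mvn install -Pfunctional-tests -->
      <module>functional-tests</module>
    </modules>
  </profile>
</profiles>
```

With this arrangement the default build stays fast, while a CI server or a developer can opt in to the slower functional tests by activating the profile.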
Now, take a look in the functional-tests module itself (see Figure 4-14):

Figure 4-14: Directory structure for the functional-tests module

• Classpath resources required for the tests are put in src/it/resources (this particular example doesn't have any resources).
• The Geronimo deployment Plan file is located in src/deployment/geronimo/plan.xml.

As this module does not generate an artifact, the packaging should be defined as pom. However, the compiler and Surefire plugins are not triggered during the build life cycle of projects with a pom packaging, so these need to be configured in the functional-tests/pom.xml file.
<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <artifactId>daytrader-tests</artifactId>
  <name>DayTrader :: Functional Tests</name>
  <packaging>pom</packaging>
  <description>DayTrader Functional Tests</description>
  <dependencies>
    <dependency>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader-ear</artifactId>
      <version>1.0-SNAPSHOT</version>
      <type>ear</type>
      <scope>provided</scope>
    </dependency>
    [...]
  </dependencies>
  <build>
    <testSourceDirectory>src/it</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <executions>
          <execution>
            <phase>integration-test</phase>
            <goals>
              <goal>test</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      [...]
    </plugins>
  </build>
</project>
As you can see there is also a dependency on the daytrader-ear module. This is because the EAR artifact is needed to execute the functional tests. It also ensures that the daytrader-ear module is built before running the functional-tests build when the full DayTrader build is executed from the top level in daytrader/.

You may be asking how to start the container and deploy the DayTrader EAR into it. You're going to use the Cargo plugin to start Geronimo and deploy the EAR into it. As the Surefire plugin's test goal has been bound to the integration-test phase above, you'll bind the Cargo plugin's start and deploy goals to the pre-integration-test phase and the stop goal to the post-integration-test phase, thus ensuring the proper order of execution.

For integration and functional tests, you will usually utilize a real database in a known state. To set up your database you can use the DBUnit Java API (see http://dbunit.sourceforge.net/). However, in the case of the DayTrader application, there's a DayTrader Web page that loads test data into the database, so DBUnit is not needed to perform any database operations. In addition, Derby is the default database configured in the deployment plan, and it is started automatically by Geronimo.

Start by adding the Cargo dependencies to the functional-tests/pom.xml file:

    <project>
      [...]
      <dependencies>
        [...]
        <dependency>
          <groupId>org.codehaus.cargo</groupId>
          <artifactId>cargo-core-uberjar</artifactId>
          <version>0.8</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>org.codehaus.cargo</groupId>
          <artifactId>cargo-ant</artifactId>
          <version>0.8</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
Then create an execution element to bind the Cargo plugin's start and deploy goals:

    <build>
      <plugins>
        [...]
        <plugin>
          <groupId>org.codehaus.cargo</groupId>
          <artifactId>cargo-maven2-plugin</artifactId>
          <configuration>
            <wait>false</wait>
            <container>
              <containerId>geronimo1x</containerId>
              <zipUrlInstaller>
                <url>http://www.apache.org/dist/geronimo/1.0/geronimo-tomcat-j2ee-1.0.zip</url>
              </zipUrlInstaller>
            </container>
          </configuration>
          <executions>
            <execution>
              <id>start-container</id>
              <phase>pre-integration-test</phase>
              <goals>
                <goal>start</goal>
                <goal>deploy</goal>
              </goals>
              <configuration>
                <deployer>
                  <deployables>
                    <deployable>
                      <groupId>org.apache.geronimo.samples.daytrader</groupId>
                      <artifactId>daytrader-ear</artifactId>
                      <type>ear</type>
                      <properties>
                        <plan>${basedir}/src/deployment/geronimo/plan.xml</plan>
                      </properties>
                      <pingURL></pingURL>
                    </deployable>
                  </deployables>
                </deployer>
              </configuration>
            </execution>
            [...]

The deployer element is used to configure the Cargo plugin's deploy goal. It is configured to deploy the EAR using the Geronimo Plan file. In addition, a pingURL element is specified so that Cargo will ping the specified URL till it responds, thus ensuring that the EAR is ready for servicing when the tests execute.
Last, add an execution element to bind the Cargo plugin's stop goal to the post-integration-test phase:

        [...]
            <execution>
              <id>stop-container</id>
              <phase>post-integration-test</phase>
              <goals>
                <goal>stop</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
    </project>

The functional test scaffolding is now ready. You're going to use the HttpUnit testing framework (http://httpunit.sourceforge.net/) to call a Web page from the DayTrader application and check that it's working. Add the JUnit and HttpUnit dependencies, with both defined using a test scope, as you're only using them for testing:

    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>httpunit</groupId>
      <artifactId>httpunit</artifactId>
      <version>1.6</version>
      <scope>test</scope>
    </dependency>
Next, add a JUnit test class called src/it/java/org/apache/geronimo/samples/daytrader/FunctionalTest.java. In the class, the URL is called to verify that the returned page has a title of “DayTrader”:

    package org.apache.geronimo.samples.daytrader;

    import junit.framework.*;
    import com.meterware.httpunit.*;

    public class FunctionalTest extends TestCase
    {
        public void testDisplayMainPage() throws Exception
        {
            WebConversation wc = new WebConversation();
            WebRequest request = new GetMethodWebRequest( "" );
            WebResponse response = wc.getResponse( request );
            assertEquals( "DayTrader", response.getTitle() );
        }
    }

It's time to reap the benefits from your build. Change directory into functional-tests, type mvn install and relax:

    C:\dev\m2book\code\j2ee\daytrader\functional-tests>mvn install
    [...]
    [surefire] Running org.apache.geronimo.samples.daytrader.FunctionalTest
    [surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.531 sec
    [INFO] [cargo:stop {execution: stop-container}]

4.14. Summary

You have learned from chapters 1 and 2 how to build any type of application and this chapter has demonstrated how to build J2EE applications. In addition you've discovered how to automate starting and stopping containers, deploying J2EE archives and implementing functional tests. At this stage you've pretty much become an expert Maven user! The following chapters will show even more advanced topics such as how to write Maven plugins, how to effectively set up Maven in a team, how to gather project health information from your builds, and more.
5. Developing Custom Maven Plugins

This chapter covers:

• How plugins execute in the Maven life cycle
• Tools and languages available to aid plugin developers
• Implementing a basic plugin using Java and Ant
• Working with dependencies, source directories, and resources from a plugin
• Attaching an artifact to the project

For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.

- Richard Feynman
5.1. Introduction

As described in Chapter 2, Maven is actually a platform that executes plugins within a build life cycle, in order to perform the tasks necessary to build a project. The actual functional tasks, or work, of the build process are executed by the set of plugins associated with the phases of a project's build life cycle. This ordering is called the build life cycle, and is defined as a set of task categories, called phases. When Maven executes a build, it traverses the phases of the life cycle in order, executing all the associated mojos at each phase of the build. This association of mojos to phases is called binding and is described in detail below.

A mojo is the basic unit of work in the Maven application. It executes an atomic build task that represents a single step in the build process. When a number of mojos perform related tasks, they are packaged together into a plugin. Just like Java packages, plugins provide a grouping mechanism for multiple mojos that serve similar functions within the build life cycle. For example, the maven-compiler-plugin incorporates two mojos: compile and testCompile. In this case, the common theme for these tasks is the function of compiling code. Packaging these mojos inside a single plugin provides a consistent access mechanism for users, allowing shared configuration to be added to a single section of the POM. Additionally, it enables these mojos to share common code more easily.

With most projects, the plugins provided “out of the box” by Maven are enough to satisfy the needs of the build process (see Appendix A for a list of default plugins used to build a typical project). Even if a project requires a special task to be performed, it is still likely that a plugin already exists to perform this task. Such supplemental plugins can be found at the Apache Maven project, at the loosely affiliated CodeHaus Mojo project, or even at the Web sites of third-party tools offering Maven integration by way of their own plugins (for a list of some additional plugins available for use, refer to the Plugin Matrix). However, if your project requires tasks that have no corresponding plugin, it may be necessary to write a custom plugin to integrate these tasks into the build life cycle. This makes Maven's plugin framework extremely important as a means of not only building a project, but also extending a project's build to incorporate new functionality, such as integration with external tools and systems.

This chapter will focus on the task of writing custom plugins. It starts by describing fundamentals, including a review of plugin terminology and the basic mechanics of the Maven plugin framework. From there, it will discuss the various ways that a plugin can interact with the Maven build environment and explore some examples. Finally, the chapter will cover the tools available to simplify the life of the plugin developer.

5.2. A Review of Plugin Terminology

Before delving into the details of how Maven plugins function and how they are written, let's begin by reviewing the terminology used to describe a plugin and its role in the build. Recall that a mojo represents a single task in the build process. Maven's core APIs handle the “heavy lifting” associated with loading project definitions (POMs), resolving project dependencies, and organizing and running plugins. Each mojo can leverage the rich infrastructure provided by Maven for loading projects, resolving dependencies, injecting runtime parameter information, and more.
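The phase-ordered execution described above can be made concrete with a small model. The following is a minimal, self-contained sketch — it is not Maven code; the phase list follows the default life cycle, and the plugin:mojo names are illustrative:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal model of life-cycle execution: mojos are bound to named phases,
// and requesting a phase runs every binding up to and including it, in order.
public class LifecycleSketch {
    static final List<String> PHASES = List.of(
            "validate", "compile", "test", "package", "install", "deploy");

    // Runs the build up to targetPhase; bindings maps phase -> bound mojos.
    static List<String> run(String targetPhase, Map<String, List<String>> bindings) {
        List<String> executed = new ArrayList<>();
        for (String phase : PHASES) {
            for (String mojo : bindings.getOrDefault(phase, List.of())) {
                executed.add(phase + " -> " + mojo);
            }
            if (phase.equals(targetPhase)) {
                break;
            }
        }
        return executed;
    }

    public static void main(String[] args) {
        Map<String, List<String>> bindings = new LinkedHashMap<>();
        bindings.put("compile", List.of("compiler:compile"));
        bindings.put("test", List.of("surefire:test"));
        bindings.put("package", List.of("jar:jar"));
        // Asking for "package" also runs the earlier phases, just as
        // 'mvn package' also compiles and tests.
        System.out.println(run("package", bindings));
    }
}
```

Note how requesting a late phase implicitly executes everything bound to the earlier phases — this is the coherence that phase binding gives to the build process.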
Most mojos fall into a few general categories, which correspond to the phases of the build life cycle. As a result, mojos have a natural phase binding which determines when a task should execute within the life cycle. While mojos usually specify a default phase binding, they can be bound to any phase in the life cycle, using the plugin executions section of the project's POM. Each execution can specify a separate phase binding for its declared set of mojos. Indeed, a given mojo can even be bound to the life cycle multiple times during a single build.

Binding to a phase of the Maven life cycle allows a mojo to make assumptions based upon what has happened in the preceding phases. In some cases, a mojo may be designed to work outside the context of the build life cycle. Such mojos may be meant to check out a project from version control, or even create the directory structure for a new project. Think of these mojos as tangential to the Maven build process, since they often perform tasks for the POM maintainer. These mojos are meant to be used by way of direct invocation, and as such, will not have a life-cycle phase binding at all since they don't fall into any natural category within a typical build process. However, even a mojo designed to work outside the life cycle may still require that certain activities have already been completed, so be sure to check the documentation for a mojo before you re-bind it.

Using Maven's parameter injection infrastructure, a mojo can pick and choose what elements of the build state it requires in order to execute its task. Together, parameter injection and life-cycle binding form the cornerstone for all mojo development. As a plugin developer, you must understand the mechanics of life-cycle phase binding and parameter injection. Understanding this framework will enable you to extract the Maven build-state information that each mojo requires, in addition to determining its appropriate phase binding.

5.3. Bootstrapping into Plugin Development

In addition to understanding Maven's plugin terminology, you will also need a good understanding of how plugins are structured and how they interact with their environment.

5.3.1. The Plugin Framework

Maven provides a rich framework for its plugins, including a well-defined build life cycle, dependency management, and parameter resolution and injection, plus much more. While Maven does in fact define three different life cycles, the discussion in this chapter is restricted to the default life cycle, which is used for the majority of build activities (the other two life cycles deal with cleaning a project's work directory and generating a project Web site). A discussion of all three build life cycles can be found in Appendix A.

Using the life cycle, Maven provides a well-defined procedure for building a project's sources into a distributable archive. The ordered execution of Maven's life cycle gives coherence to the build process, sequencing the various build operations. Since phase bindings provide a grouping mechanism for mojos within the life cycle, successive phases can make assumptions about what work has taken place in the previous phases. Therefore, it is important to provide the appropriate phase binding for your mojos, to ensure compatibility with other plugins.
Participation in the build life cycle

Most plugins consist entirely of mojos that are bound at various phases in the life cycle according to their function in the build process. As a specific example of how plugins work together through the life cycle, consider a very basic Maven build: a project with source code that should be compiled and archived into a jar file for redistribution. For this project, Maven will execute a default life cycle for the 'jar' packaging, and at least two of the above mojos will be invoked. First, the compile mojo from the maven-compiler-plugin will compile the source code into binary class files in the output directory. Then, the jar mojo from the maven-jar-plugin will harvest these class files and archive them into a jar file. If this basic Maven project also includes source code for unit tests, then two additional mojos will be triggered to handle unit testing. The testCompile mojo from the maven-compiler-plugin will compile the test sources, then the test mojo from the maven-surefire-plugin will execute those compiled tests.

Since our hypothetical project has no “non-code” resources, none of the mojos from the maven-resources-plugin will be executed. Instead, each of the resource-related mojos will discover this lack of non-code resources and simply opt out without modifying the build in any way. These mojos were always present in the life-cycle definition, but until now they had nothing to do and therefore did not execute. Only those mojos with tasks to perform are executed during this build. In good mojo design, determining when not to execute is often as important as the modifications made during execution itself. This is not a feature of the framework, but a requirement of a well-designed mojo.

Depending on the needs of a given project, many more plugins can be used to augment the default life-cycle definition, providing functions as varied as deployment into the repository system, generation of the project's Web site, validation of project content, and much more. Maven's plugin framework ensures that almost anything can be integrated into the build life cycle. This level of extensibility is part of what makes Maven so powerful.
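The “determine when not to execute” pattern can be illustrated with a toy example. This is not the real maven-resources-plugin code — the method and its inputs are invented for illustration:

```java
import java.io.File;

// Toy illustration of the "opt out" pattern: the task inspects its inputs
// and quietly does nothing when there is no work, rather than failing.
public class OptOutSketch {
    // Returns a description of what the task did.
    static String copyResources(File resourceDirectory) {
        if (resourceDirectory == null || !resourceDirectory.isDirectory()) {
            // No resources declared for this project: opt out without
            // modifying the build or reporting a failure.
            return "skipped: no resources to copy";
        }
        return "copied resources from " + resourceDirectory.getName();
    }

    public static void main(String[] args) {
        System.out.println(copyResources(null));
    }
}
```

The important point is that the absence of work is a normal condition, not an error: the task returns quietly, leaving the build state untouched.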
Accessing build information

In order for mojos to execute effectively, they require information about the state of the current build. This information comes in two categories:

• Project information – which is derived from the project POM, in addition to any programmatic modifications made by previous mojo executions.
• Environment information – which is more static, and consists of the user- and machine-level Maven settings, along with any system properties that were provided when Maven was launched.

To gain access to the current build state, Maven allows mojos to specify parameters whose values are extracted from the build state using expressions. At runtime, the expression associated with a parameter is resolved against the current build state, and the resulting value is injected into the mojo, using a language-appropriate mechanism. For example, a mojo that applies patches to the project source code will need to know where to find the project source and patch files. This mojo would retrieve the list of source directories from the current build information using the following expression:

    ${project.compileSourceRoots}

Then, assuming the patch directory is specified as mojo configuration inside the POM, the expression to retrieve that information might look as follows:

    ${patchDirectory}

Using the correct parameter expressions, a mojo can keep its dependency list to a bare minimum, thereby avoiding traversal of the entire build-state object graph. For more information about which mojo expressions are built into Maven, see Appendix A.

The plugin descriptor

Though you have learned about binding mojos to life-cycle phases and resolving parameter values using associated expressions, until now you have not seen exactly how a life-cycle binding occurs. That is to say, how do you associate mojo parameters with their expression counterparts, and once resolved, how do you instruct Maven to inject those values into the mojo instance? Further, how do you instruct Maven to instantiate a given mojo in the first place? The answers to these questions lie in the plugin descriptor.

The Maven plugin descriptor is a file that is embedded in the plugin jar archive, under the path /META-INF/maven/plugin.xml. The descriptor is an XML file that informs Maven about the set of mojos that are contained within the plugin. It contains information about the mojo's implementation class (or its path within the plugin jar), the life-cycle phase to which the mojo should be bound, the set of parameters the mojo declares, and more. Within this descriptor, each declared mojo parameter includes information about the various expressions used to resolve its value, whether it is required for the mojo's execution, whether it is editable, and the mechanism for injecting the parameter value into the mojo instance. For the complete plugin descriptor syntax, see Appendix A.
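To make the expression mechanism concrete, here is a small stand-alone sketch of expression-style resolution. It resolves ${...} tokens against a flat map, whereas Maven actually resolves them against the live build-state object graph; the map keys and values below are illustrative:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of parameter-expression resolution against a flat map of
// build-state values (Maven's real resolver walks an object graph).
public class ExpressionSketch {
    static final Pattern EXPR = Pattern.compile("\\$\\{([^}]+)\\}");

    static String resolve(String expression, Map<String, String> buildState) {
        Matcher m = EXPR.matcher(expression);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Unknown keys resolve to the empty string in this sketch.
            String value = buildState.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> state = Map.of(
                "project.artifactId", "guinea-pig",
                "project.version", "1.0-SNAPSHOT");
        System.out.println(resolve(
                "${project.artifactId}-${project.version}-buildinfo.xml", state));
    }
}
```

The same substitution idea underlies the default-value expressions you will see later in this chapter, where several ${...} tokens are combined into a single derived path.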
The plugin descriptor is very powerful in its ability to capture the wiring information for a wide variety of mojos. To accommodate the extensive variability required from the plugin descriptor, it uses a complex syntax. Writing a plugin descriptor by hand demands that plugin developers understand low-level details about the Maven plugin framework – details that the developer will not use, except when configuring the descriptor. This is where Maven's plugin development tools come into play.

5.3.2. Plugin Development Tools

To simplify the creation of plugin descriptors, Maven provides plugin tools to parse mojo metadata from a variety of formats. Maven's plugin-development tools remove the burden of maintaining mojo metadata by hand. This metadata is embedded directly in the mojo's source code where possible, and its format is specific to the mojo's implementation language. By abstracting many of these details away from the plugin developer, Maven's development tools expose only relevant specifications in a format convenient for a given plugin's implementation language.

These plugin-development tools are divided into the following two categories:

• The plugin extractor framework – which knows how to parse the metadata formats for every language supported by Maven. In short, it consists of a framework library which is complemented by a set of provider libraries (generally, one per supported mojo language).
• The maven-plugin-plugin – which uses the plugin extractor framework, and orchestrates the process of extracting metadata from mojo implementations, adding any other plugin-level metadata through its own configuration (which can be modified in the plugin's POM). This framework generates both plugin documentation and the coveted plugin descriptor. To generate the plugin descriptor, the maven-plugin-plugin simply augments the standard jar life cycle mentioned previously as a resource-generating step (this means the standard process of turning project sources into a distributable jar archive is modified only slightly).

Of course, the format used to write a mojo's metadata is dependent upon the language in which the mojo is implemented. Using Java, it's a simple case of providing special javadoc annotations to identify the properties and parameters of the mojo. For example, the clean mojo in the maven-clean-plugin provides the following class-level javadoc annotation:

    /**
     * @goal clean
     */
    public class CleanMojo extends AbstractMojo

This annotation tells the plugin-development tools the mojo's name, so it can be referenced from life-cycle mappings, POM configurations, and direct invocations (as from the command line). The clean mojo also defines the following:

    /**
     * Be verbose in the debug log-level?
     *
     * @parameter expression="${clean.verbose}" default-value="false"
     */
    private boolean verbose;

Here, the annotation identifies this field as a mojo parameter. This parameter annotation also specifies two attributes, expression and default-value. The first specifies that this parameter's default value should be set to false. The second specifies that this parameter can also be configured from the command line as follows:

    -Dclean.verbose=false

Moreover, it specifies that this parameter can be configured from the POM using:

    <configuration>
      <verbose>false</verbose>
    </configuration>

You may notice that this configuration name isn't explicitly specified in the annotation; it's implicit when using the @parameter annotation.

At first, it might seem counter-intuitive to initialize the default value of a Java field using a javadoc annotation, especially when you could just declare the field as follows:

    private boolean verbose = false;

But consider what would happen if the default value you wanted to inject contained a parameter expression. For instance, consider the following field annotation from the resources mojo in the maven-resources-plugin:

    /**
     * Directory containing the classes.
     *
     * @parameter default-value="${project.build.outputDirectory}"
     */
    private File classesDirectory;

In this case, the default value contains an expression which references the output directory for the current project. Here, it's impossible to initialize the Java field with the value you need. When the mojo is instantiated, this value is resolved based on the POM and injected into this field, as a java.io.File instance. Since the plugin tools can also generate documentation about plugins based on these annotations, it's a good idea to consistently specify the parameter's default value in the metadata, rather than in the Java field initialization code.

Remember, these annotations are specific to mojos written in Java. If you choose to write mojos in another language, like Ant, then the mechanism for specifying mojo metadata such as parameter definitions will be different. However, the underlying principles remain the same. For a complete list of javadoc annotations available for specifying mojo metadata, see Appendix A.
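The annotations above are what the extractor framework turns into descriptor entries. The following hand-written sketch shows roughly what a descriptor entry for the clean mojo might contain — the element values here are illustrative, not actual generated output:

```xml
<!-- Illustrative sketch of a /META-INF/maven/plugin.xml mojo entry;
     element names follow the descriptor format, values are examples. -->
<mojo>
  <goal>clean</goal>
  <implementation>org.apache.maven.plugin.clean.CleanMojo</implementation>
  <parameters>
    <parameter>
      <name>verbose</name>
      <type>boolean</type>
      <required>false</required>
      <editable>true</editable>
      <description>Be verbose in the debug log-level?</description>
    </parameter>
  </parameters>
  <configuration>
    <verbose default-value="false">${clean.verbose}</verbose>
  </configuration>
</mojo>
```

Comparing this sketch with the javadoc annotations above shows the one-to-one mapping the plugin tools maintain: @goal becomes the goal element, and each @parameter becomes a parameter entry plus a configuration entry carrying its expression and default value.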
Choose your mojo implementation language

Through its flexible plugin descriptor format and invocation framework, Maven can accommodate mojos written in virtually any language. Maven currently supports mojos written in Java, Ant, and Beanshell. Whatever language you use, Maven lets you select pieces of the build state to inject as mojo parameters. This relieves you of the burden associated with traversing a large object graph in your code, and minimizes the number of dependencies you will have on Maven's core APIs. Plugin parameters can be injected via either field reflection or setter methods. Since Beanshell behaves in a similar way to standard Java, this technique also works well for Beanshell-based mojos.

For many mojo developers, Java is the language of choice. Since it provides easy reuse of third-party APIs from within your mojo, it also provides good alignment of skill sets when developing mojos from scratch. Since Java is currently the easiest language for plugin development, and because many Maven-built projects are written in Java, this chapter will focus primarily on plugin development in this language.

However, in certain cases you may find it easier to use Ant scripts to perform build tasks. In these cases, Maven can wrap an Ant build target and use it as if it were a mojo. This is especially important when translating a project build from Ant to Maven (refer to Chapter 8 for more discussion about migrating from Ant to Maven). During the early phases of such a migration, it is often simpler to wrap existing Ant build targets with Maven mojos and bind them to various phases in the life cycle. To make Ant scripts reusable, mojo mappings and parameter definitions are declared via an associated metadata file. Ant-based plugins can consist of multiple mojos mapped to a single build script, individual mojos each mapped to separate scripts, or any combination thereof. This pairing of the build script and accompanying metadata file follows a naming convention that allows the maven-plugin-plugin to correlate the two files and create an appropriate plugin descriptor. Due to the migration value of Ant-based mojos when converting a build to Maven, this chapter will also provide an example of basic plugin development using Ant.

5.3.3. A Note on the Examples in this Chapter

When learning how to interact with the different aspects of Maven from within a mojo, it's important to keep the examples clean and relatively simple. Otherwise, you risk confusing the issue at hand – namely, the particular feature of the mojo framework currently under discussion. Therefore, the examples in this chapter will focus on a relatively simple problem space: gathering and publishing information about a particular build. Such information might include details about the system environment, the specific snapshot versions of dependencies used in the build, and so on.

To facilitate these examples, you will need to work with an external project, called buildinfo, which is used to read and write build information metadata files. This project can be found in the source code that accompanies this book. You can install it using the following simple command:

    mvn install
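To give a feel for the kind of metadata involved, here is a purely illustrative sketch of what a build information file could look like. This is not the actual format produced by the book's buildinfo library — the element names and values are invented:

```xml
<!-- Hypothetical buildinfo file; the real schema used by the book's
     buildinfo project may differ. -->
<buildinfo>
  <systemProperties>
    <os.name>Linux</os.name>
    <java.version>1.4.2</java.version>
  </systemProperties>
</buildinfo>
```

Whatever the exact schema, the idea is the same: a small XML snapshot of the environment that produced an artifact, published alongside it so that others can reconstruct the conditions of the build.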
5.4. Developing Your First Mojo

For the purposes of this chapter, you will look at the development effort surrounding a sample project, called Guinea Pig, which will be deployed to the Maven repository system. This development effort will have the task of maintaining information about builds that are deployed to the development repository. This information should capture relevant details about the environment used to build the Guinea Pig artifacts, eventually publishing it alongside the project's artifact in the repository for future reference (refer to Chapter 7 for more details on how teams use Maven). Capturing this information is key, since it can have a critical effect on the build process and the composition of the resulting Guinea Pig artifacts.

5.4.1. BuildInfo Example: Capturing Information with a Java Mojo

To begin, consider a case where the POM contains a profile, which will be triggered by the value of a given system property – say, if the system property os.name is set to the value Linux (for more information on profiles, refer to Chapter 3). When triggered, this profile adds a new dependency on a Linux-specific library, which allows the build to succeed in that environment. When this profile is not triggered, a default profile injects a dependency on a windows-specific library. For simplicity, this dependency is used only during testing, and has no impact on transitive dependencies for users of this project.

If you have a test dependency which contains a defect, and this dependency is injected by one of the aforementioned profiles, then the value of the triggering system property – and the profile it triggers – could reasonably determine whether the build succeeds or fails. Clearly, the values of system properties used in the build are very important. Since you will need to disseminate the build to the rest of the development team, it makes sense, for the purposes of debugging, to publish the value of this particular system property in a build information file so that others can see the aspects of the environment that affected this build.

Prerequisite: Building the buildinfo generator project

Before writing the buildinfo plugin, you must first install the buildinfo generator library into your Maven local repository. The buildinfo plugin is a simple wrapper around this generator, providing a thin adapter layer that allows the generator to be run from a Maven build. As a side note, this approach encapsulates an important best practice: by separating the generator from the Maven binding code, you are free to write any sort of adapter or front-end code you wish, and take advantage of a single, reusable utility in many different scenarios.

To build the buildinfo generator library, perform the following steps:

    cd buildinfo
    mvn install

The README.txt file in the Code_Ch05.zip file provides sequential instructions for building the code.
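The os.name-triggered profile described above can be sketched as follows. The profile id and the dependency coordinates are invented for illustration; only the activation condition comes from the scenario in the text:

```xml
<!-- Hypothetical sketch of a property-activated profile; the profile id
     and the dependency coordinates are placeholders. -->
<profiles>
  <profile>
    <id>linux-deps</id>
    <activation>
      <property>
        <name>os.name</name>
        <value>Linux</value>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>linux-specific-lib</artifactId>
        <version>1.0</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```

Because the injected dependency has test scope, it affects only this project's own test runs — which is exactly why a defect in it would make the build's outcome depend on the value of os.name, and why that value is worth publishing.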
Using the archetype plugin to generate a stub plugin project

Now that the buildinfo generator library has been installed, writing your custom mojo is simple. Since you will be creating your own mojo from scratch, it's helpful to jump-start the plugin-writing process by using Maven's archetype plugin to create a simple stub project from a standard plugin-project template. To generate a stub plugin project for the buildinfo plugin, simply execute the following from the top level directory of the chapter 5 sample code:

    mvn archetype:create -DgroupId=com.exist.mvnbook.plugins \
        -DartifactId=maven-buildinfo-plugin \
        -DarchetypeArtifactId=maven-archetype-mojo

This will create a project with the standard layout under a new subdirectory called maven-buildinfo-plugin within the current working directory. Inside, you'll find a basic POM and a sample mojo. When you run this command, you're likely to see a warning message saying “${project.build.directory} is not a valid reference”. This message does not indicate a problem. This is a result of the Velocity template, used to generate the plugin source code, interacting with Maven's own plugin parameter annotations.

Once you have the plugin's project structure in place, you will need to modify the POM as follows:

• Change the name element to Maven BuildInfo Plugin.
• Remove the url element, since this plugin doesn't currently have an associated Web site.
• You will modify the POM again later, as you know more about your mojos' dependencies.

Finally, you should remove the sample mojo, since you will be creating your own mojo from scratch. It can be found in the plugin's project directory, under the following path:

    src\main\java\com\exist\mvnbook\plugins\MyMojo.java

The mojo

You can handle this scenario using the following, fairly simple Java-based mojo:

    [...]
    /**
     * Write the environment information for the current build execution
     * to an XML file.
     *
     * @goal extract
     * @phase package
     * @requiresDependencyResolution test
     */
    public class WriteBuildInfoMojo extends AbstractMojo
    {
        /**
         * Determines which system properties are added to the buildinfo file.
         * @parameter
         */
        private String systemProperties;

        /**
         * The location to write the buildinfo file. Used to attach the buildinfo
         * to the project jar for installation and deployment.
         * @parameter expression="${buildinfo.outputFile}"
         *     default-value="${project.build.directory}/${project.artifactId}-${project.version}-buildinfo.xml"
         * @required
         */
        private File outputFile;

        public void execute() throws MojoExecutionException
        {
            BuildInfo buildInfo = new BuildInfo();

            addSystemProperties( buildInfo );

            try
            {
                BuildInfoUtils.writeXml( buildInfo, outputFile );
            }
            catch ( IOException e )
            {
                throw new MojoExecutionException( "Error writing buildinfo XML file. Reason: "
                    + e.getMessage(), e );
            }
        }

        private void addSystemProperties( BuildInfo buildInfo )
        {
            Properties sysprops = System.getProperties();

            if ( systemProperties != null )
            {
                String[] keys = systemProperties.split( "," );

                for ( int i = 0; i < keys.length; i++ )
                {
                    String key = keys[i].trim();
                    String value = sysprops.getProperty( key,
                        BuildInfoConstants.MISSING_INFO_PLACEHOLDER );

                    buildInfo.addSystemProperty( key, value );
                }
            }
        }
    }

While the code for this mojo is fairly straightforward, it's worthwhile to take a closer look at the javadoc annotations. In the class-level javadoc comment, there are two special annotations:

    /**
     * @goal extract
     * @phase package
     */

The first annotation, @goal, tells the plugin tools to treat this class as a mojo named extract. When you invoke this mojo, you will use this name. The second annotation tells Maven where in the build life cycle this mojo should be executed. In this case, you're collecting information from the environment with the intent of distributing it alongside the main project artifact in the repository. Therefore, it makes sense to execute this mojo in the package phase, so it will be ready to attach to the project artifact. In general, attaching to the package phase also gives you the best chance of capturing all of the modifications made to the build state before the jar is produced.
Aside from the class-level comment, you have several field-level javadoc comments, which are used to specify the mojo's parameters. Each offers a slightly different insight into parameter specification, so they will be considered separately. First, consider the parameter for the systemProperties variable:

    /**
     * @parameter expression="${buildinfo.systemProperties}"
     */

This is one of the simplest possible parameter specifications. Using the @parameter annotation by itself, with no attributes, will allow this mojo field to be configured using the plugin configuration specified in the POM. However, you may want to allow a user to specify on-the-fly which system properties to include in the build information file. This is where the expression attribute comes into play. Using the expression attribute, you can specify the name of this parameter when it's referenced from the command line, as follows:

    mvn buildinfo:extract -Dbuildinfo.systemProperties=java.version,user.dir

The module where the command is executed should be bound to a plugin with a buildinfo goal prefix. In this example, the guinea-pig module is bound to the maven-buildinfo-plugin, which has the buildinfo goal prefix, so run the above command from the guinea-pig directory.

Next, the outputFile parameter presents a slightly more complex example of parameter annotation, since you have more specific requirements for this parameter. Take another look:

    /**
     * The location to write the buildinfo file. Used to attach the buildinfo
     * for installation and deployment.
     * @parameter expression="${buildinfo.outputFile}"
     *   default-value="${project.build.directory}/${project.artifactId}-${project.version}-buildinfo.xml"
     * @required
     */

In this case, you want the mojo to use a certain value – calculated from the project's information – as a default value for this parameter. This is why the normal Java field initialization is not used: the default output path is constructed directly inside the annotation, using several expressions to extract project information on-demand. Finally, to ensure that this parameter has a value, the mojo uses the @required annotation. The mojo cannot function unless it knows where to write the build information file, as execution without an output file would be pointless. If this parameter has no value when the mojo is configured, the build will fail with an error.
The Plugin POM

Once the mojo has been written, you can construct an equally simple POM which will allow you to build the plugin, as follows:

    <project>
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.exist.mvnbook.plugins</groupId>
      <artifactId>maven-buildinfo-plugin</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>maven-plugin</packaging>
      <dependencies>
        <dependency>
          <groupId>org.apache.maven</groupId>
          <artifactId>maven-plugin-api</artifactId>
          <version>2.0</version>
        </dependency>
        <dependency>
          <groupId>com.exist.mvnbook.shared</groupId>
          <artifactId>buildinfo</artifactId>
          <version>1.0-SNAPSHOT</version>
        </dependency>
        [...]
      </dependencies>
    </project>

This POM declares the project's identity and its two dependencies. Note the dependency on the buildinfo project, which provides the parsing and formatting utilities for the build information file. Also, note the packaging – specified as maven-plugin – which means that this plugin build will follow the maven-plugin life-cycle mapping. This mapping is a slightly modified version of the one used for the jar packaging, which simply adds plugin descriptor extraction and generation to the build process.
Binding to the life cycle

Now that you have a method of capturing build-time environmental information, you need to ensure that every build captures this information. The easiest way to guarantee this is to bind the extract mojo to the life cycle, so that every build triggers it. This involves modification of the standard jar life cycle, which you can do by adding the configuration of the new plugin to the Guinea Pig POM, as follows:

    <build>
      [...]
      <plugins>
        <plugin>
          <groupId>com.exist.mvnbook.plugins</groupId>
          <artifactId>maven-buildinfo-plugin</artifactId>
          <executions>
            <execution>
              <id>extract</id>
              <configuration>
                <systemProperties>os.name,java.version</systemProperties>
              </configuration>
              <goals>
                <goal>extract</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        [...]
      </plugins>
      [...]
    </build>

The above binding will execute the extract mojo from your new maven-buildinfo-plugin during the package phase of the life cycle, and capture the os.name and java.version system properties.
The output

Now that you have a mojo and a POM, you can build the plugin and try it out! First, build the buildinfo plugin with the following commands:

    cd C:\book-projects\maven-buildinfo-plugin
    mvn clean install

Next, test the plugin by building Guinea Pig with the buildinfo plugin bound to its life cycle as follows:

    cd C:\book-projects\guinea-pig
    mvn package

When the Guinea Pig build executes, you should see output similar to the following:

    [...]
    [INFO] [buildinfo:extract {execution: extract}]
    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO] ------------------------------------------------------------------------
    [INFO] Guinea Pig Sample Application ......... SUCCESS [2.359s]
    [INFO] Guinea Pig API ........................ SUCCESS [0.468s]
    [INFO] Guinea Pig Core ....................... SUCCESS [6.469s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------
    [...]

Under the target directory, there should be a file named:

    guinea-pig-1.0-SNAPSHOT-buildinfo.xml

In the file, you will find information similar to the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <buildinfo>
      <systemProperties>
        <java.version>1.5.0_06</java.version>
        <os.name>Windows XP</os.name>
      </systemProperties>
      <sourceRoots>
        <sourceRoot>src\main\java</sourceRoot>
      </sourceRoots>
      <resourceRoots>
        <resourceRoot>src\main\resources</resourceRoot>
      </resourceRoots>
    </buildinfo>

While the name of the OS and the Java version may differ, the meaning of the generated build information is clear enough: your mojo has captured the name of the operating system being used to execute the build and the version of the JVM, and both of these properties can have profound effects on binary compatibility.

5.4.2. BuildInfo Example: Notifying Other Developers with an Ant Mojo

Now that some important information has been captured, you need to share it with others in your team when the resulting project artifact is deployed, so that other team members have access to it. It's important to remember that in the Maven world, “deployment” is defined as injecting the project artifact into the Maven repository system. For now, it might be enough to send a notification e-mail to the project development mailing list.

Of course, such a task could be handled using a Java-based mojo and the JavaMail API from Sun. However, given the amount of setup and code required, it's simpler to use Ant and the dozens of well-tested, mature tasks available for build script use (including one specifically for sending e-mails).

The Ant target

To leverage the output of the mojo from the previous example – the build information file – you can use that content as the body of the e-mail. From here, it's a simple matter of specifying where the e-mail should be sent, and how. Information like the to: address will have to be dynamic; therefore, it should be extracted directly from the POM for the project being built. To ensure these project properties are in place within the Ant Project instance, simply declare mojo parameters for them.

Your new mojo will be in a file called notify.build.xml, and should look similar to the following:

    <project>
      <target name="notify-target">
        <mail from="maven@localhost"
              replyto="${listAddr}"
              subject="Build Info for Deployment of ${project.name}"
              mailhost="${mailHost}"
              mailport="${mailPort}"
              messagefile="${buildinfo.outputFile}">
          <to>${listAddr}</to>
        </mail>
      </target>
    </project>

If you're familiar with Ant, you'll notice that this target expects several project properties. After writing the Ant target to send the notification e-mail, you just need to write a mojo definition to wire the new target into Maven's build process.
The Mojo Metadata file

Unlike the prior Java examples, metadata for an Ant mojo is stored in a separate file, which is associated to the build script using a naming convention. In this example, the build script was called notify.build.xml, so the corresponding metadata file will be called notify.mojos.xml, and should appear as follows:

    <pluginMetadata>
      <mojos>
        <mojo>
          <call>notify-target</call>
          <goal>notify</goal>
          <phase>deploy</phase>
          <description><![CDATA[
            Email environment information from the current build to the
            development mailing list when the artifact is deployed.
          ]]></description>
          <parameters>
            <parameter>
              <name>buildinfo.outputFile</name>
              <defaultValue>
                ${project.build.directory}/${project.artifactId}-${project.version}-buildinfo.xml
              </defaultValue>
              <required>true</required>
              <readonly>false</readonly>
            </parameter>
            <parameter>
              <name>listAddr</name>
              <required>true</required>
            </parameter>
            <parameter>
              <name>project.name</name>
              <defaultValue>${project.name}</defaultValue>
              <required>true</required>
            </parameter>
            [...]
          </parameters>
        </mojo>
      </mojos>
    </pluginMetadata>
At first glance, the contents of this file may appear different than the metadata used in the Java mojo. However, upon closer examination, you will see many similarities, since you now have a good concept of the types of metadata used to describe a mojo. As with the Java example, mojo-level metadata describes details such as phase binding and mojo name, and parameter flags such as required are still present. The metadata also specify a list of parameters for the mojo, each with its own information like name, default value, expression, and more. The expression syntax used to extract information from the build state is exactly the same, but expressed in XML. However, the overall structure of this file should be familiar.

As with the Java example, Maven still must resolve and inject each of these parameters into the mojo. The difference here is the mechanism used for this injection. In Java, parameter injection takes place either through direct field assignment, or through JavaBeans-style setXXX() methods. In an Ant-based mojo, however, parameters are injected as properties and references into the Ant Project instance. The rule for parameter injection in Ant is as follows: if the parameter's type is java.lang.String (the default), then its value is injected as a property; otherwise, its value is injected as a project reference. In this example, all of the mojo's parameter types are java.lang.String. If one of the parameters were some other object type, you'd have to add a <type> element alongside the <name> element, in order to capture the parameter's type in the specification.

Also, notice that this mojo is bound to the deploy phase of the life cycle. This is an important point in the case of this mojo, because you're going to be sending e-mails to the development mailing list. Any build must be deployed for it to affect other development team members, so it's pointless to spam the mailing list with notification e-mails every time a jar is created for the project. Instead, by binding the mojo to the deploy phase of the life cycle, the notification e-mails will be sent only when a new artifact becomes available in the remote repository.

A more in-depth discussion of the metadata file for Ant mojos is available in Appendix A.

Modifying the Plugin POM for Ant Mojos

Maven 2.0 shipped without support for Ant-based mojos (support for Ant was added later, in version 2.0.2), so some special configuration is required to allow the maven-plugin-plugin to recognize Ant mojos. The maven-plugin-plugin is a perfect example of a plugin that takes a framework approach to providing its functionality, with its use of the MojoDescriptorExtractor interface from the maven-plugin-tools-api library. This library defines a set of interfaces for parsing mojo descriptors from their native format and generating various output from those descriptors – including plugin descriptor files. The maven-plugin-plugin ships with the Java and Beanshell provider libraries which implement the above interface. This allows developers to generate descriptors for Java- or Beanshell-based mojos with no additional configuration. To develop an Ant-based mojo, however, you will have to add support for Ant mojo extraction to the maven-plugin-plugin. Fortunately, Maven allows POM-specific injection of plugin-level dependencies in order to accommodate plugins that take a framework approach to providing their functionality.
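The injection rule above can be made concrete with a short sketch. The parameter name and type below are hypothetical illustrations (not from the book's notify example); they show how a non-String parameter would need a <type> element so that it is injected as an Ant project reference rather than as a property:

```xml
<!-- Hypothetical parameter for an Ant-based mojo's metadata file.
     Because its type is not java.lang.String, a <type> element is required,
     and its value would be injected as a project reference. -->
<parameter>
  <name>outputDirectory</name>
  <type>java.io.File</type>
  <defaultValue>${project.build.directory}</defaultValue>
  <required>true</required>
</parameter>
```

A String-typed parameter, by contrast, would omit the <type> element entirely and arrive in the Ant script as an ordinary property reference such as ${outputDirectory}.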
To accomplish this, you will need to add a dependency on the maven-plugin-tools-ant library to the maven-plugin-plugin using POM configuration, as follows:

    <project>
      [...]
      <build>
        <plugins>
          <plugin>
            <groupId>com.exist.mvnbook.plugins</groupId>
            <artifactId>maven-plugin-plugin</artifactId>
            <dependencies>
              <dependency>
                <groupId>org.apache.maven</groupId>
                <artifactId>maven-plugin-tools-ant</artifactId>
                <version>2.0.2</version>
              </dependency>
            </dependencies>
          </plugin>
        </plugins>
      </build>
      [...]
    </project>

Additionally, since the plugin now contains an Ant-based mojo, it requires a couple of new dependencies, the specifications of which should appear as follows:

    <dependencies>
      [...]
      <dependency>
        <groupId>org.apache.maven</groupId>
        <artifactId>maven-script-ant</artifactId>
        <version>2.0.2</version>
      </dependency>
      <dependency>
        <groupId>ant</groupId>
        <artifactId>ant</artifactId>
        <version>1.6.5</version>
      </dependency>
      [...]
    </dependencies>

The first of these new dependencies is the mojo API wrapper for Ant build scripts. The second is, quite simply, a dependency on the core Ant library (whose necessity should be obvious); it is always necessary for embedding Ant scripts as mojos in the Maven build process. If you don't have Ant in the plugin classpath, it will be quite difficult to execute an Ant-based plugin.
Binding the Notify Mojo to the life cycle

Once the plugin descriptor is generated for the Ant mojo, it behaves like any other type of mojo to Maven. Even its configuration is the same. Adding a life-cycle binding for the new Ant mojo in the Guinea Pig POM should appear as follows:

    <build>
      [...]
      <plugins>
        <plugin>
          <groupId>com.exist.mvnbook.plugins</groupId>
          <artifactId>maven-buildinfo-plugin</artifactId>
          <executions>
            <execution>
              <id>extract</id>
              [...]
            </execution>
            <execution>
              <id>notify</id>
              <goals>
                <goal>notify</goal>
              </goals>
              <configuration>
                <listAddr>dev@guineapig.codehaus.org</listAddr>
              </configuration>
            </execution>
          </executions>
        </plugin>
        [...]
      </plugins>
    </build>

The existing <execution> section – the one that binds the extract mojo to the build – is not modified. Instead, a new section for the notify mojo is created. This is because an execution section can address only one phase of the build life cycle, and these two mojos should not execute in the same phase (as mentioned previously). In order to tell the notify mojo where to send its e-mail, you should add a configuration section to the new execution section, which supplies the listAddr parameter value.

Now, execute the following command:

    mvn deploy

The build process executes the steps required to build and deploy a jar – except in this case, it will also extract the relevant environmental details during the package phase, and send them to the Guinea Pig development mailing list in the deploy phase. Again, notification happens in the deploy phase only, because non-deployed builds will have no effect on other team members.

Note: You have to configure distributionManagement and scm to successfully execute mvn deploy. See the section on Deploying your Application in Chapter 3.
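Since mvn deploy will fail without a deployment target, it may help to see the shape of the required configuration. The repository id, URL, and SCM coordinates below are placeholders of my own, not values from this book – substitute your team's real repository and source-control details:

```xml
<!-- Minimal sketch of the POM sections needed before "mvn deploy" will work.
     All ids and URLs here are illustrative placeholders. -->
<distributionManagement>
  <repository>
    <id>internal-repo</id>
    <url>scp://repository.mycompany.com/var/maven/repo</url>
  </repository>
</distributionManagement>
<scm>
  <connection>scm:svn:http://svn.mycompany.com/repos/guinea-pig/trunk</connection>
</scm>
```

The id of the repository is typically matched against a server entry in settings.xml that carries the deployment credentials.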
Adding a resource to the build

Another common practice is for a mojo to generate some sort of non-code resource, which will be packaged up in the same jar as the project classes. This could be a descriptor for binding the project artifact into an application framework, or wsdl files for web services. Many different mojos package resources with their generated artifacts, such as web.xml files for servlet engines, or the pom.xml file found in all Maven artifacts.

Whatever the purpose of the mojo, the process of adding a new resource directory to the current build is straightforward, and requires access to the MavenProject and MavenProjectHelper:

    /**
     * Project instance. Used to add new source directory to the build.
     * @parameter default-value="${project}"
     * @required
     * @readonly
     */
    private MavenProject project;

This declaration will inject the current project instance into the mojo, as discussed previously. In addition, the mojo also needs access to the MavenProjectHelper component, which can be injected as follows:

    /**
     * Helper class to assist in attaching artifacts to the project instance.
     * Used to make addition of resources simpler.
     * @component
     * @required
     * @readonly
     */
    private MavenProjectHelper projectHelper;

Right away, you should notice something very different about this parameter – namely, that it's not a parameter at all! In fact, this is what Maven calls a component requirement (a dependency on an internal component of the running Maven application). Component requirements are not available for configuration by users. To be clear, the project helper is not a build state; it is a utility, provided to standardize the process of augmenting the project instance and to abstract the associated complexities away from the mojo developer. It provides methods for attaching artifacts and adding new resource definitions to the current project.

Component requirements are simple to declare; in most cases, the unadorned @component annotation – like the above code snippet – is adequate. This component is part of the Maven application itself, which means it's always present, so your mojo simply needs to ask for it. Normally, the Maven application is well-hidden from the mojo developer. However, in some special cases – as in the case of Maven itself and its components – Maven components can make it much simpler to interact with the build process.
A complete discussion of Maven's architecture – and the components available – is beyond the scope of this chapter; however, the MavenProjectHelper component is worth mentioning here, as it is particularly useful to mojo developers.

With these two objects at your disposal, adding a new resource couldn't be easier. Simply define the resource's directory, inclusion patterns, and exclusion patterns, and then call a utility method on the project helper. The code should look similar to the following:

    String directory = "relative/path/to/some/directory";

    List includes = Collections.singletonList( "**/*" );
    List excludes = null;

    projectHelper.addResource( project, directory, includes, excludes );

The prior example instantiates the resource's directory, inclusion patterns, and exclusion patterns as local variables, for the sake of brevity. In a typical case, these values would come from other mojo parameters, which may or may not be directly configurable.

If your mojo is meant to add resources to the eventual project artifact, it's important to understand where resources should be added during the build life cycle. Resources are copied to the classes directory of the build during the process-resources phase, so a resource-adding mojo will need to execute ahead of this phase. The most common place for such activities is the generate-resources life-cycle phase. Again, conforming with these standards improves the compatibility of your plugin with other plugins in the build.

Accessing the source-root list

Just as some mojos add new source directories to the build, others must read the list of active source directories, in order to perform some operation on the source code. The classic example is the compile mojo in the maven-compiler-plugin, which actually compiles the source code contained in these root directories into classes in the project output directory. Other examples include the javadoc mojo in the maven-javadoc-plugin, and the jar mojo in the maven-source-plugin.

Gaining access to the list of source root directories for a project is easy; all you have to do is declare a single parameter to inject them, as in the following example:

    /**
     * The list of directories which contain source code for the project.
     * List of source roots containing non-test code.
     * @parameter default-value="${project.compileSourceRoots}"
     * @required
     * @readonly
     */
    private List sourceRoots;

Similar to the parameter declarations from previous sections, this declaration states that Maven does not allow users to configure this parameter directly; instead, they have to modify the sourceDirectory element in the POM, or else bind a mojo to a life-cycle phase that will add an additional source directory to the build. The parameter is also required for this mojo to execute; if it's missing, the entire build will fail.
Now that the mojo has access to the list of project source roots, it can iterate through them, applying whatever processing is necessary. Returning to the buildinfo example, it could be critically important to track the list of source directories used in a particular build, for eventual debugging purposes. If a certain profile injects a supplemental source directory into the build (most likely by way of a special mojo binding), then this profile would dramatically alter the resulting project artifact when activated. In order to incorporate the list of source directories into the buildinfo object, you need to add the following code:

    public void execute() throws MojoExecutionException
    {
        [...]
        addSourceRoots( buildInfo );
        [...]
    }

    private void addSourceRoots( BuildInfo buildInfo )
    {
        if ( sourceRoots != null && !sourceRoots.isEmpty() )
        {
            for ( Iterator it = sourceRoots.iterator(); it.hasNext(); )
            {
                String sourceRoot = (String) it.next();
                buildInfo.addSourceRoot( makeRelative( sourceRoot ) );
            }
        }
    }

One thing to note about this code snippet is the makeRelative() method. By the time the mojo gains access to them, source roots are expressed as absolute file-system paths. In order to make this information more generally applicable, any reference to the path of the project directory in the local file system should be removed. This involves subtracting ${basedir} from the source-root paths. To be clear, the ${basedir} expression refers to the location of the project directory in the local file system.

When you add this code to the extract mojo in the maven-buildinfo-plugin, it can be bound to any phase in the life cycle. Remember, however, that binding this mojo to an early phase of the life cycle increases the risk of another mojo adding a new source root in a later phase. Since compile is the phase where source files are converted into classes, binding to any phase later than compile should be acceptable; it's better still to bind it to a later phase like package if capturing a complete picture of the project is important.
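The body of makeRelative() is not shown in the text. A minimal self-contained sketch is given below; the class name PathUtils and the exact trimming logic are assumptions of this sketch, not code from the book, but they illustrate the described idea of subtracting ${basedir} from an absolute path:

```java
import java.io.File;

public class PathUtils
{
    /**
     * Hypothetical sketch of makeRelative(): strips the project base
     * directory prefix from an absolute path, returning a build-relative
     * path. Paths outside the project directory are returned unchanged.
     */
    public static String makeRelative( File basedir, String path )
    {
        String prefix = basedir.getAbsolutePath();

        if ( path.startsWith( prefix ) )
        {
            // Drop the basedir prefix, plus any leading file separator.
            String relative = path.substring( prefix.length() );
            if ( relative.startsWith( File.separator ) )
            {
                relative = relative.substring( File.separator.length() );
            }
            return relative;
        }

        return path;
    }

    public static void main( String[] args )
    {
        File basedir = new File( "/home/user/guinea-pig" );
        // On a Unix-like system this prints: src/main/java
        System.out.println( makeRelative( basedir, "/home/user/guinea-pig/src/main/java" ) );
    }
}
```

In the real mojo, basedir would itself be injected as a parameter (for example via the ${basedir} expression) rather than constructed by hand.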
Accessing the resource list

Non-code resources complete the picture of the raw materials processed by a Maven build. Just as capturing the list of source roots can be important, capturing the list of resources used to produce a project artifact can yield information that is vital for debugging purposes. For instance, if an activated profile introduces a mojo that generates some sort of supplemental framework descriptor, it can mean the difference between an artifact that can be deployed into a server environment and an artifact that cannot. Therefore, it is important that the buildinfo file capture the resource root directories used in the build for future reference.

You've already learned that mojos can modify the list of resources included in the project artifact. This is the mechanism used by the resources mojo in the maven-resources-plugin, which copies all non-code resources to the output directory for inclusion in the project artifact. Much like the source-root list, the resources list is easy to inject as a mojo parameter. The parameter appears as follows:

    /**
     * The list of resource definitions to be included in the project jar.
     * List of Resource objects for the current build, containing
     * directory, includes, and excludes.
     * @parameter default-value="${project.resources}"
     * @required
     * @readonly
     */
    private List resources;

Just like the source-root injection parameter, this parameter is declared as required for mojo execution and cannot be edited by the user directly. Instead, the user has the option of modifying the value of the list by configuring the resources section of the POM. Since mojos can add new resources to the build programmatically, allowing direct configuration of this parameter could easily produce results that are inconsistent with other resource-consuming mojos.

It's also important to note that this list consists of Resource objects, which contain information about a resource root, along with some matching rules for the resource files it contains. Since the resources list is an instance of java.util.List, and Maven mojos must be able to execute in a JDK 1.4 environment that doesn't support Java generics, mojos must cast list elements as org.apache.maven.model.Resource instances.

It's a simple task to add this capability, and it can be accomplished through the following code snippet:
    public void execute() throws MojoExecutionException
    {
        [...]
        addResourceRoots( buildInfo );
        [...]
    }

    private void addResourceRoots( BuildInfo buildInfo )
    {
        if ( resources != null && !resources.isEmpty() )
        {
            for ( Iterator it = resources.iterator(); it.hasNext(); )
            {
                Resource resource = (Resource) it.next();

                String resourceRoot = resource.getDirectory();
                buildInfo.addResourceRoot( makeRelative( resourceRoot ) );
            }
        }
    }

As with the prior source-root example, you'll notice the makeRelative() method. This method converts the absolute path of the resource directory into a relative path, by trimming the ${basedir} prefix. All POM paths injected into mojos are converted to their absolute form first, to avoid any ambiguity. It's necessary to revert resource directories to relative locations for the purposes of the buildinfo plugin, since the ${basedir} path won't have meaning outside the context of the local file system.

Adding this code snippet to the extract mojo in the maven-buildinfo-plugin will result in a resourceRoots section being added to the buildinfo file. That section should appear as follows:

    <resourceRoots>
      <resourceRoot>src/main/resources</resourceRoot>
      <resourceRoot>target/generated-resources/xdoclet</resourceRoot>
    </resourceRoots>

Once more, it's worthwhile to discuss the proper place for this type of activity within the build life cycle. Like the vast majority of activities, collecting the list of project resources has an appropriate place in the life cycle. Since all project resources are collected and copied to the project output directory in the process-resources phase, any mojo seeking to catalog the resources used in the build should execute at least as late as process-resources. This ensures that any resource modifications introduced by mojos in the build process have been completed.

Note on testing source roots and resources

All of the examples in this advanced development discussion have focused on the handling of source code and resources used to produce the main project artifact. This chapter does not discuss test-time and compile-time source roots and resources as separate topics; however, due to the similarities, for every activity examined that relates to source-root directories or resource definitions, a corresponding activity can be written to work with their test-time counterparts. The concepts are the same; only the parameter expressions and method names are different. The key differences are summarized in the table below.
Table 5-2: Key differences between compile-time and test-time mojo activities

    Activity                    Change This                           To This
    Add testing source root     project.addCompileSourceRoot()        project.addTestSourceRoot()
    Get testing source roots    ${project.compileSourceRoots}         ${project.testSourceRoots}
    Add testing resource        projectHelper.addResource()           projectHelper.addTestResource()
    Get testing resources       ${project.resources}                  ${project.testResources}

5.5. Attaching Artifacts for Installation and Deployment

Occasionally, mojos produce new artifacts that should be distributed alongside the main project artifact in the Maven repository system. These artifacts are typically a derivative action or side effect of the main build process. Classic examples of attached artifacts are source archives, javadoc bundles, and even the buildinfo file produced in the examples throughout this chapter. Maven treats these derivative artifacts as attachments to the main project artifact, in that they are never distributed without the project artifact being distributed.

Usually, an artifact attachment will have a classifier, like sources or javadoc, which sets it apart from the main project artifact in the repository. Once an artifact attachment is deposited in the Maven repository, it can be referenced like any other artifact; this classifier must simply also be specified when declaring the dependency on such an artifact, by using the classifier element for that dependency section within the POM.

While an e-mail describing the build environment is transient, and only serves to describe the latest build, the distribution of the buildinfo file via Maven's repository will provide a more permanent record of each snapshot in the repository, since it provides information about how that snapshot of the project came into existence. This historical reference can provide valuable information to the development team.

When a mojo, or set of mojos, produces a derivative artifact, an extra piece of code must be executed in order to attach that artifact to the project artifact. This extra step – which is still missing from the maven-buildinfo-plugin example – guarantees that the attachment will be distributed when the install or deploy phases are run.

Including an artifact attachment involves adding two parameters and one line of code to your mojo. First, you'll need a parameter that references the current project instance, as follows:

    /**
     * Project instance, needed for attaching the buildinfo file.
     * @parameter default-value="${project}"
     * @required
     * @readonly
     */
    private MavenProject project;
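To make the classifier mechanism concrete, here is a sketch of how a downstream project might depend on an attached artifact. The coordinates reuse the Guinea Pig example from this chapter, but treat the exact element values as an assumption of this sketch rather than text from the book:

```xml
<!-- Hypothetical dependency on an attached artifact: the classifier element
     selects the "-buildinfo" attachment, and type selects its xml extension,
     instead of the main project jar. -->
<dependency>
  <groupId>com.exist.mvnbook.guineapig</groupId>
  <artifactId>guinea-pig-core</artifactId>
  <version>1.0-SNAPSHOT</version>
  <classifier>buildinfo</classifier>
  <type>xml</type>
</dependency>
```

Without the classifier element, Maven would resolve the main jar artifact for these coordinates instead of the attachment.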
The MavenProject instance is the object with which your plugin will register the attachment, for use in later phases of the life cycle. For convenience, you should also inject the following reference to MavenProjectHelper, which will make the process of attaching the buildinfo artifact a little easier:

    /**
     * Helper class to assist in attaching artifacts to the project instance.
     * Used to make addition of resources simpler.
     * @component
     */
    private MavenProjectHelper projectHelper;

See Section 5.5.2 for a discussion about MavenProjectHelper and component requirements.

Once you include these two fields in the extract mojo within the maven-buildinfo-plugin, the process of attaching the generated buildinfo file to the main project artifact can be accomplished by adding the following code snippet:

    projectHelper.attachArtifact( project, "xml", "buildinfo", outputFile );

From the prior examples, the meaning and requirement of the project and outputFile references should be clear. However, there are also two somewhat cryptic string values being passed in: “xml” and “buildinfo”. These values represent the artifact extension and classifier, respectively. By specifying an extension of “xml”, you're telling Maven that the file in the repository should be named using a .xml extension. By specifying the “buildinfo” classifier, you're telling Maven that this artifact should be distinguished from other project artifacts by using this value in the classifier element of the dependency declaration. This serves to attach meaning beyond simply saying, “This is an XML file”; it identifies the file as being produced by the maven-buildinfo-plugin, as opposed to another plugin in the build process which might produce another XML file with a different meaning.

Now that you've added code to distribute the buildinfo file, you can test it by re-building the plugin, then running Maven to the install life-cycle phase on our test project, as follows:

    mvn install
    cd C:\Documents and Settings\[user_home]\.m2\repository
    cd com\exist\mvnbook\guineapig\guinea-pig-core\1.0-SNAPSHOT
    dir

If you build the Guinea Pig project using this modified version of the maven-buildinfo-plugin, you should see the buildinfo file appear in the local repository alongside the project jar.
It can extract relevant details from a running build and generate a buildinfo file based on these details. From there. Finally. the maven-buildinfo-plugin is ready for action.Better Builds with Maven guinea-pig-core-1.0-SNAPSHOT. it can attach the buildinfo file to the main project artifact so that it's distributed whenever Maven installs or deploys the project. the maven-buildinfo-plugin can also generate an e-mail that contains the buildinfo file contents.pom Now. when the project is deployed. and route that message to other development team members on the project development mailing list.xml guinea-pig-core-1.0-SNAPSHOT-buildinfo. 166 .0-SNAPSHOT.jar guinea-pig-core-1.
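Although the example stops at installing the attachment, a consuming project could pull the attached buildinfo file in via its classifier. The following dependency fragment is an illustrative sketch only — the coordinates are borrowed from the Guinea Pig example above, and whether a consumer would actually depend on a buildinfo file is hypothetical:

```xml
<!-- Hypothetical consumer POM fragment: selects the attached artifact
     by matching its "buildinfo" classifier and "xml" type. -->
<dependency>
  <groupId>com.exist.mvnbook.guineapig</groupId>
  <artifactId>guinea-pig-core</artifactId>
  <version>1.0-SNAPSHOT</version>
  <classifier>buildinfo</classifier>
  <type>xml</type>
</dependency>
```

Without the classifier element, Maven would resolve the main jar artifact instead; the classifier is what distinguishes the attachment in the repository.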
5.6. Summary

In its unadorned state, Maven represents an implementation of the 80/20 rule. Using the default lifecycle mapping, Maven can build a basic project with little or no modification – thus covering the 80% case. However, in certain circumstances, a project requires special tasks in order to build successfully. Whether they be code-generation, reporting, or verification steps, Maven can integrate these custom tasks into the build process through its extensible plugin framework.

Since the build process for a project is defined by the plugins – or more accurately, the mojos – that are bound to the build life cycle, there is a standardized way to inject new behavior into the build by binding new mojos at different life-cycle phases. Using the plugin mechanisms described in this chapter, you can integrate almost any tool into the build process. Mojo development can be as simple or as complex (to the point of embedding nested Maven processes within the build) as you need it to be. Working with project dependencies and resources is equally as simple. In this chapter, you've also learned how a plugin-generated file can be distributed alongside the project artifact in Maven's repository system, enabling you to attach custom artifacts for installation or deployment.

Many plugins already exist for Maven use, only a tiny fraction of which are a part of the default lifecycle mapping. If your project requires special handling, it's unlikely to be a requirement unique to your project, so chances are good that you can find a plugin to address this need at the Apache Maven project, the Codehaus Mojo project, or the project web site of the tools with which your project's build must integrate. If not, developing a custom Maven plugin is an easy next step. Finally, if you have the means, please consider contributing back to the Maven community by providing access to your new plugin. It is in great part due to the re-usable nature of its plugins that Maven can offer such a powerful build platform.
6. Assessing Project Health with Maven
6.1. What Does Maven Have to Do with Project Health?

In the introduction, it was pointed out that Maven's application of patterns provides visibility and comprehensibility. Maven has access to the information that makes up a project, and through the POM it can analyze, relate, and display that information in a single place – the project Web site, which everyone can see at any time. Because the POM is a declarative model of the project, new tools that can assess its health are easily integrated. It is these characteristics that assist you in assessing the health of your project.

When referring to health, there are two aspects to consider:

• Code quality – determining how well the code works, how well it is tested, and how well it adapts to change.
• Project vitality – finding out whether there is any activity on the project, and what the nature of that activity is.

In addition to appearing on the project Web site, many of the reports illustrated in this chapter can be run as part of the regular build in the form of a "check" that will fail the build if a certain condition is not met. It is important not to get carried away with setting up a fancy Web site full of reports that nobody will ever use (especially when reports contain failures they don't want to know about!). Setting the conditions for the checks correctly also matters: if the bar is set too high, there will be too many failed builds, which is unproductive as minor changes are prioritized over more important tasks just to get a build to pass. Conversely, if the bar is set too low, the project will meet only the lowest standard and go no further. But why have a site if the build fails its checks? The Web site also provides a permanent record of a project's health, and it provides additional information to help determine the reasons for a failed build, and whether the conditions for the checks are set correctly.

The next three sections demonstrate how to set up an effective project Web site, and, using a variety of tools, you'll learn how to use a number of these reports effectively and learn more about the health of the project. To begin, you will be revisiting the Proficio application that was developed in Chapter 3. Unzip the Code_Ch06.zip file into C:\mvnbook or your selected working directory, and then run mvn install from the proficio subdirectory to ensure everything is in place. The code that concluded Chapter 3 is also included in Code_Ch06.zip for convenience as a starting point.8

8 Please see the README.txt file included in the chapter 6 code samples zip file for additional details about building Proficio.
6.2. Adding Reports to the Project Web site

This section builds on the information on project Web sites in Chapter 2 and Chapter 3, and now shows how to integrate project health information. To start, review the project Web site shown in figure 6-1.

Figure 6-1: The reports generated by Maven

You can see that the navigation on the left contains a number of reports. The Project Info menu lists the standard reports Maven includes with your site by default, unless you choose to disable them – mailing list information, SCM, issue tracker, and so on. These reports are useful for sharing information with others, and to reference as links in your mailing lists. The second menu (shown opened in figure 6-1), Project Reports, holds reports that provide a variety of insights into the quality and vitality of the project. On a new project, this menu doesn't appear as there are no reports included. However, adding a new report is easy, and for newcomers to the project, having these standard reports means that those familiar with Maven Web sites will always know where to find the information they need.

For example, you can add the Surefire report to the sample application by including the following section in proficio/pom.xml:

[...]
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
      </plugin>
    </plugins>
  </reporting>
[...]
</project>

This adds the report to the top level project, and as a result, it will be inherited by all of the child modules. You can now run the following site task in the proficio-core directory to regenerate the site:

C:\mvnbook\proficio\proficio-core> mvn site

The resulting report can be found in the file target/site/surefire-report.html and is shown in figure 6-2.

Figure 6-2: The Surefire report
As you may have noticed in the summary, the report shows the test results of the project. That's all there is to generating the report! This is possible thanks to key concepts of Maven discussed in Chapter 2: through a declarative project model, and due to using convention over configuration, Maven knows where the tests and test results are, and the defaults are sufficient to get started with a useful report.

For a quicker turn around, the report can also be run individually using the following standalone goal:

C:\mvnbook\proficio\proficio-core> mvn surefire-report:report

Executing mvn surefire-report:report generates the surefire-report.html file in target/surefire-reports/.

6.3. Configuration of Reports

Before stepping any further into using the project Web site, it is important to understand how report configuration is handled in Maven. You might recall from Chapter 2 that a plugin is configured using the configuration element inside the plugin declaration in pom.xml, for example:

[...]
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.5</source>
          <target>1.5</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
[...]

Configuration for a reporting plugin is very similar, however it is added to the reporting section of the POM. For example, the report can be modified to only show test failures by adding the following configuration in pom.xml:

[...]
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
        <configuration>
          <showSuccess>false</showSuccess>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
[...]

The addition of the plugin element triggers the inclusion of the report in the Web site, while the configuration can be used to modify its appearance or behavior. If a plugin contains multiple reports, they will all be included.

However, consider if you wanted to create a copy of the HTML report in the directory target/surefire-reports every time the build ran, and not just during site generation. To do this, the plugin would need to be configured in the build section instead of, or in addition to, the reporting section:

[...]
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
        <configuration>
          <outputDirectory>
            ${project.build.directory}/surefire-reports
          </outputDirectory>
        </configuration>
        <executions>
          <execution>
            <phase>test</phase>
            <goals>
              <goal>report</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
[...]

"Executions" such as this were introduced in Chapter 3. The plugin is included in the build section to ensure that the configuration, which is specific to the execution and not site generation, is used only during the build. Plugins and their associated configuration that are declared in the build section are not used during site generation. Conversely, any plugin configuration declared in the reporting section is also applied to those declared in the build section, even though it is not specific to the execution. For this reason, always place the configuration in the reporting section – unless one of the following is true:

1. The reports will not be included in the site.
2. The configuration value is specific to the build stage.

For example, what if the location of the Surefire XML reports that are used as input (and would be configured using the reportsDirectory parameter) were different to the default location? Initially, you might think that you'd need to configure the parameter in both sections. Fortunately, this isn't the case – adding the configuration to the reporting section is sufficient.

When you configure a reporting plugin, by default all reports available in the plugin are executed once. However, there are cases where only some of the reports that the plugin produces will be required, and cases where a particular report will be run more than once, each time with a different configuration. Both of these cases can be achieved with the reportSets element, which is the reporting equivalent of the executions element in the build section. Each report set can contain configuration, and a list of reports to include.

For example, consider if you had run Surefire twice in your build, once for unit tests and once for a set of performance tests, and that you had generated its XML results to target/surefire-reports/unit and target/surefire-reports/perf respectively. To generate two HTML reports for these results, you would include the following section in your pom.xml:

[...]
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
        <reportSets>
          <reportSet>
            <configuration>
              <reportsDirectory>
                ${project.build.directory}/surefire-reports/unit
              </reportsDirectory>
              <outputName>surefire-report-unit</outputName>
            </configuration>
            <reports>
              <report>report</report>
            </reports>
          </reportSet>
          <reportSet>
            <configuration>
              <reportsDirectory>
                ${project.build.directory}/surefire-reports/perf
              </reportsDirectory>
              <outputName>surefire-report-perf</outputName>
            </configuration>
            <reports>
              <report>report</report>
            </reports>
          </reportSet>
        </reportSets>
      </plugin>
    </plugins>
  </reporting>
[...]

Running mvn site with this addition will generate two Surefire reports: target/site/surefire-report-unit.html and target/site/surefire-report-perf.html. The reports element in the report set is a required element; the reports in this list are identified by the goal names that would be used if they were run from the command line. If you want all of the reports in a plugin to be generated, they must be enumerated in this list. When a report is executed individually, as with executions, Maven will use only the configuration that is specified in the plugin element itself, outside of any report sets – running mvn surefire-report:report will not use either of these configurations.
this customization will allow you to configure reports in a way that is just as flexible as your build.
Some standard reports. Javadoc) that in a library or framework is useful to the end user. This is true of the news and FAQs. This is reference material (for example. For a single module library. These are the reports discussed in this chapter that display the current state of the project to the developers. like mailing list information and the location of the issue tracker and SCM are updated also. the source code reference material and reports are usually generated from the modules that hold the source code and perform the build. and a development branch where new features can be documented for when that version is released. The Separated column indicates whether the documentation can be a separate module or project. the Javadoc and other reference material are usually distributed for reference as well. source code references should be given a version and remain unchanged after being released.Better Builds with Maven Table 6-1: Project Web site content types Content Description Updated Distributed Separated News. including the end user documentation in the normal build is reasonable as it is closely tied to the source code reference. that can be updated between releases without risk of including new features. the Updated column indicates whether the content is regularly updated. This is typically true for the end user documentation. regardless of releases. but usually not distributed or displayed in an application. which are continuously published and not generally of interest for a particular release. It is also true of the project quality and vitality reports. 178 . FAQs and general Web site End user documentation Source code reference material Project health and vitality reports This is the content that is considered part of the Web site rather than part of the documentation. and to maintain only one set of documentation. It is important not to include documentation for features that don't exist in the last release. 
The Distributed column in the table indicates whether that form of documentation is typically distributed with the project. This is documentation for the end user including usage instructions and guides. is to branch the end user documentation in the same way as source code. The situation is different for end user documentation. Sometimes these are included in the main bundle. It is good to update the documentation on the Web site between releases. and not introducing incorrect documentation. Features that are available only in more recent releases should be marked to say when they were introduced. However. Yes Yes No Yes Yes No No Yes Yes No No No In the table. The best compromise between not updating between releases. You can maintain a stable branch. and sometimes they are available for download separately. While there are some exceptions. as it is confusing for those reading the site who expect it to reflect the latest release. which are based on time and the current state of the project. For libraries and frameworks. It refers to a particular version of the software.
However, in most cases, the documentation and Web site should be kept in a separate module dedicated to generating a site. This avoids including inappropriate report information and navigation elements. This separated documentation may be a module of the main project, or maybe totally independent: you would make it a module when you wanted to distribute it with the rest of the project, but make it an independent project when it forms the overall site with news and FAQs. It is important to note that none of these are restrictions placed on a project by Maven. While these recommendations can help properly link or separate content according to how it will be used, you are free to place content wherever it best suits your project.

In Proficio, the site currently contains end user documentation and a simple report. In the following example, you will learn how to separate the content and add an independent project for the news and information Web site. The current structure of the project is shown in figure 6-3.

Figure 6-3: The initial setup

The first step is to create a module called user-guide for the end user documentation. In this case, a module is created since it is not related to the source code reference material. This is done using the site archetype:

C:\mvnbook\proficio> mvn archetype:create -DartifactId=user-guide \
    -DgroupId=com.exist.mvnbook.proficio \
    -DarchetypeArtifactId=maven-archetype-site-simple
This archetype creates a very basic site in the user-guide subdirectory, which you can later add content to. The resulting structure is shown in figure 6-4.

Figure 6-4: The directory layout with a user guide

The next step is to ensure the layout on the Web site is correct. Previously, the URL and deployment location were set to the root of the Web site (…com/web/guest/products/resources). Under the current structure, the development documentation would go to that location, and the user-guide to …com/web/guest/products/resources/user-guide. In this example, the development documentation will be moved to a /reference/version subdirectory so that the top level directory is available for a user-facing Web site. Adding the version to the development documentation, while optional, is useful if you are maintaining multiple public versions, whether to maintain history or to maintain a release and a development preview.

First, edit the top level pom.xml file to change the site deployment url:

<distributionManagement>
  <site>
    [...]
    <url>
      scp://exist.com/www/library/mvnbook/proficio/reference/${pom.version}
    </url>
  </site>
</distributionManagement>
Next, edit the user-guide/pom.xml file to set the site deployment url for the module:

<distributionManagement>
  <site>
    <id>mvnbook.site</id>
    <url>
      scp://exist.com/www/library/mvnbook/proficio/user-guide
    </url>
  </site>
</distributionManagement>

There are now two sub-sites ready to be deployed:

• scp://exist.com/www/library/mvnbook/proficio/reference/${pom.version}
• scp://exist.com/www/library/mvnbook/proficio/user-guide

You will not be able to deploy the Web site to these locations; they are included here only for illustrative purposes.

Now that the content has moved, a top level site for the project is required. This will include news and FAQs about the project that change regularly. As before, you can create a new site using the archetype. This time, run it one directory above the proficio directory:

C:\mvnbook> mvn archetype:create -DartifactId=proficio-site \
    -DgroupId=com.exist.mvnbook.proficio \
    -DarchetypeArtifactId=maven-archetype-site-simple

The resulting structure is shown in figure 6-5.

Figure 6-5: The new Web site
You will need to add the same elements to the POM for the url and distributionManagement as were set originally, for proficio-site/pom.xml:

[...]
<url>…com/web/guest/products/resources</url>
[...]
<distributionManagement>
  <site>
    <id>mvnbook.website</id>
    <url>scp://exist.com/www/library/mvnbook/proficio</url>
  </site>
</distributionManagement>
[...]

Next, add some menus to src/site/site.xml that point to the other documentation as follows:

[...]
<menu name="Documentation">
  <item name="User's Guide" href="/user-guide/" />
</menu>
<menu name="Reference">
  <item name="API" href="/reference/1.0-SNAPSHOT/apidocs/" />
  <item name="Developer Info" href="/reference/1.0-SNAPSHOT/" />
</menu>
[...]

Note that you haven't produced the apidocs directory yet, so that link won't work even if the site is deployed. Generating reference documentation is covered in section 6.6 of this chapter.

Finally, replace the src/site/apt/index.apt file with a more interesting news page, like the following:

 -----
 Proficio
 -----
 Joe Blogs
 -----
 23 July 2007
 -----

Proficio

  Proficio is super.

* News

  * <16 Jan 2006> - Proficio project started

You can now run mvn site in proficio-site to see how the separate site will look. If you deploy both sites to a server using mvn site-deploy as you learned in Chapter 3, you will then be able to navigate through the links and see how they relate.
The rest of this chapter will focus on using the developer section of the site effectively, and how to build in related conditions to regularly monitor and improve the quality of your project.

6.5. Choosing Which Reports to Include

Choosing which reports to include, and which checks to perform during the build, is an important decision that will determine the effectiveness of your build reports. Report results and checks performed should be accurate and conclusive – every developer should know what they mean and how to address them. In addition, the performance of your build will be affected by this choice. In particular, the reports that utilize unit tests often have to re-run the tests with new parameters. While future versions of Maven will aim to streamline this, it is recommended that these checks be constrained to the continuous integration and release environments if they cause lengthy builds.

Table 6-2 covers the reports discussed in this chapter and reasons to use them. For each report, there is also a note about whether it has an associated visual report (for project site inclusion), and an applicable build check (for testing a certain condition and failing the build if it doesn't pass). You can use this table to determine which reports apply to your project specifically and limit your reading to just those relevant sections of the chapter, or you can walk through all of the examples one by one, and look at the output to determine which reports to use. While these aren't all the reports available for Maven, the guidelines should help you to determine whether you need to use other reports, in addition to the generic reports such as those for dependencies and change tracking.

You may notice that many of these tools are Java-centric. While this is certainly the case at present, it is possible in the future that reports for other languages will be available.
See Chapter 7, Team Collaboration with Maven, for more information.
Table 6-2: Summary of the reports covered in this chapter

• JXR – produces a source cross reference for any Java code; a companion to Javadoc that shows the source code. Recommended to enhance readability of the code, and important to include when using other reports that can refer to it. Report: Yes. Check: N/A.
• Javadoc – useful for most Java software, and important for any projects publishing a public API. Report: Yes. Check: N/A.
• Surefire report – shows the results of unit tests visually, and can also show any tests that are long running and slowing the build. Recommended for teams with a focus on tests, and for easier browsing of results. The check is already performed by surefire:test.
• Checkstyle – checks your source code against a standard descriptor for formatting issues. Use to enforce a standard code style. Doesn't handle JDK 5.0 features, and is not useful if there are a lot of errors to be fixed – it will be slow and the result unhelpful.
• PMD – checks your source code against known rules for code smells. Should be used to improve readability and identify simple and common bugs.
• CPD – part of PMD; checks for duplicate source code blocks that indicate code was copy/pasted. Avoids issues when one piece of code is fixed/updated and the other forgotten.
• Tag List – a simple report on outstanding tasks or other markers in source code. Useful for tracking TODO items, with a very simple, convenient set up. Can be implemented using Checkstyle rules instead.
• Code coverage – analyzes code statement coverage during unit tests or other code execution. Can help identify untested or even unused code; however, it doesn't identify all missing or inadequate tests, so additional tools may be required.
• Dependency convergence – recommended for multiple module builds where consistent versions are important. Can help find snapshots prior to release.
• Changes – produces release notes and road maps from issue tracking systems. Recommended for all publicly released projects, and should be used for keeping teams up to date on internal projects also. Report: Yes. Check: N/A.

6.6. Creating Reference Material

Source code reference materials are usually the first reports configured for a new project, because they are often of interest to the end user of a library or framework, as well as to the developer of the project itself. The two reports this section illustrates are:

• JXR – the Java source cross reference
• Javadoc – the Java API documentation
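Several of the report plugins summarized above can be enabled together in a parent POM's reporting section. The fragment below is a sketch only — plugin versions are omitted, and the exact coordinates (note that the Tag List and Cobertura plugins live under the org.codehaus.mojo groupId rather than org.apache.maven.plugins) should be verified against each plugin's own documentation:

```xml
<reporting>
  <plugins>
    <!-- Source cross reference and API documentation -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jxr-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
    </plugin>
    <!-- Code analysis: PMD/CPD and Checkstyle -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
    </plugin>
    <!-- TODO markers and test coverage (Codehaus Mojo plugins) -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>taglist-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>
```

Because this goes in the reporting section of a parent POM, all child modules inherit the full report set when the site is generated.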
Figure 6-6: An example source code cross reference

Figure 6-6 shows an example of the cross reference. Those familiar with Javadoc will recognize the framed navigation layout, however the content pane is now replaced with a syntax-highlighted, cross-referenced Java source file for the selected class. The hyper links in the content pane can be used to navigate to other classes and interfaces within the cross reference.

A useful way to leverage the cross reference is to use the links given for each line number in a source file to point team mates at a particular piece of code. Or, if you don't have the project open in your IDE, the links can be used to quickly find the source belonging to a particular exception.

Including JXR as a permanent fixture of the site for the project is simple, and can be done by adding the following to proficio/pom.xml:

[...]
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jxr-plugin</artifactId>
      </plugin>
      [...]
    </plugins>
  </reporting>
[...]
You can now run mvn site in proficio-core and see target/site/project-reports.html created. This page contains the Source Xref and the Test Source Xref items listed in the Project Reports menu of the generated site. In most cases, the default JXR configuration is sufficient; however, if you'd like a list of available configuration options, see the plugin reference at http://maven.apache.org/plugins/maven-jxr-plugin/.

Now that you have a source cross reference, many of the other reports demonstrated in this chapter will be able to link to the actual code to highlight an issue. However, browsing source code is too cumbersome for the developer if they only want to know how the API works, so an equally important piece of reference material is the Javadoc report.

Using Javadoc is very similar to the JXR report and most other reports in Maven. You can run it on its own using the following command:

C:\mvnbook\proficio\proficio-core> mvn javadoc:javadoc

Since it will be included as part of the project site, you should include it in proficio/pom.xml as a site report to ensure it is run every time the site is regenerated:

[...]
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
  </plugin>
[...]

The end result is the familiar Javadoc output, in target/site/apidocs. A Javadoc report is only as good as your Javadoc! Make sure you document the methods you intend to display in the report, and if possible use Checkstyle to ensure they are documented.

Unlike JXR, the Javadoc report is quite configurable, with most of the command line options of the Javadoc tool available. One useful option to configure is links. In the online mode, this will link to an external Javadoc reference at a given URL. For example, the following configuration, when added to proficio/pom.xml, will link both the JDK 1.5 API documentation and the Plexus container API documentation used by Proficio:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <links>
      <link>http://java.sun.com/j2se/1.5.0/docs/api</link>
      <link>http://plexus.codehaus.org/ref/1.0-alpha-9/apidocs</link>
    </links>
  </configuration>
</plugin>

If you regenerate the site in proficio-core with mvn site again, you'll see that all references to the standard JDK classes such as java.lang.Object, as well as any references to classes in Plexus, are linked to API documentation on the Sun Web site and the Plexus site. This setting must go into the reporting section so that it is used for both reports and if the command is executed separately. However, this setting is always ignored by the javadoc:jar goal, ensuring that the deployed Javadoc corresponds directly to the artifact with which it is deployed for use in an IDE.

Setting up Javadoc has been very convenient, but it results in a separate set of API documentation for each library in a multi-module build. Since it is preferred to have discrete functional pieces separated into distinct modules, but conversely to have the Javadoc closely related, this is not sufficient. One option would be to introduce links to the other modules (automatically generated by Maven based on dependencies, of course!), but this would still limit the available classes in the navigation as you hop from module to module.

Instead, the Javadoc plugin provides a way to produce a single set of API documentation for the entire project. Edit the configuration of the existing Javadoc plugin in proficio/pom.xml by adding the following line:

[...]
<configuration>
  <aggregate>true</aggregate>
  [...]
</configuration>
[...]

When built from the top level project, this simple change will produce an aggregated Javadoc and ignore the Javadoc report in the individual modules. Try running mvn clean javadoc:javadoc in the proficio directory to produce the aggregated Javadoc in target/site/apidocs/index.html.

Now that the sample application has a complete reference for the source code, the next section will allow you to start monitoring and improving its health.
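As noted above, the links setting is ignored by the javadoc:jar goal, which builds the Javadoc bundle that is deployed alongside the artifact. As a sketch of how that goal can be bound into the build (the execution id is an illustrative choice, not from the text above):

```xml
<!-- Sketch: attach a Javadoc JAR during the build so it is deployed
     alongside the main artifact for use in an IDE. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <executions>
        <execution>
          <id>attach-javadoc</id>
          <goals>
            <goal>jar</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place, mvn deploy publishes the -javadoc.jar next to the main JAR, which is what IDEs download when resolving documentation.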
6.7. Monitoring and Improving the Health of Your Source Code

Source code that is consistent and well documented is easier to maintain; this is important for both the efficiency of other team members and also to increase the overall level of code comprehension (which in turn reduces the risk that its accuracy will be affected by change). Maven has reports that can help with each of these health factors, and this section will look at three:

• PMD (http://pmd.sf.net/)
• Checkstyle (http://checkstyle.sf.net/)
• Tag List

PMD takes a set of either predefined or user-defined rule sets and evaluates the rules across your Java source code. The result can help identify bugs, copy-and-pasted code, and violations of a coding standard, such as unused methods and variables. Figure 6-7 shows the output of a PMD report on proficio-core, which is obtained by running mvn pmd:pmd.

Figure 6-7: An example PMD report

As you can see, some source files are identified as having problems that could be addressed. Also, since the JXR report was included earlier, the line numbers in the report are linked to the actual source code so you can browse the issues.
Adding the default PMD report to the site is just like adding any other report – you can include it in the reporting section in the proficio/pom.xml file:

[...]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
</plugin>
[...]

The default PMD report includes the basic, unused code, and imports rule sets. The “basic” rule set includes checks on empty blocks, unnecessary statements and possible bugs – such as incorrect loop variables. The “unused code” rule set will locate unused private fields, methods, variables and parameters. The “imports” rule set will detect duplicate, redundant or unused import declarations.

Adding new rule sets is easy, by passing the rulesets configuration to the plugin. However, if you configure these, you must configure all of them – including the defaults explicitly. For example, to include the default rules and the finalizer rule sets, add the following to the plugin configuration you declared earlier:

[...]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <configuration>
    <rulesets>
      <ruleset>/rulesets/basic.xml</ruleset>
      <ruleset>/rulesets/unusedcode.xml</ruleset>
      <ruleset>/rulesets/imports.xml</ruleset>
      <ruleset>/rulesets/finalizers.xml</ruleset>
    </rulesets>
  </configuration>
</plugin>
[...]
You may find that you like some rules in a rule set, but not others. In either case, you can choose to create a custom rule set. For example, you could create a rule set with all the default rules, but exclude the “unused private field” rule. To try this, create a file in the proficio-core directory of the sample application called src/main/pmd/custom.xml, with the following content:

<?xml version="1.0"?>
<ruleset name="custom">
  <description>
    Default rules, no unused private field warning
  </description>
  <rule ref="/rulesets/basic.xml" />
  <rule ref="/rulesets/imports.xml" />
  <rule ref="/rulesets/unusedcode.xml">
    <exclude name="UnusedPrivateField" />
  </rule>
</ruleset>

To use this rule set, override the configuration in the proficio-core/pom.xml file by adding:

[...]
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <configuration>
        <rulesets>
          <ruleset>${basedir}/src/main/pmd/custom.xml</ruleset>
        </rulesets>
      </configuration>
    </plugin>
  </plugins>
</reporting>
[...]

For more examples on customizing the rule sets, see the instructions on the PMD Web site at http://pmd.sf.net/howtomakearuleset.html.

One important question is how to select appropriate rules. You may use the same rule sets in a number of projects, so try the following guidelines from the Web site at http://pmd.sf.net/bestpractices.html:

• Pick the rules that are right for you. There is no point having hundreds of violations you won't fix. The basic, unusedcode, and imports rule sets are useful in most scenarios and easily fixed.
• Start small: select the rules that apply to your own project, and add more as needed.

It is also possible to write your own rules if you find that existing ones do not cover recurring problems in your source code.
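As a rough sketch of what a hand-written rule can look like, PMD allows rules to be expressed as XPath queries over its abstract syntax tree. The rule below is an illustration, not taken from the book; the rule name, message, and matched node are hypothetical, and the XPathRule class name is an assumption about the PMD version in use:

```xml
<!-- Hypothetical XPath-based rule: flag catch blocks with an empty body.
     This would be added to a custom ruleset file such as
     src/main/pmd/custom.xml alongside the referenced rule sets. -->
<rule name="NoEmptyCatchBlock"
      message="Avoid empty catch blocks"
      class="net.sourceforge.pmd.rules.XPathRule">
  <description>Empty catch blocks silently swallow errors.</description>
  <properties>
    <property name="xpath">
      <value>//CatchStatement[count(Block/*) = 0]</value>
    </property>
  </properties>
</rule>
```

The advantage of XPath rules is that no Java code needs to be compiled or placed on the plugin's classpath – the query ships inside the ruleset file itself.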
While the report identifies problems, to keep the source code healthy you should include the check in the build, so that it is regularly tested. This is done by binding the goal to the build life cycle. To do so, add the following section to the proficio/pom.xml file:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    [...]
  </plugins>
</build>

You may have noticed that there is no configuration here, but recall from the Configuring Reports and Checks section of this chapter that the reporting configuration is applied to the build as well.

By default, the pmd:check goal is run in the verify phase, which occurs after the packaging phase. If you need to run checks earlier, you could add the following to the execution block to ensure that the check runs just after all sources exist:

<phase>process-sources</phase>

Try this now by running mvn pmd:check on proficio-core, or test the new setting by running mvn verify in the proficio-core directory. You will see that the build fails:

[INFO] ---------------------------------------------------------------------------

To correct this, fix the errors in the src/main/java/com/exist/mvnbook/proficio/DefaultProficio.java file by adding a //NOPMD comment to the unused variables and method:
[...]
// Trigger PMD and checkstyle
int i; // NOPMD
int j; // NOPMD
[...]
private void testMethod() // NOPMD
{
}
[...]

If you run mvn verify again, the build will succeed.

While this check is very useful, it can be slow and obtrusive during general development. For that reason, adding the check to a profile, which is executed only in an appropriate environment, can make the check optional for developers, but mandatory in an integration environment. See the Continuous Integration with Continuum section in the next chapter for information on using profiles and continuous integration.

While the PMD report allows you to run a number of different rules, there is one that is in a separate report. This is the CPD, or copy/paste detection report, and it includes a list of duplicate code fragments discovered across your entire source base. This report is included by default when you enable the PMD plugin in your reporting section, and will appear as “CPD report” in the Project Reports menu. An example report is shown in figure 6-8.

Figure 6-8: An example CPD report
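The profile approach described above can be sketched as follows. The profile id is an illustrative assumption, not taken from the book – use whatever convention your continuous integration setup expects:

```xml
<!-- Hypothetical profile: run pmd:check only when the build is invoked
     with -Pintegration (e.g. on a CI server), keeping everyday
     developer builds fast. -->
<profiles>
  <profile>
    <id>integration</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

Developers run plain mvn install; the integration environment runs mvn -Pintegration install and enforces the check.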
In a similar way to the main check, pmd:cpd-check can be used to enforce a failure if duplicate source code is found. However, the CPD report contains only one variable to configure: minimumTokenCount, which defaults to 100. With this setting you can fine tune the size of the copies detected, but it may not give you enough control to effectively set a rule for the source code, resulting in developers attempting to avoid detection by making only slight modifications, rather than identifying a possible factoring of the source code.

There are other alternatives for copy and paste detection, such as Checkstyle, and a commercial product called Simian (http://www.redhillconsulting.com.au/products/simian/). Simian can also be used through Checkstyle and has a larger variety of configuration options for detecting duplicate source code.

Checkstyle is a tool that is, in many ways, similar to PMD. It was originally designed to address issues of format and style, but has more recently added checks for other code issues. Depending on your environment, you may choose to use it in one of the following ways:

• Use it to check code formatting only, and rely on other tools for detecting other problems.
• Use it to check code formatting and selected other problems, and still rely on other tools for greater coverage.
• Use it to check code formatting and to detect other problems exclusively.

This section focuses on the first usage scenario. Whether to use the report only, or to enforce a check, will depend on the environment in which you are working. If you need to learn more about the available modules in Checkstyle, refer to the list on the Web site at http://checkstyle.sf.net/availablechecks.html.

Figure 6-9 shows the Checkstyle report obtained by running mvn checkstyle:checkstyle from the proficio-core directory. Some of the extra summary information for overall number of errors and the list of checks used has been trimmed from this display.
Figure 6-9: An example Checkstyle report

You'll see that each file with notices, warnings or errors is listed in a summary, and then the errors are shown, with a link to the corresponding source line – if the JXR report was enabled.

That's a lot of errors! By default, the rules used are those of the Sun Java coding conventions, but Proficio is using the Maven team's code style. This style is also bundled with the Checkstyle plugin, so to include the report in the site and configure it to use the Maven style, add the following to the reporting section of proficio/pom.xml:

[...]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <configLocation>config/maven_checks.xml</configLocation>
  </configuration>
</plugin>
[...]
Table 6-3 shows the configurations that are built into the Checkstyle plugin. The built-in Sun and Maven standards are quite different, and typically, one or the other will be suitable for most people.

Table 6-3: Built-in Checkstyle configurations

Configuration: config/sun_checks.xml
  Description: Sun Java Coding Conventions
  Reference: http://java.sun.com/docs/codeconv/

Configuration: config/maven_checks.xml
  Description: Maven team's coding conventions
  Reference: http://maven.apache.org/guides/development/guide-m2-development.html#Maven%20Code%20Style

Configuration: config/turbine_checks.xml
  Description: Conventions from the Jakarta Turbine project
  Reference: http://jakarta.apache.org/turbine/common/code-standards.html

Configuration: config/avalon_checks.xml
  Description: Conventions from the Apache Avalon project
  Reference: No longer online – the Avalon project has closed. These checks are for backwards compatibility only.

It is a good idea to reuse an existing Checkstyle configuration for your project if possible – if the style you use is common, then it is likely to be more readable and easily learned by people joining your project. However, if you have developed a standard that differs from these, or would like to use the additional checks introduced in Checkstyle 3.0 and above, you will need to create a Checkstyle configuration. While this chapter will not go into an example of how to do this, the Checkstyle documentation provides an excellent reference at http://checkstyle.sf.net/config.html.

The Checkstyle plugin itself has a large number of configuration options that allow you to customize the appearance of the report, filter the results, and more. The configLocation parameter can be set to a file within your build, a URL, or a resource within a special dependency. It is also possible to share a Checkstyle configuration among multiple projects, and to parameterize the Checkstyle configuration for creating a baseline organizational standard that can be customized by individual projects, as explained at http://maven.apache.org/plugins/maven-checkstyle-plugin/tips.html.

Before completing this section it is worth mentioning the Tag List plugin. This report, known as “Task List” in Maven 1.0, will look through your source code for known tags and provide a report on those it finds. By default, this will identify the tags TODO and @todo in the comments of your source code.
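Sharing a configuration across projects via configLocation can be sketched as follows; the URL is a placeholder for illustration, not a real location:

```xml
<!-- Hypothetical: pull an organization-wide Checkstyle configuration
     from a shared location instead of copying a file into each project.
     configLocation also accepts a local file path or a classpath
     resource provided by a dependency of the plugin. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <!-- Placeholder URL: host your standard where every build can reach it. -->
    <configLocation>http://build.example.org/standards/checkstyle.xml</configLocation>
  </configuration>
</plugin>
```

The trade-off of a URL is that builds need network access; packaging the configuration as an artifact and adding it as a plugin dependency avoids that at the cost of an extra release step.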
To try this plugin, add the following to the reporting section of proficio/pom.xml:

[...]
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>taglist-maven-plugin</artifactId>
  <configuration>
    <tags>
      <tag>TODO</tag>
      <tag>@todo</tag>
      <tag>FIXME</tag>
      <tag>XXX</tag>
    </tags>
  </configuration>
</plugin>
[...]

This configuration will locate any instances of TODO, @todo, FIXME, or XXX in your source code. It is actually possible to achieve this using Checkstyle or PMD rules, however this plugin is a more convenient way to get a simple report of items that need to be addressed at some point later in time.

PMD, Checkstyle, and Tag List are just three of the many tools available for assessing the health of your project's source code. Some other similar tools, such as FindBugs, JavaNCSS and JDepend, have beta versions of plugins available from the http://mojo.codehaus.org/ project at the time of this writing, and more plugins are being added every day.

6.8. Monitoring and Improving the Health of Your Tests

One of the important (and often controversial) features of Maven is the emphasis on testing as part of the production of your code. In the build life cycle defined in Chapter 2, you saw that tests are run before the packaging of the library or application for distribution, based on the theory that you shouldn't even try to use something before it has been tested. There are additional testing stages that can occur after the packaging step to verify that the assembled package works under other circumstances.

Knowing whether your tests pass is an obvious and important assessment of their health. As you learned in section 6.2, Setting Up the Project Web Site, it is easy to add a report to the Web site that shows the results of the tests that have been run. While the default Surefire configuration fails the build if the tests fail, the report (run either on its own, or as part of the site) will ignore these failures when generated to show the current test state. Failing the build is still recommended – but the report allows you to provide a better visual representation of the results. In addition, it can be a useful report for demonstrating the number of tests available and the time it takes to run certain tests for a package.

Another critical technique is to determine how much of your source code is covered by the test execution. While you are writing your tests, using this report on a regular basis can be very helpful in spotting any holes in the test plan. For assessing coverage, Cobertura (http://cobertura.sf.net) is the open source tool best integrated with Maven.
To see what Cobertura is able to report, run mvn cobertura:cobertura in the proficio-core directory of the sample application. Figure 6-10 shows the output that you can view in target/site/cobertura/index.html. The report contains both an overall summary, and a line-by-line coverage analysis of each source file, in the familiar Javadoc style framed layout.

For a source file, you'll notice the following markings:

• Unmarked lines are those that do not have any executable code associated with them. This includes method and class declarations, comments and white space.
• Unmarked lines with a green number in the second column are those that have been completely covered by the test execution. Each line with an executable statement has a number in the second column that indicates how many times that particular statement was run during the test run.
• Lines in red are statements that were not executed (if the count is 0), or for which all possible branches were not executed. For example, a branch is an if statement that can behave differently depending on whether the condition is true or false.
Figure 6-10: An example Cobertura report

The complexity indicated in the top right is the cyclomatic complexity of the methods in the class, which measures the number of branches that occur in a particular method. High numbers (for example, over 10) might indicate a method should be re-factored into simpler pieces, as it can be hard to visualize and test the large number of alternate code paths. If this is a metric of interest, you might consider having PMD monitor it.

The Cobertura report doesn't have any notable configuration, so including it in the site is simple. Add the following to the reporting section of proficio/pom.xml:

[...]
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
</plugin>
[...]

If you now run mvn site under proficio-core, the report will be generated in target/site/cobertura/index.html.

Due to a hard-coded path in Cobertura, the database used is stored in the project directory as cobertura.ser, and is not cleaned with the rest of the project. While not required, there is another useful setting to add to the build section to ensure that this happens. Add the following to the build section of proficio/pom.xml:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>clean</id>
          <goals>
            <goal>clean</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
[...]

If you now run mvn clean in proficio-core, you'll see that the cobertura.ser file is deleted, as well as the target directory.
The Cobertura plugin also contains a goal called cobertura:check that is used to ensure that the coverage of your source code is maintained at a certain percentage. To configure this goal for Proficio, add a configuration and another execution to the build plugin definition you added above when cleaning the Cobertura database:

[...]
<configuration>
  <check>
    <totalLineRate>100</totalLineRate>
    <totalBranchRate>100</totalBranchRate>
  </check>
</configuration>
<executions>
  [...]
  <execution>
    <id>check</id>
    <goals>
      <goal>check</goal>
    </goals>
  </execution>
</executions>
[...]

Note that the configuration element is outside of the executions. This ensures that if you run mvn cobertura:check from the command line, the configuration will be applied. This wouldn't be the case if it were associated with the life-cycle bound check execution.

The rules that are being used in this configuration are 100% overall line coverage rate, and 100% branch coverage rate. If you now run mvn verify under proficio-core, the check will be performed. You'll notice that your tests are run twice. This is because Cobertura needs to instrument your class files, and the tests are re-run using those class files instead of the normal ones (however, these are instrumented in a separate directory, so are not packaged in your application). The Surefire report may also re-run tests if they were already run – both of these are due to a limitation in the way the life cycle is constructed that will be improved in future versions of Maven.

You would have seen in the previous examples that there were some lines not covered, so running the check fails. Normally, you would add unit tests for the functions that are missing tests. However, looking through the report, you may decide that only some exceptional cases are untested, and decide to reduce the overall average required. You can do this for Proficio to have the tests pass by changing the setting in proficio/pom.xml:

[...]
<configuration>
  <check>
    <totalLineRate>80</totalLineRate>
    [...]
These settings remain quite demanding though. They are requirements for averages across the entire source tree, and in many cases, the easiest way to increase coverage is to remove code that handles untested, exceptional cases – and that's certainly not something you want! It is just as important to allow these exceptions, such as handling checked exceptions that are unexpected in a properly configured system and difficult to test, as it is to require that the other code be tested.

You may want to enforce the rates for each file individually, using lineRate and branchRate, or as the average across each package, using packageLineRate and packageBranchRate. It is also possible to set requirements on individual packages or classes using the regexes parameter. For more information, refer to the Cobertura plugin configuration reference at http://mojo.codehaus.org/cobertura-maven-plugin.

Choosing appropriate settings is the most difficult part of configuring any of the reporting metrics in Maven. Some helpful hints for determining the right code coverage settings are:

• Like all metrics, involve the whole development team in the decision, so that they understand and agree with the choice.
• Don't set it too low, as it will become a minimum benchmark to attain and rarely more.
• Don't set it too high, as it will discourage writing code to handle exceptional cases that aren't being tested.
• Set some known guidelines for what type of code can remain untested.
• Choose to reduce coverage requirements on particular classes or packages rather than lowering them globally. Consider setting any package rates higher than the per-class rate, and setting the total rate higher than both. This will allow for some constructs to remain untested, only allowing a small number of lines to be untested.
• Remain flexible – consider changes over time rather than hard and fast rules.

Cobertura is not the only solution available for assessing test coverage. The best known commercial offering is Clover, which is very well integrated with Maven as well. It behaves very similarly to Cobertura, and you can evaluate it for 30 days when used in conjunction with Maven. For more information, see the Clover plugin reference on the Maven Web site at http://maven.apache.org/plugins/maven-clover-plugin/.

Of course, there is more to assessing the health of tests than success and coverage. These reports won't tell you if all the features have been implemented – this requires functional or acceptance testing. They also won't tell you whether untested input values produce the correct results. Tools like Jester (http://jester.sf.net), although not yet integrated with Maven directly, may be of assistance there. Jester mutates the code that you've already determined is covered and checks that it causes the test to fail when run a second time with the wrong code.

To conclude this section on testing, it is worth noting that one of the benefits of Maven's use of the Surefire abstraction is that the tools above will work for any type of runner introduced. Surefire supports tests written with TestNG, and at the time of writing experimental JUnit 4.0 support is also available. In both cases, these reports work unmodified with those test types. If you have another tool that can operate under the Surefire framework, it is possible for you to write a provider to use the new tool, and get integration with these other tools for free.
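The regexes parameter mentioned above can be sketched roughly as follows. The package pattern and rates are illustrative assumptions about a hypothetical code base, not settings from the book:

```xml
<!-- Hypothetical: demand higher coverage for a critical package than
     the project-wide totals, within the cobertura:check configuration. -->
<configuration>
  <check>
    <totalLineRate>80</totalLineRate>
    <totalBranchRate>80</totalBranchRate>
    <regexes>
      <regex>
        <!-- Placeholder pattern: match the classes you care most about. -->
        <pattern>com.exist.mvnbook.proficio.*</pattern>
        <lineRate>95</lineRate>
        <branchRate>90</branchRate>
      </regex>
    </regexes>
  </check>
</configuration>
```

This follows the guideline above of setting targeted rates per package or class rather than raising the global requirement for everyone.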
6.9. Monitoring and Improving the Health of Your Dependencies

Many people use Maven primarily as a dependency manager. While this is only one of Maven's features, used well it is a significant time saver. Maven 2.0 introduced transitive dependencies, where the dependencies of dependencies are included in a build, and a number of other features such as scoping and version selection. This brought much more power to Maven's dependency mechanism, but does introduce a drawback: poor dependency maintenance or poor scope and version selection affects not only your own project, but any projects that depend on your project. Left unchecked, the full graph of a project's dependencies can quickly balloon in size and start to introduce conflicts.

The first step to effectively maintaining your dependencies is to review the standard report included with the Maven site. If you haven't done so already, run mvn site in the proficio-core directory, and browse to the file generated in target/site/dependencies.html. The result is shown in figure 6-11.

Figure 6-11: An example dependency report
This report shows detailed information about your direct dependencies, but more importantly, the second section lists all of the transitive dependencies included through those dependencies. It's here that you might see something that you didn't expect – an extra dependency, an incorrect version, or an incorrect scope – and choose to investigate its inclusion.

Currently, discovering where a transitive dependency comes from, and why, requires running your build with debug turned on, such as mvn -X package. This will output the dependency tree as it is calculated, as well as comments about what versions and scopes are selected. This can be quite difficult to read. For example, here is the resolution process of the dependencies of proficio-core (some fields have been omitted for brevity):

proficio-core:1.0-SNAPSHOT
  junit:3.8.1 (selected for test)
  plexus-container-default:1.0-alpha-9 (selected for compile)
    plexus-utils:1.0.4 (selected for compile)
    classworlds:1.1-alpha-2 (selected for compile)
    junit:3.8.1 (not setting scope to: compile; local scope test wins)
  proficio-api:1.0-SNAPSHOT (selected for compile)
    proficio-model:1.0-SNAPSHOT (selected for compile)

Here you can see that, for example, proficio-model is introduced by proficio-api, and that plexus-container-default attempts to introduce junit as a compile dependency, but that it is overridden by the test scoped dependency in proficio-core.

Because this output is hard to work with, at the time of this writing there are two features in progress that are aimed at helping in this area:

• The Repository Manager (Archiva) will allow you to navigate the dependency tree through the metadata stored in the Ibiblio repository (artifacts can also be obtained from http://www.ibiblio.org/maven2/).
• A dependency graphing plugin that will render a graphical representation of the information.

Another report that is available is the “Dependency Convergence Report”. This report is also a standard report, but appears in a multi-module build only. To see the report for the Proficio project, run mvn site from the base proficio directory. The file target/site/dependency-convergence.html will be created, and is shown in figure 6-12. The report shows all of the dependencies included in all of the modules within the project, using indentation to indicate which dependencies introduce other dependencies. It also includes some statistics and reports on two important factors:

• Whether the versions of dependencies used for each module are in alignment. This helps ensure your build is consistent and reduces the probability of introducing an accidental incompatibility.
• Whether there are outstanding SNAPSHOT dependencies in the build, which indicate dependencies that are in development, and must be updated before the project can be released.
While these reports can't fix problems for you, they can provide basic help in identifying the state of your dependencies once you know what to find. To improve your project's health, and the ability to reuse it as a dependency itself:

• Declare dependencies with the correct scope (for example, runtime if a dependency is needed to bundle with or run the application, but not for compiling your source code).
• Use a range of supported dependency versions, declaring the absolute minimum supported as the lower boundary, rather than using the latest available. You can control what version is actually used by declaring the dependency version in a project that packages or runs the application.
• Add exclusions to dependencies to remove poorly defined dependencies from the tree.
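The last two recommendations can be sketched in POM form. The artifact coordinates and version numbers below are placeholders for illustration, not dependencies of Proficio:

```xml
<!-- Hypothetical dependency declarations demonstrating a version range
     and an exclusion of a poorly defined transitive dependency. -->
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>example-library</artifactId>
    <!-- Any version from 1.1 (the minimum supported) up to, but not
         including, 2.0 is acceptable. -->
    <version>[1.1,2.0)</version>
    <exclusions>
      <exclusion>
        <!-- Remove a transitive dependency this project never uses. -->
        <groupId>com.example</groupId>
        <artifactId>unwanted-transitive</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
```

The square bracket denotes an inclusive boundary and the parenthesis an exclusive one, so the range reads "at least 1.1, below 2.0".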
Given the importance of this task, more tools are needed in Maven. Two that are in progress were listed above, but there are plans for more:

• A class analysis plugin that helps identify dependencies that are unused in your current project.
• Improved dependency management features, including different mechanisms for selecting versions that will allow you to deal with conflicting versions, specification dependencies that let you depend on an API and manage the implementation at runtime, and more.

6.10. Monitoring and Improving the Health of Your Releases

Releasing a project is one of the most important procedures you will perform, but it is often tedious and error prone. While the next chapter will go into more detail about how Maven can help automate that task and make it more reliable, this section will focus on improving the quality of the code released, and the information released with it.

An important tool in determining whether a project is ready to be released is Clirr (http://clirr.sf.net/). Clirr detects whether the current version of a library has introduced any binary incompatibilities with the previous release. This is particularly important if you are building a library or framework that will be consumed by developers outside of your own project. Libraries will often be substituted by newer versions to obtain new features or bug fixes, but then expected to continue working as they always have. Because existing libraries are not recompiled every time a version is changed, there is no verification that a library is binary-compatible – incompatibility will be discovered only when there's a failure. Catching these before a release can eliminate problems that are quite difficult to resolve once the code is “in the wild”. An example Clirr report is shown in figure 6-13.

Figure 6-13: An example Clirr report
But does binary compatibility apply if you are not developing a library for external consumption? While it may be of less importance, the answer here is clearly – yes. As a project grows, the interactions between the project's own components will start behaving as if they were externally-linked. Different modules may use different versions, even if they are binary compatible, or a quick patch may need to be made and a new version deployed into an existing application. This is particularly true in a Maven-based environment, where the dependency mechanism is based on the assumption of binary compatibility between versions. While methods of marking incompatibility are planned for future versions, Maven currently works best if any version of an artifact is backwards compatible.

To see this in action, add the following to the reporting section of proficio-api/pom.xml:

[...]
<reporting>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>clirr-maven-plugin</artifactId>
      <configuration>
        <minSeverity>info</minSeverity>
      </configuration>
    </plugin>
  </plugins>
</reporting>
[...]

By default, the Clirr report shows only errors and warnings. However, by setting the minSeverity parameter, you can configure the plugin to show all informational messages, giving you an overview of all the changes since the last release.

If you run mvn clirr:clirr in proficio-api, you'll notice that Maven reports that it is using version 0.9 of proficio-api against which to compare (and that it is downloaded if you don't have it already):

[...]
[clirr:clirr] Comparing to version: 0.9
-------------------------------------------------------------
BUILD SUCCESSFUL
-------------------------------------------------------------

This version is determined by looking for the newest release in the repository that is before the current development version. If you run this command, the report will be generated in target/site/clirr-report.html.

Note: The older versions of proficio-api are retrieved from the repository. If they are not available there, you may need to install the artifacts in your local repository yourself, which you can do by issuing the mvn install command from each sub-directory: proficio-0.8 and proficio-0.9.
You can change the version used with the comparisonVersion parameter. For example, to compare the current code to the 0.8 release, run the following command:

mvn clirr:clirr -DcomparisonVersion=0.8

You'll notice there are more errors in the report, since this early development version had a different API, and it was later redesigned to make sure that version 1.0 would be more stable in the long run.

In this instance, you are monitoring the proficio-api component for binary compatibility changes only. This is the most important one to check, as it will be used as the interface into the implementation by other applications. However, it is a good idea to monitor as many components as possible. Even if they are designed only for use inside the project, there is nothing in Java preventing them from being used elsewhere. If the API is the only component that the development team will worry about breaking, then there is no point in checking the others – it will create noise that devalues the report's content in relation to the important components.

Like all of the quality metrics, it is important to agree up front, if the team is prepared to do so, on the acceptable incompatibilities, to discuss and document the practices that will be used, and to check them automatically. It is best to make changes earlier in the development cycle, so that fewer people are affected; the longer poor choices remain, the harder they are to change as adoption increases. Once a version has been released that is intended to remain binary-compatible going forward, it is almost always preferable to deprecate an old API and add a new one, delegating the code,

The Clirr plugin is also capable of automatically checking for introduced incompatibilities through the clirr:check goal, and it can assist in making your own project more stable. To add the check to the proficio-api/pom.xml file, add the following to the build section:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>clirr-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    [...]
  </plugins>
</build>
[...]

If you now run mvn verify, you will see that the build fails due to the binary incompatibility introduced between the 0.9 preview release and the final 1.0 version. Since this was an acceptable incompatibility due to the preview nature of the 0.9 release, you can choose to exclude it from the report by adding the following configuration to the plugin:
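If you want the 0.8 comparison to run without remembering the command-line flag, the same parameter can be placed in the plugin configuration. This is a sketch rather than a recommendation from the book – pinning the baseline in the POM is usually a temporary measure:

```xml
<!-- Sketch: pin the Clirr comparison baseline in the POM instead of
     passing -DcomparisonVersion on every invocation. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>clirr-maven-plugin</artifactId>
  <configuration>
    <comparisonVersion>0.8</comparisonVersion>
  </configuration>
</plugin>
```

Remove the setting (or revert to the default newest-release behavior) once the comparison against the older version is no longer needed.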
rather than removing or changing the original API and breaking binary compatibility.9 preview release and the final 1. since this early development version had a different API. you are monitoring the proficio-api component for binary compatibility changes only. to compare the current code to the 0. If it is the only one that the development team will worry about breaking. it is almost always preferable to deprecate an old API and add a new one.] If you now run mvn verify. if the team is prepared to do so. and it can assist in making your own project more stable. you can choose to exclude that from the report by adding the following configuration to the plugin: 208 .
Effective Java describes a number of practical rules that are generally helpful to writing code in Java. it is listed only in the build configuration.codehaus. With this simple setup. While the topic of designing a strong public API and maintaining binary compatibility is beyond the scope of this book. it will not pinpoint potential problems for you. not just the one acceptable failure. This allows the results to be collected over time to form documentation about known incompatibilities for applications using the library.Assessing Project Health with Maven [.mojo</groupId> <artifactId>clirr-maven-plugin</artifactId> <configuration> <excludes> <exclude>**/Proficio</exclude> </excludes> </configuration> [.] <plugin> <groupId>org. and ignored in the same way that PMD does. taking two source trees and comparing the differences in method signatures and Javadoc annotations. Note that in this instance. and particularly so if you are designing a public API. the following articles and books can be recommended: • Evolving Java-based APIs contains a description of the problem of maintaining binary compatibility. and then act accordingly... and so is most useful for browsing. Built as a Javadoc doclet. Hopefully a future version of Clirr will allow acceptable incompatibilities to be documented in the source code. as well as strategies for evolving an API without breaking it. you can create a very useful mechanism for identifying potential release disasters much earlier in the development process. This can be useful in getting a greater level of detail than Clirr on specific class changes.] </plugin> This will prevent failures in the Proficio class from breaking the build in the future. it takes a very different approach. which is available at.. so the report still lists the incompatibility. It has a functional Maven 2 plugin. A limitation of this feature is that it will eliminate a class entirely. 209 . However.codehaus. 
• A similar tool to Clirr that can be used for analyzing changes between releases is JDiff.org/jdiff-maven-plugin..
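If you find yourself always comparing against a particular baseline, the comparison version can also be pinned in the plugin configuration rather than being passed on the command line each time. The following is only a sketch – it assumes the comparisonVersion parameter is supported by the version of the Clirr plugin in use, and the 0.8 value is simply the example baseline used above:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>clirr-maven-plugin</artifactId>
  <configuration>
    <!-- Baseline release to compare against (illustrative value) -->
    <comparisonVersion>0.8</comparisonVersion>
  </configuration>
</plugin>
```

With this in place, running mvn clirr:clirr without any -D arguments would use the configured baseline.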
6.11. Viewing Overall Project Health

In the previous sections of this chapter, a large amount of information was presented about a project, each in discrete reports. Some of the reports linked to one another, but none related information from another report to itself, and few of the reports aggregated information across a multiple module build. In addition, none of the reports presented how the information changes over time, other than the release announcements.

These are all important features to have to get an overall view of the health of a project. While some attempts were made to address this in Maven 1.0 (for example, the Dashboard plugin), they did not address all of these requirements, and have not yet been implemented for Maven 2.0. However, it should be noted that the Maven reporting API was written with these requirements in mind specifically, and as the report set stabilizes, summary reports will start to appear.

6.12. Summary

The power of Maven's declarative project model is that with a very simple setup (often only 4 lines in pom.xml), a new set of information about your project can be added to a shared Web site to help your team visualize the health of the project. Best of all, the model remains flexible enough to make it easy to extend and customize the information published on your project web site.

However, it is important that your project information not remain passive. Most Maven plugins allow you to integrate rules into the build that check certain constraints on that piece of information, once it is well understood. The purpose, then, of the visual display is to aid in deriving the appropriate constraints to use, enforcing good, individual checks that fail the build when they're not met. It is important that developers are involved in the decision making process regarding build constraints, so that they feel that they are achievable. How well this works in your own projects will depend on the development culture of your team, as it requires a shift from a focus on time and deadlines, to a focus on quality. Once established, this focus and automated monitoring will have the natural effect of improving productivity and reducing time of delivery again, as there is a constant background monitor that ensures the health of the project is being maintained. This, in turn, will reduce the need to gather information from various sources about the health of the project.

The additions and changes to Proficio made in this chapter can be found in the Code_Ch06.zip source archive, and will be used as the basis for the next chapter. The next chapter examines team development and collaboration, and incorporates the concepts learned in this chapter, along with techniques to ensure that the build checks are automated, regularly scheduled, and run in the appropriate environment.
– Tom Clancy
7.1. The Issues Facing Teams

Software development as part of a team, whether it is 2 people or 200 people, faces a number of challenges to the success of the effort. Many of these challenges are out of any given technology's control – for instance, finding the right people for the team, and dealing with differences in opinions. However, one of the biggest challenges relates to the sharing and management of development information. This problem is particularly relevant to those working as part of a team that is distributed across different physical locations and timezones.

As each member retains project information that isn't shared or commonly accessible, every other member (and particularly new members) will inevitably have to spend time obtaining this localized information, repeating errors previously solved or duplicating efforts already made. This problem gets exponentially larger as the size of the team increases. Even when it is not localized, project information can still be misplaced, misinterpreted, or forgotten. While it's essential that team members receive all of the project information required to be productive, it's just as important that they don't waste valuable time researching and reading through too many information sources simply to find what they need. As teams continue to grow, it is obvious that trying to publish and disseminate all of the available information about a project would create a near impossible learning curve and generate a barrier to productivity, further contributing to the problem. Although a distributed team has a higher communication overhead than a team working in a single location, the key to the information issue in both situations is to reduce the amount of communication necessary to obtain the required information in the first place.

A Community-oriented Real-time Engineering (CoRE) process excels with this information challenge. An organizational and technology-based framework, CoRE is based on accumulated learnings from open source projects that have achieved successful, rapid development while working on complex, component-based projects despite large, widely-distributed teams. Using the model of a community, CoRE emphasizes the relationship between project information and project members. This value is delivered to development teams by supporting project transparency, real-time stakeholder participation, and asynchronous engineering, in rapid, iterative cycles – all of which is enabled by the accessibility of consistently structured and organized information, such as centralized code repositories, web-based communication channels, and web-based project management tools. These tools aid the team to organize, visualize, and document for reuse the artifacts that result from a software project. In addition, the fact that everyone has direct access to the other team members through the CoRE framework reduces the time required to not only share information, but also to incorporate feedback, resulting in shortened development cycles. The CoRE approach to development also means that new team members are able to become productive quickly, and that existing team members become more productive and effective. Even though teams may be widely distributed, CoRE enables globally distributed development teams to cohesively contribute to high-quality software.

While Maven is not tied directly to the CoRE framework, it does encompass a set of practices and tools that enable effective team communication and collaboration.
As described in Chapter 6, Maven can gather and share the knowledge about the health of a project. In this chapter, this is taken a step further, demonstrating how Maven provides teams with real-time information on the builds and health of a project, through the practice of continuous integration. This chapter also looks at the adoption and use of a consistent development environment, and the use of archetypes to ensure consistency in the creation of new projects.

7.2. How to Set up a Consistent Developer Environment

Consistency is important when establishing a shared development environment. Without it, the set up process for a new developer can be slow, error-prone and full of omissions, and it will be the source of time-consuming development problems in the future, because the environment will tend to evolve inconsistently once started that way.

While one of Maven's objectives is to provide suitable conventions to reduce the introduction of inconsistencies in the build environment, there are unavoidable variables that remain, such as different installation locations for software, varying operating systems, multiple JDK versions, and other discrete settings such as user names and passwords. To maintain build consistency, while still allowing for this natural variability, the key is to minimize the configuration required by each individual developer, and to effectively define and declare these variables.

In Maven, these variables relate to the user and installation settings files. In Chapter 2, you learned how to create your own settings.xml file. This file can be stored in the conf directory of your Maven installation, or in the .m2 subdirectory of your home directory (settings in this location take precedence over those in the Maven installation directory). The settings.xml file contains a number of settings that are user-specific, such as proxy settings, but also several that are typically common across users in a shared environment. In a shared development environment, it's a good idea to leverage Maven's two different settings files to separately manage shared and user-specific settings: common configuration settings are included in the installation directory, while an individual developer's settings are stored in their home directory.
The following is an example configuration file that you might use in the installation directory, or in <user_home>/.m2/settings.xml:

  <settings>
    <proxies>
      <proxy>
        <active>true</active>
        <protocol>http</protocol>
        <host>proxy</host>
        <port>8080</port>
      </proxy>
    </proxies>
    <servers>
      <server>
        <id>website</id>
        <username>${website.username}</username>
      </server>
    </servers>
    <pluginGroups>
      <pluginGroup>mycompany.plugins</pluginGroup>
    </pluginGroups>
  </settings>

There are a number of reasons to include these settings in a shared configuration:

• If a proxy server is required, it would usually be set consistently across the organization or department.
• The server settings will typically be common among a set of developers, with only specific properties, such as the user name, defined in the user's settings (here, ${website.username}). By placing the common configuration in the shared settings, issues with inconsistently-defined identifiers and permissions are avoided.
• A profile in the shared settings can define the common, internal repositories that contain a given organization's or department's released artifacts. These repositories are independent of the central repository in this configuration, and are typically set up within your own organization or department. See section 7.3 for more information on setting up an internal repository.
• The mirror element can be used to specify a mirror of a repository that is closer to you. See section 7.3 of this chapter for more information on creating a mirror of the central repository within your own organization.
• The active profiles listed in the shared settings enable the repository profile in every environment. Another profile, property-overrides, is also enabled by default; this profile is defined in the user's settings file to set the properties used in the shared file.
• The plugin groups are necessary only if an organization has plugins, which are run from the command line and not defined in the POM.

You'll notice that the local repository is omitted in the prior example. In Maven, the local repository is defined as the repository of a single user. While you may define a standard location that differs from Maven's default (for example, ${user.home}/maven-repo), it is important that you do not configure this setting in a way that shares a local repository, at a single physical location, across users.

The previous example forms a basic template that is a good starting point for the settings file in the Maven installation. Using the basic template, you can easily add and consistently roll out any new server and repository settings, without having to worry about integrating local changes made by individual developers. The user-specific configuration is also much simpler, as shown below:

  <settings>
    <profiles>
      <profile>
        <id>property-overrides</id>
        <properties>
          <website.username>myuser</website.username>
        </properties>
      </profile>
    </profiles>
  </settings>

To confirm that the settings are installed correctly, you can view the merged result by using the following help plugin command:

  C:\mvnbook> mvn help:effective-settings
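As a sketch of the mirror element mentioned above, a shared settings file could redirect requests for the central repository to an organization's mirror. The id, name and URL here are illustrative only, and assume a mirror such as the one set up in section 7.3 is running on localhost:8081:

```xml
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <name>Internal mirror of the central repository</name>
      <!-- Hypothetical location of the organization's mirror -->
      <url>http://localhost:8081/central/</url>
      <!-- Redirect all requests for the repository with id "central" -->
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

Because this lives in the shared settings, individual developers do not need to change their projects to benefit from the mirror.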
Separating the shared settings from the user-specific settings is helpful, but it is also important to ensure that the shared settings are easily and reliably installed with Maven, and when possible, easily updated. The following are a few methods to achieve this:

• Rebuild the Maven release distribution to include the shared configuration file and distribute it internally. A new release will be required each time the configuration is changed.
• Use an existing desktop management solution, or other custom solution.
• Place the Maven installation on a read-only shared or network drive from which each developer runs the application. However, doing so will prevent Maven from being available off-line, or if there are network problems.
• Check the Maven installation into CVS, Subversion, or other source control management (SCM) system. Each developer can check out the installation into their own machine and run it from there. Retrieving an update from an SCM will easily update the configuration and/or installation, and each execution will immediately be up-to-date.

If necessary, it is possible to maintain multiple Maven installations, by one of the following methods:

• Using the M2_HOME environment variable to force the use of a particular installation.
• Adjusting the path or creating symbolic links (or shortcuts) to the desired Maven executable, if M2_HOME is not set.

Configuring the settings.xml file covers the majority of use cases for individual developer customization. In some circumstances however, an individual will need to customize the build of an individual project. To do this, developers must use profiles in the profiles.xml file, located in the project directory; note, however, that it applies to all projects that are built in the developer's environment. For more information on profiles, see Chapter 3.

Now that each individual developer on the team has a consistent set up that can be customized as needed, the next step is to establish a repository to and from which artifacts can be published and dependencies downloaded.

7.3. Creating a Shared Repository

Most organizations will need to set up one or more shared repositories, since not everyone can deploy to the central Maven repository. To publish releases for use across different environments within their network, organizations will typically want to set up what is referred to as an internal repository. For an explanation of the different types of repositories, see Chapter 2. This internal repository is still treated as a remote repository in Maven, just as any other external repository would be.

Setting up an internal repository is simple. While any of the available transport protocols can be used, the most popular is HTTP. You can use an existing HTTP server for this, or create a new server using Apache HTTPd, Apache Tomcat, Jetty, or any number of other servers. This chapter will assume the repositories are running from http://localhost:8081/, and that artifacts are deployed to the repositories using the file system. However, it is possible to use a repository on another server with any combination of supported protocols, including http, ftp, scp, sftp and more. For more information, refer to Chapter 3.

To set up your organization's internal repository using Jetty, first create a new directory in which to store the files; in this example, C:\mvnbook\repository will be used. While it can be stored anywhere you have permissions, the server is set up on your own workstation for simplicity in this example. In a real deployment, you will want to set up or use an existing HTTP server that is in a shared, accessible location, configured securely and monitored to ensure it remains running at all times.

Next, download the Jetty 5.1.10 server bundle from the book's Web site and copy it to the repository directory. Change to that directory, and run:

  C:\mvnbook\repository> java -jar jetty-5.1.10-bundle.jar 8081

You can now navigate to http://localhost:8081/ and find that there is a Web server running, displaying that directory. For the first repository, create a subdirectory called internal, which will be available at http://localhost:8081/internal/, using the following command:

  C:\mvnbook\repository> mkdir internal

This creates an empty repository, and is all that is needed to get started. Your repository is now set up.

It is also possible to set up another repository (or use the same one) to mirror content from the Maven central repository. While this isn't required, it is common in many organizations, as it eliminates the requirement for Internet access or proxy configuration. In addition, it provides faster performance (as most downloads to individual developers come from within their own network), and gives full control over the set of artifacts with which your software is built, by avoiding any reliance on Maven's relatively open central repository. Later in this chapter you will learn that there are good reasons to run multiple, separate repositories; however, rather than set up multiple Web servers, you can store the repositories on this single server. Create a separate repository under the same server:

  C:\mvnbook\repository> mkdir central

This repository will be available at http://localhost:8081/central/. At the time of writing, the size of the Maven central repository was 5.8G. To populate the repository you just created, there are a number of methods available:

• Manually add content as desired using mvn deploy:deploy-file (this requires a manual procedure for each artifact).
• Use rsync to take a copy of the central repository and regularly update it.
• Set up the Repository Manager (Archiva)[10] as a proxy to the central repository. This will download anything that is not already present, and keep a copy in your internal repository for others on your team to reuse.

The Repository Manager (Archiva) is a recent addition to the Maven build platform that is designed to administer your internal repository. It is deployed to your Jetty server (or any other servlet container) and provides remote repository proxies, as well as friendly repository browsing, searching, and reporting. The repository manager can be downloaded from http://maven.apache.org/repositorymanager/.

10. Repository Manager (Archiva) is a component of Exist Global Maestro Project Server. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. For more information on Maestro, please see http://www.exist.com/.
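To illustrate the first method, a single artifact can be added to the new repository with the deploy plugin's deploy-file goal. The coordinates and file name below are hypothetical – substitute the details of the artifact you actually need to add:

```
mvn deploy:deploy-file -DgroupId=com.mycompany.example \
    -DartifactId=example-lib -Dversion=1.0 -Dpackaging=jar \
    -Dfile=example-lib-1.0.jar \
    -Durl=file://localhost/c:/mvnbook/repository/internal
```

Since this example writes to the repository through the file system, no server credentials are needed; for a remote protocol such as scp, a matching server entry in settings.xml (identified with -DrepositoryId) would supply them.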
When using this repository for your projects, there are two choices: use it as a mirror of the central repository, or have it override the central repository. You would use it as a mirror if it is intended to be a copy of the central repository exclusively, and if it's acceptable to have developers configure this in their settings, as demonstrated in section 7.2. In that case, developers may choose to use a different mirror, or the original central repository directly, without consequence to the outcome of the build. On the other hand, if you want to prevent access to the central repository for greater control, to configure the repository from the project level instead of in each user's settings, or to include your own artifacts in the same repository, you should override the central repository. To override the central repository with your internal repository, you must define a repository in a settings file and/or POM that uses the identifier central. Usually, this must be defined as both a regular repository and a plugin repository to ensure all access is consistent.

Repositories such as the one above are usually configured in the POM, so that the repository is configured at the project level instead of in each user's settings (with one exception, which will be discussed next). However, there is a problem: when a POM inherits from another POM that is not in the central repository, it must retrieve the parent from the repository. This makes it impossible to define the repository in the parent itself, and for a situation where a developer might not have configured their settings and instead manually installed the POM, or had it in their source code check out, the repository would need to be declared in every POM. Not only is this very inconvenient, it would be a nightmare to change should the repository location change! The solution is to declare your internal repository (or central replacement) in the shared settings.xml file, as shown in section 7.2; otherwise, Maven will fail to download any dependencies that are not in your local repository, unless you have mirrored the central repository using one of the techniques discussed previously. If you have multiple repositories, it is necessary to declare in this way only those that contain an inherited POM. It is still important to declare the repositories that will be used in the top-most POM itself, so that a project can add repositories itself for dependencies located outside of those repositories configured initially.

The next section discusses how to set up an "organization POM" hierarchy that declares shared settings within an organization and its departments.
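As a concrete sketch, overriding the central repository in the shared settings might look like the following. The profile name is arbitrary and the URL assumes the internal repository from section 7.3; only the central identifier is significant:

```xml
<settings>
  <profiles>
    <profile>
      <id>central-override</id>
      <repositories>
        <!-- The id "central" replaces Maven's built-in central repository -->
        <repository>
          <id>central</id>
          <name>Internal Repository</name>
          <url>http://localhost:8081/internal/</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <!-- Declared again as a plugin repository so all access is consistent -->
        <pluginRepository>
          <id>central</id>
          <name>Internal Plugin Repository</name>
          <url>http://localhost:8081/internal/</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>central-override</activeProfile>
  </activeProfiles>
</settings>
```

With this in the shared installation settings, every developer's builds resolve both dependencies and plugins from the internal repository, including any inherited parent POMs.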
7.4. Creating an Organization POM

As previously mentioned in this chapter, consistency is important when setting up your build infrastructure, and in the same way, project inheritance can be used to assist in ensuring project consistency, by declaring shared elements in a common parent POM. While project inheritance was limited by the extent of a developer's checkout in Maven 1.0, Maven 2 now retrieves parent projects from the repository, so it's possible to have one or more parents that define elements common to several projects. These parents (levels) may be used to define departments, or the organization as a whole, and the project structure can be related to a company structure, wherein there's the organization itself, its departments, and then the teams within those departments. Any number of levels (parents) can be used, depending on the information that needs to be shared.

As an example, consider the Maven project itself. It is a part of the Apache Software Foundation, and is a project that, itself, has a number of sub-projects (Maven, Maven SCM, Maven Continuum, etc.). As a result, there are three levels to consider when working with any individual module that makes up the Maven project. To continue the Maven example, consider the POM for Maven SCM:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-parent</artifactId>
      <version>1</version>
    </parent>
    <groupId>org.apache.maven.scm</groupId>
    <artifactId>maven-scm</artifactId>
    <url>http://maven.apache.org/maven-scm/</url>
    [...]
    <modules>
      <module>maven-scm-api</module>
      <module>maven-scm-providers</module>
      [...]
    </modules>
  </project>

If you were to review the entire POM, you'd find that there is very little deployment or repository-related information, as this is consistent information, which is shared across all Maven projects through inheritance. You may have noticed the unusual version declaration for the parent project. Since the version of the POM usually bears no resemblance to the software it builds, the easiest way to version a POM is through sequential numbering. Future versions of Maven plan to automate the numbering of these types of parent projects to make this easier.

It is important to recall, from section 7.3, that if your inherited projects reside in an internal repository, then that repository will need to be added to the settings.xml file in the shared installation (or in each developer's home directory), since when working with this type of hierarchy, Maven must retrieve each parent from the repository.

If you look at the Maven project's parent POM, you'd see it looks like the following:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
      <groupId>org.apache</groupId>
      <artifactId>apache</artifactId>
      <version>1</version>
    </parent>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-parent</artifactId>
    <version>5</version>
    <url>http://maven.apache.org/</url>
    [...]
    <mailingLists>
      <mailingList>
        <name>Maven Announcements List</name>
        <post>announce@maven.apache.org</post>
        [...]
      </mailingList>
    </mailingLists>
    <developers>
      <developer>
        [...]
      </developer>
    </developers>
  </project>

The Maven parent POM declares the elements that are common to all of its sub-projects, such as the announcements mailing list and the list of developers that work across the whole project. In turn, most of the remaining elements are inherited from the organization-wide parent project, in this case the Apache Software Foundation, which declares the snapshot repository (which will be discussed further in section 7.6) and the deployment locations:

  <project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.apache</groupId>
    <artifactId>apache</artifactId>
    <version>1</version>
    <organization>
      <name>Apache Software Foundation</name>
      <url>http://www.apache.org/</url>
    </organization>
    <url>http://www.apache.org/</url>
    [...]
    <repositories>
      <repository>
        <id>apache.snapshots</id>
        <name>Apache Snapshot Repository</name>
        <url>http://[...].apache.org/maven-snapshot-repository</url>
        <releases>
          <enabled>false</enabled>
        </releases>
      </repository>
    </repositories>
    [...]
    <distributionManagement>
      <repository>
        [...]
      </repository>
      <snapshotRepository>
        [...]
      </snapshotRepository>
    </distributionManagement>
  </project>

An issue that can arise when working with this type of hierarchy is regarding the storage location of the source POM files. These parent POM files are likely to be updated on a different, and less frequent, schedule than the projects themselves. For this reason, it is best to store the parent POM files in a separate area of the source control tree, where they can be checked out, modified, and deployed with their new version as appropriate. Source control management systems like CVS and SVN (with the traditional intervening trunk directory at the individual project level) do not make it easy to store and check out such a structure. In fact, there is no best practice requirement to even store these files in your source control management system – you can retain the historical versions in the repository, if it is backed up (in the future, the Maestro Repository Manager will allow POM updates from a Web interface).
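One possible arrangement – purely illustrative, with hypothetical names – keeps the organization-level POMs in their own area of the repository, beside (rather than inside) the individual projects:

```
svn-repository/
  poms/
    organization/
      pom.xml      <- organization-wide parent, versioned 1, 2, 3...
    department/
      pom.xml      <- department parent, inheriting the one above
  project-a/
    trunk/
      pom.xml      <- inherits the department parent from the repository
```

Because child projects resolve their parents from the artifact repository rather than the file system, the poms/ area only needs to be checked out by the few people who maintain and deploy the parent POMs.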
7.5. Continuous Integration with Maestro

If you are not already familiar with it, continuous integration enables automated builds of your project on a regular interval, ensuring that conflicts are detected earlier in a project's release life cycle, rather than close to a release. More than just nightly builds, continuous integration can enable a better development culture, where team members can make smaller, iterative changes that can more easily support concurrent development processes. As such, continuous integration is a key element of effective collaboration.

Continuum[11] is Maven's continuous integration and build server. It is also a component of Exist Global's Maestro, in which it is referred to as Build Management. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. The examples discussed are based on Maestro 1.3; however, newer versions should be similar. For more information on Maestro, please see http://www.exist.com/.

In this chapter, you will pick up the Proficio example from earlier in the book, and learn how to use Maestro Build Management to build this project on a regular basis. The examples assume you have Subversion installed, which you can obtain for your operating system.

First, you will need to install the Maestro Project Server. This is very simple – once you have downloaded it from http://www.exist.com and unpacked it, you can run it. There are scripts for most major platforms. For example, on Windows, use the following command:

  [maestro_home]/project-server/bin/windows-x86-32/run.bat

For Linux, use:

  ./[maestro_home]/project-server/bin/linux-x86-32/run.sh

You need to make the file executable in Linux by running the chmod command from the directory where the file is located. For example:

  chmod +x run.sh

There is also the generic bin/plexus/plexus.sh for use on other Unix-based platforms:

  ./[maestro_home]/project-server/bin/plexus/plexus.sh start

Refer to the Maestro Getting Started Guide for more instructions on starting the Maestro Project Server. This document can be found in the Documentation link of the Maestro user interface. Starting up the Maestro Project Server will also start an HTTP server and servlet engine. You can verify the installation by viewing the Maestro web interface (on port 8080 in the examples that follow).

11. Alternatively, continuous integration can be done from the Exist Global Maestro Project Server.
The first screen to appear will be the one-time setup page, shown in figure 7-1.

Figure 7-1: The Administrator account screen

For most installations, the configuration on this screen is straightforward – all you should need to enter are the details of the administration account you'd like to use.

If you are running Maestro on your desktop and want to try the examples in this section, some additional steps are required. As of Maestro 1.3, these additional configuration requirements can be set only after the previous step has been completed, and you must stop the server to make the changes (to stop the server, press Ctrl-C in the window that is running Maestro). Figure 7-2 shows all the configuration that's required when Build Management is accessed for the first time.

Figure 7-2: Build Management general configuration screen
To complete the Build Management setup page, you can cut and paste field values from the following list:

  Field Name              Value
  Working Directory       working-directory
  Build Output Directory  build-output-directory
  Base URL

To have Build Management send you e-mail notifications, you will also need an SMTP server to which to send the messages. The default is to use localhost:25. If you do not have this set up on your machine, edit the file described below to change the smtp-host setting. For instructions, refer to the "Configuring mail, ports, servers, and directories" section of the Maestro Project Server User Guide, found in the Documentation link of the Maestro user interface.

In the following examples, POM files will be read from the local hard disk where the server is running. By default, this is disabled as a security measure, since paths can be entered from the Web interface. To enable this setting, edit the application.xml file found in the [maestro_home]/project-server/apps/continuum/webapp/WEB-INF/classes/META-INF/plexus/ directory and verify that the following lines are present and are not commented out:

  [...]
  <implementation>
    org.codehaus.plexus.formica.validation.UrlValidator
  </implementation>
  <configuration>
    <allowedSchemes>
      [...]
      <allowedScheme>file</allowedScheme>
    </allowedSchemes>
  </configuration>
  [...]

After these steps are completed, you can start Maestro again.

The next step is to set up the Subversion repository for the examples. This requires obtaining the Code_Ch07.zip archive and unpacking it in your environment, for example in C:\mvnbook\svn. You can then check out Proficio from that location by executing the following:

  C:\mvnbook\proficio> svn co \ ...

The command above works if the code is unpacked in C:\mvnbook\svn. If the code is unpacked in a different location, the file URL in the command should be similar to the following: [path_to_svn_code]/proficio/trunk.
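As a concrete illustration – the exact repository URL depends on where you unpacked the archive, so the path below is hypothetical – the checkout command might look like this:

```shell
# Hypothetical sketch: assumes Code_Ch07.zip was unpacked to C:\mvnbook\svn.
# Adjust the file:// URL to match your own [path_to_svn_code].
svn co file:///C:/mvnbook/svn/proficio/trunk C:\mvnbook\proficio\trunk
```

Note that Subversion on Windows accepts forward slashes in file:// URLs, as shown.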
The POM in this repository is not completely configured yet, since not all of the required details were known at the time of its creation. If you haven't done so already, edit proficio/trunk/pom.xml to correct the e-mail address to which notifications will be sent, and edit the location of the Subversion repository, by uncommenting and modifying the following lines:

  [...]
  <ciManagement>
    <system>continuum</system>
    <url> ... </url>
    <notifiers>
      <notifier>
        <type>mail</type>
        <configuration>
          <address>youremail@yourdomain.com</address>
        </configuration>
      </notifier>
    </notifiers>
  </ciManagement>
  [...]
  <scm>
    <connection>scm:svn: ... </connection>
    <developerConnection>scm:svn: ... </developerConnection>
  </scm>
  [...]
  <distributionManagement>
    <site>
      <id>website</id>
      <url> ... /reference/${project.version}</url>
    </site>
  </distributionManagement>
  [...]

The ciManagement section is where the project's continuous integration is defined; in the above example, it has been configured to use Maestro Build Management locally on port 8080. The distributionManagement setting will be used in a later example to deploy the site from your continuous integration environment. This assumes that you are still running the repository Web server on localhost:8081, from the directory C:\mvnbook\repository; refer to the earlier sections of this chapter for information on how to set this up.

Once these settings have been edited to reflect your setup, commit the file with the following command:

  C:\mvnbook\proficio\trunk> svn ci -m "my settings" pom.xml
You should build all of these modules to ensure everything is in order, with the following command:

  C:\mvnbook\proficio\trunk> mvn install

You are now ready to start using Continuum or Maestro Build Management. If you return to the location that was set up previously, you will see an empty project list. Before you can add a project to the list, you must either log in with the administrator account you created during installation, or with another account you have since created with appropriate permissions. The login link is at the top-left of the screen, under the Maestro logo.

Once you have logged in, you can add a project. In Maestro 1.3 and newer versions, select Maven 2.0+ Project from the Add Project menu. This will present the screen shown in figure 7-3.

Figure 7-3: Add project screen shot

You have two options: you can provide the URL for a POM, or upload from your local drive. While uploading is a convenient way to configure from your existing check out, this does not work when the POM contains modules, as in the Proficio example. When you set up your own system later, you will enter a HTTP URL to a POM in the repository – a ViewCVS installation, or a Subversion HTTP server, for example. For now, enter the file:// URL as shown.

When using the file:// protocol for the URL, the File protocol permission should be selected. To configure:

1. Go to the Project Server page.
2. Under Configure > Build Management, check the box before FILE.
3. To make the settings take effect, click the Save button.
This is all that is required to add a Maven 2 project to Build Management. After submitting the URL, Maestro Build Management will return to the project summary page, and each of the modules will be added to the list of projects. Initially, the builds will be marked as New and their checkouts will be queued. The result is shown in figure 7-4.

Figure 7-4: Summary page after projects have built

Build Management will now build the project hourly, and send an e-mail notification if there are any problems. If you want to put this to the test, go to your earlier checkout and introduce an error into Proficio.java – for example, remove the interface keyword:

  [...]
  public Proficio
  [...]

Now, check the file in:

  C:\mvnbook\proficio\trunk\proficio-api> svn ci -m "introduce error" \
      src/main/java/com/exist/mvnbook/proficio/Proficio.java
First, press the Build Now icon on the Build Management user interface, next to the Proficio API module. While the build is executing, it will show an "In progress" status, and then fail, marking the left column with an "!" to indicate a failed build (you will need to refresh the page using the Show Projects link in the navigation to see these changes). Shortly afterward, you should receive an e-mail at the address you configured earlier. The Build History link can be used to identify the failed build and to obtain a full output log.

Finally, to avoid receiving this error every hour, restore the file above to its previous state and commit it again. The build in Build Management will return to the successful state.

This chapter will not discuss all of the features available in Maestro Build Management. For example, Build Management has preliminary support for system profiles and distributed testing – enhancements that are planned for future versions – but you may wish to go ahead and try them. Regardless of which continuous integration server you use, there are a few tips for getting the most out of the system:

• Commit early, commit often. Continuous integration is most effective when developers commit regularly. This doesn't mean committing incomplete code, but rather keeping changes small and well tested. This will make it much easier to detect the source of an error when the build does break.
• Run builds as often as possible. It is best to detect a failure as soon as possible, before the developer moves on or loses focus. This will be constrained by the length of the build and the available resources on the build machine; in addition, if the source control repository supports post-commit hooks, Build Management can be configured to trigger a build whenever a commit occurs.
• Fix builds as soon as possible. While this seems obvious, it is often ignored. Continuous integration will be pointless if developers repetitively ignore or delete broken build notifications, and your team will become desensitized to the notifications in the future. In addition to e-mail, you might want to set up a notification to your favorite instant messenger – IRC, Jabber, MSN and Google Talk are all supported.
• Run comprehensive tests. Continuous integration is most beneficial when tests are validating that the code is working as it always has, not just that the project still compiles after one or more changes occur. This also means that builds should be fast – long integration and performance tests should be reserved for periodic builds.
• Run clean builds. While rapid, iterative builds are helpful in some situations, when a failure occurs in the continuous integration environment, it is important that it can be isolated to the change that caused it, and that it doesn't occur due to old build state. Continuum currently defaults to doing a clean build, and a future version will allow developers to request a fresh checkout. Consider a regular, periodic, clean build.
• Establish a stable environment. When a failure occurs, it is important that it is independent of the environment being used. Avoid customizing the JDK or local settings; in addition, it is beneficial to test against all the different versions of the JDK, operating system and other variables in use in other development, test and production environments.
• Build all of a project's active branches. If multiple branches are in development, the continuous integration environment should be set up for all of the active branches, based on selected schedules.
• Run a copy of the application continuously. If the application is a web application, for example, run a servlet container to which the application can be deployed from the continuous integration environment. This can be helpful for non-developers who need visibility into the state of the application, separate from QA and production releases. This is another way continuous integration can help with project collaboration and communication.

In addition to the above best practices, there are two additional topics that deserve special attention: automated updates to the developer web site, and profile usage.

In Chapter 6, you learned how to create an effective site containing project information and reports about the project's health and vitality. For these reports to be of value, they need to be kept up-to-date. Though it would be overkill to regenerate the site on every commit, it is recommended that a separate, but regular, schedule is established for site generation.

Verify that you are still logged into your Maestro instance. Next, from the Administration menu on the left-hand side, select Schedules. You will see that currently, only the default schedule is available. Click the Add button to add a new schedule, which will be configured to run every hour during business hours (8am – 4pm weekdays). The appropriate configuration is shown in figure 7-5.

Figure 7-5: Schedule configuration

To complete the schedule configuration, you can cut and paste field values from the following list:

  Field Name   Value
  Name         Site Generation
  Description  Redeploy the site to the development project site
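Continuum schedules are expressed as Quartz cron expressions. One plausible expression matching the description above (on the hour, 8am through 4pm, Monday to Friday) is sketched below; this is an illustrative assumption, so verify it against the Quartz CronTrigger documentation before relying on it:

```
# seconds  minutes  hours  day-of-month  month  day-of-week
  0        0        8-16   ?             *      MON-FRI
```

The "?" in the day-of-month field tells Quartz that only the day-of-week field constrains the day.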
The example above runs at 8:00:00, 9:00:00, ..., 16:00:00 from Monday to Friday. For more information on the cron format, see the Quartz CronTrigger documentation at www.opensymphony.com/quartz/api/org/quartz/CronTrigger.html. The "quiet period" is a setting that delays the build if there has been a commit in the defined number of seconds prior. This is useful when using CVS, since commits are not atomic and a developer might be committing midway through an update; it is not typically needed if using Subversion.

Once you add this schedule, return to the project list, and select the top-most project, Maven Proficio. The project information shows just one build definition, on the default schedule, that installs the parent POM but does not recurse into the modules (the -N or --non-recursive argument).

In this example you will add a new build definition to run the site deployment for the entirety of the multi-module build, on the business hours schedule. Since this is the root of the multi-module build – and it will also detect changes to any of the modules – this is the best place from which to build the site. In addition to building the sites for each module, it can aggregate changes into the top-level site as required. The downside to this approach is that Build Management (Continuum) will build any unchanged modules as well – if this is a concern, use the non-recursive mode instead, and add the same build definition to all of the modules. In Maestro Build Management, there is no way to make bulk changes to build definitions, so you will need to add the definition to each module individually.

To add a new build definition, click the Add button below the default build definition.
Figure 7-6: Adding a build definition for site deployment

To complete the Add Build Definition screen, you can cut and paste field values from the following list:

  Field Name    Value
  POM filename  pom.xml
  Goals         clean site-deploy
  Arguments     --batch-mode -Pci
  Schedule      Site Generation
  Type          maven2

The goals to run are clean and site-deploy; the --non-recursive option is omitted, so the whole multi-module build is processed. The arguments provided are --batch-mode, which is essential for all builds to ensure they don't block for user input, and -Pci, which enables the profile with the identifier ci. The meaning of this profile will be explained shortly.

You can see also that the schedule is set to use the site generation schedule created earlier, and that it is not the default build definition, which means that Build Now from the project summary page will not trigger this build. Instead, each build definition on the project information page (to which you would have been returned after adding the build definition) has its own Build Now icon. Click this for the site generation build definition. The site will be deployed to the file system location you specified in the POM, and you can view the generated site from the web server you set up when you first configured the Subversion repository earlier in this chapter.

It is rare that the site build will fail, since most reports continue under failure conditions. However, if you want to fail the build based on these checks as well, you can add the test, verify or integration-test goal to the list of goals, to ensure these checks are run. Any of these test goals should be listed after the site-deploy goal, so that if the build fails because of a failed check, the generated site can be used as reference for what caused the failure.
In the previous example, a profile with the identifier ci was enabled. Profiles are a means for selectively enabling portions of the build; if you haven't previously encountered them, please refer to Chapter 3.

In Chapter 6, a number of plugins were set up to fail the build if certain project health checks failed – such as the percentage of code covered in the unit tests dropping below a certain value. However, these checks delayed the build for all developers, which can be a discouragement to using them. If you compare the example proficio/trunk/pom.xml file in your Subversion checkout to that used in Chapter 6, you'll see that these checks have now been moved to a profile:

  [...]
  <profiles>
    <profile>
      <id>ci</id>
      <activation>
        <property>
          <name>ci</name>
          <value>true</value>
        </property>
      </activation>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
          <executions>
            [...]

You'll find that when you run the build from the command line (as was done in Continuum originally), none of the checks added in the previous chapter are executed. In this particular case, the profile is enabled only when the ci system property is set to true; the checks will be run when you enable the profile, using mvn -Pci.

There are two ways to ensure that all of the builds added in Maestro Build Management use this profile. The first is to adjust the default build definition for each module, by going to the module information page and clicking Edit next to the default build definition. However, at least in the version of Continuum current at the time of writing, it is necessary to do this for each module individually.

The other alternative is to set this profile globally, for all projects in Build Management. As Maven 2 is still executed as normal, it reads the ${user.home}/.m2/settings.xml file for the user under which it is running, as well as the settings in the Maven installation. To enable this profile by default from these settings, add the following configuration to the settings.xml file in <user_home>/.m2/settings.xml:
  [...]
  <activeProfiles>
    [...]
    <activeProfile>ci</activeProfile>
  </activeProfiles>
  [...]

In this case it is the identifier of the profile itself, rather than the property used to enable it, that indicates the profile is always active when these settings are read.

How you configure your continuous integration depends on the culture of your development team and other environmental factors, such as the size of your projects and the time it takes to build and test them. For example, if the additional checks take too much time for frequent continuous integration builds, it may be necessary to schedule them separately for each module, or for the entire multi-module project to run the additional checks after the site has been generated; in some cases, the verify goal may need to be added to the site deployment build definition. The timing and configuration described here can be changed depending upon your circumstances, but the guidelines discussed in this chapter will help point your team in the right direction.

7.6. Team Dependency Management Using Snapshots

Chapter 3 of this book discussed how to manage your dependencies in a multi-module build. While dependency management is fundamental to any Maven build, in an environment where a number of modules are undergoing concurrent development, the team dynamic makes it critical. In this section, you will learn about using snapshots more effectively in a team environment, and how to enable this within your continuous integration environment.

So far in this book, snapshots have been used to refer to the development version of an individual module. Projects in Maven stay in the snapshot state until they are released, which is discussed in section 7.8 of this chapter. In contrast to regular dependencies, which are not changed, snapshot artifacts will be updated frequently. Snapshots were designed to be used in a team environment as a means for sharing development versions of artifacts that have already been built.

Usually, where projects are closely related, the build involves checking out all of the dependent projects and building them yourself, and as discussed previously, you must build all of the modules simultaneously from a master build. While building all of the modules from source can work well and is handled by Maven inherently, it can lead to a number of problems:

• It relies on manual updates from developers, which can be error-prone. This will result in local inconsistencies that can produce non-working builds.
• There is no common baseline against which to measure progress.
• Building can be slower, as multiple dependencies must be rebuilt also.
• Changes developed against outdated code can make integration more difficult.
As you can see from these issues, building from source doesn't fit well with an environment that promotes continuous integration. Instead, use binary snapshots that have already been built and tested. In Maven, this is achieved by regularly deploying snapshots to a shared repository, such as the internal repository set up in section 7.3.

Currently, the Proficio project itself is not looking in the internal repository for dependencies, though it may have been configured as part of your settings files. To add the internal repository to the list of repositories used by Proficio, regardless of settings, add the following to proficio/trunk/pom.xml:

  [...]
  <repositories>
    <repository>
      <id>internal</id>
      <url> ... </url>
    </repository>
  </repositories>
  [...]

If you are developing plugins, you may also want to add this as a pluginRepository element. You'll also see that the deployment repository was defined in proficio/trunk/pom.xml:

  [...]
  <distributionManagement>
    <repository>
      <id>internal</id>
      <url> ... </url>
    </repository>
    [...]
  </distributionManagement>

Considering that example, deploy proficio-api to the repository with the following command:

  C:\mvnbook\proficio\trunk\proficio-api> mvn deploy

You'll see that it is treated differently than when it was installed in the local repository. In this case, the version used is the time that it was deployed (in the UTC timezone) and the build number. The filename that is used is similar to proficio-api-1.0-20070726.120139-1.jar. If you were to deploy again, the time stamp would change and the build number would increment to 2.

This technique allows you to continue using the latest version by declaring a dependency on 1.0-SNAPSHOT, or to lock down a stable version by declaring the dependency version to be the specific equivalent, such as 1.0-20070726.120139-1. While this is not usually the case, locking the version in this way may be important if there are recent changes to the repository that need to be ignored temporarily.

Now, build proficio-core with the following command, to see the updated version downloaded:

  C:\mvnbook\proficio\trunk\proficio-core> mvn -U install

During the build, you will see that some of the dependencies are checked for updates, similar to the example below (note that this output has been abbreviated):
  [...]
  proficio-api:1.0-SNAPSHOT: checking for updates from internal
  [...]

The -U argument in the prior command is required to force Maven to update all of the snapshots in the build; if it were omitted, no update would be performed. This is because the default policy is to update snapshots daily – that is, to check for an update the first time that particular dependency is used after midnight local time. You can always force the update using the -U argument; however, whenever you use it, Maven updates both releases and snapshots, which causes many plugins to be checked for updates, as well as updating any version ranges.

You can also change the interval by changing the repository configuration. To see this, add the following configuration to the repository configuration you defined above in proficio/trunk/pom.xml:

  [...]
  <repository>
    [...]
    <snapshots>
      <updatePolicy>interval:60</updatePolicy>
    </snapshots>
  </repository>
  [...]

In this example, any snapshot dependencies will be checked once an hour to determine if there are updates in the remote repository. The settings that can be used for the update policy are never, always, daily (the default), and interval:minutes. However, the updates will still occur only as frequently as new versions are deployed to the repository.

It is possible to establish a policy where developers do an update from the source control management (SCM) system before committing, and then deploy the snapshot to share with the other team members. However, this introduces a risk that the snapshot will not be deployed at all, deployed with uncommitted code, or deployed without all the updates from the SCM. Several of the problems mentioned earlier still exist – so at this point, all that is being saved is some time, assuming that the other developers have remembered to follow the process.

A much better way to use snapshots is to automate their creation. Since the continuous integration server regularly rebuilds the code from a known state, it makes sense to have it build snapshots, as well. This technique can ensure that developers get regular updates, without having to manually intervene, and without slowing down the build by checking on every access (as would be the case if the policy were set to always).
How you implement this will depend on the continuous integration server that you use. Continuum can be configured to deploy its builds to a Maven snapshot repository automatically: if there is a repository configured to which to deploy them, this feature is enabled by default in a build definition. So far in this section, you have not been asked to apply this setting, so let's go ahead and do it now.

Log in as an administrator and go to the Configuration screen, shown in figure 7-7.

Figure 7-7: Build Management configuration

To complete the configuration page, you can cut and paste field values from the following list:

  Field Name              Value
  Working Directory       working-directory
  Build Output Directory  build-output-directory
  Base URL

The Deployment Repository Directory field entry relies on your internal repository and Continuum server being in the same location. If this is not the case, you can enter a full repository URL, such as scp://repositoryhost/www/repository/internal. To deploy from your server, you must also ensure that the distributionManagement section of the POM is correctly configured, as you saw earlier.

To try this feature, follow the Show Projects link, and click Build Now on the Proficio API project. Once the build completes, return to your console and build proficio-core again using the following command:

  C:\mvnbook\proficio\trunk\proficio-core> mvn -U install
You'll notice that a new version of proficio-api is downloaded, with an updated time stamp and build number. Given this configuration, you can avoid all of the problems discussed previously, while you get regular updates from published binary dependencies. When necessary, you can either lock a dependency to a particular build, or build from source.

Another point to note about snapshots is that it is possible to store them in a separate repository from the rest of your released artifacts. This can be useful if you need to clean up snapshots on a regular interval, but still keep a full archive of releases. Better yet, you can make the snapshot update process more efficient by not checking the releases-only repository for updates.

If you are using the regular deployment mechanism (instead of using Maestro Build Management or Continuum), this separation is achieved by adding an additional repository to the distributionManagement section of your POM. For example, if you had a snapshot-only repository in /www/repository/snapshots, you would add the following:

  [...]
  <distributionManagement>
    [...]
    <snapshotRepository>
      <id>internal.snapshots</id>
      <url> ... </url>
    </snapshotRepository>
  </distributionManagement>
  [...]

This will deploy to that repository whenever the version contains SNAPSHOT, and deploy to the regular repository you listed earlier when it doesn't. The replacement repository declarations in your POM would look like this:

  [...]
  <repositories>
    <repository>
      <id>internal</id>
      <url> ... </url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>internal.snapshots</id>
      <url> ... </url>
      <snapshots>
        <updatePolicy>interval:60</updatePolicy>
      </snapshots>
    </repository>
  </repositories>
  [...]
7.7. Creating a Standard Project Archetype

Throughout this book, you have seen the archetypes that were introduced in Chapter 2 used to quickly lay down a project structure. While this is convenient, there is always some additional configuration required, either in adding or removing content from that generated by the archetypes. As you saw in this chapter, the requirement of achieving consistency is a key issue facing teams. To avoid this, you can create one or more of your own archetypes. Beyond the convenience of laying out a project structure instantly, archetypes give you the opportunity to start a project in the right way – that is, in a way that is consistent with other projects in your environment.

Writing an archetype is quite like writing your own project. There are two ways to create an archetype: one based on an existing project, using mvn archetype:create-from-project, and the other by hand, using an archetype, and replacing the specific values with parameters. To get started with the archetype, run the following command:

  C:\mvnbook\proficio\trunk> mvn archetype:create \
      -DgroupId=com.exist.mvnbook \
      -DartifactId=proficio-archetype \
      -DarchetypeArtifactId=maven-archetype-archetype
The layout of the resulting archetype is shown in figure 7-8.

Figure 7-8: Archetype directory layout

If you look at the pom.xml at the top level, you'll see that the archetype is just a normal JAR project – there is no special build configuration required. The JAR that is built is composed only of resources, so everything else is contained under src/main/resources. There are two pieces of information required: the archetype descriptor in META-INF/maven/archetype.xml, and the template project in archetype-resources.

The archetype descriptor describes how to construct a new project from the archetype-resources provided. The example descriptor looks like the following:

  <archetype>
    <id>proficio-archetype</id>
    <sources>
      <source>src/main/java/App.java</source>
    </sources>
    <testSources>
      <source>src/test/java/AppTest.java</source>
    </testSources>
  </archetype>

Each tag is a list of files to process and generate in the created project. The example above shows the sources and test sources, but it is also possible to specify files for resources, testResources, and siteResources.
These files will be used to generate the template files when the archetype is run, with the content of the files populated with the values that you provided on the command line – the groupId, for example:

  <project xmlns="http://maven.apache.org/POM/4.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                               http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>$groupId</groupId>
    <artifactId>$artifactId</artifactId>
    <version>$version</version>
    <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
      </dependency>
    </dependencies>
  </project>

Once you have completed the content in the archetype, Maven will build, install and deploy it like any other JAR. Since the archetype inherits the Proficio parent, it has the correct deployment settings already. For this example, you will use the "internal" repository, so you can run the following command:

  C:\mvnbook\proficio\trunk\proficio-archetype> mvn deploy

The archetype is now ready to be used. To do so, go to an empty directory and run an archetype:create command that references the new archetype, for example:

  C:\mvnbook> mvn archetype:create -DgroupId=com.exist.mvnbook \
      ... 1.0-SNAPSHOT

Normally, the archetype version could be omitted; now however, since the archetype has not yet been released, the required version would not be known if omitted (or, if this was later development, a previous release would be used instead). It will look very similar to the content of the archetype-resources directory you created earlier. For more information on creating an archetype, refer to the documentation on the Maven Web site.
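To make the usage step concrete, a full invocation might look like the sketch below. The artifactId of the generated project (proficio-example) and the archetype's groupId are illustrative assumptions; substitute the coordinates you actually used when creating and deploying the archetype:

```shell
# Hypothetical sketch: generate a new project from the deployed archetype.
# -DgroupId/-DartifactId name the project being created;
# the -Darchetype* parameters name the archetype to use.
mvn archetype:create -DgroupId=com.exist.mvnbook \
    -DartifactId=proficio-example \
    -DarchetypeGroupId=com.exist.mvnbook \
    -DarchetypeArtifactId=proficio-archetype \
    -DarchetypeVersion=1.0-SNAPSHOT
```

Because the archetype was deployed to the internal repository, that repository must be visible to the command, either through your settings or a parent POM.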
Cutting a Release

Releasing software is difficult. It is usually tedious and error prone, full of manual steps that need to be completed in a particular order. Worse, it happens at the end of a long period of development, when all everyone on the team wants to do is get it out there, which often leads to omissions or short cuts. Finally, once a release has been made, it is usually difficult or impossible to correct mistakes other than to make another, new release.

Maven provides a release plugin that provides the basic functions of a standard release process. The release plugin takes care of a number of manual steps in updating the project POM, updating the source control management system to check and commit release related changes, and creating tags (or equivalent for your SCM). Once the definition for a release has been set by a team, releases should be consistent every time they are built, allowing them to be highly automated.¹²

The release plugin operates in two steps: prepare and perform. The prepare step is run once for a release, and does all of the project and source control manipulation that results in a tagged version. The perform step could potentially be run multiple times to rebuild a release from a clean checkout of the tagged version, and to perform standard tasks, such as deployment to the remote repository.

To demonstrate how the release plugin works, the Proficio example will be revisited. You can continue using the code that you have been working on in the previous sections, or check out the following:

C:\mvnbook\proficio> svn co ...

To start the release process, run the following command:

C:\mvnbook\proficio\trunk> mvn release:prepare -DdryRun=true

This simulates a normal release preparation, without making any modifications to your project. As the command runs, you will be prompted for values. Accept the defaults in this instance (note that running Maven in "batch mode" avoids these prompts and will accept all of the defaults). You'll notice that each of the modules in the project is considered.

¹² Exist Global Maestro provides an automated feature for performing releases. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. For more information on Maestro please see: http://www.exist.com/
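The release plugin reads the project's source control information from the POM, so before running release:prepare, the project being released needs an scm section. The fragment below is a hypothetical sketch only — the repository URLs are placeholders, not the actual Proficio locations:

```xml
<!-- Hypothetical <scm> section; the release plugin uses these entries
     to commit the version changes and to create the release tag.
     Replace the URLs with your project's real repository locations. -->
<scm>
  <connection>scm:svn:http://svn.example.com/proficio/trunk</connection>
  <developerConnection>scm:svn:https://svn.example.com/proficio/trunk</developerConnection>
  <url>http://svn.example.com/proficio/trunk</url>
</scm>
```

The developerConnection is the URL the plugin commits through, while connection may be a read-only URL used by anyone building the project.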
8.1. Introduction

The purpose of this chapter is to show a migration path from an existing build in Ant to Maven. This will allow you to evaluate Maven's technology, while still running your existing, Ant-based build system, and while enabling you to continue with your required work. You will learn how to start building with Maven, how to use an existing directory structure (though you will not be following the standard, recommended Maven directory structure), how to split your sources into modules or components, and how to run Ant tasks from within Maven. Among other things, you will also be introduced to the concept of dependencies.

8.1.1. Introducing the Spring Framework

The Spring Framework is one of today's most popular Java frameworks. The Maven migration example is based on the Spring Framework build, which uses an Ant script. The Spring release is composed of several modules. For the purpose of this example, we will focus only on building version 2.0-m1 of Spring, which is the latest version at the time of writing. This example will take you through the step-by-step process of migrating Spring to a modularized, component-based, Maven build.
Figure 8-1: Dependency relationship between Spring modules

In figure 8-1, you can see graphically the dependencies between the modules. Optional dependencies are indicated by dotted lines. Each of these modules corresponds, more or less, with the Java package structure, and each produces a JAR. For Spring:

• src and test: contain JDK 1.4 compatible source code and JUnit tests respectively
• tiger/src and tiger/test: contain additional JDK 1.5 compatible source code and JUnit tests

Each of the source directories also includes classpath resources (XML files, properties files, TLD files, etc.). The src and tiger/src directories are compiled to the same destination, as are the test and tiger/test directories. The Ant script compiles each of these different source directories and then creates a JAR for each module, using inclusions and exclusions that are based on the Java packages of each class, resulting in JARs that contain both 1.4 and 1.5 classes.
8.2. Where to Begin?

With Maven, the rule of thumb to use is to produce one artifact (JAR, WAR, etc.) per Maven project file. In the Spring example, that means you will need to have a Maven project (a POM) for each of the modules listed above. To start, you will create a subdirectory called 'm2' to keep all the necessary Maven changes clearly separated from the current build system. Inside the 'm2' directory, you will need to create a directory for each of Spring's modules.

Figure 8-2: A sample spring module directory
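The m2 directory itself can then hold an aggregator POM listing each of those module directories, so that the whole tree can be built with a single command. A minimal sketch — the two module names shown are illustrative, and in practice there would be one entry per Spring module directory you created:

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.exist.m2book.migrating</groupId>
  <artifactId>spring-parent</artifactId>
  <version>2.0-m1-SNAPSHOT</version>
  <packaging>pom</packaging>
  <modules>
    <!-- one entry per module subdirectory under m2/ -->
    <module>spring-core</module>
    <module>spring-beans</module>
  </modules>
</project>
```

Running mvn install from m2/ would then build every listed module in dependency order.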
In the m2 directory, you will need to create a parent POM. You will use the parent POM to store the common configuration settings that apply to all of the modules:

<groupId>com.exist.m2book.migrating</groupId>
<artifactId>spring-parent</artifactId>
<version>2.0-m1-SNAPSHOT</version>
<name>Spring parent</name>
<packaging>pom</packaging>
<description>Spring Framework</description>
<inceptionYear>2002</inceptionYear>
<url>http://www.springframework.org</url>
<organization>
  <name>The Spring Framework Project</name>
</organization>

• groupId: this setting indicates your area of influence (company, department, project, etc.), and it should mimic standard package naming conventions to avoid duplicate values. For example, the Spring team would use org.springframework; for this example, however, you will use com.exist.m2book.migrating, as it is our 'unofficial' example version of Spring.
• artifactId: this setting specifies the name of this module (for example, spring-parent).
• version: this setting should always represent the next release version number appended with -SNAPSHOT – that is, the version you are developing in order to release. Recall from previous chapters that during the release process, Maven will convert to the definitive, non-snapshot version for a short period of time, in order to tag the release in your SCM.
• packaging: the jar, war, and ear values should be obvious to you (a pom value means that this project is used for metadata only).

The other values are not strictly required, and are primarily used for documentation purposes.

In this parent POM we can also add dependencies such as JUnit, which will be used for testing in every module, thereby eliminating the requirement to specify the dependency repeatedly across multiple modules:

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>

As explained previously, each module will inherit these settings from the parent POM. For Spring, the main source and test directories are src and test, respectively. Let's begin with these directories.
Using the following code snippet from Spring's Ant build script, in the buildmain target, you can retrieve some of the configuration parameters for the compiler:

<javac destdir="${target.classes.dir}" source="1.3" target="1.3"
       debug="${debug}" deprecation="false" optimize="false"
       failonerror="true">
  <src path="${src.dir}"/>
  <!-- Include Commons Attributes generated Java sources -->
  <src path="${commons.attributes.tempdir.src}"/>
  <classpath refid="all-libs"/>
</javac>

As you can see, these include the source and target compatibility (1.3), deprecation and optimize (false), and failonerror (true) values. These last three properties use Maven's default values, so there is no need for you to add the configuration parameters. For the debug attribute, Spring's Ant script uses a debug parameter, so to specify the required debug function in Maven, you will need to append -Dmaven.compiler.debug=false to the mvn command (by default this is set to true). Recall from Chapter 2 that Maven automatically manages the classpath from its list of dependencies. For now, you don't have to worry about the commons-attributes generated sources mentioned in the snippet, as you will learn about that later in this chapter. At this point, your build section will look like this:

<build>
  <sourceDirectory>../../src</sourceDirectory>
  <testSourceDirectory>../../test</testSourceDirectory>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.3</source>
        <target>1.3</target>
      </configuration>
    </plugin>
  </plugins>
</build>
The other configuration that will be shared is related to the JUnit tests. From the tests target in the Ant script:

<junit forkmode="perBatch" printsummary="yes"
       haltonfailure="yes" haltonerror="yes">
  <jvmarg line="-Djava.awt.headless=true -XX:MaxPermSize=128m -Xmx128m"/>
  <!-- Must go first to ensure any jndi.properties files etc take precedence -->
  <classpath location="${target.testclasses.dir}"/>
  <classpath location="${target.mockclasses.dir}"/>
  <classpath location="${target.classes.dir}"/>
  <!-- Need files loaded as resources -->
  <classpath location="${test.dir}"/>
  <classpath refid="all-libs"/>
  <formatter type="plain" usefile="false"/>
  <formatter type="xml"/>
  <batchtest fork="yes" todir="${reports.dir}">
    <fileset dir="${target.testclasses.dir}"
             includes="${test.includes}" excludes="${test.excludes}"/>
  </batchtest>
</junit>

You can extract some configuration information from the previous code:

• forkMode="perBatch" matches with Maven's forkMode parameter with a value of once, since the concept of a batch for testing does not exist.
• The nested element jvmarg is mapped to the configuration parameter argLine.
• As previously noted, classpath is automatically managed by Maven from the list of dependencies, so you will not need to locate the test classes directory (dir).
• You will not need any printsummary, haltonfailure and haltonerror settings, as Maven prints the test summary and stops for any test error or failure.
• formatter elements are not required, as Maven generates both plain text and xml reports. Maven, by default, sets the reports destination directory (todir) to target/surefire-reports, and this doesn't need to be changed.
• You will need to specify the value of the properties test.includes and test.excludes from the nested fileset; this value is, by default, read from the project.properties file loaded from the Ant script (refer to the code snippet below for details).
# Wildcards to be matched by JUnit tests.
# Convention is that our JUnit test classes have XXXTests-style names.
test.includes=**/*Tests.class
#
# Wildcards to exclude among JUnit tests.
# Second exclude needs to be used for JDK 1.3, due to Hibernate 3.1
# being compiled with target JDK 1.4.
test.excludes=**/Abstract*
#test.excludes=**/Abstract* org/springframework/orm/hibernate3/**

The includes and excludes referenced above translate directly into the include/exclude elements of the POM's plugin configuration. Since Maven requires JDK 1.4 to run, you do not need to exclude the hibernate3 tests, which is mandatory when building in JDK 1.3. Note that it is possible to use another, lower JVM to run tests if you wish – refer to the Surefire plugin reference documentation for more information.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkMode>once</forkMode>
    <childDelegation>false</childDelegation>
    <argLine>
      -Djava.awt.headless=true -XX:MaxPermSize=128m -Xmx128m
    </argLine>
    <includes>
      <include>**/*Tests.class</include>
    </includes>
    <excludes>
      <exclude>**/Abstract*</exclude>
    </excludes>
  </configuration>
</plugin>

The childDelegation option is required to prevent conflicts when running under Java 5 between the XML parser provided by the JDK and the one included in the dependencies in some modules. It makes tests run using the standard classloader delegation instead of the default Maven isolated classloader. When building only on Java 5, you could remove that option and the XML parser (Xerces) and APIs (xml-apis) dependencies.

Spring's Ant build script also makes use of the commons-attributes compiler in its compileattr and compiletestattr targets, which are processed prior to the compilation.
The commons-attributes compiler processes javadoc-style annotations – it was created before Java supported annotations in the core language on JDK 1.5 – and generates sources from them that have to be compiled with the normal Java compiler.
From compileattr:

<!-- Compile to a temp directory: Commons Attributes will place Java Source here. -->
<attribute-compiler destdir="${commons.attributes.tempdir.src}">
  <fileset dir="${src.dir}" includes="**/metadata/*.java"/>
  <fileset dir="${src.dir}" includes="org/springframework/jmx/**/*.java"/>
</attribute-compiler>

From compiletestattr:

<!-- Compile to a temp directory: Commons Attributes will place Java Source here. -->
<attribute-compiler destdir="${commons.attributes.tempdir.test}">
  <!-- Only the PathMap attribute in the
       org.springframework.web.servlet.handler.metadata package
       currently needs to be shipped with an attribute. -->
  <fileset dir="${test.dir}" includes="org/springframework/aop/**/*.java"/>
  <fileset dir="${test.dir}" includes="org/springframework/jmx/**/*.java"/>
</attribute-compiler>

In Maven, this same function can be accomplished by adding the commons-attributes plugin to the build section in the POM. Maven handles the source and destination directories automatically, so you will only need to add the inclusions for the main source and test source compilation:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>commons-attributes-maven-plugin</artifactId>
  <executions>
    <execution>
      <configuration>
        <includes>
          <include>**/metadata/*.java</include>
          <include>org/springframework/jmx/**/*.java</include>
        </includes>
        <testIncludes>
          <include>org/springframework/aop/**/*.java</include>
          <include>org/springframework/jmx/**/*.java</include>
        </testIncludes>
      </configuration>
    </execution>
  </executions>
</plugin>
8.5.2. Running Tests

Running the tests in Maven simply requires running mvn test. However, when you run this command, you will get the following error report:

Results :
[surefire] Tests run: 113, Failures: 1, Errors: 1

[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] There are test failures.
[INFO] ------------------------------------------------------------------------

Upon closer examination of the report output, you will find the following:

[surefire] Running org.springframework.core.io.support.PathMatchingResourcePatternResolverTests
[surefire] Tests run: 5, Failures: 1, Errors: 1, Time elapsed: 0.015 sec <<<<<<<< FAILURE !!

This output means that this test has logged a JUnit failure and error. To debug the problem, you will need to check the test logs under target/surefire-reports, for the test class that is failing: org.springframework.core.io.support.PathMatchingResourcePatternResolverTests.txt. Within this file, there is a section for each failed test called stacktrace. The first section starts with java.io.FileNotFoundException: class path resource [org/aopalliance/] cannot be resolved to URL because it does not exist. This indicates that there is something missing in the classpath that is required to run the tests. The org.aopalliance package is inside the aopalliance JAR, so to resolve the problem add the following to your POM:

<dependency>
  <groupId>aopalliance</groupId>
  <artifactId>aopalliance</artifactId>
  <version>1.0</version>
  <scope>test</scope>
</dependency>
Now run mvn test again. You will get the following wonderful report:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------

The last step in migrating this module (spring-core) from Ant to Maven is to run mvn install to make the resulting JAR available to other projects in your local Maven repository. This command can be used instead most of the time, as it will process all of the previous phases of the build life cycle (generate sources, compile, compile tests, run tests, etc.).
8.6. Other Modules

Now that you have one module working, it is time to move on to the other modules. If you follow the order of the modules described at the beginning of the chapter you will be fine; otherwise, you will find that the main classes from some of the modules reference classes from modules that have not yet been built. See figure 8-1 to get the overall picture of the interdependencies between the Spring modules.

8.6.1. Avoiding Duplication

As soon as you begin migrating the second module, you will find that you are repeating yourself. For instance, you will be adding the Surefire plugin configuration settings repeatedly for each module that you convert. To avoid duplication, move these configuration settings to the parent POM instead. That way, each of the modules will be able to inherit the required Surefire configuration. In the same way, instead of repeatedly adding the same dependency version information to each module, use the parent POM's dependencyManagement section to specify this information once, and remove the versions from the individual modules (see Chapter 3 for more information). Using the parent POM to centralize this information makes it possible to upgrade a dependency version across all sub-projects from a single location.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.0.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>

The following are some variables that may also be helpful to reduce duplication:

• ${project.version}: version of the current POM being built
• ${project.groupId}: groupId of the current POM being built

For example, you can refer to spring-core from spring-beans with the following, since they have the same groupId and version:

<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>spring-core</artifactId>
  <version>${project.version}</version>
</dependency>
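With the dependencyManagement section above in place in the parent POM, a child module can then declare the dependency without any version element at all — the version is resolved from the parent. A sketch of what a child module's dependency declaration would look like:

```xml
<!-- In a child module's POM: no <version> element is needed here,
     because it is inherited from the parent's dependencyManagement. -->
<dependencies>
  <dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
  </dependency>
</dependencies>
```

Upgrading commons-logging later then means editing only the parent POM, with every child picking up the new version automatically.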
8.6.2. Referring to Test Classes from Other Modules

If you have tests from one component that refer to tests from other modules, there is a procedure you can use. Although it is typically not recommended, in this case it is necessary to avoid refactoring the test source code. First, make sure that when you run mvn install, a JAR that contains the test classes is also installed in the repository:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Once that JAR is installed, you can use it as a dependency for other components, by specifying the test-jar type. Be sure to put that JAR in the test scope, as follows:

<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>spring-beans</artifactId>
  <version>${project.version}</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>

A final note on referring to test classes from other modules: if you have all of Spring's mock classes inside the same module, this can cause the previously-described cyclic dependencies problem. To eliminate this problem, you can split Spring's mock classes into spring-context-mock, with only those classes related to the spring-context module, and spring-web-mock, with only those classes related to spring-web. Generally with Maven, it's easier to deal with small modules, particularly in light of transitive dependencies.

8.6.3. Building Java 5 Classes

Some of Spring's modules include Java 5 classes from the tiger folder. As the compiler plugin was earlier configured to compile with Java 1.3 compatibility, how can the Java 1.5 sources be added? To do this with Maven, you need to create a new module with only Java 5 classes, instead of adding them to the same module and mixing classes with different requirements. So, you will need to create a new spring-beans-tiger module. Consider that if you include some classes compiled for Java 1.3 and some compiled for Java 5 in the same JAR, any users attempting to use one of the Java 5 classes under Java 1.3 or 1.4 would experience runtime errors. By splitting them into different modules, users will know that if they depend on the module composed of Java 5 classes, they will need to run them under Java 5.
As with the other modules that have been covered, the Java 5 modules will share a common configuration for the compiler. The best way to split them is to create a tiger folder with the Java 5 parent POM, and then a directory for each one of the individual tiger modules, as follows:

Figure 8-3: A tiger module directory

The final directory structure should appear as follows:

Figure 8-4: The final directory structure, with all modules
In the tiger POM, you will need to add a module entry for each of the directories, and configure the compiler for Java 5:

<build>
  <sourceDirectory>../../tiger/src</sourceDirectory>
  <testSourceDirectory>../../tiger/test</testSourceDirectory>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
  </plugins>
</build>
In the parent POM, you just need a new module entry for the tiger folder. To still be able to build the other modules when using Java 1.4, you will add that module in a profile that will be triggered only when using a 1.5 JDK:

<profiles>
  <profile>
    <id>jdk1.5</id>
    <activation>
      <jdk>1.5</jdk>
    </activation>
    <modules>
      <module>tiger</module>
    </modules>
  </profile>
</profiles>

8.6.4. Using Ant Tasks From Inside Maven

In certain migration cases, you may find that Maven does not have a plugin for a particular task, or an Ant target is so small that it may not be worth creating a new plugin. Maven can call Ant tasks directly from a POM using the maven-antrun-plugin. For example, with the Spring migration, you need to use the Ant task in the spring-remoting module to use the RMI compiler. From Ant, this is:

<rmic base="${target.classes.dir}"
      classname="org.springframework.remoting.rmi.RmiInvocationWrapper"/>
<rmic base="${target.classes.dir}"
      classname="org.springframework.remoting.rmi.RmiInvocationWrapper"
      iiop="true">
  <classpath refid="all-libs"/>
</rmic>
To include this in the Maven build, you will need to determine when Maven should run the Ant task. In this case, the most appropriate phase in which to run this Ant task is the process-classes phase, as the rmic task will take the compiled classes and generate the rmi skeleton, stub and tie classes from them. To complete the configuration, add:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>process-classes</phase>
      <configuration>
        <tasks>
          <echo>Running rmic</echo>
          <rmic base="${project.build.directory}/classes"
                classname="org.springframework.remoting.rmi.RmiInvocationWrapper"/>
          <rmic base="${project.build.directory}/classes"
                classname="org.springframework.remoting.rmi.RmiInvocationWrapper"
                iiop="true">
            <classpath refid="maven.compile.classpath"/>
          </rmic>
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
  <dependencies>
    <dependency>
      <groupId>com.sun</groupId>
      <artifactId>tools</artifactId>
      <scope>system</scope>
      <version>1.4</version>
      <systemPath>${java.home}/../lib/tools.jar</systemPath>
    </dependency>
  </dependencies>
</plugin>

As shown in the code snippet above, there are some references available already, such as maven.compile.classpath, which is a classpath reference constructed from all of the dependencies in the compile scope or lower, and ${project.build.directory}. There are also references for anything that was added to the plugin's dependencies section, which applies to that plugin only – such as the reference to the tools.jar above, which is bundled with the JDK and required by the RMI task.
8.6.5. Some Special Cases

In addition to the procedures outlined previously for migrating Spring to Maven, there are two additional, special cases that must be handled.

Sun's Activation Framework and JavaMail are not redistributable from the repository due to constraints in their licenses. You may need to download them yourself from the Sun site, or get them from the lib directory in the example code for this chapter. You can then install them in your local repository with the following command:

mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id>
    -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging>

For instance, to install JavaMail:

mvn install:install-file -Dfile=[path_to_file]/mail.jar -DgroupId=javax.mail
    -DartifactId=mail -Dversion=1.3.2 -Dpackaging=jar

You will only need to do this process once for all of your projects, or you may use a corporate repository to share them across your organization. For more information on dealing with this issue, see http://maven.apache.org/guides/mini/guide-coping-with-sun-jars.html.

There is some additional configuration required for some modules, such as spring-aspects, which uses AspectJ for weaving the classes. These can be viewed in the example code. These issues were shared with the Spring developer community and are listed below:

• Moving one test class, NamespaceHandlerUtilsTests.
• Using classpath resources instead of the relative file system paths used in the Log4JConfigurerTests class, as these test cases will not work in both Maven and Ant. Using classpath resources is recommended over using file system resources.
8.7. Restructuring the Code

If you do decide to use Maven for your project, it is highly recommended that you go through the restructuring process to take advantage of the many time-saving and simplifying conventions within Maven. By adopting Maven's standard directory structure, you would eliminate the need to include and exclude sources and resources "by hand" in the POM files as shown in this chapter, and you can simplify the POM significantly. In the case of the Spring example, for the spring-core module, you would move all Java files under org/springframework/core and org/springframework/util from the original src folder to the module's folder src/main/java. All of the other files under those two packages would go to src/main/resources. The same applies for tests: these would move from the original test folder to src/test/java and src/test/resources respectively for Java sources and other files – just remember not to move the excluded tests (ComparatorTests, ClassUtilsTests, ObjectUtilsTests, ReflectionUtilsTests, SerializationTestUtils and ResourceTests).

8.8. Summary

By following and completing this chapter, you will be able to take an existing Ant-based build, split it into modular components (if needed), compile and test the code, create JARs, and install those JARs in your local repository using Maven. At the same time, you will be able to keep your current build working. Now that you have seen how to do this for Spring, you can apply similar concepts to your own Ant based build. Once you decide to switch completely to Maven, it is highly recommended that you restructure your code, so you will be able to take advantage of the benefits of adopting Maven's standard directory structure. Once you have spent this initial setup time with Maven, you can realize Maven's other benefits – advantages such as built-in project documentation generation, reports, and quality metrics, in addition to the improvements to your build life cycle. Finally, Maven can eliminate the requirement of storing jars in a source code management system, as Maven downloads everything it needs and shares it across all your Maven projects automatically – you can delete that 80 MB lib folder, reducing the project's size by two-thirds!
Appendix A: Resources for Plugin Developers

In this appendix you will find:
• Maven's Life Cycles
• Mojo Parameter Expressions
• Plugin Metadata

Scotty: She's all yours, sir. All systems automated and ready. A chimpanzee and two trainees could run her!
Kirk: Thank you, Mr. Scott. I'll try not to take that personally.
– Star Trek
A.1. Maven's Life Cycles

Below is a discussion of Maven's three life cycles and their default mappings. Maven provides three life cycles, corresponding to the three major activities performed by Maven: building a project from source, cleaning a project of the files generated by a build, and generating a project web site. This section begins by listing the phases in each life cycle, along with a short description for the mojos which should be bound to each. It continues by describing the mojos bound to the default life cycle for both the jar and maven-plugin packagings. Finally, this section will describe the mojos bound by default to the clean and site life cycles.

A.1.1. The default Life Cycle

The default life cycle is executed in order to perform a traditional build. In other words, it takes care of compiling the project's code, performing any associated tests, archiving it into a jar, and distributing it into the Maven repository system. For the default life cycle, mojo-binding defaults are specified in a packaging-specific manner. This is necessary to accommodate the inevitable variability of requirements for building different types of projects.

Life-cycle phases

The default life cycle contains the following phases:

1. validate – verify that the configuration of Maven, and the content of the current set of POMs to be built, is valid.
2. initialize – perform any initialization steps required before the main part of the build can start.
3. generate-sources – generate compilable code from other source formats.
4. process-sources – perform any source modification processes necessary to prepare the code for compilation. For example, a mojo may apply source code patches here.
5. generate-resources – generate non-code resources (such as configuration files, etc.) from other source formats.
6. process-resources – perform any modification of non-code resources necessary. This may include copying these resources into the target classpath directory in a Java build.
7. compile – compile source code into binary form, in the target output location.
8. process-classes – perform any post-processing of the binaries produced in the preceding step, such as instrumentation or offline code-weaving, as when using Aspect-Oriented Programming techniques.
9. generate-test-sources – generate compilable unit test code from other source formats.
10. process-test-sources – perform any source modification processes necessary to prepare the unit test code for compilation. For example, a mojo may apply source code patches here.
11. generate-test-resources – generate non-code testing resources (such as configuration files, etc.) from other source formats.
12. process-test-resources – perform any modification of non-code testing resources necessary. This may include copying these resources into the testing target classpath location in a Java build.
13. test-compile – compile unit test source code into binary form, in the testing target output location.
14. test – execute unit tests on the application compiled and assembled up to step 8 above.
15. package – assemble the tested application code and resources into a distributable archive.
16. pre-integration-test – setup the integration testing environment for this project. This may involve installing the archive from the preceding step into some sort of application server.
17. integration-test – execute any integration tests defined for this project, using the environment configured in the preceding step.
18. post-integration-test – return the environment to its baseline form after executing the integration tests in the preceding step. This could involve removing the archive produced in step 15 from the application server used to test it.
19. verify – verify the contents of the distributable archive, before it is available for installation or deployment.
20. install – install the distributable archive into the local Maven repository.
21. deploy – deploy the distributable archive into the remote Maven repository configured in the distributionManagement section of the POM.
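Any plugin goal can be attached to one of these phases through an executions block in the POM. As an illustrative sketch (not taken from any particular project), the following binds the antrun plugin's run goal to the validate phase, so an Ant echo task fires before any sources are processed:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <!-- runs during the validate phase, the first phase
           of the default life cycle -->
      <phase>validate</phase>
      <configuration>
        <tasks>
          <echo>Starting build of ${project.artifactId}</echo>
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The same pattern applies to any phase in the list above; the phase element simply names where in the life cycle the execution should run.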
Bindings for the jar packaging

Below are the default life-cycle bindings for the jar packaging. Alongside each, you will find a short description of what that mojo does.

Table A-1: The default life-cycle bindings for the jar packaging

Phase | Mojo | Plugin | Description
process-resources | resources | maven-resources-plugin | Copy non-source-code resources to the staging directory for jar creation. Filter variables if necessary.
compile | compile | maven-compiler-plugin | Compile project source code to the staging directory for jar creation.
process-test-resources | testResources | maven-resources-plugin | Copy non-source-code test resources to the test output directory for unit-test compilation.
test-compile | testCompile | maven-compiler-plugin | Compile unit-test source code to the test output directory.
test | test | maven-surefire-plugin | Execute project unit tests.
package | jar | maven-jar-plugin | Create a jar archive from the staging directory.
install | install | maven-install-plugin | Install the jar archive into the local Maven repository.
deploy | deploy | maven-deploy-plugin | Deploy the jar archive to a remote Maven repository, specified in the POM's distributionManagement section.
Bindings for the maven-plugin packaging

The maven-plugin project packaging behaves in almost the same way as the more common jar packaging. Indeed, maven-plugin artifacts are in fact jar files. As such, they undergo the same basic processes of marshaling non-source-code resources, compiling source code, testing, packaging, and the rest. However, the maven-plugin packaging also introduces a few new mojo bindings:

- descriptor mojo (maven-plugin-plugin) – Parse the mojo source files, to extract and format the metadata for the mojos within, and generate a plugin descriptor.
- addPluginArtifactMetadata mojo (maven-plugin-plugin) – Integrate current plugin information with plugin search metadata, and metadata references to the latest plugin version.
- updateRegistry mojo (maven-plugin-plugin) – Update the plugin registry, if one exists, to reflect the new plugin installed in the local repository.
A.1.2. The clean Life Cycle

This life cycle is executed in order to restore a project back to some baseline state – usually, the state of the project before it was built.

Life-cycle phases

The clean life cycle contains the following phases:

1. pre-clean – execute any setup or initialization procedures to prepare the project for cleaning
2. clean – remove all files that were generated during another build process
3. post-clean – finalize the cleaning process.

Default life-cycle bindings

Maven provides a set of default mojo bindings for this life cycle, effective for all POM packagings, which perform the most common tasks involved in cleaning a project. Below is a listing of the phases in the clean life cycle, along with a summary of the default bindings. Alongside each, you will find a short description of what that mojo does.

Table A-3: The clean life-cycle bindings for the jar packaging

- clean: clean mojo (maven-clean-plugin) – Remove the project build directory, along with any additional directories configured in the POM.
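The "additional directories configured in the POM" mentioned in Table A-3 are declared through the maven-clean-plugin's fileset configuration. A minimal sketch (the directory name is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clean-plugin</artifactId>
  <configuration>
    <filesets>
      <fileset>
        <!-- removed during the clean phase, in addition to target/ -->
        <directory>test-output</directory>
      </fileset>
    </filesets>
  </configuration>
</plugin>
```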
A.1.3. The site Life Cycle

This life cycle is executed in order to generate a web site for your project. It will run any reports that are associated with your project, render your documentation source files into HTML, and even deploy the resulting web site to your server.

Life-cycle phases

The site life cycle contains the following phases:

1. pre-site – execute any setup or initialization steps to prepare the project for site generation
2. site – run all associated project reports, and render documentation into HTML
3. post-site – execute any actions required to finalize the site generation process, and prepare the generated web site for potential deployment
4. site-deploy – use the distributionManagement configuration in the project's POM to deploy the generated web site files to the web server.

Default life-cycle bindings

Below are the site life-cycle bindings, effective for all POM packagings. Maven provides a set of default mojo bindings for this life cycle, which perform the most common tasks involved in generating the web site for a project. Alongside each, you will find a short description of what that mojo does.

Table A-4: The site life-cycle bindings for the jar packaging

- site: site mojo (maven-site-plugin) – Generate all configured project reports, and render documentation source files into HTML.
- site-deploy: deploy mojo (maven-site-plugin) – Deploy the generated web site to the web server path specified in the POM distributionManagement section.
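For the site-deploy phase to find the web server, the POM must declare where the site lives. A minimal sketch of the distributionManagement section it consults (the id, host, and path are illustrative):

```xml
<distributionManagement>
  <site>
    <id>website</id>
    <!-- illustrative host and path; scp is one of the supported protocols -->
    <url>scp://webhost.example.com/www/docs/project/</url>
  </site>
</distributionManagement>
```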
A.2. Mojo Parameter Expressions

Mojo parameter values are resolved by way of parameter expressions when a mojo is initialized. These expressions allow a mojo to traverse complex build state, and extract only the information it requires. This reduces the complexity of the code contained in the mojo, and often eliminates dependencies on Maven itself beyond the plugin API.

This section discusses the expression language used by Maven to inject build state and plugin configuration into mojos. It will summarize the root objects of the build state which are available for mojo expressions. Finally, it will describe the algorithm used to resolve complex parameter expressions. Using the discussion below, along with the published Maven API documentation, mojo developers should have everything they need to extract the build state they require.

A.2.1. Simple Expressions

Maven's plugin parameter injector supports several primitive expressions, which act as a shorthand for referencing commonly used build state objects. They are summarized below:

Table A-5: Primitive expressions supported by Maven's plugin parameter injector

- ${localRepository} (org.apache.maven.artifact.repository.ArtifactRepository) – This is a reference to the local repository, used to cache artifacts during a Maven build.
- ${session} (org.apache.maven.execution.MavenSession) – The current build session. This contains methods for accessing information about how Maven was called, in addition to providing a mechanism for looking up Maven components on-demand.
- ${reactorProjects} (java.util.List<org.apache.maven.project.MavenProject>) – List of project instances which will be processed as part of the current build.
- ${reports} (java.util.List<org.apache.maven.reporting.MavenReport>) – List of reports to be generated when the site life cycle executes.
- ${executedProject} (org.apache.maven.project.MavenProject) – This is a cloned instance of the project instance currently being built. It is used for bridging results from forked life cycles back to the main line of execution.
A.2.2. Complex Expression Roots

In addition to the simple expressions above, Maven supports more complex expressions that traverse the object graph starting at some root object that contains build state. The valid root objects for plugin parameter expressions are summarized below:

Table A-6: A summary of the valid root objects for plugin parameter expressions

- ${basedir} (java.io.File) – The current project's root directory. No advanced navigation can take place using such expressions.
- ${project} (org.apache.maven.project.MavenProject) – The project instance which is currently being built.
- ${settings} (org.apache.maven.settings.Settings) – The Maven settings, merged from conf/settings.xml in the maven application directory and from .m2/settings.xml in the user's home directory.
- ${plugin} (org.apache.maven.plugin.descriptor.PluginDescriptor) – The descriptor instance for the current plugin, including its dependency artifacts.

A.2.3. The Expression Resolution Algorithm

Plugin parameter expressions are resolved using a straightforward algorithm. First, if the expression matches one of the primitive expressions (mentioned above) exactly, then the value mapped to that expression is returned. Otherwise, the expression is split at each '.' character, rendering an array of navigational directions. The first is the root object, and must correspond to one of the roots mentioned above. This root object is retrieved from the running application using a hard-wired mapping, much like a primitive expression would be, unless specified otherwise.

During this process, the next expression part is used as a basis for reflectively traversing that object's state, following standard JavaBeans naming conventions. For example, an expression part named 'child' translates into a call to the getChild() method on that object. The resulting value then becomes the new 'root' object for the next round of traversal. Repeating this, successive expression parts will extract values from deeper and deeper inside the build state. If at some point the referenced object doesn't contain a property that matches the next expression part, this reflective lookup process is aborted. When there are no more expression parts, the value that was resolved last will be returned as the expression's value.
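The algorithm above can be sketched in plain Java. This is not Maven's actual implementation, just a self-contained illustration of the split-at-dot, JavaBeans-getter traversal it describes, with invented stand-in classes for the build state:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class ExpressionSketch {

    // Resolve "${root.part1.part2}" against a hard-wired map of root objects,
    // walking each part via a JavaBeans-style getter.
    public static Object resolve(String expression, Map<String, Object> roots) {
        // "${project.build.directory}" -> ["project", "build", "directory"]
        String[] parts = expression.replaceAll("^\\$\\{|\\}$", "").split("\\.");
        Object current = roots.get(parts[0]);          // hard-wired root lookup
        for (int i = 1; i < parts.length && current != null; i++) {
            current = getProperty(current, parts[i]);  // 'child' -> getChild()
        }
        return current;
    }

    private static Object getProperty(Object target, String name) {
        String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
        try {
            Method m = target.getClass().getMethod(getter);
            return m.invoke(target);
        } catch (Exception e) {
            return null; // property missing: abort the reflective lookup
        }
    }

    // Tiny invented stand-ins for real build-state objects.
    public static class Build { public String getDirectory() { return "target"; } }
    public static class Project { public Build getBuild() { return new Build(); } }

    public static void main(String[] args) {
        Map<String, Object> roots = new HashMap<>();
        roots.put("project", new Project());
        System.out.println(resolve("${project.build.directory}", roots)); // prints "target"
    }
}
```

Maven's real resolver additionally consults the primitive-expression table first and applies the fallback lookups described next, but the traversal step is the same getter-by-getter walk.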
If at this point Maven still has not been able to resolve a value for the parameter expression, it will attempt to find a value in one of two remaining places, resolved in this order:

1. The system properties. Maven will consult the current system properties. This includes properties specified on the command line using the -D command-line option.
2. The POM properties. If a user has specified a property mapping this expression to a specific value in the current POM, an ancestor POM, or an active profile, it will be resolved as the parameter value at this point.

If the parameter is still empty after these two lookups, then the string literal of the expression itself is used as the resolved value. Currently, Maven plugin parameter expressions do not support collection lookups, array index references, or method invocations that don't conform to standard JavaBean naming conventions.

Plugin metadata

Below is a review of the mechanisms used to specify metadata for plugins. It includes summaries of the essential plugin descriptor, as well as the metadata formats which are translated into plugin descriptors from Java- and Ant-specific mojo source files.

Plugin descriptor syntax

The following is a sample plugin descriptor. Its syntax has been annotated to provide descriptions of the elements.

<plugin>
  <!-- These are the identity elements (groupId/artifactId/version)
     | from the plugin POM.
     |-->
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-myplugin-plugin</artifactId>
  <version>2.0-SNAPSHOT</version>
  <!-- This element provides the shorthand reference for this plugin. For
     | instance, this plugin could be referred to from the command line using
     | the 'myplugin:' prefix.
     |-->
  <goalPrefix>myplugin</goalPrefix>
  <!-- The description element of the plugin's POM. -->
  <description>Sample Maven Plugin</description>
  <!-- Whether the configuration for this mojo should be inherited from
     | parent to child POMs by default.
     |-->
  <inheritedByDefault>true</inheritedByDefault>
  <!-- This is a list of the mojos contained within this plugin. -->
  <mojos>
    <mojo>
      <!-- The name of the mojo. Combined with the 'goalPrefix' element above,
         | this name allows the user to invoke this mojo from the command line
         | using 'myplugin:do-something'.
         |-->
      <goal>do-something</goal>
      <!-- Description of what this mojo does. It is a good idea to provide
         | this.
         |-->
      <description>Do something cool.</description>
      <!-- This tells Maven to create a clone of the current project and
         | life cycle, then execute that life cycle up to the specified phase.
         | This is useful when the user will be invoking this mojo directly from
         | the command line, but the mojo itself has certain life-cycle
         | prerequisites.
         |-->
      <executePhase>process-resources</executePhase>
      <!-- This is optionally used in conjunction with the executePhase element,
         | and specifies a custom life-cycle overlay that should be added to the
         | cloned life cycle before the specified phase is executed. This is
         | useful to inject specialized behavior in cases where the main life
         | cycle should remain unchanged.
         |-->
      <executeLifecycle>myLifecycle</executeLifecycle>
      <!-- Ensure that this other mojo within the same plugin executes before
         | this one. It's restricted to this plugin to avoid creating
         | inter-plugin dependencies.
         |-->
      <executeGoal>do-something-first</executeGoal>
      <!-- Which phase of the life cycle this mojo will bind to by default.
         | This allows the user to specify that this mojo be executed (via the
         | <execution> section of the plugin configuration in the POM), without
         | also having to specify which phase is appropriate for the mojo's
         | execution.
         |-->
      <phase>compile</phase>
      <!-- Determines how Maven will execute this mojo in the context of a
         | multimodule build. If a mojo is marked as an aggregator, it will only
         | execute once, regardless of the number of project instances in the
         | current build. If the mojo is not marked as an aggregator, it will be
         | executed once for each project instance in the current build. Mojos
         | that are marked as aggregators should use the ${reactorProjects}
         | expression to retrieve a list of the project instances in the
         | current build.
         |-->
      <aggregator>false</aggregator>
      <!-- Tells Maven that this mojo can ONLY be invoked directly, via the
         | command line.
         |-->
      <requiresDirectInvocation>false</requiresDirectInvocation>
      <!-- Tells Maven that a valid project instance must be present for this
         | mojo to execute.
         |-->
      <requiresProject>true</requiresProject>
      <!-- Tells Maven that a valid list of reports for the current project are
         | required before this plugin can execute.
         |-->
      <requiresReports>false</requiresReports>
      <!-- Some mojos cannot execute if they don't have access to a network
         | connection. This flag controls whether the mojo requires Maven to be
         | online. If Maven is operating in offline mode, such mojos will cause
         | the build to fail.
         |-->
      <requiresOnline>false</requiresOnline>
      <!-- The class or script path (within the plugin's jar) for this mojo's
         | implementation.
         |-->
      <implementation>org.apache.maven.plugins.site.SiteDeployMojo</implementation>
      <!-- The implementation language for this mojo. -->
      <language>java</language>
      <!-- Tells Maven that this plugin's configuration should be inherited
         | from a parent POM by default, unless the user specifies
         | <inherit>false</inherit>.
         |-->
      <inheritedByDefault>true</inheritedByDefault>
      <!-- This is a list of the parameters used by this mojo. -->
      <parameters>
        <parameter>
          <!-- The parameter's name. In Java mojos, this will often reflect the
             | parameter field name in the mojo class.
             |-->
          <name>inputDirectory</name>
          <!-- This is an optional alternate parameter name for this parameter.
             | It will be used as a backup for retrieving the parameter value.
             |-->
          <alias>outputDirectory</alias>
          <!-- The Java type for this parameter. -->
          <type>java.io.File</type>
          <!-- Whether this parameter is required to have a value. If true, the
             | mojo (and the build) will fail when this parameter doesn't have a
             | value.
             |-->
          <required>true</required>
          <!-- Whether this parameter's value can be directly specified by the
             | user, either via command-line or POM configuration. If set to
             | false, this parameter must be configured via some other section of
             | the POM, as in the case of the list of project dependencies.
             |-->
          <editable>true</editable>
          <!-- Description for this parameter, specified in the javadoc comment
             | for the parameter field in Java mojo implementations.
             |-->
          <description>This parameter does something important.</description>
        </parameter>
      </parameters>
      <!-- This is the operational specification of this mojo's parameters, as
         | compared to the descriptive specification above. Each parameter must
         | have an entry here that describes the parameter name, parameter type,
         | and the primary expression used to extract the parameter's value.
         |
         | The general form is:
         | <param-name>param-expr</param-name>
         |
         | For example, this parameter is named "inputDirectory", and it
         | expects a type of java.io.File. The expression used to extract the
         | parameter value is ${project.reporting.outputDirectory}.
         |-->
      <configuration>
        <inputDirectory implementation="java.io.File">${project.reporting.outputDirectory}</inputDirectory>
      </configuration>
      <!-- This is the list of non-parameter component references used by this
         | mojo. Components are specified by their interface class name (role),
         | along with an optional classifier for the specific component instance
         | to be used (role-hint). Finally, the requirement specification tells
         | Maven which mojo-field should receive the component instance.
         |-->
      <requirements>
        <requirement>
          <!-- Use a component of type:
             | org.apache.maven.artifact.manager.WagonManager
             |-->
          <role>org.apache.maven.artifact.manager.WagonManager</role>
          <!-- Inject the component instance into the "wagonManager" field of
             | this mojo.
             |-->
          <field-name>wagonManager</field-name>
        </requirement>
      </requirements>
    </mojo>
  </mojos>
</plugin>
phase.2.. Class-level annotations The table below summarizes the class-level javadoc annotations which translate into specific elements of the mojo section in the plugin descriptor. executeLifecycle.4. Java Mojo Metadata: Supported Javadoc Annotations The Javadoc annotations used to supply metadata about a particular mojo come in two types. life cycle name. Classlevel annotations correspond to mojo-level metadata elements. Table A-7: A summary of class-level javadoc annotations Descriptor Element Javadoc Annotation Values Required? aggregator description executePhase. Alphanumeric. with dash ('-') Any valid phase name true or false (default is false) true or false (default is true) true or false (default is false) true or false (default is false) Yes No No No No No 286 .Better Builds with Maven A. and field-level annotations correspond to parameter-level metadata elements.
Field-level annotations

The table below summarizes the field-level annotations which supply metadata about mojo parameters. These metadata translate into elements within the parameter, configuration, and requirements sections of a mojo's specification in the plugin descriptor.

Table A-8: Field-level annotations

- alias, parameter configuration section: @parameter expression="${expr}" alias="alias" default-value="val" – Values: anything – Required: Yes
- requirements section: @component roleHint="someHint" – roleHint is optional, and usually left blank – Required: No
- required: @required – Values: none – Required: No
- editable: @readonly – Values: none – Required: No
- description: N/A (the field's javadoc comment) – Values: anything – Required: No (recommended)
- deprecated: @deprecated – Values: alternative parameter – Required: No

A.2.5. Ant Metadata Syntax

The following is a sample Ant-based mojo metadata file. Its syntax has been annotated to provide descriptions of the elements.

<pluginMetadata>
  <!-- Contains the list of mojos described by this metadata file. NOTE:
     | multiple mojos are allowed here, corresponding to the ability to map
     | multiple mojos into a single build script.
     |-->
  <mojos>
    <mojo>
      <!-- The name for this mojo -->
      <goal>myGoal</goal>
      <!-- The default life-cycle phase binding for this mojo -->
      <phase>compile</phase>
      <!-- The dependency scope required for this mojo. Maven will resolve
         | the dependencies in this scope before this mojo executes.
         |-->
      <requiresDependencyResolution>compile</requiresDependencyResolution>
      <!-- Whether this mojo requires a current project instance -->
      <requiresProject>true</requiresProject>
      <!-- Whether this mojo requires access to project reports -->
      <requiresReports>true</requiresReports>
      <!-- Whether this mojo operates as an aggregator -->
      <aggregator>true</aggregator>
      <!-- Whether this mojo requires Maven to execute in online mode -->
      <requiresOnline>true</requiresOnline>
      <!-- Whether the configuration for this mojo should be inherited
         | from parent to child POMs by default.
         |-->
      <inheritByDefault>true</inheritByDefault>
      <!-- Whether this mojo must be invoked directly from the command
         | line.
         |-->
      <requiresDirectInvocation>true</requiresDirectInvocation>
      <!-- This describes the mechanism for forking a new life cycle to be
         | executed prior to this mojo executing.
         |-->
      <execute>
        <!-- The phase of the forked life cycle to execute -->
        <phase>initialize</phase>
        <!-- A named overlay to augment the cloned life cycle for this fork
           | only
           |-->
        <lifecycle>mine</lifecycle>
        <!-- Another mojo within this plugin to execute before this mojo
           | executes.
           |-->
        <goal>goal</goal>
      </execute>
      <!-- List of non-parameter application components used in this mojo -->
      <components>
        <component>
          <!-- This is the type for the component to be injected. -->
          <role>org.apache.maven.artifact.resolver.ArtifactResolver</role>
          <!-- This is an optional classifier for which instance of a particular
             | component type should be used.
             |-->
          <hint>custom</hint>
        </component>
      </components>
      <!-- The list of parameters this mojo uses -->
      <parameters>
        <parameter>
          <!-- The parameter name. -->
          <name>nom</name>
          <!-- The property name used by Ant tasks to reference this parameter
             | value.
             |-->
          <property>prop</property>
          <!-- Whether this parameter is required for mojo execution -->
          <required>true</required>
          <!-- Whether the user can edit this parameter directly in the POM
             | configuration or the command line
             |-->
          <readonly>true</readonly>
          <!-- The expression used to extract this parameter's value -->
          <expression>${my.property}</expression>
          <!-- The default value provided when the expression won't resolve -->
          <defaultValue>${project.artifactId}</defaultValue>
          <!-- The Java type of this mojo parameter -->
          <type>org.apache.maven.project.MavenProject</type>
          <!-- An alternative configuration name for this parameter -->
          <alias>otherProp</alias>
          <!-- The description of this parameter -->
          <description>Test parameter</description>
          <!-- When this is specified, this element will provide advice for an
             | alternative parameter to use instead.
             |-->
          <deprecated>Use something else</deprecated>
        </parameter>
      </parameters>
      <!-- The description of what the mojo is meant to accomplish -->
      <description>
        This is a test.
      </description>
      <!-- If this is specified, it provides advice on which alternative mojo
         | to use.
         |-->
      <deprecated>Use another mojo</deprecated>
    </mojo>
  </mojos>
</pluginMetadata>
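A metadata file like the one above accompanies the Ant build script that implements the goals. A minimal sketch of such a script (the target body is invented; only the target name matching the <goal> element and the property name matching the <property> element follow from the metadata above):

```xml
<project>
  <target name="goal">
    <!-- the mojo parameter is exposed to the script as the Ant property
       | named by the <property> element in the metadata file -->
    <echo message="parameter value: ${prop}"/>
  </target>
</project>
```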
Appendix B: Standard Conventions

B.1. Standard Directory Structure

Table B-1: Standard directory layout for maven project content

- pom.xml – Maven's POM, which is always at the top-level of a project.
- LICENSE.txt – A license file is encouraged for easy identification by users and is optional.
- README.txt – A simple note which might help first time users and is optional.
- src/main/java/ – Standard location for application sources.
- src/main/resources/ – Standard location for application resources.
- src/main/filters/ – Standard location for resource filters.
- src/main/assembly/ – Standard location for assembly filters.
- src/main/config/ – Standard location for application configuration files.
- src/test/java/ – Standard location for test sources.
- src/test/resources/ – Standard location for test resources.
- src/test/filters/ – Standard location for test resource filters.
- target/ – Directory for all generated output. This would include compiled classes, the generated site or anything else that might be generated as part of your build.
- target/generated-sources/<plugin-id> – Standard location for generated sources that may be compiled. For example, you may generate some sources from a JavaCC grammar.
B.2. Maven's Super POM

<project>
  <modelVersion>4.0.0</modelVersion>
  <name>Maven Default Project</name>

  <!-- Repository Conventions -->
  <repositories>
    <repository>
      <id>central</id>
      <name>Maven Repository Switchboard</name>
      <layout>default</layout>
      <url>http://repo1.maven.org/maven2</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
  </repositories>

  <!-- Plugin Repository Conventions -->
  <pluginRepositories>
    <pluginRepository>
      <id>central</id>
      <name>Maven Plugin Repository</name>
      <url>http://repo1.maven.org/maven2</url>
      <layout>default</layout>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
      <releases>
        <updatePolicy>never</updatePolicy>
      </releases>
    </pluginRepository>
  </pluginRepositories>

  <!-- Reporting Conventions -->
  <reporting>
    <outputDirectory>target/site</outputDirectory>
  </reporting>
  ...
</project>
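Every project POM implicitly inherits from this super POM, which is why a minimal project needs to declare only its own coordinates; the repository, plugin repository, and reporting conventions above all apply automatically. The coordinates below are illustrative:

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
</project>
```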
B.3. Maven's Default Build Life Cycle

- validate – Validate the project is correct and all necessary information is available.
- generate-sources – Generate any source code for inclusion in compilation.
- process-sources – Process the source code, for example to filter any values.
- generate-resources – Generate resources for inclusion in the package.
- process-resources – Copy and process the resources into the destination directory, ready for packaging.
- compile – Compile the source code of the project.
- process-classes – Post-process the generated files from compilation, for example to do byte code enhancement on Java classes.
- generate-test-sources – Generate any test source code for inclusion in compilation.
- process-test-sources – Process the test source code, for example to filter any values.
- generate-test-resources – Create resources for testing.
- process-test-resources – Copy and process the resources into the test destination directory.
- test-compile – Compile the test source code into the test destination directory.
- test – Run tests using a suitable unit testing framework. These tests should not require the code be packaged or deployed.
- package – Take the compiled code and package it in its distributable format, such as a JAR.
- pre-integration-test – Perform actions required before integration tests are executed. This may involve things such as setting up the required environment.
- integration-test – Process and deploy the package if necessary into an environment where integration tests can be run.
- post-integration-test – Perform actions required after integration tests have been executed. This may include cleaning up the environment.
- verify – Run any checks to verify the package is valid and meets quality criteria.
- install – Install the package into the local repository, for use as a dependency in other projects locally.
- deploy – Done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.
110-112. 115.Index W Web development building a Web services client project 91-93. 101 102 301 . 117-122 deploying Web applications 114. 114 XDOC format Xdoclet XDoclet2 X 78 100. 117 improving productivity 108.
/* Parameters and display hooks for terminal devices.
   Copyright (C) 2005, 2006, 2007, 2008 Free Software Foundation, Inc.  */
/* Miscellanea.  */
/* Input queue declarations and hooks. */
/* Expedient hack: only provide the below definitions to files that
are prepared to handle lispy things. CONSP is defined if lisp.h
has been included before this file. */
#ifdef CONSP
LANGUAGE_CHANGE_EVENT,	/* A LANGUAGE_CHANGE_EVENT is
			   generated on WINDOWSNT.  */
#ifdef HAVE_GPM
, GPM_CLICK_EVENT
#endif
#ifdef HAVE_DBUS
, DBUS_EVENT
#endif
#ifdef WINDOWSNT
/* (Windows-specific members elided.)  */
#endif

/* The input event structure.  */
struct input_event
{
int *padding[2];
/* Additional event argument. This is used for TOOL_BAR_EVENTs and
HELP_EVENTs and avoids calling Fcons during signal handling. */
Lisp_Object arg;
};
#define EVENT_INIT(event) bzero (&(event), sizeof (struct input_event))
extern void term_mouse_moveto (int, int);
/* The device for which we have enabled gpm support. */
extern struct tty_display_info *gpm_tty;
#endif /* CONSP */
struct mac_display_info;
struct w32_display_info;
/* Terminal-local parameters. */
struct terminal
{
/* The first two fields are really the header of a vector */
/* The terminal code does not refer to them. */
EMACS_UINT size;
struct Lisp_Vector *vec_next;
/* Parameter alist of this terminal. */
Lisp_Object param;
#ifdef MULTI_KBOARD
/* The terminal's keyboard object. */
struct kboard *kboard;
#endif
struct mac_display_info *mac;	/* Mac display info.  */

void (*reset_terminal_modes_hook) P_ ((struct terminal *));
void (*set_terminal_modes_hook) P_ ((struct terminal *));
void (*update_begin_hook) P_ ((struct frame *));
void (*update_end_hook) P_ ((struct frame *));
void (*set_terminal_window_hook) P_ ((struct frame *f, int size));
void (*mouse_position_hook) P_ ((struct frame **f, int insist,
Lisp_Object *bar_window,
enum scroll_bar_part *part,
Lisp_Object *x,
Lisp_Object *y,
unsigned long *time));
/* The window system handling code should set this if the mouse has
moved since the last call to the mouse_position_hook. Calling that
hook should clear this. */
int mouse_moved;
/* When a frame's focus redirection is changed, this hook tells the
window system code to re-decide where to put the highlight. Under
X, this means that Emacs lies about where the focus is. */
void (*frame_rehighlight_hook) P_ ((struct frame *));

/* If RAISE is non-zero, F is brought to the front, before all other
windows. If RAISE is zero, F is sent to the back, behind all other
windows. */
void (*frame_raise_lower_hook) P_ ((struct frame *f, int raise));
/* If the value of the frame parameter changed, this hook is called.
For example, if going from fullscreen to not fullscreen this hook
may do something OS dependent, like extended window manager hints on X11. */
void (*fullscreen_hook) P_ ((struct frame *f));

void (*condemn_scroll_bars_hook) P_ ((struct frame *frame));
/* Unmark WINDOW's scroll bar for deletion in this judgement cycle.
Note that it's okay to redeem a scroll bar that is not condemned. */
void (*redeem_scroll_bar_hook) P_ ((struct window *window));

/* Remove all scroll bars on FRAME that haven't been saved since the
   last call to condemn_scroll_bars_hook.
If non-zero, this hook should be safe to apply to any frame,
whether or not it can support scroll bars, and whether or not it is
currently displaying them. */
void (*judge_scroll_bars_hook) P_ ((struct frame *FRAME));
/* Called to read input events.
TERMINAL indicates which terminal device to read from.  A return
value of -2 indicates that the terminal device was closed (hangup),
and it should be deleted.
XXX Please note that a non-zero value of EXPECTED only means that
there is available input on at least one of the currently opened
terminal devices -- but not necessarily on this device.
Therefore, in most cases EXPECTED should be simply ignored.
XXX This documentation needs to be updated. */
int (*read_socket_hook) P_ ((struct terminal *terminal,
int expected,
struct input_event *hold_quit));
/* Called when a frame's display becomes entirely up to date. */
void (*frame_up_to_date_hook) P_ ((struct frame *));
/* Called to delete the device-specific portions of a frame that is
on this terminal device. */
void (*delete_frame_hook) P_ ((struct frame *));

/* Called to delete the terminal device itself.  Fdelete_frame ensures that there are no live
frames on the terminal when it calls this hook, so infinite
recursion is prevented. */
void (*delete_terminal_hook) P_ ((struct terminal *));
};
#ifdef MAC_OS
#define FRAME_WINDOW_P(f) FRAME_MAC_P (f)
#endif
#ifndef FRAME_WINDOW_P
#define FRAME_WINDOW_P(f) (0)
#endif
/* Return true if the terminal device is not suspended. */
#define TERMINAL_ACTIVE_P(d) ((d)->type != output_termcap || (d)->display_info.tty->input)
extern Lisp_Object get_terminal_param P_ ((struct terminal *, Lisp_Object));
extern struct terminal *get_terminal P_ ((Lisp_Object terminal, int));
extern struct terminal *create_terminal P_ ((void));
extern void delete_terminal P_ ((struct terminal *));
/* The initial terminal device, created by initial_term_init. */
extern struct terminal *initial_terminal;
/* arch-tag: 33a00ecc-52b5-4186-a410-8801ac9f087d
(do not change this comment) */
Reading: Deitel & Deitel, Chap. 8
Note: You may run into bizarre Microsoft compilation errors while doing the examples in this worksheet. If you do, please see the email Paul sent out regarding fixes and workarounds for Worksheet 9 (the text of the email is pasted here)
When you define your own types using classes, you often want to write code
which allows you to use C++ operators on them. You might want to
add two objects together using the
+ operator
or to output them using the
<< operator. This is called
operator overloading, because you are taking a previously
defined operator and giving it a new meaning.
In fact, most of the operators that we commonly use are already
overloaded, in the sense that they have different meanings depending
on the type of their operands. For example, the
+ operator
performs a different operation if its operands are of type double than if they are of type int.
Note that all operators have arguments (they are typically called
operands rather than arguments, but the meaning is the same) and
they all return a value. Thus, overloading an operator is like
writing a function. For example, the plus operator takes two
operands and returns a third operand, so overloading the plus
operator to add two instances of the class X which returns
an instance of the class X is exactly the same as writing
a function with this signature:
X Add(X x1, X x2)
There are two ways to overload an operator. You can overload the
operator with a member function definition, in which case the
current class instance (pointed by the
this pointer) is
an implicit argument, just as with other member functions. The second
way is to overload the operator with a function definition outside the
class and declare that function to be a friend, using a friend
declaration within the class.
The syntax of operator overloading is as follows:
type operator op (argument list) { statements }

where op stands for the operator symbol being overloaded.
Here is a simple example which overloads the
+ operator for the
class Point using the friend method. This allows us to add two points
together to get a third point where the
x value of the new point is
the sum of the two
x values and the
y value of the new point is the
sum of the
y values.
#include <iostream.h>

class Point {
private:
    double x, y;
public:
    Point() { x = y = 0.0; }                    // default constructor
    Point(double a, double b) { x = a; y = b; } // another constructor
    friend Point operator+(Point, Point);
};

Point operator+(Point a, Point b)
{
    Point temp(a.x + b.x, a.y + b.y);  // calls the constructor
    return temp;
}

int main()
{
    Point p1(3.4, 5.6);
    Point p2(7.8, 9.0);
    Point p3 = p1 + p2;
    return 0;
}

Note that when the overloading is done as a friend function, all class instance arguments must be written explicitly in the argument list.
Thus, overloading an operator is just like defining any other member or friend function except for two syntax differences:

- The name of the function is the keyword operator followed by the operator symbol.
- When calling a binary operator such as +, you write the operator between its two operands (infix notation).

For example, if we had instead written an ordinary function named Plus:

Point Plus(Point a, Point b)
{
    Point temp(a.x + b.x, a.y + b.y);  // calls the constructor
    return temp;
}

we would have to call it on p1 and p2 as

Point p3 = Plus(p1, p2);
#include <iostream.h>

class Point {
private:
    double x, y;
public:
    Point() { x = y = 0.0; }                    // default constructor
    Point(double a, double b) { x = a; y = b; } // another constructor
    Point operator+(Point n) {  // note that there is only one argument
        Point temp(x + n.x, y + n.y);
        return temp;
    }
};

int main()
{
    Point p1(3.4, 5.6);
    Point p2(7.8, 9.0);
    Point p3 = p1 + p2;
    return 0;
}
Suppose now that we want to print a Point:

cout << p3 << endl;

This does not compile, because the << operator is not defined for an object of type Point. One simple solution to this would be to define a member function PrintPoint() of the Point class as follows:

void PrintPoint()
{
    cout << "X is " << x << " Y is " << y << endl;
}

This is what we've done up to this point. However, a better solution is to overload the << operator so that it knows how to output a Point. The << operator is a binary operator, which means that it takes two operands. The second operand would be a Point, but what is the first? The left side of the << can be cout, or an instance of the class ofstream (a file open for output), among other things. Both of these are derived classes of the base class ostream, an output stream. Thus the left operand of the << operator should be of type ostream. There are two other things that you must do to write this function: the ostream must be passed by reference, and the operator must return a reference to an ostream.
Here is the solution:
class Point {
    ...
    friend ostream& operator<<(ostream &, Point);
};  // end of definition of the class Point

ostream& operator<<(ostream &os, Point p)
{
    os << "X is " << p.x << " Y is " << p.y;
    return os;
}
Overloading the
>> operator is similar except that the first
operand is of type istream, and the second argument must be a
reference argument because its value is changed.
class Point {
    ...
    friend istream& operator>>(istream &, Point &);
};  // end of definition of the class Point

istream& operator>>(istream &is, Point& p)
{
    is >> p.x >> p.y;
    return is;
}

Note that the code for overloading the >> and << operators must be done outside the class using the friend method, because when operators are overloaded inside the class, the instance of the class is the first operand, and in these cases the first operand is not the same type as the class.
Exercise 1: Define a class IntArray which has one private member, an array of ten ints. The class should have a single constructor, which takes no arguments and sets all ten values in the array to zero. Define a public member function void setval(int pos, int val) which sets the value of the array at position pos to val (note that pos must be in the range 0..9).
Define three operators on IntArray
ostream& << IntArray, which displays all ten values on
the terminal on a single line, separated by spaces.
IntArray + (IntArray a1, IntArray a2) which returns a
new IntArray whose values at each of the ten positions are the sum of the values
of the two arguments at the same position. For example, if the value
at position 2 of a1 was 17 and the value of position 2 of a2 was
5, the value of position 2 in the array which was returned would be 22.
IntArray - (IntArray a1, IntArray a2) which returns a new
IntArray whose values at each of the ten positions are the value of the
first argument minus the value of the second argument. For example, if the value
at position 2 of a1 was 17 and the value of position 2 of a2 was
5, the value of position 2 in the array which was returned would be 12.
Here is a short main to test your code:
#include <iostream>
using namespace std;

int main()
{
    IntArray A, B;
    A.setval(0,5);
    A.setval(1,7);
    A.setval(2,23);
    B.setval(0,3);
    B.setval(1,11);
    B.setval(2,10);
    IntArray C = A + B;
    cout << C << endl;  // should print 8 18 33 0 0 0 0 0 0 0
    C = A - B;
    cout << C << endl;  // should print 2 -4 13 0 0 0 0 0 0 0
    return 0;
}
Exercise 2 Rewrite the same code defining the last two operations as members of the class.
Any operator which is already defined in C++ can be overloaded with the following restrictions and exceptions.
- The operators . .* :: ?: and sizeof cannot be overloaded (you have not seen some of these yet).
- The arity (number of operands) of an operator cannot change: + is a binary operator in C and C++, and so if it is overloaded, it must still be a binary operator.
- The meaning of an operator on built-in types cannot change: the + operation is defined for integers, and so you cannot overload it so that it has a different meaning for integer operands.
Overloading the ++ and -- operators
Unary operators (i.e., operators which only take one argument)
can be overloaded in the same fashion. Recall that for
integers the
++ operator
takes two forms, the prefix form and the postfix form. Both
increment their operands by 1, but the prefix form returns the
incremented value while the postfix form returns the value before
incrementing. Here is some sample code for integers
to refresh your memory on this:
int i, j;

i = 7;
j = i++;  // postfix
cout << i << ' ' << j << endl;  // prints 8 7

i = 7;
j = ++i;  // prefix
cout << i << ' ' << j << endl;  // prints 8 8

When we overload ++ for some new type, we should define both a prefix and a postfix version, and their meanings should follow the same conventions as for the built-in integer versions: they should both increment the object they are applied to (in some sense), and the prefix form should return the incremented value while the postfix form returns the value before incrementing.
Here is code to overload the two versions of the
++ operator
for the class
Point.
class Point {
    ...
    friend Point& operator++(Point&);      // prefix
    friend Point operator++(Point&, int);  // postfix
    ...
};

Point& operator++(Point& p)  // prefix
{
    ++p.x;
    ++p.y;
    return p;
}

Point operator++(Point& p, int)  // postfix
{
    Point temp = p;
    ++p.x;
    ++p.y;
    return temp;
}

There are several non-obvious things to notice about these definitions:
- What is the int parameter doing in the postfix version? It isn't named or used in the body of the function, so it seems rather useless. In fact, its only purpose is to distinguish between the prefix and postfix versions of the operator when defining them, so that the compiler can keep them separate. When you call the function you do not pass an argument corresponding to the int parameter; you simply write p++.
- The prefix version does not return a copy of the Point; it returns the result as a reference, which is more generally useful. For example, it makes an expression like ++(++p) work as most people would expect -- i.e., p would be incremented twice.
The decrement operator
-- is overloaded in the same way.
Exercise 3 Add the following additional operators to
your class IntArray:
>>,
++ (prefix),
++ (postfix)
To increment an IntArray, add 1 to each of the ten values.
OT Biological counter-current exchange a.k.a rete mirabile
- From: Jerry Avins <jya@xxxxxxxx>
- Date: Tue, 22 Jan 2008 11:49:49 -0500
Josh Hayes wrote:
maryann kolb <mkolb@xxxxxxxxxx> wrote in news:be89p35p2tp1eance968410129pq50lapo@xxxxxxx:
Having twice had Chickadees land on my finger, I was surprised at how
hot their little feet felt even in the fleeting moments that they
stood there. They trap air under their feathers and their body heat
makes it even cozier than an L.L. Bean parka.
You could find out more about how this works by looking for web pages devoted to "counter-current exchange".
We do it too, but not as efficiently (well, of course: consider how superior the bird lung is as well!). Heck, even tuna have counter current exchange systems in their skulls to keep their brains cool. (tuna are incredible fish -- don't get me going on that!)
With efficient heat exchange, the chickadee's feet would seem cool, not warm. A seal's flipper and the blood in it are nearly at sea temperature, but the exchange of heat between blood entering and leaving brings the return (venous) blood nearly to body temperature. Such exchanges are not always of heat, nor do they always involve blood. A bird's lung and windpipe (so /that's/ why geese have such long necks!) exchange heat between air columns; the network in a dog's snout uses air to cool blood going to the brain. Humans and canids are cursorial hunters by biological nature. A fit human can go farther "in the long run" than any animal but a canine. Dogs and humans have different mechanisms, both involving exchange. With canids, it is in the snout, as I wrote above. Humans cool by sweating, and counter-current exchange in the kidneys allows them to process the prodigious amounts of water sometimes required. (Our kidneys are inefficient at conserving water, a price paid for enabling rapid refill when supplies are low.)
This also explains how, for instance, gulls can stand around on ice which must be well below zero C without having their feet fall off.
That's the point: those feet must feel cold to the touch. Water birds' feet need heat exchangers not only to stand on ice, but to remain submerged for long times.
Obviously it's been WAY too long since I taught vertebrate physiology; I have lectures bubbling up out of me.
I eagerly await whatever insights you can provide.
Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
The encoding of the value returned by std::time is unspecified, but most implementations follow the POSIX specification and return a value of integral type holding 86400 times the number of calendar days since the Epoch plus the number of seconds that have passed since the last midnight UTC. Most notably, POSIX time does not (and cannot) take leap seconds into account, so that this integral value is not equal to the number of S.I. seconds that have passed since the epoch, but rather is reduced by the number of leap seconds that have occurred since the epoch. Implementations in which std::time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038.
Example
#include <ctime>
#include <iostream>

int main()
{
    std::time_t result = std::time(nullptr);
    std::cout << std::asctime(std::localtime(&result))
              << result << " seconds since the Epoch\n";
}
Possible output:
Wed Sep 21 10:27:52 2011 1316615272 seconds since the Epoch
On Fri, 26 May 2006, Paul Howarth wrote:

> Would Smarty (in Extras as php-Smarty) be a PEAR or PECL package? It
> doesn't appear to me to be either.

Well, code-wise it could be considered pear, as the smarty engine is written in pure-php code. However, as the Smarty Template Engine is not part of pear, we can't really put it in as php-pear-Smarty, can we? So either we name it just "Smarty", as upstream does it, or we keep calling it "php-Smarty".

After all, the pear and pecl packages should, in case there's no collision, provide a php-%{name} package. So, php-Smarty sounds somewhat sane.

But I guess Smarty is a somewhat border-case: normal php-webapps should not be required to be kept in a php-prefixed namespace, but one could argue that Smarty is a base package and thus should live in the php namespace.

bye,
andreas
Java: the Scanner class and the Keyboard class

User interaction: so far, when we created a program there was no human interaction; our programs simply showed one output. In order for users to interact with our programs, we need to use external classes.

In the example below, new Scanner(System.in) calls the Scanner class's constructor to create an instance of the class, stored in the variable named s.
import java.util.Scanner;
class test {
public static void main (String args[]){
//Create an instance of the Scanner
Scanner s = new Scanner(System.in);
System.out.print("Enter your name : ");
//Since the name is a String the String
//has to be used
String name = s.next();
System.out.println("How old are you ? ");
//The age can be stored in a long
long age = s.nextLong();
System.out.println("You are "+name+" and you are "+age+" years old.");
}
}
class test {
public static void main (String args[]){
System.out.print("Enter your name : ");
String name = Keyboard.readString();
System.out.println("How old are you ? ");
long age = Keyboard.readLong();
System.out.println("You are "+name+" and you are "+age+" years old.");
}
}
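The Keyboard class used in the second example is not part of the standard Java library; it is a course-provided helper. A minimal sketch of what such a helper might look like (the real course class may differ):

```java
import java.util.Scanner;

// Hypothetical reconstruction of the Keyboard helper class the slides
// assume: static convenience methods wrapping a shared Scanner.
public class Keyboard {
    private static final Scanner IN = new Scanner(System.in);

    // Reads the next whitespace-delimited token as a String.
    public static String readString() {
        return IN.next();
    }

    // Reads the next token as a long.
    public static long readLong() {
        return IN.nextLong();
    }
}
```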
[ncurses.git] / NEWS (2016/04/02 23:49)

20160402
+ regenerate HTML manpages.
+ improve manual pages for utilities with respect to POSIX versus
  X/Open Curses.

20160326
+ regenerate HTML manpages.
+ improve test/demo_menus.c, allowing mouse-click on the menu-headers
  to switch the active menu.  This requires a new extension option
  O_MOUSE_MENU to tell the menu driver to put mouse events which do not
  apply to the active menu back into the queue so that the application
  can handle the event.

20160319
+ improve description of tgoto parameters (report by Steffen Nurpmeso).
+ amend workaround for Solaris line-drawing to restore a special case
  that maps Unicode line-drawing characters into the acsc string for
  non-Unicode locales (Debian #816888).

20160312
+ modified test/filter.c to illustrate an alternative to getnstr, that
  polls for input while updating a clock on the right margin as well
  as responding to window size-changes.

20160305
+ omit a redefinition of "inline" when traces are enabled, since this
  does not work with gcc 5.3.x MinGW cross-compiling (cf: 20150912).

20160220
+ modify test/configure script to check for pthread dependency of
  ncursest or ncursestw library when building ncurses examples, e.g.,
  in case weak symbols are used.
+ modify configure macro for shared-library rules to use -Wl,-rpath
  rather than -rpath to work around a bug in scons (FreeBSD #178732,
  cf: 20061021).
+ double-width multibyte characters were not counted properly in
  winsnstr and wins_nwstr (report/example by Eric Pruitt).
+ update config.guess, config.sub from

20160213
+ amend fix for _nc_ripoffline from 20091031 to make test/ditto.c work
  in threaded configuration.
+ move _nc_tracebits, _tracedump and _tracemouse to curses.priv.h,
  since they are not part of the suggested ABI 6.
93 94 20160206 95 + define WIN32_LEAN_AND_MEAN for MinGW port, making builds faster. 96 + modify test/ditto.c to allow $XTERM_PROG environment variable to 97 override "xterm" as the name of the program to run in the threaded 98 configuration. 99 100 20160130 101 + improve formatting of man/curs_refresh.3x and man/tset.1 manpages 102 + regenerate HTML manpages using newer man2html to eliminate some 103 unwanted blank lines. 104 105 20160123 106 + ifdef'd header-file definition of mouse_trafo() with NCURSES_NOMACROS 107 (report by Corey Minyard). 108 + fix some strict compiler-warnings in traces. 109 110 20160116 111 + tidy up comments about hardcoded 256color palette (report by 112 Leonardo Brondani Schenkel) -TD 113 + add putty-noapp entry, and amend putty entry to use application mode 114 for better consistency with xterm (report by Leonardo Brondani 115 Schenkel) -TD 116 + modify _nc_viscbuf2() and _tracecchar_t2() to trace wide-characters 117 as a whole rather than their multibyte equivalents. 118 + minor fix in wadd_wchnstr() to ensure that each cell has nonzero 119 width. 120 + move PUTC_INIT calls next to wcrtomb calls, to avoid carry-over of 121 error status when processing Unicode values which are not mapped. 122 123 20160102 124 + modify ncurses c/C color test-screens to take advantage of wide 125 screens, reducing the number of lines used for 88- and 256-colors. 126 + minor refinement to check versus ncv to ignore two parameters of 127 SGR 38 and 48 when those come from color-capabilities. 128 129 20151226 130 + add check in tic for use of bold, etc., video attributes in the 131 color capabilities, accounting whether the feature is listed in ncv. 132 + add check in tic for conflict between ritm, rmso, rmul versus sgr0. 133 134 20151219 135 + add a paragraph to curs_getch.3x discussing key naming (discussion 136 with James Crippen). 137 + amend workaround for Solaris vs line-drawing to take the configure 138 check into account. 
139 + add a configure check for wcwidth() versus the ncurses line-drawing 140 characters, to use in special-casing systems such as Solaris. 141 142 20151212 143 + improve CF_XOPEN_CURSES macro used in test/configure, to define as 144 needed NCURSES_WIDECHAR for platforms where _XOPEN_SOURCE_EXTENDED 145 does not work. Also modified the test program to ensure that if 146 building with ncurses, that the cchar_t type is checked, since that 147 normally is since 20111030 ifdef'd depending on this test. 148 + improve 20121222 workaround for broken acs, letting Solaris "work" 149 in spite of its misconfigured wcwidth which marks all of the line 150 drawing characters as double-width. 151 152 20151205 153 + update form_cursor.3x, form_post.3x, menu_attributes.3x to list 154 function names in NAME section (patch by Jason McIntyre). 155 + minor fixes to manpage NAME/SYNOPSIS sections to consistently use 156 rule that either all functions which are prototyped in SYNOPSIS are 157 listed in the NAME section, or the manual-page name is the sole item 158 listed in the NAME section. The latter is used to reduce clutter, 159 e.g., for the top-level library manual pages as well as for certain 160 feature-pages such as SP-funcs and threading (prompted by patches by 161 Jason McIntyre). 162 163 20151128 164 + add option to preserve leading whitespace in form fields (patch by 165 Leon Winter). 166 + add missing assignment in lib_getch.c to make notimeout() work 167 (Debian #805618). 168 + add 't' toggle for notimeout() function in test/ncurses.c a/A screens 169 + add viewdata terminal description (Alexandre Montaron). 170 + fix a case in tic/infocmp for formatting capabilities where a 171 backslash at the end of a string was mishandled. 172 + fix some typos in curs_inopts.3x (Benno Schulenberg). 173 174 20151121 175 + fix some inconsistencies in the pccon* entries -TD 176 + add bold to pccon+sgr+acs and pccon-base (Tati Chevron). 177 + add keys f12-f124 to pccon+keys (Tati Chevron). 
178 + add test/test_sgr.c program to exercise all combinations of sgr. 179 180 20151107 181 + modify tset's assignment to TERM in its output to reflect the name by 182 which the terminal description is found, rather than the primary 183 name. That was an unnecessary part from the initial conversion of 184 tset from termcap to terminfo. The termcap program in 4.3BSD did 185 this to avoid using the short 2-character name (report by Rich 186 Burridge). 187 + minor fix to configure script to ensure that rules for resulting.map 188 are only generated when needed (cf: 20151101). 189 + modify configure script to handle the case where tic-library is 190 renamed, but the --with-debug option is used by itself without 191 normal or shared libraries (prompted by comment in Debian #803482). 192 193 20151101 194 + amend change for pkg-config which allows build of pc-files when no 195 valid pkg-config library directory was configured to suppress the 196 actual install if it is not overridden to a valid directory at 197 install time (cf: 20150822). 198 + modify editing script which generates resulting.map to work with the 199 clang configuration on recent FreeBSD, which gives an error on an 200 empty "local" section. 201 + fix a spurious "(Part)" message in test/ncurses.c b/B tests due 202 to incorrect attribute-masking. 203 204 20151024 205 + modify MKexpanded.c to update the expansion of a temporary filename 206 to "expanded.c", for use in trace statements. 207 + modify layout of b/B tests in test/ncurses.c to allow for additional 208 annotation on the right margin; some terminals with partial support 209 did not display well. 210 + fix typo in curs_attr.3x (patch by Sven Joachim). 211 + fix typo in INSTALL (patch by Tomas Cech). 212 + improve configure check for setting WILDCARD_SYMS variable; on ppc64 213 the variable is in the Data section rather than Text (patch by Michel 214 Normand, Novell #946048). 
215 + using configure option "--without-fallbacks" incorrectly caused 216 FALLBACK_LIST to be set to "no" (patch by Tomas Cech). 217 + updated minitel entries to fix kel problem with emacs, and add 218 minitel1b-nb (Alexandre Montaron). 219 + reviewed/updated nsterm entry Terminal.app in OSX -TD 220 + replace some dead URLs in comments with equivalents from the 221 Internet Archive -TD 222 + update config.guess, config.sub from 223 224 225 20151017 226 + modify ncurses/Makefile.in to sort keys.list in POSIX locale 227 (Debian #801864, patch by Esa Peuha). 228 + remove an early-return from _nc_do_color, which can interfere with 229 data needed by bkgd when ncurses is configured with extended colors 230 (patch by Denis Tikhomirov). 231 > fixes for OS/2 (patches by KO Myung-Hun) 232 + use button instead of kbuf[0] in EMX-specific part of lib_mouse.c 233 + support building with libtool on OS/2 234 + use stdc++ on OS/2 kLIBC 235 + clear cf_XOPEN_SOURCE on OS/2 236 237 20151010 238 + add configure check for openpty to test/configure script, for ditto. 239 + minor fixes to test/view.c in investigating Debian #790847. 240 + update autoconf patch to 2.52.20150926, incorporates a fix for Cdk. 241 + add workaround for breakage of POSIX makefiles by recent binutils 242 change. 243 + improve check for working poll() by using posix_openpt() as a 244 fallback in case there is no valid terminal on the standard input 245 (prompted by discussion on bug-ncurses mailing list, Debian #676461). 246 247 20150926 248 + change makefile rule for removing resulting.map to distclean rather 249 than clean. 250 + add /lib/terminfo to terminfo-dirs in ".deb" test-package. 251 + add note on portability of resizeterm and wresize to manual pages. 252 253 20150919 254 + clarify in resizeterm.3x how KEY_RESIZE is pushed onto the input 255 stream. 256 + clarify in curs_getch.3x that the keypad mode affects ability to 257 read KEY_MOUSE codes, but does not affect KEY_RESIZE. 
  + add overlooked build-fix needed with Cygwin for separate Ada95
    configure script, cf: 20150606 (report by Nicolas Boulenguez)

20150912
  + fixes for configure/build using clang on OSX (prompted by report by
    William Gallafent).
  + do not redefine "inline" in ncurses_cfg.h; this was originally to
    solve a problem with gcc/g++, but is aggravated by clang's misuse
    of symbols to pretend it is gcc.
  + add braces to configure script to prevent unwanted add of
    "-lstdc++" to the CXXLIBS symbol.
  + improve/update test-program used for checking existence of stdc++
    library.
  + if $CXXLIBS is set, the linkage test uses that in addition to $LIBS

20150905
  + add note in curs_addch.3x about line-drawing when it depends upon
    UTF-8.
  + add tic -q option for consistency with infocmp, use it to suppress
    all comments from the "tic -I" output.
  + modify infocmp -q option to suppress the "Reconstructed from"
    header.
  + add infocmp/tic -Q option, which allows one to dump the compiled
    form of the terminal entry, in hexadecimal or base64.

20150822
  + sort options in usage message for infocmp, to make it simpler to
    see unused letters.
  + update usage message for tic, adding "-0" option.
  + documented differences in ESCDELAY versus AIX's implementation.
  + fix some compiler warnings from ports.
  + modify --with-pkg-config-libdir option to make it possible to install
    ".pc" files even if pkg-config is not found (adapted from patch by
    Joshua Root).

20150815
  + disallow "no" as a possible value for "--with-shlib-version" option,
    overlooked in cleanup-changes for 20000708 (report by Tommy Alex).
  + update release notes in INSTALL.
  + regenerate llib-* files to help with review for release notes.

20150810
  + workaround for Debian #65617, which was fixed in mawk's upstream
    releases in 2009 (report by Sven Joachim).
    See

20150808 6.0 release for upload to

20150808
  + build-fix for Ada95 on older platforms without stdint.h
  + build-fix for Solaris, whose /bin/sh and /usr/bin/sed are non-POSIX.
  + update release announcement, summarizing more than 800 changes across
    more than 200 snapshots.
  + minor fixes to manpages, etc., to simplify linking from announcement
    page.

20150725
  + updated llib-* files.
  + build-fixes for ncurses library "test_progs" rule.
  + use alternate workaround for gcc 5.x feature (adapted from patch by
    Mikhail Peselnik).
  + add status line to tmux via xterm+sl (patch by Nicholas Marriott).
  + fixes for st 0.5 from testing with tack -TD
  + review/improve several manual pages to break up wall-of-text:
    curs_add_wch.3x, curs_attr.3x, curs_bkgd.3x, curs_bkgrnd.3x,
    curs_getcchar.3x, curs_getch.3x, curs_kernel.3x, curs_mouse.3x,
    curs_outopts.3x, curs_overlay.3x, curs_pad.3x, curs_termattrs.3x
    curs_trace.3x, and curs_window.3x

20150719
  + correct an old logic error for %A and %O in tparm (report by "zreed").
  + improve documentation for signal handlers by adding section in the
    curs_initscr.3x page.
  + modify logic in make_keys.c to not assume anything about the size
    of strnames and strfnames variables, since those may be functions
    in the thread- or broken-linker configurations (problem found by
    Coverity).
  + modify test/configure script to check for pthreads configuration,
    e.g., ncursestw library.

20150711
  + modify scripts to build/use test-packages for the pthreads
    configuration of ncurses6.
  + add references to ttytype and termcap symbols in demo_terminfo.c and
    demo_termcap.c to ensure that when building ncursest.map, etc., that
    the corresponding names such as _nc_ttytype are added to the list of
    versioned symbols (report by Werner Fink)
  + fix regression from 20150704 (report/patch by Werner Fink).
20150704
  + fix a few problems reported by Coverity.
  + fix comparison against "/usr/include" in misc/gen-pkgconfig.in
    (report by Daiki Ueno, Debian #790548, cf: 20141213).

20150627
  + modify configure script to remove deprecated ABI 5 symbols when
    building ABI 6.
  + add symbols _nc_Default_Field, _nc_Default_Form, _nc_has_mouse to
    map-files, but marked as deprecated so that they can easily be
    suppressed from ABI 6 builds (Debian #788610).
  + comment-out "screen.xterm" entry, and inherit screen.xterm-256color
    from xterm-new (report by Richard Birkett) -TD
  + modify read_entry.c to set the error-return to -1 if no terminal
    databases were found, as documented for setupterm.
  + add test_setupterm.c to demonstrate normal/error returns from the
    setupterm and restartterm functions.
  + amend cleanup change from 20110813 which removed redundant definition
    of ret_error, etc., from tinfo_driver.c, to account for the fact that
    it should return a bool rather than int (report/analysis by Johannes
    Schindelin).

20150613
  + fix overflow warning for OSX with lib_baudrate.c (cf: 20010630).
  + modify script used to generate map/sym files to mark 5.9.20150530 as
    the last "5.9" version, and regenerated the files. That makes the
    files not use ".current" for the post-5.9 symbols. This also
    corrects the label for _nc_sigprocmask used when weak symbols are
    configured for the ncursest/ncursestw libraries (prompted by
    discussion with Sven Joachim).
  + fix typo in NEWS (report by Sven Joachim).

20150606 pre-release
  + make ABI 6 the default by updates to dist.mk and VERSION, with the
    intention that the existing ABI 5 should build as before using the
    "--with-abi-version=5" option.
  + regenerate ada- and man-html documentation.
  + minor fixes to color- and util-manpages.
  + fix a regression in Ada95/gen/Makefile.in, to handle special case of
    Cygwin, which uses the broken-linker feature.
  + amend fix for CF_NCURSES_CONFIG used in test/configure to assume that
    ncurses package scripts work when present for cross-compiling, as the
    lesser of two evils (cf: 20150530).
  + add check in configure script to disallow conflicting options
    "--with-termlib" and "--enable-term-driver".
  + move defaults for "--disable-lp64" and "--with-versioned-syms" into
    CF_ABI_DEFAULTS macro.

20150530
  + change private type for Event_Mask in Ada95 binding to work when
    mmask_t is set to 32-bits.
  + remove spurious "%;" from st entry (report by Daniel Pitts) -TD
  + add vte-2014, update vte to use that -TD
  + modify tic and infocmp to "move" a diagnostic for tparm strings that
    have a syntax error to tic's "-c" option (report by Daniel Pitts).
  + fix two problems with configure script macros (Debian #786436,
    cf: 20150425, cf: 20100529).

20150523
  + add 'P' menu item to test/ncurses.c, to show pad in color.
  + improve discussion in curs_color.3x about color rendering (prompted
    by comment on Stack Overflow forum):
  + remove screen-bce.mlterm, since mlterm does not do "bce" -TD
  + add several screen.XXX entries to support the respective variations
    for 256 colors -TD
  + add putty+fnkeys* building-block entries -TD
  + add smkx/rmkx to capabilities analyzed with infocmp "-i" option.

20150516
  + amend change to ".pc" files to only use the extra loader flags which
    may have rpath options (report by Sven Joachim, cf: 20150502).
  + change versioning for dpkg's in test-packages for Ada95 and
    ncurses-examples for consistency with Debian, to work with package
    updates.
  + regenerate html manpages.
  + clarify handling of carriage return in waddch manual page; it was
    discussed only in the portability section (prompted by comment on
    Stack Overflow forum):

20150509
  + add test-packages for cross-compiling ncurses-examples using the
    MinGW test-packages. These are only the Debian packages; RPM later.
  + cleanup format of debian/copyright files
  + add pc-files to the MinGW cross-compiling test-packages.
  + correct a couple of places in gen-pkgconfig.in to handle renaming of
    the tinfo library.

20150502
  + modify the configure script to allow different default values
    for ABI 5 versus ABI 6.
  + add wgetch-events to test-packages.
  + add a note on how to build ncurses-examples to test/README.
  + fix a memory leak in delscreen (report by Daniel Kahn Gillmor,
    Debian #783486) -TD
  + remove unnecessary ';' from E3 capabilities -TD
  + add tmux entry, derived from screen (patch by Nicholas Marriott).
  + split-out recent change to nsterm-bce as nsterm-build326, and add
    nsterm-build342 to reflect changes with successive releases of OSX
    (discussion with Leonardo B Schenkel)
  + add xon, ich1, il1 to ibm3161 (patch by Stephen Powell, Debian
    #783806)
  + add sample "magic" file, to document ext-putwin.
  + modify gen-pkgconfig.in to add explicit -ltinfo, etc., to the
    generated ".pc" file when ld option "--as-needed" is used, or when
    ncurses and tinfo are installed without using rpath (prompted by
    discussion with Sylvain Bertrand).
  + modify test-package for ncurses6 to omit rpath feature when installed
    in /usr.
  + add OSX's "*.dSYM" to clean-rules in makefiles.
  + make extra-suffix work for OSX configuration, e.g., for shared
    libraries.
  + modify Ada95/configure script to work with pkg-config
  + move test-package for ncurses6 to /usr, since filename-conflicts have
    been eliminated.
  + corrected build rules for Ada95/gen/generate; it does not depend on
    the ncurses library aside from headers.
  + reviewed man pages, fixed a few other spelling errors.
  + fix a typo in curs_util.3x (Sven Joachim).
  + use extra-suffix in some overlooked shared library dependencies
    found by 20150425 changes for test-packages.
  + update config.guess, config.sub from

20150425
  + expanded description of tgetstr's area pointer in manual page
    (report by Todd M Lewis).
  + in-progress changes to modify test-packages to use ncursesw6 rather
    than ncursesw, with updated configure scripts.
  + modify CF_NCURSES_CONFIG in Ada95- and test-configure scripts to
    check for ".pc" files via pkg-config, but add a linkage check since
    frequently pkg-config configurations are broken.
  + modify misc/gen-pkgconfig.in to include EXTRA_LDFLAGS, e.g., for the
    rpath option.
  + add 'dim' capability to screen entry (report by Leonardo B Schenkel)
  + add several key definitions to nsterm-bce to match preconfigured
    keys, e.g., with OSX 10.9 and 10.10 (report by Leonardo B Schenkel)
  + fix repeated "extra-suffix" in ncurses-config.in (cf: 20150418).
  + improve term_variables manual page, adding section on the terminfo
    long-name symbols which are defined in the term.h header.
  + fix bug in lib_tracebits.c introduced in const-fixes (cf: 20150404).

20150418
  + avoid a blank line in output from tabs program by ending it with
    a carriage return as done in FreeBSD (patch by James Clarke).
  + build-fix for the "--enable-ext-putwin" feature when not using
    wide characters (report by Werner Fink).
  + modify autoconf macros to use scripting improvement from xterm.
  + add -brtl option to compiler options on AIX 5-7, needed to link
    with the shared libraries.
  + add --with-extra-suffix option to help with installing nonconflicting
    ncurses6 packages, e.g., avoiding header- and library-conflicts.
    NOTE: as a side-effect, this renames
      adacurses-config to adacurses5-config and
      adacursesw-config to adacursesw5-config
  + modify debian/rules test package to suffix programs with "6".
  + clarify in curs_inopts.3x that window-specific settings do not
    inherit into new windows.

20150404
  + improve description of start_color() in the manual.
  + modify several files in ncurses- and progs-directories to allow
    const data used in internal tables to be put by the linker into the
    readonly text segment.

20150329
  + correct cut/paste error for "--enable-ext-putwin" that made it the
    same as "--enable-ext-colors" (report by Roumen Petrov)

20150328
  + add "-f" option to test/savescreen.c to help with testing/debugging
    the extended putwin/getwin.
  + add logic for writing/reading combining characters in the extended
    putwin/getwin.
  + add "--enable-ext-putwin" configure option to turn on the extended
    putwin/getwin.

20150321
  + in-progress changes to provide an extended version of putwin and
    getwin which will be capable of reading screen-dumps between the
    wide/normal ncurses configurations. These are text files, except
    for a magic code at the beginning:
      0  string  \210\210  Screen-dump (ncurses)

20150307
  + document limitations of getwin in manual page (prompted by discussion
    with John S Urban).
  + extend test/savescreen.c to demonstrate that color pair values
    and graphic characters can be restored using getwin.

20150228
  + modify win_driver.c to eliminate the constructor, to make it more
    usable in an application which may/may not need the console window
    (report by Grady Martin).
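The 20150321 entry above gives the "magic" pattern for the extended screen-dump format: the file begins with two octal-0210 (0x88) bytes, followed by text. A minimal sketch of checking that signature; the function name is hypothetical and this is not ncurses code:

```c
#include <stdio.h>

/* Sketch only: check the two-byte magic ("\210\210") documented in the
 * sample "magic" file for ncurses' extended putwin screen-dumps. */
static int looks_like_screen_dump(FILE *fp)
{
    int a = fgetc(fp);
    int b = fgetc(fp);

    rewind(fp);                 /* leave the stream where we found it */
    return (a == 0210 && b == 0210);
}
```

Per the entry, everything after the magic bytes is ordinary text, which is what lets dumps be shared between the wide and normal configurations.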
20150221
  + capture define's related to -D_XOPEN_SOURCE from the configure check
    and add those to the *-config and *.pc files, to simplify use for
    the wide-character libraries.
  + modify ncurses.spec to accommodate Fedora21's location of pkg-config
    directory.
  + correct sense of "--disable-lib-suffixes" configure option (report
    by Nicolas Boos, cf: 20140426).

20150214
  + regenerate html manpages using improved man2html from work on xterm.
  + regenerated ".map" and ".sym" files using improved script, accounting
    for the "--enable-weak-symbols" configure option (report by Werner
    Fink).

20150131
  + regenerated ".map" and ".sym" files using improved script, showing
    the combinations of configure options used at each stage.

20150124
  + add configure check to determine if "local: _*;" can be used in the
    ".map" files to selectively omit symbols beginning with "_". On at
    least recent FreeBSD, the wildcard applies to all "_" symbols.
  + remove obsolete/conflicting rule for ncurses.map from
    ncurses/Makefile.in (cf: 20130706).

20150117
  + improve description in INSTALL of the --with-versioned-syms option.
  + add combination of --with-hashed-db and --with-ticlib to
    configurations for ".map" files (report by Werner Fink).

20150110
  + add a step to generating ".map" files, to declare any remaining
    symbols beginning with "_" as local, at the last version node.
  + improve configure checks for pkg-config, addressing a variant found
    with FreeBSD ports.
  + modify win_driver.c to provide characters for special keys, like
    ansi.sys, when keypad mode is off, rather than returning nothing at
    all (discussion with Eli Zaretskii).
  + add "broken_linker" and "hashed-db" configure options to combinations
    used for generating the ".map" and ".sym" files.
  + avoid using "ld" directly when creating shared library, to simplify
    cross-compiles.
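For readers unfamiliar with the ".map" files discussed in the 20150124 and 20150110 entries: they are linker version scripts. A hypothetical fragment (the node name and symbols here are illustrative, not the real ncurses map) showing the "local: _*;" clause that hides remaining underscore-prefixed symbols at a version node:

```
NCURSES_TINFO_5.9 {
global:
  setupterm;
  tigetstr;
local:
  _*;
};
```

The 20150124 configure check exists because on some platforms (recent FreeBSD, per the entry) the "_*" wildcard matches more symbols than intended.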
    Also drop "-Bsharable" option from shared-library
    rules for FreeBSD and DragonFly (FreeBSD #196592).
  + fix a memory leak in form library Free_RegularExpression_Type()
    (report by Pavel Balaev).

20150103
  + modify _nc_flush() to retry if interrupted (patch by Stian Skjelstad).
  + change map files to make _nc_freeall a global, since it may be used
    via the Ada95 binding when checking for memory leaks.
  + improve sed script used in 20141220 to account for wide-, threaded-
    variations in ABI 6.

20141227
  + regenerate ".map" files, using step overlooked in 20141213 to use
    the same patch-dates across each file to match ncurses.map (report by
    Sven Joachim).

20141221
  + fix an incorrect variable assignment in 20141220 changes (report by
    Sven Joachim).

20141220
  + updated Ada95/configure with macro changes from 20141213
  + tie configure options --with-abi-version and --with-versioned-syms
    together, so that ABI 6 libraries have distinct symbol versions from
    the ABI 5 libraries.
  + replace obsolete/nonworking link to man2html with current one,
    regenerate html-manpages.

20141213
  + modify misc/gen-pkgconfig.in to add -I option for include-directory
    when using both --prefix and --disable-overwrite (report by Misty
    De Meo).
  + add configure option --with-pc-suffix to allow minor renaming of
    ".pc" files and the corresponding library. Use this in the test
    package for ncurses6.
  + modify configure script so that if pkg-config is not installed, it
    is still possible to install ".pc" files (report by Misty De Meo).
  + updated ".sym" files, removing symbols which are marked as "local"
    in the corresponding ".map" files.
  + updated ".map" files to reflect move of comp_captab and comp_hash
    from tic-library to tinfo-library in 20090711 (report by Sven
    Joachim).
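The 20150103 item above makes _nc_flush() retry when a write is interrupted by a signal. A generic sketch of that EINTR-retry pattern — not the actual ncurses code, and `write_all` is a hypothetical name:

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Hypothetical helper illustrating the EINTR-retry pattern: keep
 * writing until the whole buffer is flushed, restarting write()
 * when it is interrupted by a signal instead of reporting failure. */
static ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;       /* interrupted: retry, do not fail */
            return -1;          /* a real error */
        }
        done += (size_t) n;
    }
    return (ssize_t) done;
}
```

Without the retry, a window-resize SIGWINCH arriving mid-flush could make the library drop output spuriously.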
20141206
  + updated ".map" files so that each symbol that may be shared across
    the different library configurations has the same label. Some
    review is needed to ensure these are really compatible.
  + modify MKlib_gen.sh to work around change in development version of
    gcc introduced here:
    (reports by Marcus Shawcroft, Maohui Lei).
  + improved configure macro CF_SUBDIR_PATH, from lynx changes.

20141129
  + improved ".map" files by generating them with a script that builds
    ncurses with several related configurations and merges the results.
    A further refinement is planned, to make the tic- and tinfo-library
    symbols use the same versions across each of the four configurations
    which are represented (reports by Sven Joachim, Werner Fink).

20141115
  + improve description of limits for color values and color pairs in
    curs_color.3x (prompted by patch by Tim van der Molen).
  + add VERSION file, using first field in that to record the ABI version
    used for configure --with-libtool --disable-libtool-version
  + add configure options for applying the ".map" and ".sym" files to
    the ncurses, form, menu and panel libraries.
  + add ".map" and ".sym" files to show exported symbols, e.g., for
    symbol-versioning.

20141101
  + improve strict compiler-warnings by adding a cast in TRACE_RETURN
    and making a new TRACE_RETURN1 macro for cases where the cast does
    not apply.

20141025
  + in-progress changes to integrate the win32 console driver with the
    msys2 configuration.

20141018
  + reviewed terminology 0.6.1, add function key definitions. None of
    the vt100-compatibility issues were improved -TD
  + improve infocmp conversion of extended capabilities to termcap by
    correcting the limit check against parametrized[], as well as filling
    in a check if the string happens to have parameters, e.g., "xm"
    in recent changes.
  + add check for zero/negative dimensions for resizeterm and resize_term
    (report by Mike Gran).

20141011
  + add experimental support for xterm's 1005 mouse mode, to use in a
    demonstration of its limitations.
  + add experimental support for "%u" format to terminfo.
  + modify test/ncurses.c to also show position reports in 'a' test.
  + minor formatting fixes to _nc_trace_mmask_t, make this function
    exported to help with debugging mouse changes.
  + improve behavior of wheel-mice for xterm protocol, noting that there
    are only button-presses for buttons "4" and "5", so there is no need
    to wait to combine events into double-clicks (report/analysis by
    Greg Field).
  + provide examples xterm-1005 and xterm-1006 terminfo entries -TD
  + implement decoder for xterm SGR 1006 mouse mode.

20140927
  + implement curs_set in win_driver.c
  + implement flash in win_driver.c
  + fix an infinite loop in win_driver.c if the command-window loses
    focus.
  + improve the non-buffered mode, i.e., NCURSES_CONSOLE2, of
    win_driver.c by temporarily changing the buffer-size to match the
    window-size to eliminate the scrollback. Also enforce a minimum
    screen-size of 24x80 in the non-buffered mode.
  + modify generated misc/Makefile to suppress install.data from the
    dependencies if the --disable-db-install option is used, compensating
    for the top-level makefile changes used to add ncurses*-config in the
    20140920 changes (report by Steven Honeyman).

20140920
  + add ncurses*-config to bin-directory of sample package-scripts.
  + add check to ensure that getopt is available; this is a problem in
    some older cross-compiler environments.
  + expanded on the description of --disable-overwrite in INSTALL
    (prompted by reports by Joakim Tjernlund, Thomas Klausner).
    See Gentoo #522586 and NetBSD #49200 for examples.
    which relates to the clarified guidelines.
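The SGR 1006 mouse reports mentioned in the 20141011 entries are sequences of the form `ESC [ < button ; column ; row M` for a press, or the same with a final `m` for a release, with unbounded decimal coordinates (which is what makes the mode usable beyond column 223, unlike the older encodings). A standalone sketch of such a decoder — hypothetical names, not ncurses' internal implementation:

```c
#include <stdio.h>

/* Sketch of an xterm SGR 1006 mouse decoder: reports look like
 * "\033[<b;x;yM" (press) or "\033[<b;x;ym" (release), with 1-based
 * decimal coordinates that are not limited to single bytes. */
typedef struct {
    int button;
    int x, y;       /* 1-based column and row */
    int release;    /* nonzero for the 'm' (button-up) form */
} MouseEvent;

static int decode_sgr1006(const char *seq, MouseEvent *ev)
{
    char final = 0;

    if (sscanf(seq, "\033[<%d;%d;%d%c",
               &ev->button, &ev->x, &ev->y, &final) != 4)
        return 0;               /* not an SGR 1006 report */
    if (final != 'M' && final != 'm')
        return 0;
    ev->release = (final == 'm');
    return 1;
}
```

The separate press/release finals are the other advantage over the old X10-style reports, which encode only "some button went up".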
  + remove special logic from CF_INCLUDE_DIRS which adds the directory
    for the --includedir from the build (report by Joakim Tjernlund).
  + add case for Unixware to CF_XOPEN_SOURCE, from lynx changes.
  + update config.sub from

20140913
  + add a configure check to ignore some of the plethora of non-working
    C++ cross-compilers.
  + build-fixes for Ada95 with gnat 4.9

20140906
  + build-fix and other improvements for port of ncurses-examples to
    NetBSD.
  + minor compiler-warning fixes.

20140831
  + modify test/demo_termcap.c and test/demo_terminfo.c to make their
    options more directly comparable, and add "-i" option to specify
    a terminal description filename to parse for names to lookup.

20140823
  + fix special case where double-width character overwrites a single-
    width character in the first column (report by Egmont Koblinger,
    cf: 20050813).

20140816
  + fix colors in ncurses 'b' test which did not work after changing
    it to put the test-strings in subwindows (cf: 20140705).
  + merge redundant SEE-ALSO sections in form and menu manpages.

20140809
  + modify declarations for user-data pointers in C++ binding to use
    reinterpret_cast to facilitate converting typed pointers to void*
    in user's application (patch by Adam Jiang).
  + regenerated html manpages.
  + add note regarding cause and effect for TERM in ncurses manpage,
    having noted clueless verbiage in Terminal.app's "help" file
    which reverses cause/effect.
  + remove special fallback definition for NCURSES_ATTR_T, since macros
    have resolved type-mismatches using casts (cf: 970412).
  + fixes for win_driver.c:
    + handle repainting on endwin/refresh combination.
    + implement beep().
    + minor cleanup.
20140802
  + minor portability fixes for MinGW:
    + ensure WINVER is defined in makefiles rather than using headers
    + add check for gnatprep "-T" option
  + work around bug introduced by gcc 4.8.1 in MinGW which breaks
    "trace" feature:
  + fix most compiler warnings for Cygwin ncurses-examples.
  + restore "redundant" -I options in test/Makefile.in, since they are
    typically needed when building the derived ncurses-examples package
    (cf: 20140726).

20140726
  + eliminate some redundant -I options used for building libraries, and
    ensure that ${srcdir} is added to the include-options (prompted by
    discussion with Paul Gilmartin).
  + modify configure script to work with Minix3.2
  + add form library extension O_DYNAMIC_JUSTIFY option which can be
    used to override the different treatment of justification for static
    versus dynamic fields (adapted from patch by Leon Winter).
  + add a null pointer check in test/edit_field.c (report/analysis by
    Leon Winter, cf: 20130608).

20140719
  + make workarounds for compiling test-programs with NetBSD curses.
  + improve configure macro CF_ADD_LIBS, to eliminate repeated -l/-L
    options, from xterm changes.

20140712
  + correct Charable() macro check for A_ALTCHARSET in wide-characters.
  + build-fix for position-debug code in tty_update.c, to work with or
    without sp-funcs.

20140705
  + add w/W toggle to ncurses.c 'B' test, to demonstrate permutation of
    video-attributes and colors with double-width character strings.

20140629
  + correct check in win_driver.c for saving screen contents, e.g., when
    NCURSES_CONSOLE2 is set (cf: 20140503).
  + reorganize b/B menu items in ncurses.c, putting the test-strings into
    subwindows. This is needed for a planned change to use Unicode
    fullwidth characters in the test-screens.
  + correct update to form status for _NEWTOP, broken by fixes for
    compiler warnings (patch by Leon Winter, cf: 20120616).

20140621
  + change shared-library suffix for AIX 5 and 6 to ".so", avoiding
    conflict with the static library (report by Ben Lentz).
  + document RPATH_LIST in INSTALLATION file, as part of workarounds for
    upgrading an ncurses library using the "--with-shared" option.
  + modify test/ncurses.c c/C tests to cycle through subsets of the
    total number of colors, to better illustrate 8/16/88/256-colors by
    providing directly comparable screens.
  + add test/dots_curses.c, for comparison with the low-level examples.

20140614
  + fix dereference before null check found by Coverity in tic.c
    (cf: 20140524).
  + fix sign-extension bug in read_entry.c which prevented "toe" from
    reading empty "screen+italics" entry.
  + modify sgr for screen.xterm-new to support dim capability -TD
  + add dim capability to nsterm+7 -TD
  + cancel dim capability for iterm -TD
  + add dim, invis capabilities to vte-2012 -TD
  + add sitm/ritm to konsole-base and mlterm3 -TD

20140609
  > fix regression in screen terminfo entries (reports by Christian
    Ebert, Gabriele Balducci) -TD
  + revert the change to screen; see notes for why this did not work -TD
  + cancel sitm/ritm for entries which extend "screen", to work around
    screen's hardcoded behavior for SGR 3 -TD

20140607
  + separate masking for sgr in vidputs from sitm/ritm, which do not
    overlap with sgr functionality.
  + remove unneeded -i option from adacurses-config; put -a in the -I
    option for consistency (patch by Pascal Pignard).
  + update xterm-new terminfo entry to xterm patch #305 -TD
  + change format of test-scripts for Debian Ada95 and ncurses-examples
    packages to quilted to work around Debian #700177 (cf: 20130907).
  + build fix for form_driver_w.c as part of ncurses-examples package for
    older ncurses than 20131207.
  + add Hello World example to adacurses-config manpage.
  + remove unused --enable-pc-files option from Ada95/configure.
  + add --disable-gnat-projects option for testing.
  + revert changes to Ada95 project-files configuration (cf: 20140524).
  + corrected usage message in adacurses-config.

20140524
  + fix typo in ncurses manpage for the NCURSES_NO_MAGIC_COOKIE
    environment variable.
  + improve discussion of input-echoing in curs_getch.3x
  + clarify discussion in curs_addch.3x of wrapping.
  + modify parametrized.h to make fln non-padded.
  + correct several entries which had termcap-style padding used in
    terminfo: adm21, aj510, alto-h19, att605-pc, x820 -TD
  + correct syntax for padding in some entries: dg211, h19 -TD
  + correct ti924-8 which had confused padding versus octal escapes -TD
  + correct padding in sbi entry -TD
  + fix an old bug in the termcap emulation; "%i" was ignored in tparm()
    because the parameters to be incremented were already on the internal
    stack (report by Corinna Vinschen).
  + modify tic's "-c" option to take into account the "-C" option to
    activate additional checks which compare the results from running
    tparm() on the terminfo expressions versus the translated termcap
    expressions.
  + modify tic to allow it to read from FIFOs (report by Matthieu Fronton,
    cf: 20120324).
  > patches by Nicolas Boulenguez:
  + explicit dereferences to suppress some style warnings.
  + when c_varargs_to_ada.c includes its header, use double quotes
    instead of <>.
  + samples/ncurses2-util.adb: removed unused with clause. The warning
    was removed by an obsolete pragma.
  + replaced Unreferenced pragmas with Warnings (Off). The latter,
    available with older GNATs, needs no configure test. This also
    replaces 3 untested Unreferenced pragmas.
  + simplified To_C usage in trace handling. Using two parameters allows
    some basic formatting, and avoids a warning about security with some
    compiler flags.
  + for generated Ada sources, replace many snippets with one pure
    package.
  + removed C_Chtype and its conversions.
  + removed C_AttrType and its conversions.
  + removed conversions between int, Item_Option_Set, Menu_Option_Set.
  + removed int, Field_Option_Set, Item_Option_Set conversions.
  + removed C_TraceType, Attribute_Option_Set conversions.
  + replaced C.int with direct use of Eti_Error, now enumerated. As it
    was used in a case statement, values were tested by the Ada compiler
    to be consecutive anyway.
  + src/Makefile.in: remove duplicate stanza
  + only consider using a project for shared libraries.
  + style. Silence gnat-4.9 warning about misplaced "then".
  + generate shared library project to honor ADAFLAGS, LDFLAGS.

20140510
  + cleanup recently introduced compiler warnings for MingW port.
  + workaround for ${MAKEFLAGS} configure check versus GNU make 4.0,
    which introduces more than one gratuitous incompatibility.

20140503
  + add vt520ansi terminfo entry (patch by Mike Gran)
  + further improve MinGW support for the scenario where there is an
    ANSI-escapes handler such as ansicon running in the console window
    (patch by Juergen Pfeifer).

20140426
  + add --disable-lib-suffixes option (adapted from patch by Juergen
    Pfeifer).
  + merge some changes from Juergen Pfeifer's work with MSYS2, to
    simplify later merging:
    + use NC_ISATTY() macro for isatty() in library
    + add _nc_mingw_isatty() and related functions to windows-driver
    + rename terminal driver entrypoints to simplify grep's
  + remove a check in the sp-funcs flavor of newterm() which allowed only
    the first call to newterm() to succeed (report by Thomas Beierlein,
    cf: 20090927).
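To illustrate the "%i" capability behind the 20140524 tparm() fix above: in a cursor-addressing string such as xterm's cup, `\E[%i%p1%d;%p2%dH`, "%i" increments the first two parameters so that 0-based curses coordinates become the 1-based coordinates ANSI terminals expect. A toy expansion routine covering only those operators — a hypothetical sketch, real code should call tparm():

```c
#include <stdio.h>
#include <string.h>

/* Toy illustration of terminfo's "%i": it adds one to the first two
 * parameters before they are printed, converting curses' 0-based
 * row/column into 1-based ANSI coordinates.  Only the operators seen
 * in a typical cup string are handled here. */
static void toy_cup(char *out, size_t outlen, const char *fmt,
                    int row, int col)
{
    size_t used = 0;

    out[0] = '\0';
    while (*fmt != '\0' && used + 12 < outlen) {
        if (strncmp(fmt, "%i", 2) == 0) {
            ++row;              /* "%i": increment both parameters */
            ++col;
            fmt += 2;
        } else if (strncmp(fmt, "%p1%d", 5) == 0) {
            used += (size_t) sprintf(out + used, "%d", row);
            fmt += 5;
        } else if (strncmp(fmt, "%p2%d", 5) == 0) {
            used += (size_t) sprintf(out + used, "%d", col);
            fmt += 5;
        } else {                /* ordinary character: copy it */
            out[used++] = *fmt++;
            out[used] = '\0';
        }
    }
    out[used] = '\0';
}
```

The bug the 20140524 entry fixes was in the termcap emulation: by the time "%i" was seen, the parameters had already been pushed onto tparm's internal stack, so the increment was silently lost.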
20140419
  + update config.guess, config.sub from

20140412
  + modify configure script:
    + drop the -no-gcc option from Intel compiler, from lynx changes.
    + extend the --with-hashed-db configure option to simplify building
      with different versions of Berkeley database using FreeBSD ports.
  + improve initialization for MinGW port (Juergen Pfeifer):
    + enforce Windows-style path-separator if cross-compiling,
    + add a driver-name method to each of the drivers,
    + allow the Windows driver name to match "unknown", ignoring case,
    + lengthen the built-in name for the Windows console driver to
      "#win32console", and
    + move the comparison of driver-names allowing abbreviation, e.g.,
      to "#win32con" into the Windows console driver.

20140329
  + add check in tic for mismatch between ccc and initp/initc
  + cancel ccc in putty-256color and konsole-256color for consistency
    with the cancelled initc capability (patch by Sven Zuhlsdorf).
  + add xterm+256setaf building block for various terminals which only
    get the 256-color feature half-implemented -TD
  + updated "st" entry (leaving the 0.1.1 version as "simpleterm") to
    0.4.1 -TD

20140323
  + fix typo in "mlterm" entry (report by Gabriele Balducci) -TD

20140322
  + use types from <stdint.h> in sample build-scripts for chtype, etc.
  + modify configure script and curses.h.in to allow the types specified
    using --with-chtype and related options to be defined in <stdint.h>
  + add terminology entry -TD
  + add mlterm3 entry, use that as "mlterm" -TD
  + inherit mlterm-256color from mlterm -TD

20140315
  + modify _nc_New_TopRow_and_CurrentItem() to ensure that the menu's
    top-row is adjusted as needed to ensure that the current item is
    on the screen (patch by Johann Klammer).
+ add wgetdelay() to retrieve _delay member of WINDOW if it happens to
  be opaque, e.g., in the pthread configuration (prompted by patch by
  Soren Brinkmann).

20140308
+ modify ifdef in read_entry.c to handle the case where
  NCURSES_USE_DATABASE is not defined (patch by Xin Li).
+ add cast in form_driver_w() to fix ARM build (patch by Xin Li).
+ add logic to win_driver.c to save/restore screen contents when not
  allocating a console-buffer (cf: 20140215).

20140301
+ clarify error-returns from newwin (report by Ruslan Nabioullin).

20140222
+ fix some compiler warnings in win_driver.c.
+ updated notes for wsvt25 based on tack and vttest -TD
+ add teken entry to show actual properties of FreeBSD's "xterm"
  console -TD

20140215
+ in-progress changes to win_driver.c to implement output without
  allocating a console-buffer.  This uses a pre-existing environment
  variable NCGDB used by Juergen Pfeifer for debugging (prompted by
  discussion with Erwin Waterlander regarding Console2, which hangs
  when reading in an allocated console-buffer).
+ add -t option to gdc.c, and modify to accept "S" to step through the
  scrolling-stages.
+ regenerate NCURSES-Programming-HOWTO.html to fix some of the broken
  html emitted by docbook.

20140209
+ modify CF_XOPEN_SOURCE macro to omit followup check to determine if
  _XOPEN_SOURCE can/should be defined.  g++ 4.7.2 built on Solaris 10
  has some header breakage due to its own predefinition of this symbol
  (report by Jean-Pierre Flori, Sage #15796).

20140201
+ add/use symbol NCURSES_PAIRS_T like NCURSES_COLOR_T, to illustrate
  which "short" types are for color pairs and which are color values.
+ fix build for s390x, by correcting field bit offsets in generated
  representation clauses when int=32 long=64 and endian=big, or at
  least on s390x (patch by Nicolas Boulenguez).
+ minor cleanup change to test/form_driver_w.c (patch by Gaute Hope).

20140125
+ remove unnecessary ifdef's in Ada95/gen/gen.c, which reportedly do
  not work as is with gcc 4.8 due to fixes using chtype cast made for
  new compiler warnings by gcc 4.8 in 20130824 (Debian #735753, patch
  by Nicolas Boulenguez).

20140118
+ apply includesubdir variable which was introduced in 20130805 to
  gen-pkgconfig.in (Debian #735782).

20131221
+ further improved man2html, used this to fix broken links in html
  manpages.  See

20131214
+ modify configure-script/ifdef's to allow OLD_TTY feature to be
  suppressed if the type of ospeed is configured using the option
  --with-ospeed to not be a short.  By default, it is a short for
  termcap-compatibility (adapted from suggestion by Christian
  Weisgerber).
+ correct a typo in _nc_baudrate() (patch by Christian Weisgerber,
  cf: 20061230).
+ fix a few -Wlogical-op warnings.
+ updated llib-l* files.

20131207
+ add form_driver_w() entrypoint to wide-character forms library, as
  well as test program form_driver_w (adapted from patch by Gaute
  Hope).

20131123
+ minor fix for CF_GCC_WARNINGS to special-case options which are not
  recognized by clang.

20131116
+ add special case to configure script to move _XOPEN_SOURCE_EXTENDED
  definition from CPPFLAGS to CFLAGS if it happens to be needed for
  Solaris, because g++ errors with that definition (report by
  Jean-Pierre Flori, Sage #15268).
+ correct logic in infocmp's -i option which was intended to ignore
  strings which correspond to function-keys as candidates for piecing
  together initialization- or reset-strings.  The problem dates to
  1.9.7a, but was overlooked until changes in -Wlogical-op warnings for
  gcc 4.8 (report by David Binderman).
+ updated CF_GCC_WARNINGS to documented options for gcc 4.9.0, moving
  checks for -Wextra and -Wdeclaration-after-statement into the macro,
  and adding checks for -Wignored-qualifiers, -Wlogical-op and
  -Wvarargs.
+ updated CF_CURSES_UNCTRL_H and CF_SHARED_OPTS macros from ongoing
  work on cdk.
+ update config.sub from

20131110
+ minor cleanup of terminfo.tail

20131102
+ use TS extension to describe xterm's title-escapes -TD
+ modify terminator and nsterm-s to use xterm+sl-twm building block -TD
+ update hurd.ti, add xenl to reflect 2011-03-06 change in
  (Debian #727119).
+ simplify pfkey expression in ansi.sys -TD

20131027
+ correct/simplify ifdef's for cur_term versus broken-linker and
  reentrant options (report by Jean-Pierre Flori, cf: 20090530).
+ modify release/version combinations in test build-scripts to make
  them more consistent with other packages.

20131019
+ add nc_mingw.h to installed headers for MinGW port; needed for
  compiling ncurses-examples.
+ add rpm-script for testing cross-compile of ncurses-examples.

20131014
+ fix new typo in CF_ADA_INCLUDE_DIRS macro (report by Roumen Petrov).

20131012
+ fix a few compiler warnings in progs and test.
+ minor fix to package/debian-mingw/rules, do not strip dll's.
+ minor fixes to configure script for empty $prefix, e.g., when doing
  cross-compiles to MinGW.
+ add script for building test-packages of binaries cross-compiled to
  MinGW using NSIS.

20131005
+ minor fixes for ncurses-example package and makefile.
+ add scripts for test-builds of cross-compiler packages for ncurses6
  to MinGW.

20130928
+ some build-fixes for ncurses-examples with NetBSD-6.0 curses, though
  it lacks some common functions such as use_env() which is not yet
  addressed.
+ build-fix and some compiler warning fixes for ncurses-examples with
  OpenBSD 5.3.
+ fix a possible null-pointer reference in a trace message from newterm.
+ quiet a few warnings from NetBSD 6.0 namespace pollution by
  nonstandard popcount() function in standard strings.h header.
+ ignore g++ 4.2.1 warnings for "-Weffc++" in c++/cursesmain.cc.
+ fix a few overlooked places for --enable-string-hacks option.

20130921
+ fix typo in curs_attr.3x (patch by Sven Joachim, cf: 20130831).
+ build-fix for --with-shared option for DragonFly and FreeBSD (report
  by Rong-En Fan, cf: 20130727).

20130907
+ build-fixes for MSYS for two test-programs (patches by Ray Donnelly,
  Alexey Pavlov).
+ revert change to two of the dpkg format files, to work with dpkg
  before/after Debian #700177.
+ fix gcc -Wconversion warning in wattr_get() macro.
+ add msys and msysdll to known host/configuration types (patch by
  Alexey Pavlov).
+ modify CF_RPATH_HACK configure macro to not rely upon "-u" option
  of sort, improving portability.
+ minor improvements for test-programs from reviewing Solaris port.
+ update config.guess, config.sub from

20130831
+ modify test/ncurses.c b/B tests to display lines only for the
  attributes which a given terminal supports, to make room for an
  italics test.
+ completed ncv table in terminfo.tail; it did not list the wide
  character codes listed in X/Open Curses issue 7.
+ add A_ITALIC extension (prompted by discussion with Egmont Koblinger).

20130824
+ fix some gcc 4.8 -Wconversion warnings.
+ change format of dpkg test-scripts to quilted to work around bug
  introduced by Debian #700177.
+ discard cached keyname() values if meta() is changed after a value
  was cached (report by Kurban Mallachiev).
20130816
+ add checks in tic to warn about terminals which lack cursor
  addressing, capabilities or having those, are marked as hard_copy or
  generic_type.
+ use --without-progs in mingw-ncurses rpm.
+ split out _nc_init_termtype() from alloc_entry.c to use in MinGW
  port when tic and other programs are not needed.

20130805
+ minor fixes to the --disable-overwrite logic, to ensure that the
  configured $(includedir) is not cancelled by the mingwxx-filesystem
  rpm macros.
+ add --disable-db-install configure option, to simplify building
  cross-compile support packages.
+ add mingw-ncurses.spec file, for testing cross-compiles.

20130727
+ improve configure macros from ongoing work on cdk, dialog, xterm:
  + CF_ADD_LIB_AFTER - fix a problem with -Wl options
  + CF_RPATH_HACK - add missing result-message
  + CF_SHARED_OPTS - modify to use $rel_builddir in cygwin and mingw
    dll symbols (which can be overridden) rather than explicit "../".
  + CF_SHARED_OPTS - modify NetBSD and DragonFly symbols to use ${CC}
    rather than ${LD} to improve rpath support.
  + CF_SHARED_OPTS - add a symbol to denote the temporary files that
    are created by the macro, to simplify clean-rules.
  + CF_X_ATHENA - trim extra libraries to work with -Wl,--as-needed
+ fix a regression in hashed-database support for NetBSD, which uses
  the key-size differently from other implementations (cf: 20121229).

20130720
+ further improvements for setupterm manpage, clarifying the
  initialization of cur_term.

20130713
+ improve manpages for initscr and setupterm.
+ minor compiler-warning fixes.

20130706
+ add fallback defs for <inttypes.h> and <stdint.h> (cf: 20120225).
+ add check for size of wchar_t, use that to suppress a chunk of
  wcwidth.h in MinGW port.
+ quiet linker warnings for MinGW cross-compile with dll's using the
  --enable-auto-import flag.
+ add ncurses.map rule to ncurses/Makefile to help diagnose symbol
  table issues.

20130622
+ modify the clear program to take into account the E3 extended
  capability to clear the terminal's scrollback buffer (patch by
  Miroslav Lichvar, Redhat #815790).
+ clarify in resizeterm manpage that LINES and COLS are updated.
+ updated ansi example in terminfo.tail, correct misordered example
  of sgr.
+ fix other doclifter warnings for manpages.
+ remove unnecessary ".ta" in terminfo.tail, add missing ".fi"
  (patch by Eric Raymond).

20130615
+ minor changes to some configure macros to make them more reusable.
+ fixes for tabs program (prompted by report by Nick Andrik):
  + corrected logic in command-line parsing of -a and -c predefined
    tab-lists options.
  + allow "-0" and "-8" options to be combined with others, e.g., "-0d".
  + make warning messages more consistent with the other utilities by
    not printing the full pathname of the program.
  + add -V option for consistency with other utilities.
+ fix off-by-one in columns for tabs program when processing an option
  such as "-5" (patch by Nick Andrik).

20130608
+ add to test/demo_forms.c examples of using the menu-hooks as well
  as showing how the menu item user-data can be used to pass a callback
  function pointer.
+ add test/dots_termcap.c.
+ remove setupterm call from test/demo_termcap.c.
+ build-fix if --disable-ext-funcs configure option is used.
+ modified test/edit_field.c and test/demo_forms.c to move the lengths
  into a user-data structure, keeping the original string for later
  expansion to free-format input/out demo.
+ modified test/demo_forms.c to load data from file.
+ added note to clarify Terminal.app's non-emulation of the various
  terminal types listed in the preferences dialog -TD
+ fix regression in error-reporting in lib_setup.c (Debian #711134,
  cf: 20121117).
+ build-fix for a case where --enable-broken_linker and
  --enable-reentrant options are combined (report by George R Goffe).

20130525
+ modify mvcur() to distinguish between internal use by the ncurses
  library, and external callers, preventing it from reading the content
  of the screen which is only nonblank when curses calls have updated
  it.  This makes test/dots_mvcur.c avoid painting colored cells in
  the left margin of the display.
+ minor fix to test/dots_mvcur.c.
+ move configured symbols USE_DATABASE and USE_TERMCAP to term.h as
  NCURSES_USE_DATABASE and NCURSES_USE_TERMCAP to allow consistent
  use of these symbols in term_entry.h.

20130518
+ corrected ifdefs in test/testcurs.c to allow comparison of mouse
  interface versus pdcurses (cf: 20130316).
+ add pow() to configure-check for math library, needed since
  20121208 for test/hanoi (Debian #708056).
+ regenerated html manpages.
+ update doctype used for html documentation.

20130511
+ move nsterm-related entries out of "obsolete" section to more
  plausible "ansi consoles" -TD
+ additional cleanup of table-of-contents by reordering -TD
+ revise fix for check for 8-bit value in _nc_insert_ch(); prior fix
  prevented inserts when video attributes were attached to the data
  (cf: 20121215) (Redhat #959534).

20130504
+ fixes for issues found by Coverity:
  + correct FNKEY() macro in progs/dump_entry.c, allowing kf11-kf63 to
    display when infocmp's -R option is used for HP or AIX subsets.
  + fix dead-code issue with test/movewindow.c.
  + improve limit-checks in _nc_read_termtype().
20130427
+ fix clang 3.2 warning in progs/dump_entry.c.
+ drop AC_TYPE_SIGNAL check; ncurses relies on c89 and later.

20130413
+ add MinGW to cases where ncurses installs by default into /usr
  (prompted by discussion with Daniel Silva Ferreira).
+ add -D option to infocmp's usage-message (patch by Miroslav Lichvar).
+ add a missing 'int' type for main function in configure check for
  type of bool variable, to work with clang 3.2 (report by Dmitri
  Gribenko).
+ improve configure check for static_cast, to work with clang 3.2
  (report by Dmitri Gribenko).
+ re-order rule for demo.o and macros defining header dependencies in
  c++/Makefile.in to accommodate gmake (report by Dmitri Gribenko).

20130406
+ improve parameter checking in copywin().
+ modify configure script to work around OS X's "libtool" program, to
  choose glibtool instead.  At the same time, change the autoconf macro
  to look for a "tool" rather than a "prog", to help with potential use
  in cross-compiling.
+ separate the rpath usage for c++ library from demo program
  (Redhat #911540).
+ update/correct header-dependencies in c++ makefile (report by Werner
  Fink).
+ add --with-cxx-shared to dpkg-script, as done for rpm-script.

20130324
+ build-fix for libtool configuration (reports by Daniel Silva Ferreira
  and Roumen Petrov).

20130323
+ build-fix for OS X, to handle changes for --with-cxx-shared feature
  (report by Christian Ebert).
+ change initialization for vt220, similar entries for consistency
  with cursor-key strings (NetBSD #47674) -TD
+ further improvements to linux-16color (Benjamin Sittler).

20130316
+ additional fix for tic.c, to allocate missing buffer space.
+ eliminate configure-script warnings for gen-pkgconfig.in.
+ correct typo in sgr string for sun-color,
  add bold for consistency with sgr,
  change smso for consistency with sgr -TD
+ correct typo in sgr string for terminator -TD
+ add blink to the attributes masked by ncv in linux-16color (report
  by Benjamin Sittler).
+ improve warning message from post-load checking for missing "%?"
  operator by tic/infocmp by showing the entry name and capability.
+ minor formatting improvement to tic/infocmp -f option to ensure
  line split after "%;".
+ amend scripting for --with-cxx-shared option to handle the debug
  library "libncurses++_g.a" (report by Sven Joachim).

20130309
+ amend change to toe.c for reading from /dev/zero, to ensure that
  there is a buffer for the temporary filename (cf: 20120324).
+ regenerated html manpages.
+ fix typo in terminfo.head (report by Sven Joachim, cf: 20130302).
+ updated some autoconf macros:
  + CF_ACVERSION_CHECK, from byacc 1.9 20130304
  + CF_INTEL_COMPILER, CF_XOPEN_SOURCE from luit 2.0-20130217
+ add configure option --with-cxx-shared to permit building
  libncurses++ as a shared library when using g++, e.g., the same
  limitations as libtool but better integrated with the usual build
  configuration (Redhat #911540).
+ modify MKkey_defs.sh to filter out build-path which was unnecessarily
  shown in curses.h (Debian #689131).

20130302
+ add section to terminfo manpage discussing user-defined capabilities.
+ update manpage description of NCURSES_NO_SETBUF, explaining why it
  is obsolete.
+ add a check in waddch_nosync() to ensure that tab characters are
  treated as control characters; some broken locales claim they are
  printable.
+ add some traces to the Windows console driver.
+ initialize a temporary array in _nc_mbtowc, needed for some cases
  of raw input in MinGW port.

20130218
+ correct ifdef on change to lib_twait.c (report by Werner Fink).
+ update config.guess, config.sub

20130216
+ modify test/testcurs.c to work with mouse for ncurses as it does for
  pdcurses.
+ modify test/knight.c to work with mouse for pdcurses as it does for
  ncurses.
+ modify internal recursion in wgetch() which handles cooked mode to
  check if the call to wgetnstr() returned an error.  This can happen
  when both nocbreak() and nodelay() are set, for instance (report by
  Nils Christopher Brause) (cf: 960418).
+ fixes for issues found by Coverity:
  + add a check for valid position in ClearToEOS().
  + fix in lib_twait.c when --enable-wgetch-events is used, pointer
    use after free.
  + improve a limit-check in make_hash.c.
  + fix a memory leak in hashed_db.c.

20130209
+ modify test/configure script to make it simpler to override names
  of curses-related libraries, to help with linking with pdcurses in
  MinGW environment.
+ if the --with-terminfo-dirs configure option is not used, there is
  no corresponding compiled-in value for that.  Fill in "no default
  value" for that part of the manpage substitution.

20130202
+ correct initialization in knight.c which let it occasionally make
  an incorrect move (cf: 20001028).
+ improve documentation of the terminfo/termcap search path.

20130126
+ further fixes to mvcur to pass callback function (cf: 20130112),
  needed to make test/dots_mvcur work.
+ reduce calls to SetConsoleActiveScreenBuffer in win_driver.c, to
  help reduce flicker.
+ modify configure script to omit "+b" from linker options for very
  old HP-UX systems (report by Dennis Grevenstein).
+ add HP-UX workaround for missing EILSEQ on old HP-UX systems (patch
  by Dennis Grevenstein).
+ restore memmove/strdup support for antique systems (request by
  Dennis Grevenstein).
+ change %l behavior in tparm to push the string length onto the stack
  rather than saving the formatted length into the output buffer
  (report by Roy Marples, cf: 980620).

20130119
+ fixes for issues found by Coverity:
  + fix memory leak in safe_sprintf.c.
  + add check for return-value in tty_update.c.
  + correct initialization for -s option in test/view.c.
  + add check for numeric overflow in lib_instr.c.
  + improve error-checking in copywin.
+ add advice in infocmp manpage for termcap users (Debian #698469).
+ add "-y" option to test/demo_termcap and test/demo_terminfo to
  demonstrate behavior with/without extended capabilities.
+ updated termcap manpage to document legacy termcap behavior for
  matching capability names.
+ modify name-comparison for tgetstr, etc., to accommodate legacy
  applications as well as to improve compatibility with BSD 4.2
  termcap implementations (Debian #698299) (cf: 980725).

20130112
+ correct prototype in manpage for vid_puts.
+ drop ncurses/tty/tty_display.h, ncurses/tty/tty_input.h, since they
  are unused in the current driver model.
+ modify mvcur to use stdout except when called within the ncurses
  library.
+ modify vidattr and vid_attr to use stdout as documented in manpage.
+ amend changes made to buffering in 20120825 so that the low-level
  putp() call uses stdout rather than ncurses' internal buffering.
  The putp_sp() call does the same, for consistency (Redhat #892674).
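The legacy termcap name-matching mentioned in the 20130119 entries stems from classic termcap capability names being exactly two characters, so a BSD-style tgetstr() effectively compares only the first two characters of the requested id.  A sketch of that matching rule (an illustration of the legacy behavior, not the ncurses implementation):

```c
/* Legacy termcap capability names are two characters long, so the
 * classic lookup compares only the first two characters of "id"
 * against a database capability name "cap". */
static int
termcap_name_match(const char *id, const char *cap)
{
    return id != 0 && cap != 0 && id[0] != '\0' &&
           id[0] == cap[0] && id[1] == cap[1];
}
```

Under this rule a longer id such as "cmab" still matches "cm", which is the compatibility quirk some legacy applications rely on.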
20130105
+ add "-s" option to test/view.c to allow it to start in single-step
  mode, reducing size of trace files when it is used for debugging
  MinGW changes.
+ revert part of 20121222 change to tinfo_driver.c.
+ add experimental logic in win_driver.c to improve optimization of
  screen updates.  This does not yet work with double-width characters,
  so it is ifdef'd out for the moment (prompted by report by Erwin
  Waterlander regarding screen flicker).

20121229
+ fix Coverity warnings regarding copying into fixed-size buffers.
+ add throw-declarations in the c++ binding per Coverity warning.
+ minor changes to new-items for consistent reference to bug-report
  numbers.

20121222
+ add *.dSYM directories to clean-rule in ncurses directory makefile,
  for Mac OS builds.
+ add a configure check for gcc option -no-cpp-precomp, which is not
  available in all Mac OS X configurations (report by Andras Salamon,
  cf: 20011208).
+ improve 20021221 workaround for broken acs, handling a case where
  that ACS_xxx character is not in the acsc string but there is a known
  wide-character which can be used.

20121215
+ fix several warnings from clang 3.1 --analyze, includes correcting
  a null-pointer check in _nc_mvcur_resume.
+ correct display of double-width characters with MinGW port (report
  by Erwin Waterlander).
+ replace MinGW's wcrtomb(), fixing a problem with _nc_viscbuf.
> fixes based on Coverity report:
+ correct coloring in test/bs.c.
+ correct check for 8-bit value in _nc_insert_ch().
+ remove dead code in progs/tset.c, test/linedata.h.
+ add null-pointer checks in lib_tracemse.c, panel.priv.h, and some
  test-programs.
20121208
+ modify test/knight.c to show the number of choices possible for
  each position in automove option, e.g., to allow user to follow
  Warnsdorff's rule to solve the puzzle.
+ modify test/hanoi.c to show the minimum number of moves possible for
  the given number of tiles (prompted by patch by Lucas Gioia).
> fixes based on Coverity report:
+ remove a few redundant checks.
+ correct logic in test/bs.c, when randomly placing a specific type of
  ship.
+ check return value from remove/unlink in tic.
+ check return value from sscanf in test/ncurses.c.
+ fix a null dereference in c++/cursesw.cc.
+ fix two instances of uninitialized variables when configuring for the
  terminal driver.
+ correct scope of variable used in SetSafeOutcWrapper macro.
+ set umask when calling mkstemp in tic.
+ initialize wbkgrndset() temporary variable when extended-colors are
  used.

20121201
+ also replace MinGW's wctomb(), fixing a problem with setcchar().
+ modify test/view.c to load UTF-8 when built with MinGW by using
  regular win32 API because the MinGW functions mblen() and mbtowc()
  do not work.

20121124
+ correct order of color initialization versus display in some of the
  test-programs, e.g., test_addstr.c.
> fixes based on Coverity report:
+ delete windows on exit from some of the test-programs.

20121117
> fixes based on Coverity report:
+ add missing braces around FreeAndNull in two places.
+ various fixes in test/ncurses.c.
+ improve limit-checks in tinfo/make_hash.c, tinfo/read_entry.c.
+ correct malloc size in progs/infocmp.c.
+ guard against negative array indices in test/knight.c.
+ fix off-by-one limit check in test/color_name.h.
+ add null-pointer check in progs/tabs.c, test/bs.c, test/demo_forms.c,
  test/inchs.c.
+ fix memory-leak in tinfo/lib_setup.c, progs/toe.c,
  test/clip_printw.c, test/demo_menus.c.
+ delete unused windows in test/chgat.c, test/clip_printw.c,
  test/insdelln.c, test/newdemo.c on error-return.

20121110
+ modify configure macro CF_INCLUDE_DIRS to put $CPPFLAGS after the
  local -I include options in case someone has set conflicting -I
  options in $CPPFLAGS (prompted by patch for ncurses/Makefile.in by
  Vassili Courzakis).
+ modify the ncurses*-config scripts to eliminate relative paths from
  the RPATH_LIST variable, e.g., "../lib" as used in installing shared
  libraries or executables.

20121102
+ realign these related pages:
  curs_add_wchstr.3x
  curs_addchstr.3x
  curs_addstr.3x
  curs_addwstr.3x
  and fix a long-ago error in curs_addstr.3x which said that a -1
  length parameter would only write as much as fit onto one line
  (report by Reuben Thomas).
+ remove obsolete fallback _nc_memmove() for memmove()/bcopy().
+ remove obsolete fallback _nc_strdup() for strdup().
+ cancel any debug-rpm in package/ncurses.spec.
+ reviewed vte-2012, reverted most of the change since it was incorrect
  based on testing with tack -TD
+ un-cancel the initc in vte-256color, since this was implemented
  starting with version 0.20 in 2009 -TD

20121026
+ improve malloc/realloc checking (prompted by discussion in Redhat
  #866989).
+ add ncurses test-program as "ncurses6" to the rpm- and dpkg-scripts.
+ updated configure macros CF_GCC_VERSION and CF_WITH_PATHLIST.  The
  first corrects pattern used for Mac OS X's customization of gcc.

20121017
+ fix change to _nc_scroll_optimize(), which incorrectly freed memory
  (Redhat #866989).

20121013
+ add vte-2012, gnome-2012, making these the defaults for vte/gnome
  (patch by Christian Persch).

20121006
+ improve CF_GCC_VERSION to work around Debian's customization of gcc
  --version message.
+ improve configure macros as done in byacc:
  + drop 2.13 compatibility; use 2.52.xxxx version only since EMX port
    has used that for a while.
  + add 3rd parameter to AC_DEFINE's to allow autoheader to run, i.e.,
    for experimental use.
  + remove unused configure macros.
+ modify configure script and makefiles to quiet new autoconf warning
  for LIBS_TO_MAKE variable.
+ modify configure script to show $PATH_SEPARATOR variable.
+ update config.guess, config.sub

20120922
+ modify setupterm to set its copy of TERM to "unknown" if configured
  for the terminal driver and TERM was null or empty.
+ modify treatment of TERM variable for MinGW port to allow explicit
  use of the windows console driver by checking if $TERM is set to
  "#win32con" or an abbreviation of that.
+ undo recent change to fallback definition of vsscanf() to build with
  older Solaris compilers (cf: 20120728).

20120908
+ add test-screens to test/ncurses to show 256-characters at a time,
  to help with MinGW port.

20120903
+ simplify varargs logic in lib_printw.c; va_copy is no longer needed
  there.
+ modifications for MinGW port to make wide-character display usable.

20120902
+ regenerate configure script (report by Sven Joachim, cf: 20120901).

20120901
+ add a null-pointer check in _nc_flush (cf: 20120825).
+ fix a case in _nc_scroll_optimize() where the _oldnums_list array
  might not be allocated.
+ improve comparisons in configure.in for unset shell variables.

20120826
+ increase size of ncurses' output-buffer, in case of very small
  initial screen-sizes.
+ fix evaluation of TERMINFO and TERMINFO_DIRS default values as needed
  after changes to use --datarootdir (reports by Gabriele Balducci,
  Roumen Petrov).

20120825
+ change output buffering scheme, using buffer maintained by ncurses
  rather than stdio, to avoid problems with SIGTSTP handling (report
  by Brian Bloniarz).

20120811
+ update autoconf patch to 2.52.20120811, adding --datarootdir
  (prompted by discussion with Erwin Waterlander).
+ improve description of --enable-reentrant option in README and the
  INSTALL file.
+ add nsterm-256color, make this the default nsterm -TD
+ remove bw from nsterm-bce, per testing with tack -TD

20120804
+ update test/configure, adding check for tinfo library.
+ improve limit-checks for the getch fifo (report by Werner Fink).
+ fix a remaining mismatch between $with_echo and the symbols updated
  for CF_DISABLE_ECHO affecting parameters for mk-2nd.awk (report by
  Sven Joachim, cf: 20120317).
+ modify followup check for pkg-config's library directory in the
  --enable-pc-files option to validate syntax (report by Sven Joachim,
  cf: 20110716).

20120728
+ correct path for ncurses_mingw.h in include/headers, in case build
  is done outside source-tree (patch by Roumen Petrov).
+ modify some older xterm entries to align with xterm source -TD
+ separate "xterm-old" alias from "xterm-r6" -TD
+ add E3 extended capability to xterm-basic and putty -TD
+ parenthesize parameters of other macros in curses.h -TD
+ parenthesize parameter of COLOR_PAIR and PAIR_NUMBER in curses.h
  in case it happens to be a comma-expression, etc. (patch by Nick
  Black).
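The parenthesization items under 20120728 guard against macro arguments whose operators bind more loosely than the macro body's own operators.  A stand-alone illustration with hypothetical macros (these are not the actual curses.h definitions, just the same before/after pattern):

```c
/* Hypothetical macros showing the fix: the curses.h change wraps the
 * macro parameter in parentheses so low-precedence argument
 * expressions group as the caller expects. */
#define PAIR_BITS_BAD(n)  (n << 8)   /* before: parameter unparenthesized */
#define PAIR_BITS_GOOD(n) ((n) << 8) /* after: parameter parenthesized */

/* With an argument like "1 | 1", the unparenthesized form expands to
 * (1 | 1 << 8), i.e. 1 | 256 == 257, because << binds tighter than |.
 * The parenthesized form gives ((1 | 1) << 8), i.e. 256. */
static int pair_bits_bad_example(void)  { return PAIR_BITS_BAD(1 | 1); }
static int pair_bits_good_example(void) { return PAIR_BITS_GOOD(1 | 1); }
```

The same reasoning applies to comma-expressions and any other operator with lower precedence than the one in the macro body.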
20120721
+ improved form_request_by_name() and menu_request_by_name().
+ eliminate two fixed-size buffers in toe.c.
+ extend use_tioctl() to have expected behavior when use_env(FALSE) and
  use_tioctl(TRUE) are called.
+ modify ncurses test-program, adding -E and -T options to demonstrate
  use_env() versus use_tioctl().

20120714
+ add use_tioctl() function (adapted from patch by Werner Fink,
  Novell #769788).

20120707
+ add ncurses_mingw.h to installed headers (prompted by patch by
  Juergen Pfeifer).
+ clarify return-codes from wgetch() in response to SIGWINCH (prompted
  by Novell #769788).
+ modify resizeterm() to always push a KEY_RESIZE onto the fifo, even
  if screensize is unchanged.  Modify _nc_update_screensize() to push a
  KEY_RESIZE if there was a SIGWINCH, even if it does not call
  resizeterm().  These changes eliminate the case where a SIGWINCH is
  received, but ERR returned from wgetch or wgetnstr because the screen
  dimensions did not change (Novell #769788).

20120630
+ add --enable-interop to sample package scripts (suggested by Juergen
  Pfeifer).
+ update CF_PATH_SYNTAX macro, from mawk changes.
+ modify mk-0th.awk to allow for generating llib-ltic, etc., though
  some work is needed on cproto to work with lib_gen.c to update
  llib-lncurses.
+ remove redundant getenv() call in database-iterator leftover from
  cleanup in 20120622 changes (report by Sven Joachim).

20120622
+ add -d, -e and -q options to test/demo_terminfo and test/demo_termcap.
+ fix caching of environment variables in database-iterator (patch by
  Philippe Troin, Redhat #831366).

20120616
+ add configure check to distinguish clang from gcc to eliminate
  warnings about unused command-line parameters when compiler warnings
  are enabled.
  + improve behavior when updating terminfo entries which are hardlinked
    by allowing for the possibility that an alias has been repurposed to
    a new primary name.
  + fix some strict compiler warnings based on package scripts.
  + further fixes for configure check for working poll (Debian #676461).

20120608
  + fix an uninitialized variable in -c/-n logic for infocmp changes
    (cf: 20120526).
  + corrected fix for building c++ binding with clang 3.0 (report/patch
    by Richard Yao, Gentoo #417613, cf: 20110409)
  + correct configure check for working poll, fixing the case where stdin
    is redirected, e.g., in rpm/dpkg builds (Debian #676461).
  + add rpm- and dpkg-scripts, to test those build-environments.
    The resulting packages are used only for testing.

20120602
  + add kdch1 aka "Remove" to vt220 and vt220-8 entries -TD
  + add kdch1, etc., to qvt108 -TD
  + add dl1/il1 to some entries based on dl/il values -TD
  + add dl to simpleterm -TD
  + add consistency-checks in tic for insert-line vs delete-line
    controls, and insert/delete-char keys
  + correct no-leaks logic in infocmp when doing comparisons, fixing
    duplicate free of entries given via the command-line, and freeing
    entries loaded from the last-but-one of files specified on the
    command-line.
  + add kdch1 to wsvt25 entry from NetBSD CVS (reported by David Lord,
    analysis by Martin Husemann).
  + add cnorm/civis to wsvt25 entry from NetBSD CVS (report/analysis by
    Onno van der Linden).

20120526
  + extend -c and -n options of infocmp to allow comparing more than two
    entries.
  + correct check in infocmp for number of terminal names when more than
    two are given.
  + correct typo in curs_threads.3x (report by Yanhui Shen on
    freebsd-hackers mailing list).
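The 20120608 poll() fix above reflects a portability detail worth spelling out: when stdin is redirected from a regular file (as in rpm/dpkg builds), poll() always reports the descriptor readable, so a configure check that expects terminal-like behavior on fd 0 can misjudge the implementation. A sketch of that behavior (the helper name is made up for illustration):

```c
#include <poll.h>
#include <stdio.h>

/* Hypothetical helper: report whether a stream is readable right now
   according to poll().  For a regular file this is always true, which
   is the case the configure check had to account for. */
static int is_readable_now(FILE *fp) {
    struct pollfd pfd;
    pfd.fd = fileno(fp);
    pfd.events = POLLIN;
    return poll(&pfd, 1, 0) == 1 && (pfd.revents & POLLIN) != 0;
}
```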
20120512
  + corrected 'op' for bterm (report by Samuel Thibault) -TD
  + modify test/background.c to demonstrate a background character
    holding a colored ACS_HLINE.  The behavior differs from SVr4 due to
    the thick- and double-line extension (cf: 20091003).
  + modify handling of acs characters in PutAttrChar to avoid mapping an
    unmapped character to a space with A_ALTCHARSET set.
  + rewrite vt520 entry based on vt420 -TD

20120505
  + remove p6 (bold) from opus3n1+ for consistency -TD
  + remove acs stuff from env230 per clues in Ingres termcap -TD
  + modify env230 sgr/sgr0 to match other capabilities -TD
  + modify smacs/rmacs in bq300-8 to match sgr/sgr0 -TD
  + make sgr for dku7202 agree with other caps -TD
  + make sgr for ibmpc agree with other caps -TD
  + make sgr for tek4107 agree with other caps -TD
  + make sgr for ndr9500 agree with other caps -TD
  + make sgr for sco-ansi agree with other caps -TD
  + make sgr for d410 agree with other caps -TD
  + make sgr for d210 agree with other caps -TD
  + make sgr for d470c, d470c-7b agree with other caps -TD
  + remove redundant AC_DEFINE for NDEBUG versus Makefile definition.
  + fix a back-link in _nc_delink_entry(), which is needed if ncurses is
    configured with --enable-termcap and --disable-getcap.

20120428
  + fix some inconsistencies between vt320/vt420, e.g., cnorm/civis -TD
  + add eslok flag to dec+sl -TD
  + dec+sl applies to vt320 and up -TD
  + drop wsl width from xterm+sl -TD
  + reuse xterm+sl in putty and nsca-m -TD
  + add ansi+tabs to vt520 -TD
  + add ansi+enq to vt220-vt520 -TD
  + fix a compiler warning in example in ncurses-intro.doc (Paul Waring).
  + added paragraph in keyname manpage telling how extended capabilities
    are interpreted as key definitions.
  + modify tic's check of conflicting key definitions to include extended
    capability strings in addition to the existing check on predefined
    keys.

20120421
  + improve cleanup of temporary files in tic using atexit().
  + add msgr to vt420, similar DEC vtXXX entries -TD
  + add several missing vt420 capabilities from vt220 -TD
  + factor out ansi+pp from several entries -TD
  + change xterm+sl and xterm+sl-twm to include only the status-line
    capabilities and not "use=xterm", making them more generally useful
    as building-blocks -TD
  + add dec+sl building block, as example -TD

20120414
  + add XT to some terminfo entries to improve usefulness for other
    applications than screen, which would like to pretend that xterm's
    title is a status-line. -TD
  + change use-clauses in ansi-mtabs, hp2626, and hp2622 based on review
    of ordering and overrides -TD
  + add consistency check in tic for screen's "XT" capability.
  + add section in terminfo.src summarizing the user-defined capabilities
    used in that file -TD

20120407
  + fix an inconsistency between tic/infocmp "-x" option; tic omits all
    non-standard capabilities, while infocmp was ignoring only the user
    definable capabilities.
  + improve special case in tic parsing of description to allow it to be
    followed by terminfo capabilities.  Previously the description had to
    be the last field on an input line to allow tic to distinguish
    between termcap and terminfo format while still allowing commas to be
    embedded in the description.
  + correct variable name in gen_edit.sh which broke configurability of
    the --with-xterm-kbs option.
  + revert 2011-07-16 change to "linux" alias, return to "linux2.2" -TD
  + further amend 20110910 change, providing for configure-script
    override of the "linux" terminfo entry to install and changing the
    default for that to "linux2.2" (Debian #665959).

20120331
  + update Ada95/configure to use CF_DISABLE_ECHO (cf: 20120317).
  + correct order of use-clauses in st-256color -TD
  + modify configure script to look for gnatgcc if the Ada95 binding
    is built, in preference to the default gcc/cc (suggested by
    Nicolas Boulenguez).
  + modify configure script to ensure that the same -On option used for
    the C compiler in CFLAGS is used for ADAFLAGS rather than simply
    using "-O3" (suggested by Nicolas Boulenguez)

20120324
  + amend an old fix so that next_char() exits properly for empty files,
    e.g., from reading /dev/null (cf: 20080804).
  + modify tic so that it can read from the standard input, or from
    a character device.  Because tic uses seeks, this requires writing
    the data to a temporary file first (prompted by remark by Sven
    Joachim) (cf: 20000923).

20120317
  + correct a check made in lib_napms.c, so that terminfo applications
    can again use napms() (cf: 20110604).
  + add a note in tic.h regarding required casts for ABSENT_BOOLEAN
    (cf: 20040327).
  + correct scripting for --disable-echo option in test/configure.
  + amend check for missing c++ compiler to work when no error is
    reported, and no variables set (cf: 20021206).
  + add/use configure macro CF_DISABLE_ECHO.

20120310
  + fix some strict compiler warnings for abi6 and 64-bits.
  + use begin_va_copy/end_va_copy macros in lib_printw.c (cf: 20120303).
  + improve a limit-check in infocmp.c (Werner Fink):

20120303
  + minor tidying of terminfo.tail, clarify reason for limitation
    regarding mapping of \0 to \200
  + minor improvement to _nc_copy_termtype(), using memcpy to replace
    loops.
  + fix no-leaks checking in test/demo_termcap.c to account for multiple
    calls to setupterm().
  + modified the libgpm change to show previous load as a problem in the
    debug-trace.
  > merge some patches from OpenSUSE rpm (Werner Fink):
  + ncurses-5.7-printw.dif, fixes for varargs handling in lib_printw.c
  + ncurses-5.7-gpm.dif, do not dlopen libgpm if already loaded by
    runtime linker
  + ncurses-5.6-fallback.dif, do not free arrays and strings from static
    fallback entries

20120228
  + fix breakage in tic/infocmp from 20120225 (report by Werner Fink).

20120225
  + modify configure script to allow creating dll's for MinGW when
    cross-compiling.
  + add --enable-string-hacks option to control whether strlcat and
    strlcpy may be used.  The same issue applies to OpenBSD's warnings
    about snprintf, noting that this function is weakly standardized.
  + add configure checks for strlcat, strlcpy and snprintf, to help
    reduce bogus warnings with OpenBSD builds.
  + build-fix for OpenBSD 4.9 to supply consistent intptr_t declaration
    (cf: 20111231)
  + update config.guess, config.sub

20120218
  + correct CF_ETIP_DEFINES configure macro, making it exit properly on
    the first success (patch by Pierre Labastie).
  + improve configure macro CF_MKSTEMP by moving existence-check for
    mkstemp out of the AC_TRY_RUN, to help with cross-compiles.
  + improve configure macro CF_FUNC_POLL from luit changes to detect
    broken implementations, e.g., with Mac OS X.
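The --enable-string-hacks entries above (20120225) concern strlcat/strlcpy, BSD functions that are not universally available. A common fallback with strlcpy semantics looks like the following sketch (a generic compatibility shim, not ncurses' own code):

```c
#include <stddef.h>
#include <string.h>

/* Fallback with BSD strlcpy semantics: copy at most size-1 bytes,
   always NUL-terminate when size > 0, and return strlen(src) so the
   caller can detect truncation.  A sketch, not ncurses' implementation. */
static size_t compat_strlcpy(char *dst, const char *src, size_t size) {
    size_t len = strlen(src);
    if (size > 0) {
        size_t copy = (len >= size) ? size - 1 : len;
        memcpy(dst, src, copy);
        dst[copy] = '\0';
    }
    return len;
}
```

A return value greater than or equal to the buffer size signals that truncation occurred, which is the property that makes the interface less error-prone than strncpy.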
  + add configure option --with-tparm-arg
  + build-fix for MinGW cross-compiling, so that make_hash does not
    depend on TTY definition (cf: 20111008).

20120211
  + make sgr for xterm-pcolor agree with other caps -TD
  + make sgr for att5425 agree with other caps -TD
  + make sgr for att630 agree with other caps -TD
  + make sgr for linux entries agree with other caps -TD
  + make sgr for tvi9065 agree with other caps -TD
  + make sgr for ncr260vt200an agree with other caps -TD
  + make sgr for ncr160vt100pp agree with other caps -TD
  + make sgr for ncr260vt300an agree with other caps -TD
  + make sgr for aaa-60-dec-rv, aaa+dec agree with other caps -TD
  + make sgr for cygwin, cygwinDBG agree with other caps -TD
  + add configure option --with-xterm-kbs to simplify configuration for
    Linux versus most other systems.

20120204
  + improved tic -D option, avoid making target directory and provide
    better diagnostics.

20120128
  + add mach-gnu (Debian #614316, patch by Samuel Thibault)
  + add mach-gnu-color, tweaks to mach-gnu terminfo -TD
  + make sgr for sun-color agree with smso -TD
  + make sgr for prism9 agree with other caps -TD
  + make sgr for icl6404 agree with other caps -TD
  + make sgr for ofcons agree with other caps -TD
  + make sgr for att5410v1, att4415, att620 agree with other caps -TD
  + make sgr for aaa-unk, aaa-rv agree with other caps -TD
  + make sgr for avt-ns agree with other caps -TD
  + amend fix intended to separate fixups for acsc to allow "tic -cv" to
    give verbose warnings (cf: 20110730).
  + modify misc/gen-edit.sh to make the location of the tabset directory
    consistent with misc/Makefile.in, i.e., using ${datadir}/tabset
    (Debian #653435, patch by Sven Joachim).

20120121
  + add --with-lib-prefix option to allow configuring for old/new flavors
    of OS/2 EMX.
  + modify check for gnat version to allow for year, as used in FreeBSD
    port.
  + modify check_existence() in db_iterator.c to simply check if the
    path is a directory or file, according to the need.  Checking for
    directory size also gives no usable result with OS/2 (cf: 20120107).
  + support OS/2 kLIBC (patch by KO Myung-Hun).

20120114
  + several improvements to test/movewindow.c (prompted by discussion on
    Linux Mint forum):
    + modify movement commands to make them continuous
    + rewrote the test for mvderwin
    + rewrote the test for recursive mvwin
  + split-out reusable CF_WITH_NCURSES_ETC macro in test/configure.in
  + updated configure macro CF_XOPEN_SOURCE, build-fixes for Mac OS X
    and OpenBSD.
  + regenerated html manpages.

20120107
  + various improvements for MinGW (Juergen Pfeifer):
    + modify stat() calls to ignore the st_size member
    + drop mk-dlls.sh script.
    + change recommended regular expression library.
  + modify rain.c to allow for threaded configuration.
  + modify tset.c to allow for case when size-change logic is not used.

20111231
  + modify toe's report when -a and -s options are combined, to add
    a column showing which entries belong to a given database.
  + add -s option to toe, to sort its output.
  + modify progs/toe.c, simplifying use of db-iterator results to use
    caching improvements from 20111001 and 20111126.
  + correct generation of pc-files when ticlib or termlib options are
    given to rename the corresponding tic- or tinfo-libraries (report
    by Sven Joachim).

20111224
  + document a portability issue with tput, i.e., that scripts which work
    with ncurses may fail in other implementations that do no parameter
    analysis.
  + add putty-sco entry -TD

20111217
  + review/fix places in manpages where --program-prefix configure option
    was not being used.
  + add -D option to infocmp, to show the database locations that it
    could use.
  + fix build for the special case where term-driver, ticlib and termlib
    are all enabled.  The terminal driver depends on a few features in
    the base ncurses library, so tic's dependencies include both ncurses
    and termlib.
  + fix build for term-driver when --enable-wgetch-events option is
    enabled.
  + use <stdint.h> types to fix some questionable casts to void*.

20111210
  + modify configure script to check if thread library provides
    pthread_mutexattr_settype(), e.g., not provided by Solaris 2.6
  + modify configure script to suppress check to define _XOPEN_SOURCE
    for IRIX64, since its header files have a conflict versus
    _SGI_SOURCE.
  + modify configure script to add ".pc" files for tic- and
    tinfo-libraries, which were omitted in recent change (cf: 20111126).
  + fix inconsistent checks on $PKG_CONFIG variable in configure script.

20111203
  + modify configure-check for etip.h dependencies, supplying a temporary
    copy of ncurses_dll.h since it is a generated file (prompted by
    Debian #646977).
  + modify CF_CPP_PARAM_INIT "main" function to work with current C++.

20111126
  + correct database iterator's check for duplicate entries
    (cf: 20111001).
  + modify database iterator to ignore $TERMCAP when it is not an
    absolute pathname.
  + add -D option to tic, to show the database locations that it could
    use.
  + improve description of database locations in tic manpage.
  + modify the configure script to generate a list of the ".pc" files to
    generate, rather than deriving the list from the libraries which have
    been built (patch by Mike Frysinger).
  + use AC_CHECK_TOOLS in preference to AC_PATH_PROGS when searching for
    ncurses*-config, e.g., in Ada95/configure and test/configure (adapted
    from patch by Mike Frysinger).
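The 20111126 database-iterator change above hinges on a simple distinction: a $TERMCAP value is only usable as a database location when it is an absolute pathname; a relative value (or an inline termcap entry stored in the variable) must not be treated as a path. A sketch of that test (simplified; the real iterator does more than this check):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper illustrating the rule: only an absolute pathname
   in $TERMCAP is treated as a database location by the iterator. */
static bool termcap_is_pathname(const char *value) {
    return value != NULL && value[0] == '/';
}
```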
20111119
  + remove obsolete/conflicting fallback definition for _POSIX_SOURCE
    from curses.priv.h, fixing a regression with IRIX64 and Tru64
    (cf: 20110416)
  + modify _nc_tic_dir() to ensure that its return-value is nonnull,
    i.e., the database iterator was not initialized.  This case is needed
    when tic is translating to termcap, rather than loading the
    database (cf: 20111001).

20111112
  + add pccon entries for OpenBSD console (Alexei Malinin).
  + build-fix for OpenBSD 4.9 with gcc 4.2.1, setting _XOPEN_SOURCE to
    600 to work around inconsistent ifdef'ing of wcstof between C and
    C++ header files.
  + modify capconvert script to accept more than exact match on "xterm",
    e.g., the "xterm-*" variants, to exclude from the conversion (patch
    by Robert Millan).
  + add -lc_r as alternative for -lpthread, allows build of threaded code
    on older FreeBSD machines.
  + build-fix for MirBSD, which fails when either _XOPEN_SOURCE or
    _POSIX_SOURCE are defined.
  + fix a typo in misc/Makefile.in, used in uninstalling pc-files.

20111030
  + modify make_db_path() to allow creating "terminfo.db" in the same
    directory as an existing "terminfo" directory.  This fixes a case
    where switching between hashed/filesystem databases would cause the
    new hashed database to be installed in the next best location -
    root's home directory.
  + add variable cf_cv_prog_gnat_correct to those passed to
    config.status, fixing a problem with Ada95 builds (cf: 20111022).
  + change feature test from _XPG5 to _XOPEN_SOURCE in two places, to
    accommodate broken implementations for _XPG6.
  + eliminate usage of NULL symbol from etip.h, to reduce header
    interdependencies.
  + add configure check to decide when to add _XOPEN_SOURCE define to
    compiler options, i.e., for Solaris 10 and later (cf: 20100403).
    This is a workaround for gcc 4.6, which fails to build the c++
    binding if that symbol is defined by the application, due to
    incorrectly combining the corresponding feature test macros
    (report by Peter Kruse).

20111022
  + correct logic for discarding mouse events, retaining the partial
    events used to build up click, double-click, etc., until needed
    (cf: 20110917).
  + fix configure script to avoid creating unused Ada95 makefile when
    gnat does not work.
  + cleanup width-related gcc 3.4.3 warnings for 64-bit platform, for the
    internal functions of libncurses.  The external interface of course
    uses bool, which still produces these warnings.

20111015
  + improve description of --disable-tic-depends option to make it
    clear that it may be useful whether or not the --with-termlib
    option is also given (report by Sven Joachim).
  + amend termcap equivalent for set_pglen_inch to use the X/Open
    "YI" rather than the obsolete Solaris 2.5 "sL" (cf: 990109).
  + improve manpage for tgetent differences from termcap library.

20111008
  + moved static data from db_iterator.c to lib_data.c
  + modify db_iterator.c for memory-leak checking, fix one leak.
  + modify misc/gen-pkgconfig.in to use Requires.private for the parts
    of ncurses rather than Requires, as well as Libs.private for the
    other library dependencies (prompted by Debian #644728).

20111001
  + modify tic "-K" option to only set the strict-flag rather than force
    source-output.  That allows the same flag to control the parser for
    input and output of termcap source.
  + modify _nc_getent() to ignore backslash at the end of a comment line,
    making it consistent with ncurses' parser.
  + restore a special-case check for directory needed to make termcap
    text files load as if they were databases (cf: 20110924).
  + modify tic's resolution/collision checking to attempt to remove the
    conflicting alias from the second entry in the pair, which is
    normally following in the source file.  Also improved the warning
    message to make it simpler to see which alias is the problem.
  + improve performance of the database iterator by caching search-list.

20110925
  + add a missing "else" in changes to _nc_read_tic_entry().

20110924
  + modify _nc_read_tic_entry() so that hashed-database is checked before
    filesystem.
  + updated CF_CURSES_LIBS check in test/configure script.
  + modify configure script and makefiles to split TIC_ARGS and
    TINFO_ARGS into pieces corresponding to LDFLAGS and LIBS variables,
    to help separate searches for tic- and tinfo-libraries (patch by Nick
    Alcock aka "Nix").
  + build-fix for lib_mouse.c changes (cf: 20110917).

20110917
  + fix compiler warning for clang 2.9
  + improve merging of mouse events (integrated patch by Damien
    Guibouret).
  + correct mask-check used in lib_mouse for wheel mouse buttons 4/5
    (patch by Damien Guibouret).

20110910
  + modify misc/gen_edit.sh to select a "linux" entry which works with
    the current kernel rather than assuming it is always "linux3.0"
    (cf: 20110716).
  + revert a change to getmouse() which had the undesirable side-effect
    of suppressing button-release events (report by Damien Guibouret,
    cf: 20100102).
  + add xterm+kbs fragment from xterm #272 -TD
  + add configure option --with-pkg-config-libdir to provide control over
    the actual directory into which pc-files are installed, do not use
    the pkg-config environment variables (discussion with Frederic L W
    Meunier).
  + add link to mailing-list archive in announce.html.in, as done in
    FAQ (prompted by question by Andrius Bentkus).
  + improve manpage install by adjusting the "#include" examples to
    show the ncurses-subdirectory used when --disable-overwrite option
    is used.
  + install an alias for "curses" to the ncurses manpage, tied to the
    --with-curses-h configure option (suggested by Reuben Thomas).

20110903
  + propagate error-returns from wresize, i.e., the internal
    increase_size and decrease_size functions through resize_term (report
    by Tim van der Molen, cf: 20020713).
  + fix typo in tset manpage (patch by Sven Joachim).

20110820
  + add a check to ensure that termcap files which might have "^?" do
    not use the terminfo interpretation as "\177".
  + minor cleanup of X-terminal emulator section of terminfo.src -TD
  + add terminator entry -TD
  + add simpleterm entry -TD
  + improve wattr_get macros by ensuring that if the window pointer is
    null, then the attribute and color values returned will be zero
    (cf: 20110528).

20110813
  + add substitution for $RPATH_LIST to misc/ncurses-config.in
  + improve performance of tic with hashed-database by caching the
    database connection, using atexit() to cleanup.
  + modify treatment of 2-character aliases at the beginning of termcap
    entries so they are not counted in use-resolution, since these are
    guaranteed to be unique.  Also ignore these aliases when reporting
    the primary name of the entry (cf: 20040501)
  + double-check gn (generic) flag in terminal descriptions to
    accommodate old/buggy termcap databases which misused that feature.
  + minor fixes to _nc_tgetent(), ensure buffer is initialized even on
    error-return.

20110807
  + improve rpath fix from 20110730 by ensuring that the new $RPATH_LIST
    variable is defined in the makefiles which use it.
  + build-fix for DragonFlyBSD's pkgsrc in test/configure script.
  + build-fixes for NetBSD 5.1 with termcap support enabled.
  + corrected k9 in dg460-ansi, add other features based on manuals -TD
  + improve trimming of whitespace at the end of terminfo/termcap output
    from tic/infocmp.
  + when writing termcap source, ensure that colons in the description
    field are translated to a non-delimiter, i.e., "=".
  + add "-0" option to tic/infocmp, to make the termcap/terminfo source
    use a single line.
  + add a null-pointer check when handling the $CC variable.

20110730
  + modify configure script and makefiles in c++ and progs to allow the
    directory used for rpath option to be overridden, e.g., to work
    around updates to the variables used by tic during an install.
  + add -K option to tic/infocmp, to provide stricter BSD-compatibility
    for termcap output.
  + add _nc_strict_bsd variable in tic library which controls the
    "strict" BSD termcap compatibility from 20110723, plus these
    features:
    + allow escapes such as "\8" and "\9" when reading termcap
    + disallow "\a", "\e", "\l", "\s" and "\:" escapes when reading
      termcap files, passing through "a", "e", etc.
    + expand "\:" as "\072" on output.
  + modify _nc_get_token() to reset the token's string value in case
    there is a string-typed token lacking the "=" marker.
  + fix a few memory leaks in _nc_tgetent.
  + fix a few places where reading from a termcap file could refer to
    freed memory.
  + add an overflow check when converting terminfo/termcap numeric
    values, since terminfo stores those in a short, and they must be
    positive.
  + correct internal variables used for translating to termcap "%>"
    feature, and translating from termcap %B to terminfo, needed by
    tctest (cf: 19991211).
  + amend a minor fix to acsc when loading a termcap file to separate it
    from warnings needed for tic (cf: 20040710)
  + modify logic in _nc_read_entry() and _nc_read_tic_entry() to allow
    a termcap file to be handled via TERMINFO_DIRS.
  + modify _nc_infotocap() to include non-mandatory padding when
    translating to termcap.
  + modify _nc_read_termcap_entry(), passing a flag in the case where
    getcap is used, to reduce interactive warning messages.

20110723
  + add a check in start_color() to limit color-pairs to 256 when
    extended colors are not supported (patch by David Benjamin).
  + modify setcchar to omit no-longer-needed OR'ing of color pair in
    the SetAttr() macro (patch by David Benjamin).
  + add kich1 to sun terminfo entry (Yuri Pankov)
  + use bold rather than reverse for smso in sun-color terminfo entry
    (Yuri Pankov).
  + improve generation of termcap using tic/infocmp -C option, e.g.,
    to correspond with 4.2BSD (prompted by discussion with Yuri Pankov
    regarding Schilling's test program):
    + translate %02 and %03 to %2 and %3 respectively.
    + suppress string capabilities which use %s, not supported by tgoto
    + use \040 rather than \s
    + expand null characters as \200 rather than \0
  + modify configure script to support shared libraries for DragonFlyBSD.

20110716
  + replace an assert() in _nc_Free_Argument() with a regular null
    pointer check (report/analysis by Franjo Ivancic).
  + modify configure --enable-pc-files option to take into account the
    PKG_CONFIG_PATH variable (report by Frederic L W Meunier).
  + add/use xterm+tmux chunk from xterm #271 -TD
  + resync xterm-new entry from xterm #271 -TD
  + add E3 extended capability to linux-basic (Miroslav Lichvar)
  + add linux2.2, linux2.6, linux3.0 entries to give context for E3 -TD
  + add SI/SO change to linux2.6 entry (Debian #515609) -TD
  + fix inconsistent tabset path in pcmw (Todd C. Miller).
  + remove a backslash which continued comment, obscuring altos3
    definition with OpenBSD toolset (Nicholas Marriott).

20110702
  + add workaround from xterm #271 changes to ensure that compiler flags
    are not used in the $CC variable.
  + improve support for shared libraries, tested with AIX 5.3, 6.1 and
    7.1 with both gcc 4.2.4 and cc.
  + modify configure checks for AIX to include release 7.x
  + add loader flags/libraries to libtool options so that dynamic loading
    works properly, adapted from ncurses-5.7-ldflags-with-libtool.patch
    at gentoo prefix repository (patch by Michael Haubenwallner).

20110626
  + move include of nc_termios.h out of term_entry.h, since the latter
    is installed, e.g., for tack while the former is not (report by
    Sven Joachim).

20110625
  + improve cleanup() function in lib_tstp.c, using _exit() rather than
    exit() and checking for SIGTERM rather than SIGQUIT (prompted by
    comments forwarded by Nicholas Marriott).
  + reduce name pollution from term.h, moving fallback #define's for
    tcgetattr(), etc., to new private header nc_termios.h (report by
    Sergio NNX).
  + two minor fixes for tracing (patch by Vassili Courzakis).
  + improve trace initialization by starting it in use_env() and
    ripoffline().
  + review old email, add details for some changelog entries.

20110611
  + update minix entry to minix 3.2 (Thomas Cort).
  + fix a strict compiler warning in change to wattr_get (cf: 20110528).
20110604
  + fixes for MirBSD port:
    + set default prefix to /usr.
    + add support for shared libraries in configure script.
    + use S_ISREG and S_ISDIR consistently, with fallback definitions.
  + add a few more checks based on ncurses/link_test.
  + modify MKlib_gen.sh to handle sp-funcs renaming of NCURSES_OUTC type.

20110528
  + add case to CF_SHARED_OPTS for Interix (patch by Markus Duft).
  + used ncurses/link_test to check for behavior when the terminal has
    not been initialized and when an application passes null pointers
    to the library.  Added checks to cover this (prompted by Redhat
    #707344).
  + modify MKlib_gen.sh to make its main() function call each function
    with zero parameters, to help find inconsistent checking for null
    pointers, etc.

20110521
  + fix warnings from clang 2.7 "--analyze"

20110514
  + compiler-warning fixes in panel and progs.
  + modify CF_PKG_CONFIG macro, from changes to tin -TD
  + modify CF_CURSES_FUNCS configure macro, used in test directory
    configure script:
    + work around (non-optimizer) bug in gcc 4.2.1 which caused
      test-expression to be omitted from executable.
    + force the linker to see a link-time expression of a symbol, to
      help work around weak-symbol issues.

20110507
  + update discussion of MKfallback.sh script in INSTALL; normally the
    script is used automatically via the configured makefiles.  However
    there are still occasions when it might be used directly by packagers
    (report by Gunter Schaffler).
  + modify misc/ncurses-config.in to omit the "-L" option from the
    "--libs" output if the library directory is /usr/lib.
  + change order of tests for curses.h versus ncurses.h headers in the
    configure scripts for Ada95 and test-directories, to look for
    ncurses.h, from fixes to tin -TD
  + modify ncurses/tinfo/access.c to account for Tandem's root uid
    (report by Joachim Schmitz).

20110430
  + modify rules in Ada95/src/Makefile.in to ensure that the PIC option
    is not used when building a static library (report by Nicolas
    Boulenguez):
  + Ada95 build-fix for big-endian architectures such as sparc.  This
    undoes one of the fixes from 20110319, which added an "Unused" member
    to representation clauses, replacing that with pragmas to suppress
    warnings about unused bits (patch by Nicolas Boulenguez).

20110423
  + add check in test/configure for use_window, use_screen.
  + add configure-checks for getopt's variables, which may be declared
    as different types on some Unix systems.
  + add check in test/configure for some legacy curses types of the
    function pointer passed to tputs().
  + modify init_pair() to accept -1's for color value after
    assume_default_colors() has been called (Debian #337095).
  + modify test/background.c, adding command-line options to demonstrate
    assume_default_colors() and use_default_colors().

20110416
  + modify configure script/source-code to only define _POSIX_SOURCE if
    the checks for sigaction and/or termios fail, and if _POSIX_C_SOURCE
    and _XOPEN_SOURCE are undefined (report by Valentin Ochs).
  + update config.guess, config.sub

20110409
  + fixes to build c++ binding with clang 3.0 (patch by Alexander
    Kolesen).
  + add check for unctrl.h in test/configure, to work around breakage in
    some ncurses packages.
  + add "--disable-widec" option to test/configure script.
  + add "--with-curses-colr" and "--with-curses-5lib" options to the
    test/configure script to address testing with very old machines.
20110404	5.9 release for upload to

20110402
	+ various build-fixes for the rpm/dpkg scripts.
	+ add "--enable-rpath-link" option to Ada95/configure, to allow packages to suppress the rpath feature which is normally used for the in-tree build of sample programs.
	+ corrected definition of libdir variable in Ada95/src/Makefile.in, needed for rpm script.
	+ add "--with-shared" option to Ada95/configure script, to allow making the C-language parts of the binding use appropriate compiler options if building a shared library with gnat.

20110329
	> portability fixes for Ada95 binding:
	+ add configure check to ensure that SIGINT works with gnat.  This is needed for the "rain" sample program.  If SIGINT does not work, omit that sample program.
	+ correct typo in check of $PKG_CONFIG variable in Ada95/configure
	+ add ncurses_compat.c, to supply functions used in the Ada95 binding which were added in 5.7 and later.
	+ modify sed expression in CF_NCURSES_ADDON to eliminate a dependency upon GNU sed.

20110326
	+ add special check in Ada95/configure script for ncurses6 reentrant code.
	+ regen Ada html documentation.
	+ build-fix for Ada shared libraries versus the varargs workaround.
	+ add rpm and dpkg scripts for Ada95 and test directories, for test builds.
	+ update test/configure macros CF_CURSES_LIBS, CF_XOPEN_SOURCE and CF_X_ATHENA_LIBS.
	+ add configure check to determine if gnat's project feature supports libraries, i.e., collections of .ali files.
	+ make all dereferences in Ada95 samples explicit.
	+ fix typo in comment in lib_add_wch.c (patch by Petr Pavlu).
	+ add configure check for, ifdef's for math.h which is in a separate package on Solaris and potentially not installed (report by Petr Pavlu).
	> fixes for Ada95 binding (Nicolas Boulenguez):
	+ improve type-checking in Ada95 by eliminating a few warning-suppress pragmas.
	+ suppress unreferenced warnings.
	+ make all dereferences in binding explicit.

20110319
	+ regen Ada html documentation.
	+ change order of -I options from ncurses*-config script when the --disable-overwrite option was used, so that the subdirectory include is listed first.
	+ modify the make-tar.sh scripts to add a MANIFEST and NEWS file.
	+ modify configure script to provide value for HTML_DIR in Ada95/gen/Makefile.in, which depends on whether the Ada95 binding is distributed separately (report by Nicolas Boulenguez).
	+ modify configure script to add "-g" and/or "-O3" to ADAFLAGS if the CFLAGS for the build has these options.
	+ amend change from 20070324, to not add 1 to the result of getmaxx and getmaxy in the Ada binding (report by Nicolas Boulenguez for thread in comp.lang.ada).
	+ build-fix Ada95/samples for gnat 4.5
	+ spelling fixes for Ada95/samples/explain.txt
	> fixes for Ada95 binding (Nicolas Boulenguez):
	+ add item in Trace_Attribute_Set corresponding to TRACE_ATTRS.
	+ add workaround for binding to set_field_type(), which uses varargs.  The original binding from 990220 relied on the prevalent implementation of varargs which did not support or need va_copy().
	+ add dependency on gen/Makefile.in needed for *-panels.ads
	+ add Library_Options to library.gpr
	+ add Languages to library.gpr, for gprbuild

20110307
	+ revert changes to limit-checks from 20110122 (Debian #616711).
	> minor type-cleanup of Ada95 binding (Nicolas Boulenguez):
	+ corrected a minor sign error in a field of Low_Level_Field_Type, to conform to form.h.
	+ replaced C_Int by Curses_Bool as return type for some callbacks, see fieldtype(3FORM).
	+ modify samples/sample-explain.adb to provide explicit message when explain.txt is not found.

20110305
	+ improve makefiles for Ada95 tree (patch by Nicolas Boulenguez).
	+ fix an off-by-one error in _nc_slk_initialize() from 20100605 fixes for compiler warnings (report by Nicolas Boulenguez).
	+ modify Ada95/gen/gen.c to declare unused bits in generated layouts, needed to compile when chtype is 64-bits using gnat 4.4.5

20110226	5.8 release for upload to

20110226
	+ update release notes, for 5.8.
	+ regenerated html manpages.
	+ change open() in _nc_read_file_entry() to fopen() for consistency with write_file().
	+ modify misc/run_tic.in to create parent directory, in case this is a new install of hashed database.
	+ fix typo in Ada95/mk-1st.awk which causes error with original awk.

20110220
	+ configure script rpath fixes from xterm #269.
	+ workaround for cygwin's non-functional features.h, to force ncurses' configure script to define _XOPEN_SOURCE_EXTENDED when building wide-character configuration.
	+ build-fix in run_tic.sh for OS/2 EMX install
	+ add cons25-debian entry (patch by Brian M Carlson, Debian #607662).

20110212
	+ regenerated html manpages.
	+ use _tracef() in show_where() function of tic, to work correctly with special case of trace configuration.

20110205
	+ add xterm-utf8 entry as a demo of the U8 feature -TD
	+ add U8 feature to denote entries for terminal emulators which do not support VT100 SI/SO when processing UTF-8 encoding -TD
	+ improve the NCURSES_NO_UTF8_ACS feature by adding a check for an extended terminfo capability U8 (prompted by mailing list discussion).

20110122
	+ start documenting interface changes for upcoming 5.8 release.
	+ correct limit-checks in derwin().
	+ correct limit-checks in newwin(), to ensure that windows have nonzero size (report by Garrett Cooper).
	+ fix a missing "weak" declaration for pthread_kill (patch by Nicholas Alcock).
	+ improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted by discussion with Kevin Martin).

20110115
	+ modify Ada95/configure script to make the --with-curses-dir option work without requiring the --with-ncurses option.
	+ modify test programs to allow them to be built with NetBSD curses.
	+ document thick- and double-line symbols in curs_add_wch.3x manpage.
	+ document WACS_xxx constants in curs_add_wch.3x manpage.
	+ fix some warnings for clang 2.6 "--analyze"
	+ modify Ada95 makefiles to make html-documentation with the project file configuration if that is used.
	+ update config.guess, config.sub

20110108
	+ regenerated html manpages.
	+ minor fixes to enable lint when trace is not enabled, e.g., with clang --analyze.
	+ fix typo in man/default_colors.3x (patch by Tim van der Molen).
	+ update ncurses/llib-lncurses*

20110101
	+ fix remaining strict compiler warnings in ncurses library ABI=5, except those dealing with function pointers, etc.

20101225
	+ modify nc_tparm.h, adding guards against repeated inclusion, and allowing TPARM_ARG to be overridden.
	+ fix some strict compiler warnings in ncurses library.

20101211
	+ suppress ncv in screen entry, allowing underline (patch by Alejandro R Sedeno).
	+ also suppress ncv in konsole-base -TD
	+ fixes in wins_nwstr() and related functions to ensure that special characters, i.e., control characters are handled properly with the wide-character configuration.
	+ correct a comparison in wins_nwstr() (Redhat #661506).
	+ correct help-messages in some of the test-programs, which still referred to quitting with 'q'.
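The 20110122 entry above tightened newwin()'s limit-checks so that a window cannot be created with zero size. A stand-alone sketch of that kind of validation (illustrative only, not the ncurses source; in curses itself a 0 argument means "extend to the screen edge" and is expanded before any such check):

```c
/* Illustrative limit-check in the spirit of the newwin() fix:
 * reject zero/negative sizes and windows extending past the screen.
 * screen_lines/screen_cols stand in for the real SCREEN fields. */
static int
check_newwin(int num_lines, int num_columns, int begy, int begx,
             int screen_lines, int screen_cols)
{
    if (begy < 0 || begx < 0)
        return 0;                 /* origin outside the screen */
    if (num_lines <= 0 || num_columns <= 0)
        return 0;                 /* nonzero size required */
    if (begy + num_lines > screen_lines)
        return 0;                 /* bottom edge past the screen */
    if (begx + num_columns > screen_cols)
        return 0;                 /* right edge past the screen */
    return 1;
}
```

The actual fix lives inside the library, where a failed check makes newwin() return a null WINDOW pointer rather than an undersized window.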
20101204
	+ add special case to _nc_infotocap() to recognize the setaf/setab strings from xterm+256color and xterm+88color, and provide a reduced version which works with termcap.
	+ remove obsolete emacs "Local Variables" section from documentation (request by Sven Joachim).
	+ update doc/html/index.html to include NCURSES-Programming-HOWTO.html (report by Sven Joachim).

20101128
	+ modify test/configure and test/Makefile.in to handle this special case of building within a build-tree (Debian #34182):
	  mkdir -p build && cd build && ../test/configure && make

20101127
	+ miscellaneous build-fixes for Ada95 and test-directories when built out-of-tree.
	+ use VPATH in makefiles to simplify out-of-tree builds (Debian #34182).
	+ fix typo in rmso for tek4106 entry -Goran Weinholt

20101120
	+ improve checks in test/configure for X libraries, from xterm #267 changes.
	+ modify test/configure to allow it to use the build-tree's libraries e.g., when using that to configure the test-programs without the rpath feature (request by Sven Joachim).
	+ repurpose "gnome" terminfo entries as "vte", retaining "gnome" items for compatibility, but generally deprecating those since the VTE library is what actually defines the behavior of "gnome", etc., since 2003 -TD

20101113
	+ compiler warning fixes for test programs.
	+ various build-fixes for test-programs with pdcurses.
	+ updated configure checks for X packages in test/configure from xterm #267 changes.
	+ add configure check to gnatmake, to accommodate cygwin.

20101106
	+ correct list of sub-directories needed in Ada95 tree for building as a separate package.
	+ modify scripts in test-directory to improve builds as a separate package.
20101023
	+ correct parsing of relative tab-stops in tabs program (report by Philip Ganchev).
	+ adjust configure script so that "t" is not added to library suffix when weak-symbols are used, allowing the pthread configuration to more closely match the non-thread naming (report by Werner Fink).
	+ modify configure check for tic program, used for fallbacks, to a warning if not found.  This makes it simpler to use additional scripts to bootstrap the fallbacks code using tic from the build tree (report by Werner Fink).
	+ fix several places in configure script using ${variable-value} form.
	+ modify configure macro CF_LDFLAGS_STATIC to accommodate some loaders which do not support selectively linking against static libraries (report by John P. Hartmann)
	+ fix an unescaped dash in man/tset.1 (report by Sven Joachim).

20101009
	+ correct comparison used for setting 16-colors in linux-16color entry (Novell #644831) -TD
	+ improve linux-16color entry, using "dim" for color-8 which makes it gray rather than black like color-0 -TD
	+ drop misc/ncu-indent and misc/jpf-indent; they are provided by an external package "cindent".

20101002
	+ improve linkages in html manpages, adding references to the newer pages, e.g., *_variables, curs_sp_funcs, curs_threads.
	+ add checks in tic for inconsistent cursor-movement controls, and for inconsistent printer-controls.
	+ fill in no-parameter forms of cursor-movement where a parameterized form is available -TD
	+ fill in missing cursor controls where the form of the controls is ANSI -TD
	+ fix inconsistent punctuation in form_variables manpage (patch by Sven Joachim).
	+ add parameterized cursor-controls to linux-basic (report by Dae) -TD
	> patch by Juergen Pfeifer:
	+ document how to build 32-bit libraries in README.MinGW
	+ fixes to filename computation in mk-dlls.sh.in
	+ use POSIX locale in mk-dlls.sh.in rather than en_US (report by Sven Joachim).
	+ add a check in mk-dlls.sh.in to obtain the size of a pointer to distinguish between 32-bit and 64-bit hosts.  The result is stored in mingw_arch

20100925
	+ add "XT" capability to entries for terminals that support both xterm-style mouse- and title-controls, for "screen" which special-cases TERM beginning with "xterm" or "rxvt" -TD
	> patch by Juergen Pfeifer:
	+ use 64-Bit MinGW toolchain (recommended package from TDM, see README.MinGW).
	+ support pthreads when using the TDM MinGW toolchain

20100918
	+ regenerated html manpages.
	+ minor fixes for symlinks to curs_legacy.3x and curs_slk.3x manpages.
	+ add manpage for sp-funcs.
	+ add sp-funcs to test/listused.sh, for documentation aids.

20100911
	+ add manpages for summarizing public variables of curses-, terminfo- and form-libraries.
	+ minor fixes to manpages for consistency (patch by Jason McIntyre).
	+ modify tic's -I/-C dump to reformat acsc strings into canonical form (sorted, unique mapping) (cf: 971004).
	+ add configure check for pthread_kill(), needed for some old platforms.

20100904
	+ add configure option --without-tests, to suppress building test programs (request by Frederic L W Meunier).

20100828
	+ modify nsterm, xnuppc and tek4115 to make sgr/sgr0 consistent -TD
	+ add check in terminfo source-reader to provide more informative message when someone attempts to run tic on a compiled terminal description (prompted by Debian #593920).
	+ note in infotocap and captoinfo manpages that they read terminal descriptions from text-files (Debian #593920).
	+ improve acsc string for vt52, show arrow keys (patch by Benjamin Sittler).

20100814
	+ document in manpages that "mv" functions first use wmove() to check the window pointer and whether the position lies within the window (suggested by Poul-Henning Kamp).
	+ fixes to curs_color.3x, curs_kernel.3x and wresize.3x manpages (patch by Tim van der Molen).
	+ modify configure script to transform library names for tic- and tinfo-libraries so that those build properly with Mac OS X shared library configuration.
	+ modify configure script to ensure that it removes conftest.dSYM directory leftover on checks with Mac OS X.
	+ modify configure script to cleanup after check for symbolic links.

20100807
	+ correct a typo in mk-1st.awk (patch by Gabriele Balducci) (cf: 20100724)
	+ improve configure checks for location of tic and infocmp programs used for installing database and for generating fallback data, e.g., for cross-compiling.
	+ add Markus Kuhn's wcwidth function for compiling MinGW
	+ add special case to CF_REGEX for cross-compiling to MinGW target.

20100731
	+ modify initialization check for win32con driver to eliminate need for special case for TERM "unknown", using terminal database if available (prompted by discussion with Roumen Petrov).
	+ for MinGW port, ensure that terminal driver is setup if tgetent() is called (patch by Roumen Petrov).
	+ document tabs "-0" and "-8" options in manpage.
	+ fix Debian "lintian" issues with manpages reported in

20100724
	+ add a check in tic for missing set_tab if clear_all_tabs given.
	+ improve use of symbolic links in makefiles by using "-f" option if it is supported, to eliminate temporary removal of the target (prompted by)
	+ minor improvement to test/ncurses.c, reset color pairs in 'd' test after exit from 'm' main-menu command.
	+ improved ncu-indent, from mawk changes, allows more than one of GCC_NORETURN, GCC_PRINTFLIKE and GCC_SCANFLIKE on a single line.

20100717
	+ add hard-reset for rs2 to wsvt25 to help ensure that reset ends the alternate character set (patch by Nicholas Marriott)
	+ remove tar-copy.sh and related configure/Makefile chunks, since the Ada95 binding is now installed using rules in Ada95/src.

20100703
	+ continue integrating changes to use gnatmake project files in Ada95
	+ add/use configure check to turn on project rules for Ada95/src.
	+ revert the vfork change from 20100130, since it does not work.

20100626
	+ continue integrating changes to use gnatmake project files in Ada95
	+ old gnatmake (3.15) does not produce libraries using project-file; work around by adding script to generate alternate makefile.

20100619
	+ continue integrating changes to use gnatmake project files in Ada95
	+ add configure --with-ada-sharedlib option, for the test_make rule.
	+ move Ada95-related logic into aclocal.m4, since additional checks will be needed to distinguish old/new implementations of gnat.

20100612
	+ start integrating changes to use gnatmake project files in Ada95 tree
	+ add test_make / test_clean / test_install rules in Ada95/src
	+ change install-path for adainclude directory to /usr/share/ada (was /usr/lib/ada).
	+ update Ada95/configure.
	+ add mlterm+256color entry, for mlterm 3.0.0 -TD
	+ modify test/configure to use macros to ensure consistent order of updating LIBS variable.
20100605
	+ change search order of options for Solaris in CF_SHARED_OPTS, to work with 64-bit compiles.
	+ correct quoting of assignment in CF_SHARED_OPTS case for aix (cf: 20081227)

20100529
	+ regenerated html documentation.
	+ modify test/configure to support pkg-config for checking X libraries used by PDCurses.
	+ add/use configure macro CF_ADD_LIB to force consistency of assignments to $LIBS, etc.
	+ fix configure script for combining --with-pthread and --enable-weak-symbols options.

20100522
	+ correct cross-compiling configure check for CF_MKSTEMP macro, by adding a check cache variable set by AC_CHECK_FUNC (report by Pierre Labastie).
	+ simplify include-dependencies of make_hash and make_keys, to reduce the need for setting BUILD_CPPFLAGS in cross-compiling when the build- and target-machines differ.
	+ repair broken-linker configuration by restoring a definition of SP variable to curses.priv.h, and adjusting for cases where sp-funcs are used.
	+ improve configure macro CF_AR_FLAGS, allowing ARFLAGS environment variable to override (prompted by report by Pablo Cazallas).

20100515
	+ add configure option --enable-pthreads-eintr to control whether the new EINTR feature is enabled.
	+ modify logic in pthread configuration to allow EINTR to interrupt a read operation in wgetch() (Novell #540571, patch by Werner Fink).
	+ drop mkdirs.sh, use "mkdir -p".
	+ add configure option --disable-libtool-version, to use the "-version-number" feature which was added in libtool 1.5 (report by Peter Haering).  The default value for the option uses the newer feature, which makes libraries generated using libtool compatible with the standard builds of ncurses.
	+ updated test/configure to match configure script macros.
	+ fixes for configure script from lynx changes:
	  + improve CF_FIND_LINKAGE logic for the case where a function is found in predefined libraries.
	  + revert part of change to CF_HEADER (cf: 20100424)

20100501
	+ correct limit-check in wredrawln, accounting for begy/begx values (patch by David Benjamin).
	+ fix most compiler warnings from clang.
	+ amend build-fix for OpenSolaris, to ensure that a system header is included in curses.h before testing feature symbols, since they may be defined by that route.

20100424
	+ fix some strict compiler warnings in ncurses library.
	+ modify configure macro CF_HEADER_PATH to not look for variations in the predefined include directories.
	+ improve configure macros CF_GCC_VERSION and CF_GCC_WARNINGS to work with gcc 4.x's c89 alias, which gives warning messages for cases where older versions would produce an error.

20100417
	+ modify _nc_capcmp() to work with cancelled strings.
	+ correct translation of "^" in _nc_infotocap(), used to transform terminfo to termcap strings
	+ add configure --disable-rpath-hack, to allow disabling the feature which adds rpath options for libraries in unusual places.
	+ improve CF_RPATH_HACK_2 by checking if the rpath option for a given directory was already added.
	+ improve CF_RPATH_HACK_2 by using ldd to provide a standard list of directories (which will be ignored).

20100410
	+ improve win_driver.c handling of mouse:
	  + discard motion events
	  + avoid calling _nc_timed_wait when there is a mouse event
	  + handle 4th and "rightmost" buttons.
	+ quote substitutions in CF_RPATH_HACK_2 configure macro, needed for cases where there are embedded blanks in the rpath option.

20100403
	+ add configure check for exctags vs ctags, to work around pkgsrc.
	+ simplify logic in _nc_get_screensize() to make it easier to see how environment variables may override system- and terminfo-values (prompted by discussion with Igor Bujna).
	+ make debug-traces for COLOR_PAIR and PAIR_NUMBER less verbose.
	+ improve handling of color-pairs embedded in attributes for the extended-colors configuration.
	+ modify MKlib_gen.sh to build link_test with sp-funcs.
	+ build-fixes for OpenSolaris aka Solaris 11, for wide-character configuration as well as for rpath feature in *-config scripts.

20100327
	+ refactor CF_SHARED_OPTS configure macro, making CF_RPATH_HACK more reusable.
	+ improve configure CF_REGEX, similar fixes.
	+ improve configure CF_FIND_LINKAGE, adding a check between system (default) and explicit paths, where we can find the entrypoint in the given library.
	+ add check if Gpm_Open() returns a -2, e.g., for "xterm".  This is normally suppressed but can be overridden using $NCURSES_GPM_TERMS.  Ensure that Gpm_Close() is called in this case.

20100320
	+ rename atari and st52 terminfo entries to atari-old, st52-old, use newer entries from FreeMiNT by Guido Flohr (from patch/report by Alan Hourihane).

20100313
	+ modify install-rule for manpages so that *-config manpages will install when building with --srcdir (report by Sven Joachim).
	+ modify CF_DISABLE_LEAKS configure macro so that the --enable-leaks option is not the same as --disable-leaks (GenToo #305889).
	+ modify #define's for build-compiler to suppress cchar_t symbol from compile of make_hash and make_keys, improving cross-compilation of ncursesw (report by Bernhard Rosenkraenzer).
	+ modify CF_MAN_PAGES configure macro to replace all occurrences of TPUT in tput.1's manpage (Debian #573597, report/analysis by Anders Kaseorg).
20100306
	+ generate manpages for the *-config scripts, adapted from help2man (suggested by Sven Joachim).
	+ use va_copy() in _nc_printf_string() to avoid conflicting use of va_list value in _nc_printf_length() (report by Wim Lewis).

20100227
	+ add Ada95/configure script, to use in tar-file created by Ada95/make-tar.sh
	+ fix typo in wresize.3x (patch by Tim van der Molen).
	+ modify screen-bce.XXX entries to exclude ech, since screen's color model does not clear with color for that feature -TD

20100220
	+ add make-tar.sh scripts to Ada95 and test subdirectories to help with making those separately distributable.
	+ build-fix for static libraries without dlsym (Debian #556378).
	+ fix a syntax error in man/form_field_opts.3x (patch by Ingo Schwarze).

20100213
	+ add several screen-bce.XXX entries -TD

20100206
	+ update mrxvt terminfo entry -TD
	+ modify win_driver.c to support mouse single-clicks.
	+ correct name for termlib in ncurses*-config, e.g., if it is renamed to provide a single file for ncurses/ncursesw libraries (patch by Miroslav Lichvar).

20100130
	+ use vfork in test/ditto.c if available (request by Mike Frysinger).
	+ miscellaneous cleanup of manpages.
	+ fix typo in curs_bkgd.3x (patch by Tim van der Molen).
	+ build-fix for --srcdir (patch by Miroslav Lichvar).

20100123
	+ for term-driver configuration, ensure that the driver pointer is initialized in setupterm so that terminfo/termcap programs work.
	+ amend fix for Debian #542031 to ensure that wattrset() returns only OK or ERR, rather than the attribute value (report by Miroslav Lichvar).
	+ reorder WINDOWLIST to put WINDOW data after SCREEN pointer, making _nc_screen_of() compatible between normal/wide libraries again (patch by Miroslav Lichvar)
	+ review/fix include-dependencies in modules files (report by Miroslav Lichvar).

20100116
	+ modify win_driver.c to initialize acs_map for win32 console, so that line-drawing works.
	+ modify win_driver.c to initialize TERMINAL struct so that programs such as test/lrtest.c and test/ncurses.c which test string capabilities can run.
	+ modify term-driver modules to eliminate forward-reference declarations.

20100109
	+ modify configure macro CF_XOPEN_SOURCE, etc., to use CF_ADD_CFLAGS consistently to add new -D's while removing duplicates.
	+ modify a few configure macros to consistently put new options before older in the list.
	+ add tiparm(), based on review of X/Open Curses Issue 7.
	+ minor documentation cleanup.
	+ update config.guess, config.sub from
	  (caveat - its maintainer put 2010 copyright date on files dated 2009)

20100102
	+ minor improvement to tic's checking of similar SGR's to allow for the most common case of SGR 0.
	+ modify getmouse() to act as its documentation implied, returning on each call the preceding event until none are left.  When no more events remain, it will return ERR.

20091227
	+ change order of lookup in progs/tput.c, looking for terminfo data first.  This fixes a confusion between termcap "sg" and terminfo "sgr" or "sgr0", originally from 990123 changes, but exposed by 20091114 fixes for hashing.  With this change, only "dl" and "ed" are ambiguous (Mandriva #56272).

20091226
	+ add bterm terminfo entry, based on bogl 0.1.18 -TD
	+ minor fix to rxvt+pcfkeys terminfo entry -TD
	+ build-fixes for Ada95 tree for gnat 4.4 "style".
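The 20100102 entry above gives getmouse() queue-drain semantics: each call pops the oldest pending event, and once no events remain it returns ERR. A tiny stand-in model of that contract (MEVENT_MODEL, model_getmouse and model_ungetmouse are hypothetical names for illustration; this is not the ncurses source and does not require a terminal):

```c
/* Stand-in model of the getmouse()/ungetmouse() queue contract
 * described in the 20100102 entry: FIFO order, ERR when empty. */
#define OK   0
#define ERR (-1)

#define FIFO_MAX 8

typedef struct { int y, x, bstate; } MEVENT_MODEL;

static MEVENT_MODEL fifo[FIFO_MAX];
static int fifo_head = 0, fifo_tail = 0;

/* push an event onto the queue, as ungetmouse() does */
static int model_ungetmouse(const MEVENT_MODEL *ev)
{
    if (fifo_tail - fifo_head >= FIFO_MAX)
        return ERR;                     /* queue is full */
    fifo[fifo_tail++ % FIFO_MAX] = *ev;
    return OK;
}

/* pop the oldest pending event; ERR once the queue is drained */
static int model_getmouse(MEVENT_MODEL *ev)
{
    if (fifo_head >= fifo_tail)
        return ERR;                     /* no more events remain */
    *ev = fifo[fifo_head++ % FIFO_MAX];
    return OK;
}
```

In application code the usual pattern is a drain loop, `while (getmouse(&ev) == OK) { ... }`, which this change made reliable.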
20091219
	+ remove old check in mvderwin() which prevented moving a derived window whose origin happened to coincide with its parent's origin (report by Katarina Machalkova).
	+ improve test/ncurses.c to put mouse droppings in the proper window.
	+ update minix terminfo entry -TD
	+ add bw (auto-left-margin) to nsterm* entries (Benjamin Sittler)

20091212
	+ correct transfer of multicolumn characters in multirow field_buffer(), which stopped at the end of the first row due to filling of unused entries in a cchar_t array with nulls.
	+ updated nsterm* entries (Benjamin Sittler, Emanuele Giaquinta)
	+ modify _nc_viscbuf2() and _tracecchar_t2() to show wide-character nulls.
	+ use strdup() in set_menu_mark(), restore .marklen struct member on failure.
	+ eliminate clause 3 from the UCB copyrights in read_termcap.c and tset.c per (patch by Nicholas Marriott).
	+ replace a malloc in tic.c with strdup, checking for failure (patch by Nicholas Marriott).
	+ update config.guess, config.sub from

20091205
	+ correct layout of working window used to extract data in wide-character configured by set_field_buffer (patch by Rafael Garrido Fernandez)
	+ improve some limit-checks related to filename length in reading and writing terminfo entries.
	+ ensure that filename is always filled in when attempting to read a terminfo entry, so that infocmp can report the filename (patch by Nicholas Marriott).

20091128
	+ modify mk-1st.awk to allow tinfo library to be built when term-driver is enabled.
	+ add error-check to configure script to ensure that sp-funcs is enabled if term-driver is, since some internal interfaces rely upon this.
20091121
	+ fix case where progs/tput is used while sp-funcs is configured; this requires save/restore of out-character function from _nc_prescreen rather than the SCREEN structure (report by Charles Wilson).
	+ fix typo in man/curs_trace.3x which caused incorrect symbolic links
	+ improved configure macros CF_GCC_ATTRIBUTES, CF_PROG_LINT.

20091114
	+ updated man/curs_trace.3x
	+ limit hashing for termcap-names to 2-characters (Ubuntu #481740).
	+ change a variable name in lib_newwin.c to make it clearer which value is being freed on error (patch by Nicholas Marriott).

20091107
	+ improve test/ncurses.c color-cycling test by reusing attribute- and color-cycling logic from the video-attributes screen.
	+ add ifdef'd with NCURSES_INTEROP_FUNCS experimental bindings in form library which help make it compatible with interop applications (patch by Juergen Pfeifer).
	+ add configure option --enable-interop, for integrating changes for generic/interop support to form-library by Juergen Pfeifer

20091031
	+ modify use of $CC environment variable which is defined by X/Open as a curses feature, to ignore it if it is not a single character (prompted by discussion with Benjamin C W Sittler).
	+ add START_TRACE in slk_init
	+ fix a regression in _nc_ripoffline which made test/ncurses.c not show soft-keys, broken in 20090927 merging.
	+ change initialization of "hidden" flag for soft-keys from true to false, broken in 20090704 merging (Ubuntu #464274).
	+ update nsterm entries (patch by Benjamin C W Sittler, prompted by discussion with Fabian Groffen in GenToo #206201).
	+ add test/xterm-256color.dat

20091024
	+ quiet some pedantic gcc warnings.
	+ modify _nc_wgetch() to check for a -1 in the fifo, e.g., after a SIGWINCH, and discard that value, to avoid confusing application (patch by Eygene Ryabinkin, FreeBSD #136223).

20091017
	+ modify handling of $PKG_CONFIG_LIBDIR to use only the first item in a possibly colon-separated list (Debian #550716).

20091010
	+ supply a null-terminator to buffer in _nc_viswibuf().
	+ fix a sign-extension bug in unget_wch() (report by Mike Gran).
	+ minor fixes to error-returns in default function for tputs, as well as in lib_screen.c

20091003
	+ add WACS_xxx definitions to wide-character configuration for thick- and double-lines (discussion with Slava Zanko).
	+ remove unnecessary kcan assignment to ^C from putty (Sven Joachim)
	+ add ccc and initc capabilities to xterm-16color -TD
	> patch by Benjamin C W Sittler:
	+ add linux-16color
	+ correct initc capability of linux-c-nc end-of-range
	+ similar change for dg+ccc and dgunix+ccc

20090927
	+ move leak-checking for comp_captab.c into _nc_leaks_tinfo() since that module since 20090711 is in libtinfo.
	+ add configure option --enable-term-driver, to allow compiling with terminal-driver.  That is used in MinGW port, and (being somewhat more complicated) is an experimental alternative to the conventional termlib internals.  Currently, it requires the sp-funcs feature to be enabled.
	+ completed integrating "sp-funcs" by Juergen Pfeifer in ncurses library (some work remains for forms library).

20090919
	+ document return code from define_key (report by Mike Gran).
	+ make some symbolic links in the terminfo directory-tree shorter (patch by Daniel Jacobowitz, forwarded by Sven Joachim).
	+ fix some groff warnings in terminfo.5, etc., from recent Debian changes.
+ change ncv and op capabilities in sun-color terminfo entry to match Sun's entry for this (report by Laszlo Peter).
+ improve interix smso terminfo capability by using reverse rather than bold (report by Kristof Zelechovski).

20090912
+ add some test programs (and make these use the same special keys by sharing linedata.h functions):
  test/test_addstr.c
  test/test_addwstr.c
  test/test_addchstr.c
  test/test_add_wchstr.c
+ correct internal _nc_insert_ch() to use _nc_insert_wch() when inserting wide characters, since the wins_wch() function that it used did not update the cursor position (report by Ciprian Craciun).

20090906
+ fix typo s/is_timeout/is_notimeout/ which made "man is_notimeout" not work.
+ add null-pointer checks to other opaque-functions.
+ add is_pad() and is_subwin() functions for opaque access to WINDOW (discussion with Mark Dickinson).
+ correct merge to lib_newterm.c, which broke when sp-funcs was enabled.

20090905
+ build-fix for building outside source-tree (report by Sven Joachim).
+ fix Debian lintian warning for man/tabs.1 by making section number agree with file-suffix (report by Sven Joachim).
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090829
+ workaround for bug in g++ 4.1-4.4 warnings for wattrset() macro on amd64 (Debian #542031).
+ fix typo in curs_mouse.3x (Debian #429198).

20090822
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090815
+ correct use of terminfo capabilities for initializing soft-keys, broken in 20090510 merging.
+ modify wgetch() to ensure it checks SIGWINCH when it gets an error in non-blocking mode (patch by Clemens Ladisch).
+ use PATH_SEPARATOR symbol when substituting into run_tic.sh, to help with builds on non-Unix platforms such as OS/2 EMX.
+ modify scripting for misc/run_tic.sh to test configure script's $cross_compiling variable directly rather than comparing host/build compiler names (prompted by comment in GenToo #249363).
+ fix configure script option --with-database, which was coded as an enable-type switch.
+ build-fixes for --srcdir (report by Frederic L W Meunier).

20090808
+ separate _nc_find_entry() and _nc_find_type_entry() from implementation details of hash function.

20090803
+ add tabs.1 to man/man_db.renames
+ modify lib_addch.c to compensate for removal of wide-character test from unctrl() in 20090704 (Debian #539735).

20090801
+ improve discussion in INSTALL for use of system's tic/infocmp for cross-compiling and building fallbacks.
+ modify test/demo_termcap.c to correspond better to options in test/demo_terminfo.c
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ fix logic for 'V' in test/ncurses.c tests f/F.

20090728
+ correct logic in tigetnum(), which caused tput program to treat all string capabilities as numeric (report by Rajeev V Pillai, cf: 20090711).

20090725
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090718
+ fix a null-pointer check in _nc_format_slks() in lib_slk.c, from 20090704 changes.
+ modify _nc_find_type_entry() to use hashing.
+ make CCHARW_MAX value configurable, noting that changing this would change the size of cchar_t, and would be ABI-incompatible.
+ modify test-programs, e.g., test/view.c, to address subtle differences between Tru64/Solaris and HPUX/AIX getcchar() return values.
+ modify length returned by getcchar() to count the trailing null which is documented in X/Open (cf: 20020427).
+ fixes for test programs to build/work on HPUX and AIX, etc.
20090711
+ improve performance of tigetstr, etc., by using hashing code from tic.
+ minor fixes for memory-leak checking.
+ add test/demo_terminfo, for comparison with demo_termcap

20090704
+ remove wide-character checks from unctrl() (patch by Clemens Ladisch).
+ revise wadd_wch() and wecho_wchar() to eliminate dependency on unctrl().
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090627
+ update llib-lncurses[wt] to use sp-funcs.
+ various code-fixes to build/work with --disable-macros configure option.
+ add several new files from Juergen Pfeifer which will be used when integration of "sp-funcs" is complete. This includes a port to MinGW.

20090613
+ move definition for NCURSES_WRAPPED_VAR back to ncurses_dll.h, to make includes of term.h without curses.h work (report by "Nix").
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090607
+ fix a regression in lib_tputs.c, from ongoing merges.

20090606
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090530
+ fix an infinite recursion when adding a legacy-coding 8-bit value using insch() (report by Clemens Ladisch).
+ free home-terminfo string in del_curterm() (patch by Dan Weber).
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090523
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090516
+ work around antique BSD game's manipulation of stdscr, etc., versus SCREEN's copy of the pointer (Debian #528411).
+ add a cast to wattrset macro to avoid compiler warning when comparing its result against ERR (adapted from patch by Matt Kraii, Debian #528374).

20090510
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
20090502
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ add vwmterm terminfo entry (patch by Bryan Christ).

20090425
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090419
+ build fix for _nc_free_and_exit() change in 20090418 (report by Christian Ebert).

20090418
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090411
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). This change finishes merging for menu and panel libraries, does part of the form library.

20090404
+ suppress configure check for static/dynamic linker flags for gcc on Darwin (report by Nelson Beebe).

20090328
+ extend ansi.sys pfkey capability from kf1-kf10 to kf1-kf48, moving function key definitions from emx-base for consistency -TD
+ correct missing final 'p' in pfkey capability of ansi.sys-old (report by Kalle Olavi Niemitalo).
+ improve test/ncurses.c 'F' test, show combining characters in color.
+ quiet a false report by cppcheck in c++/cursesw.cc by eliminating a temporary variable.
+ use _nc_doalloc() rather than realloc() in a few places in ncurses library to avoid leak in out-of-memory condition (reports by William Egert and Martin Ettl based on cppcheck tool).
+ add --with-ncurses-wrap-prefix option to test/configure (discussion with Charles Wilson).
+ use ncurses*-config scripts if available for test/configure.
+ update test/aclocal.m4 and test/configure
> patches by Charles Wilson:
+ modify CF_WITH_LIBTOOL configure check to allow unreleased libtool version numbers (e.g. which include alphabetic chars, as well as digits, after the final '.').
+ improve use of -no-undefined option for libtool by setting an intermediate variable LT_UNDEF in the configure script, and then using that in the libtool link-commands.
+ fix a missing use of NCURSES_PUBLIC_VAR() in tinfo/MKcodes.awk from 20090321 changes.
+ improve mk-1st.awk script by writing separate cases for the LIBTOOL_LINK command, depending on which library (ncurses, ticlib, termlib) is to be linked.
+ modify configure.in to allow broken-linker configurations, not just enable-reentrant, to set public wrap prefix.

20090321
+ add TICS_LIST and SHLIB_LIST to allow libtool 2.2.6 on Cygwin to build with tic and term libraries (patch by Charles Wilson).
+ add -no-undefined option to libtool for Cygwin, MinGW, U/Win and AIX (report by Charles Wilson).
+ fix definition for c++/Makefile.in's SHLIB_LIST, which did not list the form, menu or panel libraries (patch by Charles Wilson).
+ add configure option --with-wrap-prefix to allow setting the prefix for functions used to wrap global variables to something other than "_nc_" (discussion with Charles Wilson).

20090314
+ modify scripts to generate ncurses*-config and pc-files to add dependency for tinfo library (patch by Charles Wilson).
+ improve comparison of program-names when checking for linked flavors such as "reset" by ignoring the executable suffix (reports by Charles Wilson, Samuel Thibault and Cedric Bretaudeau on Cygwin mailing list).
+ suppress configure check for static/dynamic linker flags for gcc on Solaris 10, since gcc is confused by absence of static libc, and does not switch back to dynamic mode before finishing the libraries (reports by Joel Bertrand, Alan Pae).
+ minor fixes to Intel compiler warning checks in configure script.
+ modify _nc_leaks_tinfo() so leak-checking in test/railroad.c works.
+ modify set_curterm() to make broken-linker configuration work with changes from 20090228 (report by Charles Wilson).

20090228
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ modify declaration of cur_term when broken-linker is used, but enable-reentrant is not, to match pre-5.7 (report by Charles Wilson).

20090221
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090214
+ add configure script --enable-sp-funcs to enable the new set of extended functions.
+ start integrating patches by Juergen Pfeifer:
+ add extended functions which specify the SCREEN pointer for several curses functions which use the global SP (these are incomplete; some internals work is needed to complete these).
+ add special cases to configure script for MinGW port.

20090207
+ update several configure macros from lynx changes
+ append (not prepend) to CFLAGS/CPPFLAGS
+ change variable from PATHSEP to PATH_SEPARATOR
+ improve install-rules for pc-files (patch by Miroslav Lichvar).
+ make it work with $DESTDIR
+ create the pkg-config library directory if needed.

20090124
+ modify init_pair() to allow caller to create extra color pairs beyond the color_pairs limit, which use default colors (request by Emanuele Giaquinta).
+ add misc/terminfo.tmp and misc/*.pc to "sources" rule.
+ fix typo "==" where "=" is needed in ncurses-config.in and gen-pkgconfig.in files (Debian #512161).

20090117
+ add -shared option to MK_SHARED_LIB when -Bsharable is used, for *BSD's, without which "main" might be one of the shared library's dependencies (report/analysis by Ken Dickey).
+ modify waddch_literal(), updating line-pointer after a multicolumn character is found to not fit on the current row, and wrapping is done. Since the line-pointer was not updated, the wrapped multicolumn character was written to the beginning of the current row (cf: 20041023, reported by "Nick" regarding problem with ncmpc).

20090110
+ add screen.Eterm terminfo entry (GenToo #124887) -TD
+ modify adacurses-config to look for ".ali" files in the adalib directory.
+ correct install for Ada95, which omitted libAdaCurses.a used in adacurses-config
+ change install for adacurses-config to provide additional flavors such as adacursesw-config, for ncursesw (GenToo #167849).

20090105
+ remove undeveloped feature in ncurses-config.in for setting prefix variable.
+ recent change to ncurses-config.in did not take into account the --disable-overwrite option, which sets $includedir to the subdirectory and using just that for a -I option does not work - fix (report by Frederic L W Meunier).

20090104
+ modify gen-pkgconfig.in to eliminate a dependency on rpath when deciding whether to add $LIBS to --libs output; that should be shown for the ncurses and tinfo libraries without taking rpath into account.
+ fix an overlooked change from $AR_OPTS to $ARFLAGS in mk-1st.awk, used in static libraries (report by Marty Jack).

20090103
+ add a configure-time check to pick a suitable value for CC_SHARED_OPTS for Solaris (report by Dagobert Michelsen).
+ add configure --with-pkg-config and --enable-pc-files options, along with misc/gen-pkgconfig.in which can be used to generate ".pc" files for pkg-config (request by Jan Engelhardt).
+ use $includedir symbol in misc/ncurses-config.in, add --includedir option.
+ change makefiles to use $ARFLAGS rather than $AR_OPTS, provide a configure check to detect whether a "-" is needed before "ar" options.
+ update config.guess, config.sub from

20081227
+ modify mk-1st.awk to work with extra categories for tinfo library.
+ modify configure script to allow building shared libraries with gcc on AIX 5 or 6 (adapted from patch by Lital Natan).

20081220
+ modify to omit the opaque-functions from lib_gen.o when --disable-ext-funcs is used.
+ add test/clip_printw.c to illustrate how to use printw without wrapping.
+ modify ncurses 'F' test to demo wborder_set() with colored lines.
+ modify ncurses 'f' test to demo wborder() with colored lines.

20081213
+ add check for failure to open hashed-database needed for db4.6 (GenToo #245370).
+ corrected --without-manpages option; previous change only suppressed the auxiliary rules install.man and uninstall.man
+ add case for FreeMINT to configure macro CF_XOPEN_SOURCE (patch from GenToo #250454).
+ fixes from NetBSD port at
  patch-ac (build-fix for DragonFly)
  patch-ae (use INSTALL_SCRIPT for installing misc/ncurses*-config).
+ improve configure script macros CF_HEADER_PATH and CF_LIBRARY_PATH by adding CFLAGS, CPPFLAGS and LDFLAGS, LIBS values to the search-lists.
+ correct title string for keybound manpage (patch by Frederic Culot, OpenBSD documentation/6019),

20081206
+ move del_curterm() call from _nc_freeall() to _nc_leaks_tinfo() to work for progs/clear, progs/tabs, etc.
+ correct buffer-size after internal resizing of wide-character set_field_buffer(), broken in 20081018 changes (report by Mike Gran).
+ add "-i" option to test/filter.c to tell it to use initscr() rather than newterm(), to investigate report on comp.unix.programmer that ncurses would clear the screen in that case (it does not - the issue was xterm's alternate screen feature).
+ add check in mouse-driver to disable connection if GPM returns a zero, indicating that the connection is closed (Debian #506717, adapted from patch by Samuel Thibault).

20081129
+ improve a workaround in adding wide-characters, when a control character is found. The library (cf: 20040207) uses unctrl() to obtain a printable version of the control character, but was not passing color or video attributes.
+ improve test/ncurses.c 'a' test, using unctrl() more consistently to display meta-characters.
+ turn on _XOPEN_CURSES definition in curses.h
+ add eterm-color entry (report by Vincent Lefevre) -TD
+ correct use of key_name() in test/ncurses.c 'A' test, which only displays wide-characters, not key-codes since 20070612 (report by Ricardo Cantu).

20081122
+ change _nc_has_mouse() to has_mouse(), reflect its use in C++ and Ada95 (patch by Juergen Pfeifer).
+ document in TO-DO an issue with Cygwin's package for GNAT (report by Mike Dennison).
+ improve error-checking of command-line options in "tabs" program.

20081115
+ change several terminfo entries to make consistent use of ANSI clear-all-tabs -TD
+ add "tabs" program (prompted by Debian #502260).
+ add configure --without-manpages option (request by Mike Frysinger).

20081102 5.7 release for upload to

20081025
+ add a manpage to discuss memory leaks.
+ add support for shared libraries for QNX (other than libtool, which does not work well on that platform).
+ build-fix for QNX C++ binding.

20081018
+ build-fixes for OS/2 EMX.
+ modify form library to accept control characters such as newline in set_field_buffer(), which is compatible with Solaris (report by Nit Khair).
+ modify configure script to assume --without-hashed-db when --disable-database is used.
+ add "-e" option in ncurses/Makefile.in when generating source-files to force earlier exit if the build environment fails unexpectedly (prompted by patch by Adrian Bunk).
+ change configure script to use CF_UTF8_LIB, improved variant of CF_LIBUTF8.

20081012
+ add teraterm4.59 terminfo entry, use that as primary teraterm entry, rename original to teraterm2.3 -TD
+ update "gnome" terminfo to 2.22.3 -TD
+ update "konsole" terminfo to 1.6.6, needs today's fix for tic -TD
+ add "aterm" terminfo -TD
+ add "linux2.6.26" terminfo -TD
+ add logic to tic for cancelling strings in user-defined capabilities, overlooked til now.

20081011
+ regenerated html documentation.
+ add -m and -s options to test/keynames.c and test/key_names.c to test the meta() function with keyname() or key_name(), respectively.
+ correct return value of key_name() on error; it is null.
+ document some unresolved issues for rpath and pthreads in TO-DO.
+ fix a missing prototype for ioctl() on OpenBSD in tset.c
+ add configure option --disable-tic-depends to make explicit whether tic library depends on ncurses/ncursesw library, amends change from 20080823 (prompted by Debian #501421).

20081004
+ some build-fixes for configure --disable-ext-funcs (incomplete, but works for C/C++ parts).
+ improve configure-check for awks unable to handle large strings, e.g. AIX 5.1 whose awk s
http://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=NEWS;h=4cb156fe3b8974a83b9b9036196f427f97deb2e0;hb=092f1e4b79bca1d1cd3e24baa7abc3ad4cea8420
Posts made by sabeer6870-gmail-com
- RE: Design push notification posted in Interview Questions
1. Yes, it should look the same as the notifications you might have seen on your phone or in your email.
2. You are right: the event would come from the promotions team, and the push notification system has to receive it and send the notification to users.
3. The app pinging the server would not be the efficient way (but I could be wrong); instead, delivery should be initiated from the server. The task of push notification is just to send the message to the user through the app or email; how the notification is handled depends on the device.
- RE: Design Q&A application as in Amazon has it for each product
@randome_coder thanks for the reply.
Using map is the better way to go.
Do we need to go into this much detail in an actual interview? Any thoughts?
But code is always good.
Another issue ... what if there are millions of questions? Do we need to store them in a db and write code for that too?
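As a sketch of the map-based approach suggested above (Python used for brevity; the names and structure are illustrative, not from the thread): keying questions by item id makes per-product lookup O(1) instead of scanning one global list, and the same keyed layout maps naturally onto a database table when there are millions of questions.

```python
# Hypothetical map-based Q&A store: questions are grouped per item id.
from collections import defaultdict

class QnAStore:
    def __init__(self):
        self.by_item = defaultdict(list)  # item_id -> list of question dicts
        self.next_qid = 0

    def post_question(self, item_id, text):
        qid = self.next_qid
        self.next_qid += 1
        self.by_item[item_id].append({"qid": qid, "text": text, "answers": []})
        return qid

    def post_answer(self, item_id, qid, text):
        # Only the questions for this one item are scanned.
        for q in self.by_item.get(item_id, []):
            if q["qid"] == qid:
                q["answers"].append(text)
                return True
        return False

store = QnAStore()
qid = store.post_question(42, "what is the product price")
ok = store.post_answer(42, qid, "its 19.4")
```

The same access pattern (all questions for one item) is what a real deployment would push into a database index on item id.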
- RE: Design a Snake Ladder game
public interface Board {
    void reset();
}

// Should be a singleton
public class SnakeBoard implements Board {
    List<Snake> snakes;
    List<Ladder> ladders;
    List<SnakeGamePlayer> players;
    Cell[][] cells;
    SixFaceDice dice = SixFaceDice.getInstance();

    SnakeBoard() {
        // initialize snakes, ladders, cells & players
    }

    SnakeBoard(List<Snake> snakes, List<Ladder> ladders,
               List<SnakeGamePlayer> players, int row, int column) {
        // assign member vars
        this.cells = new Cell[row][column];
    }

    SnakeGamePlayer winner() {
        // return null until someone has won the game
        return null;
    }

    public void reset() {
    }

    public void bite() {
        // if a player's new position matches a snake's head position,
        // update the player's position to that snake's tail
    }

    public void moveUpTheLadder() {
    }
}

class Ladder {
    private int start;
    private int end;
}

class Snake {
    private int[] head; // x,y coordinate on the board
    private int[] tail;
}

public interface Player {
    int score();
    int move();
}

public class SnakeGamePlayer implements Player {
    private int pid;

    // implement score() and move() here

    public int[] currPositionOnBoard() {
        // return x,y coordinate of the player on the board
        return null;
    }
}

public interface Dice {
    int roll();
}

public class SixFaceDice implements Dice {
    private static SixFaceDice instance;

    private SixFaceDice() {
    }

    public int roll() {
        // generate a random number between 1 and 6
        return 1 + new java.util.Random().nextInt(6);
    }

    public static SixFaceDice getInstance() {
        if (instance == null) {
            instance = new SixFaceDice();
        }
        return instance;
    }
}

class Cell {
    List<Integer> ids; // ids of the players currently in this cell
    int num;
    int row;
    int column;
}
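To complement the class sketch above, here is the core move rule it leaves implicit, shown as a small Python sketch (my own illustration, not from the original post): roll, advance, then follow any snake or ladder whose entry square matches the landing square. The 100-square board and the sample snake/ladder positions are assumptions.

```python
# Hypothetical move rule for a 100-square snakes-and-ladders board.
SNAKES = {99: 41, 54: 19}    # head square -> tail square
LADDERS = {3: 22, 63: 95}    # bottom square -> top square

def move(position, roll):
    landing = position + roll
    if landing > 100:                         # overshoot: stay put
        return position
    landing = SNAKES.get(landing, landing)    # bitten: slide to tail
    landing = LADDERS.get(landing, landing)   # climbed: jump to top
    return landing
```

A turn is then `move(pos, random.randint(1, 6))`, which corresponds to `SixFaceDice.roll()` in the Java sketch.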
- Design push notification
Design push notification :
- Which sends the notification to the registered users
- Which receives an event from promotions team
- Sends notification to iOS, android or sends an email or all three
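The requirements above amount to a fan-out service: one incoming event, delivered to every channel each user registered. A minimal Python sketch of one possible shape (the class and channel names are my own assumptions; a real system would call APNs, FCM, or an SMTP gateway instead of appending to a list):

```python
# Hypothetical push-notification dispatcher: an event from the promotions
# team is fanned out to every channel a user has registered for.
class PushNotificationService:
    def __init__(self):
        self.registrations = {}  # user_id -> set of channel names
        self.sent = []           # log of (user_id, channel, message)

    def register(self, user_id, channel):
        self.registrations.setdefault(user_id, set()).add(channel)

    def handle_event(self, message):
        # Server-initiated delivery: push to each registered channel.
        for user_id, channels in self.registrations.items():
            for channel in sorted(channels):
                self.sent.append((user_id, channel, message))
        return len(self.sent)

service = PushNotificationService()
service.register("u1", "ios")
service.register("u1", "email")
service.register("u2", "android")
count = service.handle_event("50% off today")
```

This matches the reply above: the server initiates delivery, and the service only hands the message to each channel; rendering the notification is the device's job.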
- Design Q&A application as in Amazon has it for each product
Design Q&A as you see in Amazon, Walmart website for an item.
How about this?
public class QnA {
    ArrayList<Question> questions;
    ArrayList<Answer> answers;
    ArrayList<Item> items;

    QnA() {
        this.questions = new ArrayList<Question>();
        this.answers = new ArrayList<Answer>();
        this.items = new ArrayList<Item>();
    }

    boolean postQuestion(int itemId, String ques) {
        try {
            Item item = items.get(itemId);
            Question q = new Question(itemId, ques);
            this.questions.add(q);
        } catch (IndexOutOfBoundsException e) {
            return false;
        }
        return true;
    }

    boolean postAnswer(int qid, int itemId, String ans) {
        try {
            Question q = questions.get(qid);
            Answer a = new Answer(qid, itemId, ans);
            this.answers.add(a);
            q.ans.add(a);
        } catch (IndexOutOfBoundsException e) {
            return false;
        }
        return true;
    }

    // upvoteQuestion()
    // upvoteAnswer()
}

class Question {
    int itemId;
    String ques;
    static int qctr = 0;
    int id;
    ArrayList<Answer> ans;

    Question(int itemId, String ques) {
        this.id = qctr++;
        this.ques = ques;
        this.itemId = itemId;
        this.ans = new ArrayList<Answer>();
    }
}

class Answer {
    int qid;
    int id;
    String ans;
    static int actr = 0;
    int itemId;

    Answer(int qid, int itemId, String ans) {
        this.id = actr++;
        this.qid = qid;
        this.itemId = itemId;
        this.ans = ans;
    }
}

class Item {
    ArrayList<Question> qs;
}

class Demo {
    public static void main(String[] args) {
        QnA qna = new QnA();
        if (!qna.postQuestion(1, "what is the product price")) {
            System.out.println("Invalid Item Id");
        }
        if (!qna.postAnswer(1, 1, "its 19.4")) {
            System.out.println("Invalid Question Id");
        }
    }
}
Inputs will be greatly appreciated.
Thanks in advance.
https://discuss.leetcode.com/user/sabeer6870-gmail-com
1 Nov 05:42 2006
request:get-data() returns root elem not doc (documentation bug?)
Robert Koberg <rob <at> koberg.com>
2006-11-01 04:42:49 GMT
Hi,

The documentation for the request:get-data function says: "Returns the content of a POST request as an XML document or a string representation. Returns an empty sequence if there is no data."

But the get-data function returns the root element/node:

    return (NodeValue)doc.getDocumentElement();

I don't have a preference either way, but it did cause me some confusion when I went to use the result of get-data.

best,
-Rob
http://blog.gmane.org/gmane.text.xml.exist/month=20061101
SYNOPSIS
nvme format <device> [--namespace-id=<nsid> | -n <nsid>]
[--lbaf=<lbaf> | -l <lbaf>]
[--ses=<ses> | -s <ses>]
[--pil=<pil> | -p <pil>]
[--pi=<pi> | -i <pi>]
[--ms=<ms> | -m <ms>]
DESCRIPTION
For the NVMe device given, sends an nvme Format Namespace admin command and provides the results.
The <device> parameter is mandatory and may be either the NVMe character device (ex: /dev/nvme0), or a namespace block device (ex: /dev/nvme0n1). If the character device is given, the namespace identifier will default to 0xffffffff to send the format to all namespaces, but can be overridden to any namespace with the namespace-id option. If the block device is given, the namespace identifier will default to the namespace id of the block device given, but can be overridden with the same option.
On success, the program will automatically issue BLKRRPART ioctl to force rescanning the namespaces. If the driver is recent enough, this will automatically update the physical block size. If it is not recent enough, you will need to remove and rescan your device some other way for the new block size to be visible.
OPTIONS
-n <nsid>, --namespace-id=<nsid>
- Send the format command for the specified nsid. This can be used to override the default value for either character device (0xffffffff) or the block device (result from NVME_IOCTL_ID).
-l <lbaf>, --lbaf=<lbaf>
- LBA Format: This field specifies the LBA format to apply to the NVM media. This corresponds to the LBA formats indicated in the Identify Namespace command. Defaults to 0.
-s <ses>, --ses=<ses>
- Secure Erase Settings: This field specifies whether a secure erase should be performed as part of the format and the type of the secure erase operation. The erase applies to all user data, regardless of location (e.g., within an exposed LBA, within a cache, within deallocated LBAs, etc). Defaults to 0.
-p <pil>, --pil=<pil>
- Protection Information Location: If set to '1' and protection information is enabled, then protection information is transferred as the first eight bytes of metadata. If cleared to '0' and protection information is enabled, then protection information is transferred as the last eight bytes of metadata. Defaults to 0.
-i <pi>, --pi=<pi>
- Protection Information: This field specifies whether end-to-end data protection is enabled and the type of protection information. Defaults to 0.
-m <ms>, --ms=<ms>
- Metadata Settings: This field is set to '1' if the metadata is transferred as part of an extended data LBA. This field is cleared to '0' if the metadata is transferred as part of a separate buffer. The metadata may include protection information, based on the Protection Information (PI) field. Defaults to 0.
EXAMPLES
• Format the device using all defaults:
# nvme format /dev/nvme0n1
• Format namespace 1 with user data secure erase settings and protection information:
# nvme format /dev/nvme0 --namespace-id=1 --ses=1 --pi=1
NVME
Part of the nvme-user suite
http://manpages.org/nvme-format
Subject: Re: [Boost-users] [mpl]... is there an mpl::string
From: Tor Brede Vekterli (vekterli_at_[hidden])
Date: 2009-04-17 01:37:11
On Mon, Apr 13, 2009 at 5:45 PM, Noah Roberts <roberts.noah_at_[hidden]> wrote:
> Eric Niebler wrote:
>>
>> Eric Niebler wrote:
>>>
>>> Eric Niebler wrote:
>>>>
>>>> There's another consideration that I've been glossing over. mpl::string
>>>> isn't *really* random access. Since mpl::string<'a','b','c'> designates the
>>>> same character sequence as mpl::string<'abc'>, it takes O(N) template
>>>> instantiations to find the N-th element in the sequence, at least in the
>>>> current implementation. I'd like to fix that, but I don't know how (yet).
>>>
>>> Now this is really bothering me. The right thing to do is replace the
>>> current implementation with one in which mpl::string is a front- and
>>> back-extensible bidirectional sequence, and give up with the random access
>>> pretense. :-(
>>
>> I've made this change. mpl::string is no longer random access (it never
>> really was). Also, I've changed c_str to be a separate metafunction that
>> works with any forward sequence.
>>
>> Thanks particularly to Noah Roberts for the good feedback.
>>
> No problem.
>
> Now someone just needs to make format<> :P
>
Although a tongue-in-cheek remark, the notion of a compile-time
formatting library somehow appeals to me, if for nothing else for its
sheer absurdity ;)
Due to the limits of preprocessor stringization when dealing with
metaprogramming integers, I suspect it might have its use cases
wherever runtime conversion overhead is best avoided. With mpl::string
it's trivial to create integer-to-string conversion metafunctions, and
it should also not be overly complicated to glue these together into
something bigger if so desired. I whipped up a quick (and not
very pretty) compile-time _itoa equivalent as a small proof of concept
just for fun (only tested on visual c++ 2008)
#include <iostream>
#include <boost/mpl/string.hpp>
#include <boost/mpl/vector_c.hpp>
#include <boost/mpl/at.hpp>
#include <boost/mpl/if.hpp>
#include <boost/mpl/int.hpp>
#include <boost/mpl/bool.hpp>
#include <boost/mpl/identity.hpp>
#include <boost/mpl/push_back.hpp>
namespace mpl = boost::mpl;
struct itoa_ct
{
// radix for _itoa() goes up to 36, but only bother with 16 here
typedef mpl::vector_c<char
,'0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f'
> radix_t;
template <int Radix, unsigned int Quotient>
struct radix_convert
{
typedef typename mpl::push_back<
typename radix_convert<Radix, Quotient / Radix>::type
, mpl::char_<mpl::at_c<radix_t, Quotient % Radix>::type::value>
>::type type;
};
template <int Radix>
struct radix_convert<Radix, 0>
{
typedef mpl::string<> type;
};
template <int I, int Radix = 10>
struct apply
{
// All bases != 10 consider I as unsigned
typedef typename radix_convert<
Radix, static_cast<unsigned int>((Radix == 10 && I < 0) ? -I : I)
>::type converted_t;
// Prefix with '-' if negative and base 10
typedef typename mpl::if_<
mpl::bool_<(Radix == 10 && I < 0)>
, mpl::push_front<converted_t, mpl::char_<'-'> >
, mpl::identity<converted_t>
>::type::type type;
};
};
int main(int argc, char* argv[])
{
std::cout << mpl::c_str<itoa_ct::apply<12345>::type>::value << "\n";
std::cout << mpl::c_str<itoa_ct::apply<-98765>::type>::value << "\n";
std::cout << mpl::c_str<itoa_ct::apply<2009, 2>::type>::value << "\n";
std::cout << mpl::c_str<itoa_ct::apply<0xb0057, 16>::type>::value << "\n";
std::cout << mpl::c_str<itoa_ct::apply<0xffffffff, 16>::type>::value << "\n";
return 0;
}
which outputs, as expected:
12345
-98765
11111011001
b0057
ffffffff
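For comparison, the compile-time recursion in radix_convert above is the standard divide-by-radix digit-peeling algorithm. A runtime Python sketch of the same logic (my own illustration, not part of the original mail; like _itoa, only base 10 treats the input as signed):

```python
# Runtime equivalent of the compile-time radix_convert recursion:
# peel off the least-significant digit with % radix, recurse on // radix.
DIGITS = "0123456789abcdef"

def to_radix(n, radix=10):
    if radix == 10 and n < 0:          # only base 10 treats n as signed
        return "-" + to_radix(-n, 10)
    if n < radix:                      # base case: a single digit remains
        return DIGITS[n]
    return to_radix(n // radix, radix) + DIGITS[n % radix]
```

Run against the same inputs as the C++ test program, it produces the same five strings.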
I'm swamped with thesis-work until June, so I cannot really do much
else with this for now, but I just wanted to throw it out there. Any
thoughts? :)
Regards,
Tor Brede Vekterli
http://lists.boost.org/boost-users/2009/04/47326.php
Managing Data in iOS Apps with SQLite
This article was peer reviewed by Aleksander Koko . Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
Almost all apps will need to store data of some form. Maybe you need to save user preferences, progress in a game, or offline data so your app can work without a network connection. Developers have a lot of options for managing data in iOS apps, from Core Data to cloud based storage, but one elegant and reliable local storage option is SQLite.
In this tutorial I will show you how to add SQLite support to your app. You can find the final source code on GitHub.
Getting Started
The SQLite library is written in C, and all queries happen as calls to C functions. This makes it challenging to use, as you have to be aware of pointers and data types etc. To help, you can make use of Objective-C or Swift wrappers to serve as an adapter layer.
A popular choice is FMDB, an Objective-C wrapper around SQLite. It's easy to use, but personally I prefer not to use hard-coded SQL (Structured Query Language) commands. For this tutorial, I will use SQLite.swift to create a basic contact list.
First, create a new single view project in Xcode (SQLite.swift requires Swift 2 and Xcode 7 or greater). I created a
ViewController in Main.storyboard that looks like the below. Create your own similar layout, or download the storyboard files here.
At the bottom is a
TableView which will hold the contacts.
Installation
You can install SQLite.swift with Carthage, CocoaPods, or manually.
The Model
Create a new Swift file / class named Contact.swift. To keep it simple, it contains four properties and two initializers.
import Foundation

class Contact {
    let id: Int64?
    var name: String
    var phone: String
    var address: String

    init(id: Int64) {
        self.id = id
        name = ""
        phone = ""
        address = ""
    }

    init(id: Int64, name: String, phone: String, address: String) {
        self.id = id
        self.name = name
        self.phone = phone
        self.address = address
    }
}
The
id is required as a parameter when creating an object, so you can reference it in the database later.
Connecting the User Interface
In ViewController.swift make the class implement
UITableViewDelegate and
UITableViewDataSource protocols.
class ViewController: UIViewController, UITableViewDataSource, UITableViewDelegate { ... }
Connect the following
IOutlets with their corresponding views by dragging or manually adding them in code.
@IBOutlet weak var nameTextField: UITextField!
@IBOutlet weak var phoneTextField: UITextField!
@IBOutlet weak var addressTextField: UITextField!
@IBOutlet weak var contactsTableView: UITableView!
Now you will need a list of contacts, and an index for the contact selected from the list.
private var contacts = [Contact]()
private var selectedContact: Int?
Link the DataSource and Delegate of the
UITableView with the
UIViewController in the storyboard.
Or by adding the following lines into the
viewDidLoad() method of ViewController.swift.
contactsTableView.dataSource = self
contactsTableView.delegate = self
To insert, update and remove elements from the
UITableView you need to implement three basic methods from the protocols mentioned above.
The first will fill the
UITextFields with the corresponding contact information from a selected contact. It will then save the row that represents this contact in the table.
func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
    nameTextField.text = contacts[indexPath.row].name
    phoneTextField.text = contacts[indexPath.row].phone
    addressTextField.text = contacts[indexPath.row].address
    selectedContact = indexPath.row
}
The next function tells the
UITableViewDataSource how many cells of data it should load. For now, it will be zero since the array is empty.
func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return contacts.count
}
The last function returns a specific
UITableViewCell for each row. First get the cell using the identifier, then its child views using their tag. Make sure that the identifiers match your element names.
func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier("ContactCell")!
    var label: UILabel

    label = cell.viewWithTag(1) as! UILabel // Name label
    label.text = contacts[indexPath.row].name

    label = cell.viewWithTag(2) as! UILabel // Phone label
    label.text = contacts[indexPath.row].phone

    return cell
}
The app can now run, but there is no ability to add or edit contacts yet. To do this link the following
IBActions with the corresponding buttons.
@IBAction func addButtonClicked() {
    let name = nameTextField.text ?? ""
    let phone = phoneTextField.text ?? ""
    let address = addressTextField.text ?? ""

    let contact = Contact(id: 0, name: name, phone: phone, address: address)
    contacts.append(contact)
    contactsTableView.insertRowsAtIndexPaths(
        [NSIndexPath(forRow: contacts.count - 1, inSection: 0)],
        withRowAnimation: .Fade)
}
Here you take the values of the
UITextFields, and create an object which is added to the
contacts list. The
id is set to 0, since you haven’t implemented the database yet. The function
insertRowsAtIndexPaths() takes as arguments an array of indexes of the rows that will be affected, and the animation to perform with the change.
@IBAction func updateButtonClicked() {
    if selectedContact != nil {
        let id = contacts[selectedContact!].id!
        let contact = Contact(
            id: id,
            name: nameTextField.text ?? "",
            phone: phoneTextField.text ?? "",
            address: addressTextField.text ?? "")

        contacts.removeAtIndex(selectedContact!)
        contacts.insert(contact, atIndex: selectedContact!)
        contactsTableView.reloadData()
    } else {
        print("No item selected")
    }
}
In this function you create a new
Contact, and delete and re-insert in the same index of the list to make the replacement. The function doesn’t currently check to see if the data has changed.
@IBAction func deleteButtonClicked() {
    if selectedContact != nil {
        contacts.removeAtIndex(selectedContact!)
        contactsTableView.deleteRowsAtIndexPaths(
            [NSIndexPath(forRow: selectedContact!, inSection: 0)],
            withRowAnimation: .Fade)
    } else {
        print("No item selected")
    }
}
The last function removes the contact selected and refreshes the table.
At this point the application works, but will lose all changes when relaunched.
Creating a Database
Now it's time to manage the database. Create a new Swift file / class named StephencelisDB.swift and import the SQLite library.
import SQLite

class StephencelisDB {
}
First, initialize an instance of the class, using the ‘Singleton’ pattern. Then, declare an object of type
Connection, which is the actual database object you will call.
static let instance = StephencelisDB()
private let db: Connection?
The other declarations are the table of contacts, and its column with a specific type.
private let contacts = Table("contacts")
private let id = Expression<Int64>("id")
private let name = Expression<String?>("name")
private let phone = Expression<String>("phone")
private let address = Expression<String>("address")
The constructor tries to open a connection with the database which has a specified name, and a path to the application data, and then creates the tables.
private init() {
    let path = NSSearchPathForDirectoriesInDomains(
        .DocumentDirectory, .UserDomainMask, true
    ).first!

    do {
        db = try Connection("\(path)/Stephencelis.sqlite3")
    } catch {
        db = nil
        print("Unable to open database")
    }

    createTable()
}

func createTable() {
    do {
        try db!.run(contacts.create(ifNotExists: true) { table in
            table.column(id, primaryKey: true)
            table.column(name)
            table.column(phone, unique: true)
            table.column(address)
        })
    } catch {
        print("Unable to create table")
    }
}
Notice there is no SQL code to create the table and columns. This is the power of the wrapper used. With a few lines of code you have the database ready.
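Under the hood the wrapper emits ordinary SQL. As a rough, runnable illustration (using Python's built-in sqlite3 module rather than Swift, so the schema below is only an approximation of what SQLite.swift generates), the same table and an insert look like this:

```python
import sqlite3

# An in-memory database stands in for the Stephencelis.sqlite3 file.
db = sqlite3.connect(":memory:")

# Roughly the statement the contacts.create(...) call above produces;
# the exact quoting and typing SQLite.swift emits may differ slightly.
db.execute("""
    CREATE TABLE IF NOT EXISTS contacts (
        id INTEGER PRIMARY KEY NOT NULL,
        name TEXT,
        phone TEXT NOT NULL UNIQUE,
        address TEXT NOT NULL
    )
""")

db.execute(
    "INSERT INTO contacts (name, phone, address) VALUES (?, ?, ?)",
    ("Deivi Taka", "+355 6X XXX XXXX", "Tirana, Albania"),
)
row = db.execute("SELECT name, phone FROM contacts").fetchone()
print(row)  # → ('Deivi Taka', '+355 6X XXX XXXX')
```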
CRUD Operations
For those unfamiliar with the term, ‘CRUD’ is an acronym for Create-Read-Update-Delete. Next, add the four methods to the database class that perform these operations.
func addContact(cname: String, cphone: String, caddress: String) -> Int64? {
    do {
        let insert = contacts.insert(name <- cname, phone <- cphone, address <- caddress)
        let id = try db!.run(insert)
        return id
    } catch {
        print("Insert failed")
        return -1
    }
}
The
<- operator assigns values to the corresponding columns as you would in a normal query. The
run method will execute these queries and statements. The
id of the row inserted is returned from the method.
Add
print(insert.asSQL()) to see the executed query itself:
INSERT INTO "contacts" ("name", "phone", "address") VALUES ('Deivi Taka', '+355 6X XXX XXXX', 'Tirana, Albania')
You can print the generated SQL for the other statements in the same way if you want to debug further. To read the saved contacts back, use the
prepare method returns a list of all the rows in the specified table. You loop through these rows and create an array of
Contact objects with the column content as parameters. If this operation fails, an empty list is returned.
func getContacts() -> [Contact] {
    var contacts = [Contact]()

    do {
        for contact in try db!.prepare(self.contacts) {
            contacts.append(Contact(
                id: contact[id],
                name: contact[name]!,
                phone: contact[phone],
                address: contact[address]))
        }
    } catch {
        print("Select failed")
    }

    return contacts
}
For deleting items, find the item with a given
id, and remove it from the table.
func deleteContact(cid: Int64) -> Bool {
    do {
        let contact = contacts.filter(id == cid)
        try db!.run(contact.delete())
        return true
    } catch {
        print("Delete failed")
    }
    return false
}
You can delete more than one item at once by filtering results to a certain column value.
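In raw SQL terms, a filtered bulk delete is a single DELETE with a WHERE clause on that column. A small runnable sketch (Python's sqlite3 stands in for the Swift wrapper here, purely for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, address TEXT)")
db.executemany(
    "INSERT INTO contacts (name, address) VALUES (?, ?)",
    [("A", "Tirana"), ("B", "Tirana"), ("C", "Durres")],
)

# Filtering on a non-unique column removes every matching row at once,
# which is what filter(...).delete() does in SQLite.swift.
deleted = db.execute("DELETE FROM contacts WHERE address = ?", ("Tirana",)).rowcount
remaining = db.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(deleted, remaining)  # → 2 1
```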
Updating has similar logic.
func updateContact(cid: Int64, newContact: Contact) -> Bool {
    let contact = contacts.filter(id == cid)
    do {
        let update = contact.update([
            name <- newContact.name,
            phone <- newContact.phone,
            address <- newContact.address
        ])
        if try db!.run(update) > 0 {
            return true
        }
    } catch {
        print("Update failed: \(error)")
    }
    return false
}
Final Changes
After setting up the database managing class, there are some remaining changes needed in ViewController.swift.
First, when the view is loaded get the previously saved contacts.
contacts = StephencelisDB.instance.getContacts()
The tableview methods you prepared earlier will display the saved contacts without adding anything else.
Inside
addButtonClicked, call the method to add a contact to the database. Then update the tableview only if the method returned a valid
id.
if let id = StephencelisDB.instance.addContact(name, cphone: phone, caddress: address) {
    // Add contact in the tableview
    ...
}
In a similar way, call these methods inside
updateButtonClicked and
deleteButtonClicked.
...
StephencelisDB.instance.updateContact(id, newContact: contact)
...
StephencelisDB.instance.deleteContact(contacts[selectedContact!].id!)
...
Run the app and try to perform some actions. Below are two screenshots of how it should look. To update or delete a contact it must first be selected.
Any Queries?
SQLite is a good choice for working with local data, and is used by many apps and games. Wrappers like SQLite.swift make the implementation easier by avoiding hard-coded SQL queries. If you need to store data in your app and don't want to handle more complex options, SQLite is worth considering.
May the Code be with you!
https://www.sitepoint.com/managing-data-in-ios-apps-with-sqlite/
Previous Chapter: Creating dynamic websites with WSGI
Next Chapter: Dynamic websites with Pylons
Creating dynamic websites with Python with mod_python and WSGI
Introduction

Please notice: Work on this topic is in progress (August 2014).
mod_python is an Apache HTTP Server module. Its purpose is to integrate Python programming with the Apache web server; in other words, it is a Python language binding for the Apache HTTP Server. The official website of mod_python says that it is possible to write "with mod_python web-based applications in Python that run many times faster than traditional CGI and will have access to advanced features such as ability to retain database connections and other data between hits and access to Apache internals." mod_python was pronounced dead some years ago, so it didn't look like a good idea to use it for new projects. But it never died, it was only "sleeping": it came back to life in 2013!
Python and mod_python

If we want to use Python on an Apache web server, we need the mod_python module for Apache. This module provides a Python language binding so that we can integrate Python. It's a more efficient approach than using CGI, because CGI will start a new Python process for every request.
mod_python consists of two components: the dynamically loadable module mod_python.so for Apache and the Python package mod_python. If you are using Debian or Ubuntu Linux, it suffices to install the package libapache2-mod-python for this purpose, assuming apache2 is already installed:
sudo apt-get install libapache2-mod-python

If apache2 has to be installed as well, do the following installation first:

sudo apt-get install apache2

You have to add the following lines to /etc/apache2/sites-enabled/000-default:

AddHandler mod_python .py
PythonHandler mod_python.publisher
PythonDebug On

After changing the configuration, restart Apache:

sudo /etc/init.d/apache2 restart
A Simple Dynamic Page with mod_python

We will create a subdirectory "tests" in the document root of the Apache server. In the case of Debian and Ubuntu, this will be /var/www/html/. We save the following Python program as "hello.py" in the previously created subdirectory:
def index():
    return "Hello Python!"

We have to start a browser and go to the location "localhost/tests/hello.py/index". It works with "localhost/tests/hello.py" as well. We get the following output in the browser window:

Hello Python!
Another More "Useful" Webpage

We save the following website as timesite.py. It will print out the current date and time, as well as the timezone:

import time

def index():
    html = """
    <html><head>
    <title>mod_python.publisher first html page</title>
    </head>
    <body>
    <h1>This page was generated by mod_python.publisher</h1><hr>
    The local time of this server is: %s
    <br>The timezone of this server is : %s
    </body>
    </html>
    """ % (time.ctime(time.time()), time.timezone/3600)
    return html
We get the following output:
Another Page Name

So far we used index as the default website name. We can also define other functions and by doing so create websites with other names. We write the get_time function in the following example and modify the index function:

import time

def index():
    return "index() .. nothing here, but you will find some info at get_time .."

def get_time():
    html = """
    <html><head>
    <title>get_time function</title>
    </head>
    <body>
    <h1>get_time function</h1> <hr>
    The local time of this server is: %s <br>
    The timezone of this server is : %s <br>
    </body>
    </html>""" % (time.ctime(time.time()), time.timezone/3600)
    return html
Calling the location "" returns the following output:

Using the address "" supplies this:
Using Forms

HTML forms are used to pass data to a server. We can do this with mod_python as well. The following HTML form inside our Python program "form.py" contains fields for the first name, last name, email address and radio buttons for the gender:

def index():
    return """
    <html><head>
    <title>Formular</title>
    </head>
    <body>
    <FORM value="form" action="get_info" method="post">
    <P>
    <LABEL for="firstname">First Name: </LABEL>
    <INPUT type="text" name="firstname"><BR>
    <LABEL for="lastname">Last Name: </LABEL>
    <INPUT type="text" name="lastname"><BR>
    <LABEL for="email">email: </LABEL>
    <INPUT type="text" name="email"><BR>
    <INPUT type="radio" name="gender" value="Male">Male<BR>
    <INPUT type="radio" name="gender" value="Female">Female<BR>
    <INPUT type="submit" value="Send">
    <INPUT type="reset">
    </P>
    </FORM>
    </body>
    </html>
    """

def get_info(req):
    info = req.form
    first = info['firstname']
    last = info['lastname']
    email = info['email']
    gender = info['gender']
    return """
    <html><head>
    <title>POST method using mod_python</title>
    </head>
    <body>
    <h1>POST Method using mod_python</h1> <hr>
    Thanks for using our service:<br>
    Your first name: %s <br>
    Your last name: %s <br>
    Your email address: %s <br>
    Your gender: %s <br>
    </body>
    </html>
    """ % (first, last.upper(), email, gender.lower())

Calling the above program with the URL "" gives us the following entry form:
To see the result page, i.e. the result of the function get_info, we have to push the "send" button:
WSGI
What is WSGI?
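The WSGI discussion is cut off in this copy of the chapter. As a minimal, self-contained illustration (not from the original text), a WSGI application is just a callable that receives the request environment and a start_response callback and returns an iterable of bytes:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # A WSGI app sends the status line and headers via the callback
    # and returns the response body as an iterable of bytes.
    body = b"Hello WSGI!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a real server, using wsgiref's test defaults.
environ = {}
setup_testing_defaults(environ)
collected = {}

def start_response(status, headers):
    collected["status"] = status
    collected["headers"] = headers

result = b"".join(application(environ, start_response))
print(result)  # → b'Hello WSGI!'
```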
http://python-course.eu/dynamic_websites.php
#include <avr/sleep.h>
#include <avr/wdt.h>

const byte LED = 9;

void flash () {
  pinMode (LED, OUTPUT);
  for (byte i = 0; i < 10; i++) {
    digitalWrite (LED, HIGH);
    delay (50);
    digitalWrite (LED, LOW);
    delay (50);
  }
  pinMode (LED, INPUT);
}  // end of flash

// watchdog interrupt
ISR (WDT_vect) {
  wdt_disable();  // disable watchdog
}  // end of WDT_vect

void setup () { }

void loop () {
  flash ();
  //
Am I wrong to say sleep mode, or do I need to say power-saving mode? Is there such a thing, and if so, what is the difference between power-saving mode and sleep mode?
The SAM D21 devices have two software-selectable sleep modes, idle and standby. In idle mode the CPU is stopped while all other functions can be kept running. In standby, all clocks and functions are stopped except those selected to continue running. The device supports SleepWalking. This feature allows a peripheral to wake up from sleep based on predefined conditions, and thus allows the CPU to wake up only when needed, e.g. when a threshold is crossed or a result is ready. The Event System supports synchronous and asynchronous events, allowing peripherals to receive, react to and send events even in standby mode.
/*
  Arduino ZERO PRO low-power sleep mode with wakeup upon external interrupt (example sketch)

  Add a button on digital pin 0, with an additional pull-up resistor.
  Add an LED on digital pin 3 (don't forget the resistor).

  NOTE: The LED might not appear to toggle, or it might flash; that is because of
  switch bounce (electrically noisy contacts). I think there is a filtering option
  to digitally filter external interrupts. Might check that out in the future.
*/

bool ledState = true;

void setup()
{
  Serial.begin(9600);
  pinMode(3, OUTPUT);   // Output for an LED that is toggled on/off upon interrupt
  pinMode(13, OUTPUT);  // Flashing LED pin

  // I could actually use the ARM macro thingies and set registers
  // and what not to do the same exact thing,
  // But I'm lazy so I just cheated and used this arduino function
  attachInterrupt(0, onInt, RISING);

  SCB->SCR |= 1 << 2;  // Enable deep-sleep mode

  // Set the EIC (External Interrupt Controller) to wake up the MCU
  // on an external interrupt from digital pin 0. (It's EIC channel 11)
  EIC->WAKEUP.reg = EIC_WAKEUP_WAKEUPEN11;
}

void loop()
{
  Serial.println("Sleeping in 3");
  toggleAndDelay();
  Serial.println("Sleeping in 2");
  toggleAndDelay();
  Serial.println("Sleeping in 1");
  toggleAndDelay();
  __WFI();  // This is the WFI (Wait For Interrupt) function call.
}

// Called upon interrupt of digital pin 0.
void onInt()
{
  ledState = !ledState;
  digitalWrite(3, ledState);
}

// This just toggles the LED on pin 13, and delays.
// Used in between the sleep countdown Serial.println()
void toggleAndDelay()
{
  digitalWrite(13, HIGH);
  delay(500);
  digitalWrite(13, LOW);
  delay(500);
}
https://forum.arduino.cc/index.php?topic=337289.msg2335190
kranke
Members
Posts: 17
- Hey man! Thanks again. I am going to explore my options. Right now I think it's working like I want it to, but there are still some performance issues. I am thinking of exploring GIFs. Can I control a GIF with GSAP? Also, I feel Raphael is a viable option; do you mean importing assets and animating, or recreating the Illustrator vector graphics in canvas? Is that better? You are so helpful in a crunch time. Appreciated
- Hi, Thanks for the reply. OK, I got the whole thing working (not with a loader). I am fading opacity to 1 for each div to simulate the image sequence. Problem is that fading those in on top of one another kills my memory. I am trying to figure out a way to fade the previous frame back to 0. Here is a snippet of the code; the files are attached:

function createATimeline () {
    var children = document.getElementById('chartHolder').childNodes;
    for (var j = 0; j < children.length; j++) {
        chartTimeline.append( TweenMax.to(children[j], 1, {css:{autoAlpha: 1}}) );
    }
}

What I'd like to do is fade the previous image... I hope it makes sense. See the example online.
- Jack you are the man! I would name my next son Jack but I already have one called Jake... Thanks! Fernando.
- Hi, I have the dynamic props plugin, will take a look into it now. However, to explain myself better... I have this tween:

cameraZoomTween = new TweenMax(camera, 10, {z: 5000, ease: Linear.easeNone});

On mouse move up the screen, I do cameraZoomTween.play(), and on mouse move down the screen, cameraZoomTween.reverse(). This works well, but then I click in my nav to take the camera to a particular position, animating also x, y and zoom:

TweenMax.to(camera, 0.5, {zoom: selectedZoom, focus: selectedFocus, x: 300, y: 500, z: 1000, ease: Quint.easeOut});

Then it just jumps suddenly... I tried:

cameraZoomTween.updateTo({zoom: selectedZoom, focus: selectedFocus, x: 300, y: 500, z: 1000})

And it jumps too. In addition to this, I just don't know how to go back to the original tween, because every time I click a nav button it takes me to a whole different position. So basically I need to keep the original tween updated with the mouse position; then when I click a button it takes me out of that tween into a new one, and when I go back (or close that item) it should resume where it left off. So what does it mean, do I create a new tween like this:

cameraZoomUpdate = new TweenMax(camera, 1, {zoom: selectedZoom, focus: selectedFocus, x: 300, y: 500, z: 1000})

and if so, can I switch back and forth between cameraZoomTween and cameraZoomUpdate? Thanks for helping me out on this, I almost got it working but this is the last bit that is missing.
Updating TweenMax play position
kranke posted a topic in GSAP (Flash)

Hi, I have the following:

camera.z = -800;
cameraZoomTween = new TweenMax(camera, 10, {z: 5000, ease: Linear.easeNone});
cameraZoomTween.pause();

On mouse move the TweenMax plays or reverses based on direction:

switch (getVerticalDirection()) {
    case "up":
        if (!mouseOverInPlane) {
            TweenLite.to(cameraZoomTween, 1, {timeScale: 1, onStart: cameraZoomTween.play});
        } else {
            TweenLite.to(cameraZoomTween, 0.2, {timeScale: 0.01, onComplete: cameraZoomTween.pause});
        }
        break;
    case "down":
        if (!mouseOverInPlane) {
            //cameraZoomTween.reverse(true);
            TweenLite.to(cameraZoomTween, 1, {timeScale: 1, onStart: cameraZoomTween.reverse});
        } else {
            TweenLite.to(cameraZoomTween, 0.2, {timeScale: 0.01, onComplete: cameraZoomTween.pause});
        }
        break;
    case "none":
        TweenLite.to(cameraZoomTween, 2, {timeScale: 0.01, onComplete: cameraZoomTween.pause});
        break;
}

This works great. However, I also have buttons that take the camera out of that tween position. My problem now is that when I go back to the mouse movement, the camera sort of jumps immediately to the previous position (from where the tween was interrupted), and what I'd like, if possible, is to update that tween closer to the current position. So basically let's say my tween is playing and the camera is at z: 1000, and I click a button to send the camera to position 5000. When I resume the tween it goes back to position 1000, when instead I'd like it to resume from position 5000. I hope it makes sense; I have really tried my best without bothering you guys but I am pulling my hair out on this one. Thanks, Fernando
kranke replied to kranke's topic in Loading (Flash)

Cool Jack! I just wish I was half as clever as you are! Thanks much!
kranke posted a topic in Loading (Flash)

Hi, I have an alternate URL in case the image path I am supplying to the loader is invalid, however I am unsure how to detect it onChildOpen before it throws the error and use replaceURL to provide the backup path. So basically the error is detected using this:

imagesQueue = new LoaderMax({name: "imagesQueue", requireWithRoot: this.root, onChildOpen: initHandler, onIOError: IOErrorHandler, onChildComplete: childCompleteHandler, autoDispose: true});
imagesQueue.append(new ImageLoader(imagePath, {name: nodes.title, container: imageHolder}));

function IOErrorHandler(event:LoaderEvent):void {
    trace("error occured with " + event.target + ": " + event.text);
}

Basically I'd like the IOErrorHandler to say: well, if this URL stream is not valid and an image isn't found here, then try this other one.

if (image != null) {
    imagePath = String(nodes.collectionImageResizePath);
} else {
    imagePath = String(nodes.collectionImagePath);
}

So if the path I tried doesn't find an image, then try the alternate path... I looked over at the skipFailed vars and set it to false, and I don't know how to supply an alternate. Thanks in advance for any help on this issue. All the best, Fernando.
NetStatusEvent and NetConnection

kranke posted a topic in Loading (Flash)

Hi, is there any way to implement this loader with a NetConnection stream?
- Dear Jack! Yes this makes sense, but I also want to add to this LoaderMax the actual main.swf which is loading the child.swf. So far, thanks to your advice, I can get the loading percentage of the child.swf + all images. I need to also account for the main.swf (which is the preloader swf in which I create the LoaderMax instance). To better explain myself, if you look at your demos, you have LoaderMax_subload_parent.swf and LoaderMax_subload_child.swf, plus the images that load into the child SWF. In this example you obtain the loading time of LoaderMax_subload_child.swf and the images that load into it. However, you do not account for (or at least I cannot see) the loading bytes of LoaderMax_subload_parent.swf. Basically I want to add to my LoaderMax mainQueue the parent.swf byte size and load time. I hope this makes sense. Thanks as always for a prompt response. You are the very best. Fernando.
- Thanks for this explanation, this answers a different question but the original one still stands. I already got what I needed from the SWFLoader and its nested loaders. What I need is actually a way to add some sort of preloading to the main swf in which the SWFLoader is being created with the purpose of loading the content. so I have main.swf (I create SWFLoader here to load "content.swf") content.swf (I create LoaderMax to load the images) I can get information with loaders for content.swf plus the LoaderMax queue with all images which is great but I also want to add to this the loading time for the actual main.swf (which is the initial swf)
- Hey Thanks for replying so soon. Here is what I mean MY setup Main.swf (this has a loadermax instance mainQueue) Content.swf (this is the SWF I load) --- Inside this SWF I have another loadermax instance (which loads images from an array) imageQueue which I append to mainQueue) So far I am able to track the total load (Content.swf + images) which is wonderful... But I will also like to add if possible the weight size of Main.swf and the time it takes to download). Does it make sense? Thanks!!!
kranke posted a topic in Loading (Flash)

Hi, I successfully implemented your awesome loader in my app. I have a swf with a LoaderMax instance and this loads another SWF which has a nested LoaderMax. So far I am able to track the loaded swf and the images inside that swf as a whole, but what I cannot figure out is how to actually include the loading time for the actual swf in which the main instance of the LoaderMax is created. I hope I explained it well. Thanks, Fernando.
3d Zooming
kranke replied to kranke's topic in GSAP (Flash)

I guess to clarify my question: does anyone know of an easy way to use TweenMax to zoom in and out using the mouse position? I am using the TweenMax relative position z:"1000"... but that doesn't work?
content sprite can it be custom
kranke replied to kranke's topic in Loading (Flash)

One more question if you will. The COMPLETE event itself will be fired after the whole queue is done... right? Or will it be fired per ImageLoader? Also, I am creating my sprite and the array with my objects before the COMPLETE event, so your suggestion is to create it after? Also, the way I know it's all loaded is that in my complete function, if the number of images loaded == the image count, then go ahead and execute. Here is my whole class; please, if you have a minute, help me make sense of the LoaderMax...

package com.letsmota.core {
    import flash.display.Sprite;
    import uk.co.richardleggett.drupal.model.Node;
    import com.letsmota.events.SGImageLoaderEvents;
    import com.letsmota.events.SGPreloadEvents;
    import com.letsmota.content.SGImageLoader;
    import com.letsmota.settings.SGSettings;

    public class SGImageStorage extends Sprite {

        private var totalImagesCount:int = 0;
        private var imagesLoaded:int;
        private var imagePath:String;
        private var planeLabels:SGPlaneLabel;

        public function SGImageStorage() {
            loadAllCollectionImages();
        }

        private function loadAllCollectionImages():void {
            for each (var nodes:Node in SGSettings.drupalSiteMap.nodes) {
                if (nodes.type == "collections_image" || nodes.type == "contact_page" || nodes.type == "bio_page") {
                    totalImagesCount++;
                    var imageHolder:Sprite = new Sprite();
                    imageHolder.name = nodes.title;
                    imagePath = String(SGSettings.baseSite + SGSettings.baseDrupal + nodes.collectionImagePath);
                    var imageLoader:SGImageLoader = new SGImageLoader(imagePath);
                    imageHolder.addChild(imageLoader);
                    imageHolder.addEventListener(SGImageLoaderEvents.IMAGE_LOADED, increaseImagesLoadedCount);
                    SGSettings.imageStore.push({parentId: nodes.parentNodeId, id: nodes.id, path: nodes.path, collection: nodes.collection, image: imageHolder});
                    //trace(SGSettings.imageStore.parentId);
                }
            }
        }

        private function increaseImagesLoadedCount(event:SGImageLoaderEvents):void {
            event.stopImmediatePropagation();
            event.currentTarget.removeEventListener(SGImageLoaderEvents.IMAGE_LOADED, increaseImagesLoadedCount);
            //trace("This is the amount of images loaded so far", imagesLoaded);
            imagesLoaded++;
            if (totalImagesCount == imagesLoaded) {
                trace("All Collections Images and Thumbs Loaded, proceed");
                dispatchEvent(new SGPreloadEvents(SGPreloadEvents.ALL_COLLECTION_IMAGES_LOADED, imagesLoaded));
            }
        }
    }
}

And lastly, if I use the event's rawContent that's great, but if not, I do not want to use the displayContent... Thanks for taking the time to help. Fernando
https://greensock.com/profile/5736-kranke/
I'm in the early stages of trying to write some sensible Javascript. I want to namespace basically everything under the name of my application to avoid globals as much as possible, but still give me a way to access functions declared around the place. However, I don't want to be super verbose in my function definitions.
My ideal CoffeeScript would be something like this:
class @MyApp
  @myClassMethod = ->
    console.log 'This is MyApp.myClassMethod()'

  class @Module1
    @moduleMethod = ->
      console.log 'This is MyApp.Module1.moduleMethod()'

MyApp.Module.submoduleMethod = ->
  @
// application.js
class @MyApp
//= require 'module1'
//= require 'module2'
// module1.js
class @Module1
  @moduleMethod = ->
    console.log 'This is STILL MyApp.Module1.moduleMethod()'
I have a module solution that I use in my code.
I define my modules like below
@module "foo", ->
  @module "bar", ->
    class @Amazing
      toString: "ain't it"
Amazing is available as
foo.bar.Amazing
The implementation of the @module helper is:
window.module = (name, fn) ->
  if not @[name]?
    this[name] = {}
  if not @[name].module?
    @[name].module = window.module
  fn.apply(this[name], [])
It's written up on the coffeescript website here.
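For readers who don't write CoffeeScript, the helper above can be approximated in plain JavaScript as follows. This is a hand-written sketch of the pattern, not the actual CoffeeScript compiler output, and the class body is adapted (a constructor function stands in for `class @Amazing`, with a callable toString):

```javascript
// Hand-written JavaScript approximation of the CoffeeScript @module helper
// above (a sketch, not exact compiler output). It creates the namespace
// object if missing, copies the helper onto it so calls can nest, then runs
// the body with `this` bound to the namespace object.
var root = typeof window !== 'undefined' ? window : globalThis;

root.module = function (name, fn) {
  if (this[name] == null) {
    this[name] = {};
  }
  if (this[name].module == null) {
    this[name].module = root.module;
  }
  fn.apply(this[name], []);
};

// Usage mirroring the answer's example; Amazing is a stand-in for
// `class @Amazing` from the CoffeeScript version.
root.module('foo', function () {
  this.module('bar', function () {
    this.Amazing = function () {};
    this.Amazing.prototype.toString = function () { return "ain't it"; };
  });
});

// foo.bar.Amazing is now reachable through the global namespace.
```

Because the helper stores itself on each namespace object it creates, the inner `this.module('bar', ...)` call works from inside the outer one, which is exactly what makes the nesting in the answer possible.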
Source: https://codedump.io/share/NlTIBKioP5OV/1/how-do-you-write-dry-modular-coffeescript-with-sprockets-in-rails-31
Opportunity (Score:4, Interesting)
Great chance for noobs to try removing crap until something breaks, and then see if they got a usable "recovery disc" with their OS. That's how I got started with computers..
Buying from the likes of Best Buy (Score:2)
While it does not happen often, sometimes pre-built PCs actually have an attractive set of hardware. In that case, buying the thing and reinstalling the OS and the applications from scratch may be attractive. I remember a discount PC from the early 2000s that actually had components from reputable brands. A friend asked "can you recommend that?", I said yes and the PC actually worked fine for several years.
Of course, that requires a user that CAN do a reinstall if necessary. A DRM-free pirate version of you
Re: (Score:2)
I couldn't agree more. I'd like to see a way of quantifying this type of pain and aggravation (dealing with pre-installed trialware/crapware) into the cost of a PC.
Re: (Score:2)
That's like saying the only way to get a drivable car is to buy a Lexus.
Re: (Score:2)
Re: (Score:2)
The last one I saw had trial versions of iWork and MS Office, and of course the nagging to upgrade to Quicktime Pro.
Re: (Score:3, Interesting)
I hate to bring cars into this (obligatory car analogy?) but it's kind of like saying that it's an opportunity to become a mechanic if the new car you buy needs a lot of "under the hood" tweaking to get to run correctly.
The problem with the car analogy is that, with computers, there isn't as great a divide between "using" and "maintaining". Though few people do as much as installing their own car stereos or even changing their own oil, most people install software on their computer at some point. The skills of installing or uninstalling applications and moving/copying files are central to maintaining a computer, but they're also part of a normal user's repertoire.
Though I fully understand that most people don't want to
Re: (Score:2)
I certainly don't think it costs anywhere near $299. The chips are all standard models that have entered mass production. The operating system license is being subsidized by Microsoft. The real cost is assembly. Do you really think it costs that much to assemble a netbook?
No, the reason that BestBuy is interested in this is that it can't add as much markup to a $299 netbook as a $599 laptop. Therefore, any source of additional margin is a godsend for them.
Re: (Score:2)
Re: (Score:2)
Sorry, I think my sarcasm meter was broken earlier.
:)
Re: (Score:3, Interesting)
Every car I've ever bought new has needed an immediate ad-ware removal (bumper sticker & license plate frame).
Almost all of them, in my opinion, also needed an immediate brake pad replacement as well. Most people are satisfied with the crap that comes on there from the factory, though, even though they spend the first 20k miles scraping gunk off their wheels from the crappy pads, without even getting very good performance in exchange.
Many people buy a new car, and promptly shell out for "dealer options"
Re: New car ad-ware (Score:4, Interesting)
When I buy a new car, I add words to the contract that state: "Dealer shall affix no decals and will remove any dealer markings that are on the car. Dealer agrees to pay all costs of removal."
One car I bought had to go into the body shop so they could fill the holes created by the screw-on decal.
Re: (Score:2)
I simply wouldn't buy such a car. Holes? Really? You know what's going to rust out first.
Anyway, every dealership around here puts the crap on when the cars arrive on the lot. And I don't trust anybody to do a damage free job of removing stuff.
Re: (Score:2)
So you... don't ever buy cars?
Re: (Score:2)
I've never come across a new car where the dealer ad was affixed using non-factory-drilled holes.
Re: (Score:3, Insightful)
Great chance for noobs to try removing crap until something breaks
Except the "noobs" don't want that. They want to play games, watch porn and get on with their lives.
Re:Opportunity (Score:5, Insightful)
Except the "noobs" don't want that. They want to play games, watch porn and get on with their lives.
Then wonder why their computer is getting slow, and eventually think "i should just buy a new one".
Re: (Score:2)
Why should they have to learn when they can solve it with money? It keeps the economy going.
Re: (Score:2)
Because... it's bad for the environment?
:-P
Re: (Score:2)
I can.
Re: (Score:2, Interesting)
I still am amazed if that document is true about the $5 just to put it on people's PCs. This is marginally better than "Forced optimization" until people realize they're probably charging extra just to put this best buy installer on the pc.
I am not 100%, but I'll bet there's a charge for "setting up the best buy installer".
Re:Opportunity (Score:5, Informative)
Funny, I got my first Windows PC (A 486DX running Win3.1) because the guy that had it owed me $100 and had gotten it full of malware and didn't know how to fix it. He figured it was a good excuse to lose the debt and at the same time give him a reason to shell out nearly $3K! on a brand new P100Mhz to play...was Heretic or Hexen first? Ehhh one of the two.
I got into doing PC repair for a living when I stopped by my local shop to score some RAM sticks and heard the boss cussing his brains out. He got stuck with a truckload of Gateway Astro [thejournal.com] from some guy that owed him a grand, and while they all had restore discs no OS was installed and it refused to take the restore discs. I told him "why don't you just use a standard Win98 disc?" and he swore to me because of the funky USB everything on those it couldn't be done. I bet him the RAM sticks I wanted I could do it, and after the Win98 install simply stuck in the restore discs and installed the drivers manually. He handed me the sticks and said "Grab a seat, there are 40 more of those in the back". I ended up being "the scary biker guy in the back that does great work" for 5 years. It was funny to hear little old ladies go "is the scary biker guy here?"
But back to the topic at hand, the problem with Worst Buy (other than they suck of course) and these other groups that offer "optimization" is they don't actually understand the customer. I too offer optimization, and my customers love it and talk about me like I walk on water. The secret? The average customer does NOT want a faster PC! I repeat, they do NOT want a faster PC; they want an easier-to-use PC. So what I do is basically set them up a "toaster". Any customer that pays the $55 for optimization gets a PC that autoupdates, has AV set to autoscan and autoupdate, automatically cleans the registry and temp files, defrags itself, has all the codecs (thanks to K-Lite Mega) installed, Flash, Java, .NET, and Silverlight all installed, Firefox with ABP and ForecastFox installed, and finally Go Open Office and GNUCash.
When I'm done all the customer has to do is "flip a switch and go" and THAT, not squeezing an extra couple of notches in some benchmark, is what I've found the customers REALLY want in a PC. Unlike my old boss I don't get folks coming back in a month or two infected like a Bangkok whore, but I have found the referrals more than make up for that. Give folks a good value, let them know you care about more than just their wallet, and they will go out of their way to brag on you and send business your way. Worst Buy doesn't care how bad your experience is, once they have your money and that is why they have a bad rep. Well that and the shitty service, pervs that go through your files looking for porn, geeks that don't know the right end of a screwdriver....
Re: (Score:2)
Funny, I got my first Windows PC (A 486DX running Win3.1) because the guy that had it owed me $100 and had gotten it full of malware and didn't know how to fix it.
I call BS on that! Malware/Spyware didn't start becoming a problem until around the year 2000. At that time, most consumer PCs were still running Win98, 98SE, and the occasional WinME. It was usually bundled with shareware programs (Limewire and other P2P apps) and downloadable games. The other vector for getting them was when using Internet Explo
Re: (Score:2)
AHAHAHAHAHAHAHAHAHHAHAHAAAAAA [breaths] AHAHAHAAHAA....
Ok, now that I've had my laugh, get the fuck off my lawn.
Re: (Score:2) [blogspot.com]
Now, STFU!
Re: (Score:2)
Bitch please:
1) blogspot. oooh, your sources have me trembling.
2) spyware is not the only sort of malware.
3) this party most certainly did not get started in the early 2000's [wikipedia.org]
Re: (Score:2)
Malware and Viri are not really the same thing. While it is true that Malware can contain (and spread) Viri, it is highly unlikely you will ever find a Virus that installs Malware. Also, Malware is presented to the user as a legitimate program to be installed.
Viri, on the other hand, just need to be executed once (EXE, COM files, etc.) for the payload to be installed automatically and without the computer user's knowledge.
Like I said, Malware/Spyware only really started becoming a problem in 2000. And I stand behi
Re: (Score:3, Informative)
Please, you are just making yourself look silly at this point.
Viruses are widely considered to be a subset of malware (malware literally meaning "malicious software"). From wikipedia:
You might have a different definition of malware, but that definition is pretty much your own. The definition to you seem to be presenting for "malware" seems more in line wit
Re: (Score:2)
So basically, you crap up their machine with a bunch of shit they don't need and/or will have a hard time using since its not consistent with any other app they use.
Good job, you've recreated the same kind of crap setup you claim to be fixing.
Re: (Score:2)
Let me guess, you're the type that lets them loose with an unpatched Windows with no AV? No I do not put trialware, or crapware, or warez, or any of that other crap. a good 90% of mine is set up to use plain old Windows Task Scheduler to run the app at the appropriate time. But you really have to set up a lot of stuff for the clueless or you might as well just install the malware yourself and save them the effort, because they WILL get pwned otherwise.
Every machine that leaves my shop, whether they pay for
Re: (Score:2)
So in other words YOU sir are part of the problem! You know how many Best Buy computers I've had darken my doors with NO Windows updates since it left the factory, a shitty 30 day trail of Norton crapware, expired of course, and more viruses than a Bangkok whore? Too may to count!
Nowadays with zero day exploits you are frankly a fool if you let a Windows machine loose without having the latest patches and autoupdates running. And those Norton trialwares do nothing but bog the machine down and give the custo
OT music question (Score:2)
Any chance you've been listening to The Magnetic Fields [houseoftomorrow.com] lately? Specifically, All My Little Words [69lovesongs.info], track 3 on volume 1 of "69 Love Songs".
Just curious.
:)
Cheers,
Re: (Score:2)
Never heard of them. Are they southern? Because "all the tea in China" is a common phrase in the deep south, along with "colder than a witches tit" (which it is here right now, WTF happened to global warming?) "hotter than the hinges of hell" "slow as Xmas" and "dumber than a bag of hammers".
What can I say, we southern folk are a "colorful people" when it comes to language.
Re: (Score:2)
I'm not sure where the band is from, but one of the members might well be from somewhere southern.
Ah, local color. One of my favorite US town names was "Maggie's Nipples" (I think it was in Montana). The town was unfortunately renamed to something less notable some years after its founding.
But the colorful language goes beyond just English -- the Acadians have been similarly fond of no-nonsense-but-colorful nomenclature, calling one white bayou fish variety the sac-au-lait (bag o' milk).
I grew up with "
After being found out they drop it but now what wi (Score:2)
After being found out they drop it, but now what will they do with systems? Bill you $20 to put on Windows updates? And will they still pre-install them before selling systems, and only have systems with that added service in stock?
Re:After being found out they drop it but now what (Score:5, Informative)
The whole "pre-setup" thing was a crock from the get-go. It was SUPPOSEDLY so people who wanted the service could get a computer faster, but it just ended up being wasted labor. Myself and MANY other employees railed against this practice from the start, and of course management refused to listen.
What would happen is we would get the ads for the next week a few days early. Of the notebooks in the ad, a certain percentage of each we got in were to have the pre-installed garbage done to it. This started out fairly low, but soon we were being pushed to have 40% of each model done this way. And of course the people on the sales floor were told to push the HELL out of these systems. Why? Because technically, if the customer truly did not want the service, we were to restore it back to factory, or simply not charge them for it. Obviously this becomes a problem when a lot of customers don't want the service and they end up getting it for free. This is where they stopped having the in-store people do said service because it was wasted labor to do something for free, and also wasted labor to remove something the customer didn't want. The solution? A heavy internal push to have all of this done by the much-hated "Agent Jonny Utah".
Who is "Agent Jonny Utah", you might ask (other than a crappy Point Break reference)? It's nothing more than Geek Squad Outsourcing. They hook the computer up to the network, and use a customized version of LogMeIn to let someone in Bangalore or wherever do their job for them. Only half the time they don't do anywhere NEAR what a store employee would do. For example, when performing the service upon request, we would remove ALL trialware, make sure ALL updates were applied, and run a few scripts to generally make things a bit quicker and less resource-hungry. I could do about 5-8 computers at a time and have them all done inside of an hour. Agent Outsource? It would be up to 2 hours before they would even TOUCH the system, and then they would proceed to install the updates and give it a GWB-esque "Mission Complete." This meant we STILL had to do work to the computer when they were done, because they didn't really do anything to begin with.
AJU is also the reason you don't take your computer to the store to get it cleaned up. The VAST majority of the time, they will just hook it up remotely (unless it's so infected it can't get an IP, in which case they'll just want to do a restore) and let the remote guys take a whack at it. Surprise, surprise, more often than not they botch the job. And of course when it took 3x as long because of having to re-do the work, customers got upset and WE got the blame. We were NEVER to let the customer even THINK that the machine was worked on by someone other than the people they see behind the counter.
And this is why there is such a backlash anymore. Of the people who were there when I started in GS, only one is left. In my store (not sure about any others), we thought of ourselves as techs first and foremost. Those with that attitude were forced to change or leave, as they don't want techs. They want salesmen wearing a shirt and tie using the perception of knowledge to hock more crap. In the end, all we were there for was to sell services, but not perform them. Software? Have AJU do it. Hardware? Do they have a service plan? Ship it to Louisville. Only a manufacturer warranty? Give them the MFR number.
When I was new to GS, it was a culture of "help the customer, get them what they need, and build lasting relationships." When I left, it had become nothing but "milk as much money out of as many people as you possibly can."
On a final note, if you DO make the mistake of taking your PC to them for service, point blank ask them if THEY will be cleaning it, or if they're just going to hook it up to have some hackjob in Hyderabad run a few scripts and say it's done...
How did they do in store hardware upgrades? (Score:2)
How did they do in store hardware upgrades?
And shipping out systems that you had the parts in store to fix sounds like a waste of shipping costs.
Re: (Score:3, Informative)
CompUSA used to do that ($20), but we'd actually optimize the various settings (all the tweaks that a power user would do to increase performance), remove the crapware, install all the updates, activate Windows (and Office or whatever else was bought/came with the machine), activate and update the AV/AS software, configure the network settings so the machine would go online right out of the box (keep in mind this was back in the day when Windows post-setup would pop up an idiotic list of choices on how to g
$5 per PC (Score:2)
Best Buy will make an extra $5 per PC? How many PCs do they sell in the course of a year? This would just barely cover the wages for one of their Geek Squad dorks.
Re: (Score:3, Informative)
Re:$5 per PC (Score:4, Interesting)
The problem with that is the laptop will be a smoldering hunk of plastic two minutes after the warranty expires, which kinda kills the savings. Working PC repair I have had to deal with MANY Worst Buy and Staples "$300 specials", and for a good 7 out of 10 of the desktops, and probably closer to 9 out of 10 of the laptops, I have to tell the customer their best course of action is to shitcan it.
Why is that? Let me count the ways they bone you on those "$300 specials": Laptops- often they will use desktop chips in the laptops, and while Intel has thankfully killed the Netburst (although as late as last year I saw a Staples special with a netburst Pentium in a laptop) even the core desktop chips are WAY too hot for the small plastic laptop cases with those pissy little fans, which equal burnt chips, melted wires, just a mess. Speaking of fans, they screw you hard on the fans for both the desktop and laptop. Shitty fans that don't cool in badly designed cases is a recipe for disaster. Again fried chips, cooked HDDs, just nasty. Shitty plastic and substandard parts. I don't even have to explain what is wrong with that. Shitty heatsinks, again no explanation needed. Starving the OS, ala "Vista Capable". Thrashed drives, overheating, sluggish performance, and that is without the crapware.
Hell I could go on all day probably, but you get the picture. Those "$300 specials" are the most bottom of the barrel scraping junk they can throw together and frankly if it lasts 90 days past the warranty it is a miracle. I would recommend an off lease box before I would recommend a Worst Buy or Staples "$300 special" as they are 90% of the time anything but. Once in a blue moon you can get a good deal on last year's model when it comes time to roll out the next one, but even then you would probably get a better deal just buying directly from the manufacturer. Just about every PC I have seen from Staples and Worst Buy that was a "$300 special" was nothing but E-waste.
Re: (Score:2)
Re: (Score:2)
Well, a Dell $700 laptop is decidedly *NOT* a "good" laptop. Try a Macbook or Thinkpad T series... Of course, disposable laptops can make sense for certain use patterns, and they seem to fit yours. Nothing consumer line, especially a Dell IME, is "good". Build it yourself for a desktop, buy business model laptop or desktops or go disposable (buy one a year)...
Re:$5 per PC (Score:5, Informative)
The margins on PCs are ridiculously thin.
That's why manufacturers have resorted to bundling crapware, and now apparently retailers as well.
Re: (Score:3, Informative)
Since all Vista/Win7 DVDs are the same now, I just download my MSDN image and use our keys to install.
Re: (Score:2)
I'm pretty sure that's against the MSDN license... Hey, it's one of the pitfalls of software(and our legal environment), you've got to follow the license, however stupid it may be. You can only use retail disks with retail keys, VLK disks with VLK keys etc... Of course you *can* ignore the license, but at that point, I've got to wonder why not just go all the way illegal and pirate it to save the money?
Re:$5 per PC (Score:5, Interesting)
I was instructed time and time again to "walk" customers if they weren't getting additional accessories or services, and at least once a day I did. So even though we weren't "on commission", something we were told to tell every customer, that didn't matter, because we treated everyone like we were.
I know these stories are told every time an article about Best Buy pops up; I just wish more people could hear them. It has never been about providing "exceptional products and services in a user friendly environment", it has ALWAYS been about the fact that BB loses money when they sell computers without attachments.
Re: (Score:2)
When I'm in a big box store like Best Buy, I just politely tell the salesperson to take a walk. Of course, I rarely buy from such stores, tending to go through a few trusted online sources that treat me fairly, and keep the background sales buzz to a minimum.
Re: (Score:2)
CompUSA was much the same... but more so with printers... if you sold a printer, it BETTER go out the door with a USB cable and a set of ink cartridges.
Now some people would say "Well, duh, they need a USB cable since they don't come with printers anymore" but the simple fact is most people don't come in to buy their first printer, so most already have a printer cable, and a large portion of those people have a USB cable (while the rest had parallel).
But again, same reasons... $0-$5 a printer doesn't make
Re: (Score:2)
The real problem is that computers are a commodity item, and they're trying to sell commodities at retail with a luxury twist. A couple of grocers can get away with selling organically grown corn, but everywhere else, corn is corn, and comes in a cardboard or wood box on a shelf. You can try and sell butter and salt and
Re: (Score:2)
Yes, I didn't state it very well. I was trying to say that if they stopped the price wars between each other, they wouldn't be in this boat. But each always has to be the cheapest - until there is little to no margin left.
Still not clearly stating what I mean... but I need more coffee, so that's the best I can do for now.
;-)
Re: (Score:2, Informative)
you can assume you didnt read anything else i read, so just STFU.
Thanks for the warning (Score:5, Funny)
of the new Virus.
Interesting (Score:5, Insightful)
"preinstalled on most PCs, except Dell and HP"
Wonder if they are going to install it on Macs.
Re:Interesting (Score:5, Informative)
It's not on HP because HP has so much junk trial software already, any more and it's going to explode. Well, at least the battery will, assuming it's a laptop.
Agreed (Score:3, Informative)
I did a reinstall on a friend's HP Vista laptop, and I was shocked and appalled by the amount of junk on there. The long interactive Flash video that plays when the computer is first booted would also be extremely misleading to a novice, as it appears to be offering software choices, but it's really just a bunch of advertising. This was far worse than any Dell or Sony I have worked on in the past.
The reinstall was needed after I attempted to work on her computer and noticed she didn't even have SP1 for Vi
Re: (Score:2)
How much does an upgrade from Vista Home Premium to Windows 7 Ultimate cost?
suckers (Score:4, Insightful)
Re:suckers (Score:5, Interesting)
Re: (Score:2)
And, I might add, in the case of a true emergency, you could return (unused) things to Walmart with a lot less difficulty, while buying more milk, eggs, etc.
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
My son has that exact same one. Bought in August. It's in for a new HD right now.
The GeekSquad dude was surprisingly non-pushy about extra services and crap. When he asked about backups and reinstallation, "Nope, I just need a functioning hard drive". 'OK, come back Tuesday'.
Re: (Score:2)
Best Buy has decent prices for the things consumers pay attention to, and indeed something like three years ago Best Buy stopped the insane upselling pressure they were putting their customers under, but buyer beware for the things that consumers don't initially pay attention to, or initially comparison shop on.
(Mon$ter Priced) cables, spare Lithium-Ion batteries, or returning/troubleshooting issues, those are where Best Buy will still try to screw you on. You don't have to take my word for it. Just dig up
Re: (Score:2)
Re: (Score:2, Interesting)
I bought my last laptop from Best Buy. It wasn't for me, it was for my wife. She's perfectly happy with all the crapware that's installed. I shudder at it. The computer I purchased for myself came from a military base and was too (probably) loaded with junk. I wouldn't know. I had wiped it before I even had a chance to read the Vista license agreement. Now that said system dual boots Windows 7 and Ubuntu. Not a single bit of crapware in sight on either one.
Oh, as for my wife's system, the only thing I did w
Re: (Score:2)
Not a single bit of crapware in sight on either one.
Wait...you just finished saying Windows 7 was installed...
Ba dum tsh!
Thank you! I'll be here all night! Try the veal!
(P.S. I actually like Windows 7, but, the joke popped into my head, and I'm tired...)
Re: (Score:2)
Best Buy Sucks (Score:4, Interesting)
Re:Best Buy Sucks (Score:5, Funny)
I would bet its just another attem
Shit! The Geek Squad already got him!
Re: (Score:2)
I suppose it depends, but on my IBM Aptiva I got in 1995, it did come loaded with the full version of Lotus SmartSuite which included 1-2-3... It also (for some reason) came with MS Works. It was good in that I don't actually recall getting trialware except for the ISP links, but actually full version software. Then again, I suppose at $3500, they could throw in some software.
Best Buy's stance (Score:5, Informative)
From that memo, it seems that Best Buy admits that there's not much of a speed boost in it, certainly not $40 worth, but they still justify it as a time-saving procedure. That is, if you're some CEO and have a shitload of money but little time, then you don't want to waste it uninstalling trials of NetZero and Microsoft Works (which we don't actually uninstall anymore, we just prevent it from starting up automatically, since some customers complained that their new computers came without the great software trials that HP/Sony/Toshiba advertised).
It didn't seem like they wanted to stop the service, although they DID remind everyone that optimizing more computers than are likely to be sold and then making customers pay for them even if they don't want it is illegal and a bait-and-switch. Which is great, because the managers here in a central North Carolina store were seriously considering optimizing 90% of stock and trying to get rich that way. Bastards.
Re: (Score:2)
I'll believe you.
No kidding they dropped it (Score:4, Insightful)
Re: (Score:2)
It was never bait and switch. Ever. Not even a little. Read your own link...
How do you figure? The practice, as I understand it, was:
1) Advertise a unit
2) Customize a (large) portion of those units
3) Sell both, until the uncustomized runs out
4) Refuse to sell the customized units at the advertised price, instead refusing the sale or referring them to another store
Bait - Advertised unit, advertised price
Switch - Customized unit, higher than advertised price.
Misunderstanding (Score:5, Interesting)
The truth is that the optimization service is a good one for many people. Best Buy creates the specifics of the optimization service based on feedback from their customers and from the Geek Squad Agents who work on their computers. You must realize that for the majority of the Geek Squad's customers, a computer (tower) is a "router," Toshiba is "Toshibia," Linksys is "Linksky," Windows 7 is "Windows Veesta 7," and that's only if they know the difference between Windows and MS Office (which MANY do not). We're not talking about people with even passing computer knowledge. For these people, not having an icon for Internet Explorer or My Computer on their desktop (as is the case in many freshly-purchased machines) is akin to having a car with no steering wheel or pedals. The optimization service is designed to maximize the usability of a new computer for those customers who need it.
The optimization service takes some time (30 minutes to an hour) to complete. To save customers some time, the Geek Squad will "pre-optimize" a small percentage of their computers. In doing this, they are not violating any laws provided they leave any minimum available quantity (if stated in the weekly ad) unopened. If you attempt to purchase a computer and all they have left are pre-optimized units, they are required to sell you the computer at the normal retail price. They can not force you to pay the optimization fee. They do have the option, however, to restore the computer to factory defaults before they allow you to leave with it, and they do not have to give you an open-box discount. If employees are breaking these rules (laws) it is because of the poor management I referred to earlier, but it is certainly not company policy.
The real villains here are Microsoft and the computer manufacturers for not providing a consistent and customer-friendly experience for new computer buyers. Some of it comes from simple economics and marketing: manufacturers can reduce selling cost by including loads of trial software, not including MS Office and antivirus software, etc. The savings are then (misleadingly) passed to the customer. (I am sure, though, that Best Buy's enormous purchasing power has some say in what the manufacturers do.)
Re: (Score:2)
The real villains here are Microsoft and the computer manufacturers for not providing a consistent and customer-friendly experience for new computer buyers.
I believe they tried that once. It was called "Bob", and as I recall, it didn't go over so well.
Re: (Score:2)
You must be an Agent or a former Agent.
As a former Agent myself, thank you for attempting to express that the individual employees aren't trying to scam anyone and that WHEN FOLLOWING COMPANY POLICY, management isn't either.
However, managers everywhere go and screw the pooch.
Re: (Score:2)
You must be an Agent or a former Agent.
I had the same thought. I worked for them back before the GS Kool Aid got passed out, and guess what - things haven't changed a bit.
I know what you 'Agents' were told, and I know you're wired to believe it, but unfortunately it just isn't true.
Back to the OP:
The problem with the Geek Squad is that Best Buy managers are often so far removed from what the Geek Squad is and how it should work that it becomes a poorly managed mess in many stores. This is the crux of the issues many people have with the Geek Squad.
No, close, but no. The problem is that Best Buy corporate LIED to you about the mission. GS exists to expand Best Buy's bottom line, period. If they really were some altruistic, independent entity then the overlaps wouldn't exist. Agents wouldn't w
Delete trialware? (Score:4, Insightful)
"... Translation: instead of you paying Best Buy to delete trialware from your new PC,"
I thought the Best Buy optimization thing only removed the shortcut icons to the trialware, and didn't actually uninstall or delete any of it?
You need only one program to remove trialware (Score:2)
New or used PC, download and run The PC Decrapifier [pcdecrapifier.com] Below is a list of programs it will remove. Very simple to use.
AOL Install
AOL UK
AOL US
Corel Paint Shop Pro Photo XI
Corel Photo Album 6
Corel Snapfire Plus SE
Corel WordPerfect
Dell Search Assistant
Dell URL Assistant
Digital Content Portal
Earthlink Setup Files
ESPN Motion
Get High Speed Internet!
Google Desktop
Google Toolbar
HP Rhapsody
Internet Service Offers Launcher
McAfee
Microsoft Office Activation Assistant 2007
Microsoft Office Home and Student 2007
Microsoft O
Re: (Score:2)
No worries. It doesn't just start nuking programs without asking you first. In fact, it walks you through some lists with program check boxes. Simply review your options prior to executing the removal.
If you're in a corporate environment, they even let you interface with it via CLI for making batch jobs. You have to pay for that version though.
Re: (Score:2)
Microsoft Office Home and Student 2007 and Standard 2003 seem very important to me, as does Norton Ghost. Hopefully the program lets you select which programs you want to remove.
Then you'll be disappointed to note that those programs aren't actually 'included' with the PC. All of these you've noted are trials. Office limits you to opening the software a fixed number of times. Norton gives you a short window of protection. In all cases, if you find you like it, you are expected to pay full price.
Something we learned (Score:2)
When we purchased our 42-inch LCD last year, we had already figured out which TV we wanted, and went to the local Best Buy store to get it. First thing we did when we were approached by one of their people...
"We're here for this TV, and only this TV. We're not interested in extended warranties, or home theater systems and overpriced cables, and we're not interested in someone coming to our house to set it up. We're both experienced IT individuals, we've already got great HDMI and optical cables from monopr
Re: (Score:2)
This.
I have, on occasion, bought a computer this way as well. If you know what you want and actually shop the prices on it, eventually Best Buy will beat the online vendors.
Personally I prefer that 14-day window to swap it for a new one to dealing with shipping issues online. And I have in fact used it enough times to worry about it.
They musta stole it... (Score:2)
That's a description of Synaptic and apt-get.
What could go wrong? (Score:2)
This is like when the DVR first came out and you could skip (not just fast forward) entire commercials. The people
Re: (Score:2)
$1K for a PC? What decade are you from?
Re: (Score:2)
The Canadian Best Buy website has at least 3 dozen PC + monitor bundles between $400 and $900, throw in another $50 for a printer.
I expect the US dollar would go even further.
The margins on this stuff are razor thin, or even a bunch of jerks like Best Buy wouldn't be resorting to bundling crapware for a measly $5/unit.
Re: (Score:2)
Not sure, as I don't iPhone at all. However, all of that crapware you see is there because back in the day, people were pissed that the new $2000+ computer they bought had no software, so they had to spend $1000 or more on that before they could do anything. Retailers like Best Buy made big deals with the PC makers and software companies to preload this junk so they could rightly advertise that the machine came ready to run with all of that software. Unfortunately, the software loaded was never top-shelf st
http://news.slashdot.org/story/10/01/10/1348259/best-buy-abandoning-optimization-service
Subject: Re: [boost] Visual Studio 2015 Update 3 has removed std::unary_function and std::binary_function
From: Andrey Semashev (andrey.semashev_at_[hidden])
Date: 2016-11-06 13:38:41
On 11/06/16 09:14, Daniela Engert wrote:
> Am 05.11.2016 um 20:21 schrieb Marshall Clow:
>> ... if you build with /std:c++latest.
>>
>
>> A quick grep for these terms found 153 instances of "std::unary_function"
>> and 118 of "std::binary_function" across several libraries, including
>> accumulators, algorithm, config, container, function, gil, graph, icl, mpi,
>> msm, polygon, ptr_container, serialization, smart_ptr, tr1, unordered,
>> utility - and probably others.
>
> The full list of libraries that need to be modified to run the test
> suite with /std:c++latest is this:
>
> accumulators, algorithm, asio, assign, bimap, bind, config, container,
> core, date_time, detail, function, functional, fusion, gil, graph, heap,
> icl, interprocess, intrusive, iostreams, iterator, lambda, locale,
> lockfree, move, msm, parameter, phoenix, polygon, pool, ptr_container,
> python, random, range, regex, serialization, signals, signals2,
> smart_ptr, spirit, statechart, test, typeof, wave, winapi, xpressive
>
> Library tr1 is dropped outright.
>
> This covers all stuff that is affected by the removal of deprecated
> features: namespace tr1, adapters, binders, random_shuffle, and auto_ptr
>
> I've made the required changes for 1.62 and I'm in the process of
> applying those changes to 1.63. In many cases it is sufficient to simply
> drop the function binders, in some cases other parts of the libraries
> rely on the tr1 protocol.
>
> All of this stuff is on GitHub:
> Each affected library has a branch 'feature/remove-deprecated'
> I can send out PRs to library authors interested in that and help
> working on a converged Boost-wide solution
Thank you Daniela for the link. I've incorporated some of your changes
into Atomic, Core, Detail, Log, Utility and WinAPI.
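Marshall's quick grep can be reproduced against any local tree. The following is only an illustrative sketch (a hypothetical script, in Python rather than shell; the directory layout and file extensions it scans are assumptions), counting uses of the deprecated names per top-level library directory:

```python
import os
import re
from collections import Counter

# The names MSVC removes under /std:c++latest along with the rest of the
# deprecated C++98 function-object machinery.
DEPRECATED = re.compile(r"std::(unary_function|binary_function)")

def count_uses(root):
    """Count occurrences of the deprecated names under each subdirectory of root."""
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".hpp", ".cpp", ".ipp", ".h")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                hits = len(DEPRECATED.findall(f.read()))
            if hits:
                # Attribute hits to the first path component under root,
                # i.e. the library directory.
                lib = os.path.relpath(path, root).split(os.sep)[0]
                counts[lib] += hits
    return counts

if __name__ == "__main__":
    # "libs" is the conventional Boost checkout layout; adjust as needed.
    for lib, n in sorted(count_uses("libs").items()):
        print(f"{lib}: {n}")
```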
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2016/11/231432.php
Custom Traffic Maps with Yahoo Traffic RSS,
MSN Virtual Earth Maps, and
Remote Scripting [AJAX=NOT!]
by
Peter A. Bromberg, Ph.D.
"Rudeness is the weak man's imitation of strength" -- Eric Hoffer
Last year (2004) I went to San Diego Tech-Ed, and I've always remembered one of the coolest apps that one developer put together for the Pocket PC. He got some traffic data from the local provider (CALTRAN or whatever it is in Southern CA) and plotted it in colors over a map route of his trip into San Diego. There were three color codes - green, yellow, and red. Anyone who is familiar with San Diego traffic on a weekday morning can guess what color most of the markers were! Aside from B.J. Holtgrew's wild taxi ride over the border into Tijuana chasing their runaway VSTO promotional blimp, most San Diegans are pretty much resigned to the traffic crunch.
I always wanted to provide something similar as a sort of public service and developer tutorial, but wasn't able to find the traffic data, until recently. It seems that Yahoo has resurrected their traffic data and now provides reports on most major cities in RSS format. Here's a sample URL:
The csz parameter above is a zip code (or a location such as city and state); the mag parameter is the number of miles of range to show, and the minsev parameter is the minimum severity of accidents and incidents to show. You can click on the link if you want to see the actual RSS XML document that it returns. If you do, note carefully that the "link" element in each "item" aggregate has a URL-style delimited string of information that includes the latitude and longitude coordinates of the actual incident.
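As a quick illustration of how those three parameters combine into the feed querystring (a hedged sketch in Python; the base URL here is a placeholder rather than the actual Yahoo endpoint, and `traffic_feed_url` is a made-up helper name):

```python
from urllib.parse import urlencode

# Placeholder only -- substitute the actual Yahoo traffic RSS endpoint.
BASE = "http://example.com/traffic.rss"

def traffic_feed_url(csz, mag, minsev):
    """Build the feed URL: csz = zip or 'city state', mag = miles of
    range to show, minsev = minimum incident severity to include."""
    return BASE + "?" + urlencode({"csz": csz, "mag": mag, "minsev": minsev})

print(traffic_feed_url("32801", 10, 3))
```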
"Well," I thought - "RSS means I can handle this traffic stuff as easy as falling off an overpass!". If you've followed any of my previous articles dealing with RSS you may remember that a key factor is the ability to use the DataSet's built-in ReadXml method to consume the entire RSS feed programmatically in one fell swoop! That's right, you just point the ReadXml method at your "preconstructed" Yahoo traffic URL and querystring, and Presto! -- you have yourself a DataSet. As our friend and contributor Dr. Dotnetsky would probably say, "That's slicker'n snot on a doorknob!" The only thing remaining is to figure out how to parse out what you need. In the case of Yahoo Traffic, we don't even need any Regex, because we can String.Split our way into Traffic Heaven.
The other interesting thing that came about recently is the rollout of MSN Virtual Earth, the API, and the Developer help pages and site. Google Maps is great, Virtual Earth could be even better. Why? A very simple Javascript-based API, lots of good features, and in some cases, the maps and aerial photo tiles are better than Google's. Plus, you don't need a license key. The commercial license requires only that you include the MSN widgets on your map, which eventually will be used to serve ads. If you want to see what it all looks like, visit the MSN Virtual Earth site here.
As a side note, I should probably mention that my MVP buddy and co-founder of this site, Robbe Morris, and I decided we'd put the complete (17 million+ location records) optimized US Census TIGER geolocator database online so that we could start to provide geocoding services. We now have our database in full operation, with a complete .NET WebService interface, so if you have data that you need to be geocoded or you have a business idea you'd like to explore for a possible joint venture, be sure to let us know. We'll provide trial license keys to the GeocoderService upon request. Our geocoder service returns latitude / longitude coordinates of any parsed address in the United States with six digits of precision - tight enough to plot a geolocation point 300 feet or less from your house!
But, I digress. Let's get into the code!
The final piece is Remote Scripting. And of all the offerings out there, Jason Diamond's "MyAjax.Net" is, in my opinion, the best. Why? Simple. The code is elegant and mature, it's only about 450 lines (which means you can actually INCLUDE 100% of it right inside a UserControl, which I do here), and above all, it maintains and roundtrips the stateful ASP.NET page and the controls in it, which is so important to the concept of Remote Scripting with ASP.NET, without relying on external URLs or HTTPHandlers. The version I use here is something around version 6, and I've changed the namespace "back to" RemoteScripting, for reasons I don't feel it's necessary to repeat. However, Jason's latest version is currently up to number 10, and most of the additions have not been "fluff". I've also included a second solution download using the newest "10" version, with the Remote Scripting class in its normal "habitat" - outside of the control, and with no changes whatsoever to his distributed code. The whole concept here is that if you select some stuff and press a button, you shouldn't have to endure an annoying page reload. We should be able to go get our "stuff" (in this case a real DataTable from the DataSet I referred to above) and work with it in the client-side state of the page, updating the DOM and using the Virtual Earth map controls. You can see Jason's marvelous piece of work, which already has several contributors, here. One thing developers need to think about with all this Remote Scripting / Ajax stuff is that it's easy to "go overboard". Think about the user experience, how your application will behave online, and use Remote Scripting judiciously where appropriate.
Before we get into the actual code, if you are curious, you may wish to take a quick look at my live sample here.
If you looked at the above, you realize that the visible UI is very simple; we have a textbox for zip code, and some dropdowns for the minimum severity level of the incidents (accidents) to show, miles range to show, the Virtual Earth Map initial zoom factor, and a button to make it all go. Since this stuff is trivial for most developers, I'll skip to the goodies - what happens when you press the "MAP!" button.
In the very top of the User Control, I add a "helper" variable that allows me to identify the UniqueID of the control:
Page.RegisterStartupScript("ctrlId","<script>var ctrlId='"+this.ClientID +"_';</script>");
Then I have the server-side C# RemoteScripting.Method attributed ("Ajax.Method") method that is called when the button is pressed:
[Method]
public DataTable PopulateZipData(string zipCode, int miles, int severity)
{
// URL fragments for the Yahoo Traffic RSS feed (base URL not reproduced here)
string url1 = "";
string url2 = "&mag=";
string url3 = "&minsev=";
DataTable dt2 = new DataTable();
// make the call and do map: ReadXml pulls the whole feed into a DataSet
DataSet ds = new DataSet();
// use the zipCode argument marshalled from the client
string fullUrl = url1 + zipCode + url2 + miles.ToString()
    + url3 + severity.ToString();
ds.ReadXml(fullUrl);
DataTable tbl = ds.Tables[2];
dt2.Columns.Add("latitude");
dt2.Columns.Add("longitude");
dt2.Columns.Add("title");
dt2.Columns.Add("description");
dt2.Columns.Add("severity");
DataRow row1 = null;
foreach (DataRow row in tbl.Rows)
{
row1 = dt2.NewRow();
row1["title"] = row["title"];
row1["description"] = row["description"];
row1["severity"] = row["severity"];
// the "link" element carries the coordinates as querystring pairs
string[] gotRow = row["link"].ToString().Split('&');
foreach (string s in gotRow)
{
if (s.IndexOf("mlt=") > -1)
{
string[] mltStr = s.Split('=');
row1["latitude"] = mltStr[1];
}
if (s.IndexOf("mln=") > -1)
{
string[] mlnStr = s.Split('=');
row1["longitude"] = mlnStr[1];
}
}
dt2.Rows.Add(row1);
}
Session["mapds"] = dt2;
return dt2;
}
I'm pretty sure I don't need to explain much about the above; as mentioned, it populates a DataSet from the concatenated Yahoo Traffic RSS feed URL, and then it creates a new DataTable and does some more parsing and string splitting to get "latitude", "longitude", "title", "description" and "severity" columns, with a row for each incident that the Yahoo feed brings back. Then, I store it in Session just in case I want to do something else with it, such as transfer to a different page. I also return the DataTable from the method.
Incidentally, you can still use the XmlReader overload of DataSet's ReadXml method if you are behind a proxy / firewall such as ISA Server. Here's how:
// proxy code:
HttpWebRequest rqst =
(HttpWebRequest)WebRequest.Create(fullUrl);
WebProxy loProxy = new WebProxy("yourISAServer:8080",true);
loProxy.Credentials = new NetworkCredential("username","passs");
rqst.Proxy = loProxy;
HttpWebResponse rsp = (HttpWebResponse)rqst.GetResponse ();
XmlTextReader rdr = new XmlTextReader(rsp.GetResponseStream());
ds.ReadXml(rdr) ;
The rest of the server-side codebehind for the control is Jason Diamond's "myAjax.net" code, virtually untouched, except with the namespace changed "back" to RemoteScripting.
Now, we switch on over to the client side, where our callback method receives the DataTable:
<script>
function PopulateWithDataTable() {
//alert(ctrlId);
var zip = document.getElementById(ctrlId+'txtZip').value;
if(zip.length <5) return; // need 5 digit zip code dood.
var slM =document.getElementById('slMiles');
var slMiles=slM.options[slM.selectedIndex].value;
var slS=document.getElementById('slSeverity');
var slSeverity = slS.options[slS.selectedIndex].value;
// show the user a message while we make the out-of-band call...
document.getElementById(ctrlId+"lblZipInfo").innerText = "Getting Data...";
VETraffic.Control.PopulateZipData(zip, slMiles,slSeverity,DoTableCallBack);
}
function DoTableCallBack(result) {
// here is our callback where we process our return data in the "result" object
var table=result.value; // yup, it gave us a DataTable in script.
/*
VE_MapControl(Latitude, Longitude, Zoom, MapStyle, PositionType,
Left, Top, Width, Height);
*/
if (table==null)
{
document.getElementById(ctrlId+"lblZipInfo").innerText =
"No Data for selection.";
return null;
}
var width = screen.width;
var height = screen.height;
var zoom = document.getElementById("slZoom");
var zoomlevel = zoom.options[zoom.selectedIndex].value;
map = new VE_MapControl(table.Rows[0].latitude, table.Rows[0].longitude, zoomlevel,
'r', "absolute", 0, 280, width, height);
// add a location marker
var descr= '';
var sevr='';
var titl='';
var popstr="";
for (var i = 0; i < table.Rows.length; i++)
{
descr = table.Rows[i].description;
sevr = table.Rows[i].severity;
titl = table.Rows[i].title;
map.AddPushpin('pin' + i, table.Rows[i].latitude, table.Rows[i].longitude,
80, 110, 'pin2', "<div id=pinmarker" + i + " title='" + titl
+ descr + "'>" + sevr + "</div>");
}
document.getElementById(ctrlId+"Panel1").appendChild(map.element);
document.getElementById(ctrlId+"lblZipInfo").innerText = "Done.";
}
</script>
As can be seen above, the RemoteScripting infrastructure has marshalled the server-side call back into a javascript object that comes back into the page, looking, feeling and acting just like a server - side DataTable. The rest of the code is simply to use the Virtual Earth API to create a map, and add the "pushpin" location marker objects. The HTML "title" attribute of the div tag is used to show the mouseover message describing the Title and Description fields of the traffic item, and the visible text of the pushpin is the Severity column. So, it makes for a very compact UI on a zoomable, pannable map.
I've seen others do stuff like this and use up incredible amounts of javascript to show custom-styled javascript popups with timers and such. I assure you, this is completely unnecessary. I believe "less is more", and the title attribute, which is present in most HTML elements, will work just fine.
I'm already using this before I leave for work every morning, and it has been very helpful to me. I encourage you to create your own. Please read the Virtual Earth Terms of Service, as you are required to use the commercial format (with MS's "widgets" on the map) for commercial purposes. Then, you hopefully won't see stuff like this:
I have two solutions you can download today. The first is the original I present in the article, and the second is the identical solution, but compiled with Diamond's MyAjax.Net version 10 (the newest) library, with his "Ajax" Remote Scripting class outside of the control.
Download the Solution that accompanies this article
Download the Version 10 Solution
http://www.nullskull.com/articles/20051017.asp
[Ertl, John]
> I need to take a number and turn it into a formatted string.
> The final output needs to look like XXXXYYYY when the X is the
> integer part padded on the left and Y is the decimal part padded
> on the right.
> I figured I could split the number at "." and then use zfill or
> something like this (LEVEL1 = "%04d" % LEVEL1) for the XXXX
> part but I am not sure how to right pad the decimal part of the
> number.
> Example.
> 1 and 1.0 needs to look like 00010000 (I figured I would have to
> check the length of the list made from the split to see if a decimal
> portion existed)
> 1.1 needs to look like 00011000
> 22.33 needs to look like 00223330

Really? The input has two digits 3, but the output has three digits 3.
I'll assume you meant 00223300 instead.

> 4444.22 needs to look like 44442200
> Any ideas on the right padding the decimal side using "0"

I expect that a "%09.4f" format does everything you asked for, except
that it contains a period. So let's try to repair that:

>>> def johnpad(n):
...     return ("%09.4f" % n).replace('.', '')

Then:

>>> johnpad(1)
'00010000'
>>> johnpad(1.0)
'00010000'
>>> johnpad(1.1)
'00011000'
>>> johnpad(22.33)
'00223300'
>>> johnpad(4444.22)
'44442200'
>>>
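For anyone reading this thread today, the same trick generalizes with configurable widths; a small sketch (the `pad_number` name and its parameterization are mine, not from the original post):

```python
def pad_number(n, int_width=4, frac_width=4):
    """Zero-pad a number into a fixed-width digit string.

    The integer part is left-padded to int_width digits and the
    fractional part right-padded to frac_width digits, with the decimal
    point removed -- the same idea as the "%09.4f" format above, but
    with the widths as parameters.
    """
    # Total field width = integer digits + the '.' + fraction digits.
    formatted = f"{n:0{int_width + 1 + frac_width}.{frac_width}f}"
    return formatted.replace(".", "")

print(pad_number(1))        # 00010000
print(pad_number(22.33))    # 00223300
print(pad_number(4444.22))  # 44442200
```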
https://mail.python.org/pipermail/tutor/2004-December/034098.html
Yes, thank you.
I've fixed it already, due to the help of Robert Murphy.
I think it's a good idea to add explanations to the SMW admin manual.
Maybe we should consider adding an explanation to the SMW user manual too.
For example, adding a small section regarding namespaces to this article:
With the link to
Regards,
V.
On 4/27/08, S Page <info@...> wrote:
>
>
>
Joe Clark wrote:
> Here's an interesting series of steps:
>
> * Create a few pages using "id", "description", and "hours" properties,
> but don't create the Property:description page.
Those properties default to type:Page (relations between pages) and
internally are stored in the smw_relation table.
> * On page "Test", create an inline query based on an "id" property and
> also print "description" and "hours" properties.
> * Create the Property:description page and add the text [[has type::String]]
Changing type does not update all the semantic information in the
database. All it does is add/update an entry in the smw_specialprops
table for that property page. Furthermore a string property is stored
in a completely different table, smw_attributes. If you go to
Property:description you'll see there's nothing present, because SMW is
querying the wrong table.
> * Render page "Test": it now shows a blank "description" field for each
> result row (before it had shown the correct descriptions).
Same thing. SMW is querying the wrong table.
> * Run the SMW_refreshData.php script from a login shell. No errors are
> reported.
This is the right thing to do. (Since you've updated the property page
within normal MediaWiki, you don't have to first run SMW_refreshData
with -p to restrict to property pages to get the type information.)
SMW_refreshData calls MediaWiki's parser, then calls
SMWFactBox::storeData. This deletes the old information about the few
pages with Property:Description and updates the new information. So the
underlying tables are now correct.
I'm not sure what happens to the page with the query during
SMW_refreshData.php. I'm pretty sure SMW executes the query, but I
don't know what happens to various caches.
> * The "Test" page still renders incorrectly.
Yup, I noticed this when I reproduced. In my case MediaWiki loads an
old version of the page from the parser cache. I see
Trying parser cache wikidb:pcache:idhash:1838-0!1!0!default!!en!2
SQL: SELECT /* MediaWikiBagOStuff::_doquery 127.0.0.1 */
value,exptime FROM `objectcache` WHERE
keyname='wikidb:pcache:idhash:1838-0!1!0!default!!en!2'
Found.
in the debug log. SMW definitely doesn't execute the query.
It seems to me that either SMW_refreshData should re-cache in whatever
cache(s) is holding onto the wrong query results,
or it should invalidate that cache(s). Various parts of MediaWiki call
ParserCache->save(), but they're all commented @todo document.
As has been mentioned, one way to invalidate the cache is to touch
LocalSettings.php or re-save the file.
> * Edit/Save the "Test" page. -- This finally fixes the rendering problem.
>
> So, is this having to edit/save pages a common occurrence? It seems to
> me (after an admittedly short intro period), that SMW does some very
> cool things, but it can be a little misleading about what data is
> present with expected "glitches" like this. I want to at least learn
> the rules, so I know when I need to update things.
The rule is: editing a page only updates that page's semantic information
(and that information is all the properties for which the page is the
subject). Nothing that depends on those properties will be re-run, and
results may be subject to caching.
The apparent rule is if you need to invalidate queries and possibly
other stuff in the parser cache, update the modified time of
LocalSettings.php.
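A scripted "touch" can also be done as a null edit through the MediaWiki action API; this is only a hedged sketch (the endpoint URL and helper names are assumptions, and a real client must also log in and supply an edit token):

```python
import urllib.parse

API_URL = "https://example.org/w/api.php"  # assumption: your wiki's api.php

def null_edit_params(title):
    """Build action=edit parameters for a 'touch' of one page.

    An edit that changes nothing (appendtext="") still forces MediaWiki
    to re-parse and re-store the page, refreshing stale semantic data.
    """
    return {
        "action": "edit",
        "title": title,
        "appendtext": "",  # no content change: a null edit
        "format": "json",
    }

def touch_url(title):
    # The POST target for one touch; a real client also sends an edit token.
    return API_URL + "?" + urllib.parse.urlencode(null_edit_params(title))

for page in ["Test", "Property:description"]:
    print(touch_url(page))
```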
> Pretty cool stuff.. I just have to get used to it more. Thanks
Enjoy.
Robert Murphy wrote:
> I've just come to expect that I must update all pages concerned, every
> time I change some.
That's also safe, if you can figure out the dependency relation.
> I just a bot and it "touches" (open, change
> nothing, saves) every page in a list for me.
What bot is that?
--
=S Page
Guy Heathcote wrote:
> Over the last couple of days a few of our users have started noticing that
> particular wiki pages aren't being displayed. Although the rest of the wiki
> is working fine, just a few pages either show just a blank screen or a
> message saying "Fatal error: Allowed memory size of 20971520 bytes exhausted
> (tried to allocate 33361 bytes) in
> E:\Wiki\wamp\www\mediawiki\includes\SkinTemplate.php on line 409".
Sounds like you've simply hit the MediaWiki memory_limit, see
--
=S Page
https://sourceforge.net/p/semediawiki/mailman/semediawiki-user/?viewmonth=200804&viewday=27