propdb 0.3.0
Property Bag Style Database
Description
The propdb package is a simple property-bag-style database package. Property bags are serialized as JSON or by cPickle; therefore, the property item values must be able to be serialized by one of those two modules.
The property bag name and location form the path where the bag is saved. If no location is set, the current working directory is used. If no name is set, a temp name is automatically generated.
Property bags can be serialized as JSON or by cPickle by setting the “backend” argument when instantiating a bag.
Property bags can be encrypted by setting the secret_key argument. The secret_key can be up to 32 characters in length, and the AES encryption algorithm is used. Note: M2Crypto must be installed for this feature to work.
Property bags can be sync'd, if multiple instances of the same bag are referenced, by calling the sync() method. If autosync is set to True when instantiating a property bag, sync'ing is done automatically when saving.
Installation
pip install propdb
Basic Usage
from propdb.propbag import propbag as bag
from propdb.propbag import propitem as item

bag1 = bag('bag1')
Create a new property bag
# all arguments are optional
propbag(name, directory, autosave, backend, secret_key)

# creates a bag with a random name
bag1 = propbag()

# creates a bag named 'mybag' in the current working directory
bag2 = propbag('mybag')

# creates a bag named 'mybag' in the temp folder
bag3 = propbag('mybag', 'c:\\temp')

# creates a bag named 'mybag' where autosave is off and the
# backend is pickle
bag4 = propbag('mybag', autosave = False, backend = backendformat.pickle)
bag4.save()

# creates an encrypted bag
bag5 = propbag(secret_key = 'supersecretpassword')
print(bag5.location)
Add property items
from propdb.propbag import propbag

# create new bag
bag = propbag('mybag')

# adds a property item with the value set to None
bag.add('item1')

# by list of name/value pairs
bag.add(('item1', 1, 'item2', 2, ...))

# by dictionary
bag.add({'item1':1, 'item2':2, ...})

# by list of propitems
bag.add((propitem, propitem, ...))

# by propitem
bag.add(propitem)
Adding property items with the + and += operators works exactly like the add method (e.g. propbag + propitem). The only difference is that there is no return value when adding by operator.
For example:
bag + 'item1'                    # adds one property item by name
bag += ('item1', 1, 'item2', 2)  # adds by list of name/value pairs
Update property items
from propdb.propbag import propbag

# create new bag
bag = propbag('mybag')

# by list of name/value pairs
bag.set(('item1', 1, 'item2', 2, ...))

# by dictionary
bag.set({'item1':1, 'item2':2, ...})

# by list of propitems
bag.set((propitem, propitem, ...))

# by propitem
bag.set(propitem)
Updating property items with the [] operator can be done by indexing the property bag with the name of the property item and passing in a new value or a property item.
For example:
bag['item1'] = value
bag['item1'] = propitem
Drop property items
from propdb.propbag import propbag

# create new bag
bag = propbag('mybag')

# drops one property item by name
bag.drop('item1')

# drops one property item by propitem
bag.drop(propitem)

# by list of names
bag.drop(('item1', 'item2', 'item3', ...))

# by list of propitems
bag.drop((propitem, propitem, ...))
Dropping property items with the - and -= operators works exactly like the drop method (e.g. propbag - propitem). The only difference is that there is no return value when dropping by operator.
For example:
bag - 'item1'                            # drops one property item by name
bag -= ('item1', 'item2', 'item3', ...)  # by list of names
Sync’ing property items
Use propbag.sync to sync changes in a bag if altered by a different instance of the same bag.
Setting autosync to true when instantiating a property bag will activate this feature automatically.
from propdb.propbag import propbag

# create two bags that point to the same bag
bag1 = propbag('mybag')
bag2 = propbag('mybag')

# create property item in bag1
bag1.add({'item1':1})

# sync bag2...bag2 now has item1
bag2.sync()
Changes
0.3.0 - Sync and autosync features added to property bags.
0.2.0 - Supports two types of serialization: backendformat.json (default) and backendformat.pickle. Supports saving the bag encrypted with M2Crypto.
0.1.4 - Added unittests and fixed a bug when adding by dictionary
<= 0.1.3 - Working out the details of publishing to PyPI
- Downloads (All Versions):
- 7 downloads in the last day
- 99 downloads in the last week
- 461 downloads in the last month
- Author: Dax Wilson
- License: GNU-GPL
- Package Index Owner: daxwilson
- DOAP record: propdb-0.3.0.xml
|
https://pypi.python.org/pypi/propdb/0.3.0
|
CC-MAIN-2015-27
|
refinedweb
| 742
| 59.6
|
Complex numbers are a subject simple enough to be taught in high school math, but subtle enough to continue to be investigated through college mathematics and beyond. A complex function is a function that accepts a complex number as its argument and returns a complex number as its value. Complex functions are the bread and butter of complex analysis, which plays an important role in algebra, geometry, number theory, and in a host of practical applications of mathematics such as physics and engineering.
Despite the critical role they play in the mathematical sciences, there is no widely accepted way to "draw" a complex function. Real functions can be easily represented by 2-D plots: the argument goes on the horizontal axis, the function value goes on the vertical axis. The interpretation of such plots has become so natural to us that we "get" a function faster by looking at its plot than by looking at any other representation. Unfortunately, the plot technique does not generalize to complex functions. We need the two dimensions of a page just to represent the function argument. Living in three spatial dimensions doesn't save us; since we need an additional two dimensions to represent the functions' value, we would need to live in four spatial dimensions in order to "plot" a complex function.
A number of poor substitutes for the 4-D plots we need have become common. At Wolfram's pages on the Gamma function, you can peruse separate "3-D perspective" plots of the function's real part, imaginary part, and absolute value, or 2-D "heat maps" of the same. For me at least, these images do more to show how poor these visualization techniques are than to give me much of a feel for the behavior of the Gamma function.
This article outlines a different visualization technique, and implements it in a simple WinForms application we call "Complex Explorer". Our technique produces images of complex functions that are beautiful and rich in information.
The drop-down menu allows you to choose the function to visualize. The Save button allows you to save the currently displayed image to a PNG file. If you click on a point in the image, the Complex Explorer will tell you the argument and function values at that point.
Our visualization technique uses a form of domain coloring I first saw implemented by Claudio Rocchini.
The first principle of the technique is taken from topographic mapping. Topographic maps show elevations by drawing contour lines, which are lines of equal altitude. If you walk along a contour line, you go neither up nor down. If you turn and move at right angles to a contour line, you are moving up or down, and will eventually reach another contour line. Where contour lines are close together, elevation changes rapidly over a short distance, that is, the slope is steep; where contour lines are widely spaced, the slope is gentle. "3-D perspective" maps may give a slightly more immediate feel of a landscape, but they unavoidably hide some areas behind others, and it is much easier to read an accurate estimate of the elevation at a particular point from a contour line map than from a perspective map. For this reason, "topo maps" using contour lines have become a standard tool for hikers worldwide. The same principle is used in the isobar and isotherm maps used by meteorologists. With just a little practice, you can very quickly get the "lay of the land" by looking at contour lines.
The second principle of our technique is to use color to encode information. This is a long-standing technique used in many maps; for example, using blue for water and green for forest in geographic maps. We will specify colors using hue-saturation-value (HSV) triplets, rather than the red-green-blue (RGB) triplets most familiar in computer graphics, because the axes of the HSV system correlate more directly with the human-perceived qualities of colors. In the HSV system, as illustrated, hue is specified by the angle along a color wheel, and saturation and value specify the intensity and brightness of the color. Black has zero value and white has zero saturation.
The final principle of our visualization technique is to think of complex function values in terms of magnitude and phase rather than in terms of real and imaginary parts. If you are used to thinking of complex numbers only in terms of real and imaginary parts, now is the time to expand your horizons. Just as different spatial coordinate systems highlight different aspects of the relationships between points, different complex coordinate systems highlight different aspects of complex numbers. For example, as we near a singularity of a complex function, we can't say anything in general about the function's real and imaginary parts x and y, but we can say that its magnitude ρ is increasing. The relationships between a complex number z, its real and imaginary parts x and y, and its magnitude ρ and phase θ, are illustrated in the diagram below.
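To make the two coordinate systems concrete, here is a quick Python sketch (illustrative only; Complex Explorer itself is a C# WinForms application, and the point z = 3 + 4i is an arbitrary example):

```python
import cmath

z = 3 + 4j
rho, theta = cmath.polar(z)   # magnitude and phase
print(rho)                    # 5.0 -- the magnitude
print(z.real, z.imag)         # 3.0 4.0 -- real and imaginary parts
# both coordinate systems describe the same number
print(abs(cmath.rect(rho, theta) - z) < 1e-12)  # True
```

The standard-library `cmath.polar` and `cmath.rect` functions convert between the (x, y) and (ρ, θ) representations.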
With contour lines and colors, we have the two degrees of freedom we need to represent a complex function's magnitude and phase. Since the natural measure of a complex function's "height" is its absolute value ρ, we will use contour lines to represent the "landscape" |f(z)|. Since both the phase θ of a complex number and the hues on a color wheel are naturally periodic, we will use hue to represent the phase arg(f(z)). The hues on a color wheel run from red through yellow, green, blue, violet, and back to red. Note: this is the same order as the colors of a rainbow. With this convention, positive real points are red, and negative real points are cyan (light blue), which lies opposite to red on the color wheel. To generate contour lines, we will adjust the saturation and value so that our pixels become white near |f(z)| = 0, |f(z)| = 1, and at exponentially increasing regular values thereafter, and black at regular intervals in between. Thus our images will consist of alternating white and black contour lines indicating magnitude, with fully saturated hues indicating phase in the areas in between.
We will discuss the details of our implementation later. For now, let's fire up Complex Explorer and get to know a few complex functions.
Let's begin with the very simple function that Complex Explorer shows when first started: f(z)=z. Since this function is its argument, by studying it, you can get a feel for how our technique represents a complex number. Since |z| is the distance from the origin, the contour lines are concentric circles centered at the origin: a white dot in the middle where z=0, another white circle at |z|=1, and a third white circle for a yet larger |z|. Since the phase of z is the angle between it and the positive real axis, the hues in this plot are constant along radial lines, but change as we sweep around the complex plane.
Now let's look at a simple polynomial, f(z) = z^2 - 1. Viewed along the real axis, this is a parabola, with zeros at x = ±1, negative between those roots and positive outside of them. Looking at our visualization of its complex generalization, we see exactly that behavior along the real axis: zeros of f(z) at z = ±1, with cyan points (negative real numbers) between them and red points (positive real numbers) beyond them. But we also see much richer behavior in the complex plane.
We see that the small, isolated valleys around each root join at the |f(z)| = 1 white contour line, which has a beautiful "infinity symbol" shape enclosing both roots. We see that for larger |z|, there is a single oval valley, which becomes increasingly circular as |z| increases. We see that f(z) is negative and real (cyan) along the entire imaginary axis. By tracing one orbit around the origin, we see that the phase of f(z) completes two full cycles (red to cyan to red to cyan to red) as z makes one. We see the z -> -z symmetry of the equation embodied in the reflection symmetry of our image around the vertical axis.
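The claim that the phase completes two full cycles per orbit can be checked numerically; this is the argument principle in action. The following Python sketch (illustrative, not part of Complex Explorer) walks f(z) = z^2 - 1 around a circle of radius 2 and counts how many times the phase winds:

```python
import cmath
import math

def winding_number(f, radius=2.0, steps=1000):
    """Count how many times arg(f(z)) cycles as z goes once around a circle."""
    total = 0.0
    prev = cmath.phase(f(complex(radius, 0.0)))
    for k in range(1, steps + 1):
        z = radius * cmath.exp(2j * math.pi * k / steps)
        cur = cmath.phase(f(z))
        d = cur - prev
        # unwrap the jump where the phase crosses the +/- pi branch cut
        if d > math.pi:
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        total += d
        prev = cur
    return round(total / (2.0 * math.pi))

print(winding_number(lambda z: z * z - 1))  # 2: one phase cycle per enclosed root
```

Each simple root enclosed by the circle contributes one full cycle of phase, which is why the count is two here.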
This cubic polynomial has only one real root, at z = -1, but our figure immediately shows that it has three complex roots, as the fundamental theorem of algebra says it must. (The others are z = (1 ± i sqrt(3))/2.) We see that, again, the small valleys of |f(z)| around each root join together to produce one large valley as |z| increases.
The phase structure is more complicated. We see three "channels" of positive real numbers (red) that flow into a circle in the center, whose boundaries are defined by the roots. Areas with other phases are not similarly connected. Instead, the three positive real channels are separated by three disconnected regions in which the phase of f(z) goes through its cycle as we move from one positive real channel to the next.
Notice that, while our previous example had a two-fold visual symmetry, this example has a three-fold visual symmetry. The visual symmetry in the previous example was induced by the algebraic symmetry z -> -z. Can you write down the algebraic symmetry that induces the visual symmetry in this example?
Lest I be accused of over-selling this visualization technique, let me give an example which illustrates a problem. At right, you see the image of f(z) = 1/z, and you will immediately notice that it bears a striking resemblance to the image of f(z) = z with which we began. Yet the behavior of these functions could hardly be more different: while f(z) = z is zero at z=0 and grows larger as |z| increases, f(z) = 1/z is infinite at z=0 and falls off as |z| increases.
The problem you see here is exactly the same problem that plagues users of "topo" maps: as a collection of unlabeled contour lines, a valley looks the same as a mountain. Without contour line labels, there is no way to know whether the slope is going up or down. So you do have to know a little bit about the "lay of the land" (or read it from the labels) to see what a "topo" map is telling you, and you do have to know a little bit about the function you are visualizing (or deduce it from the functional form), in order to correctly interpret these images.
Notice, by the way, that the colors circulate around our simple pole in the opposite way they do around a simple root. That's because the phase arg(z^-1) = -arg(z) is the opposite of the phase of z.
Our image of the exponential function may look boring at first glance, but the reason for its surprising simplicity can give us some insight. It appears from our plot that the magnitude (elevation) of e^z depends only on the real part of z, while its phase (hue) depends only on the imaginary part of z. How does that happen? By writing z = x + i y, we find f(z) = e^x e^(i y). That is, |f(z)| = e^x depends only on x, while arg(f(z)) = y mod 2π depends only on y, which is precisely what the image tells us.
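This factorization is easy to verify numerically; a small Python check (illustrative, not part of the app, at an arbitrarily chosen point):

```python
import cmath
import math

z = 1.5 + 2.7j
fz = cmath.exp(z)
# |e^z| = e^x: the magnitude depends only on the real part
print(abs(abs(fz) - math.exp(z.real)) < 1e-12)   # True
# arg(e^z) = y (mod 2*pi): the phase depends only on the imaginary part
print(abs(cmath.phase(fz) - z.imag) < 1e-12)     # True (here y is already in (-pi, pi])
```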
Recall that we began by critiquing Wolfram's visualization of the Gamma function. Let us see whether ours, shown above, does any better.
In the right half of the complex plane, we see the saddle point at z ≈ 1.5; contour lines show the function increasing as we move outward from that point to the "east" or "west", and decreasing as we move outward from that point to the "north" or "south". In the left half of the complex plane, we see singularities at the integer values 0, -1, -2, etc. Note that the colors circulate around each pole in the same sense as in our 1/z example above. From the density of contour lines, we see that the poles nearer the origin are stronger (that is, rise higher faster) than the poles at higher negative integers. On the real axis, the function's sign alternates between positive (red) and negative (cyan) at intervals separated by the poles. Note that, while the magnitude |f(z)| falls off as we move away from a pole in any direction, the behavior of the phase is not so uniform: as we move outward from a positive real interval, the function stays positive real, but as we move outward from a negative interval, the function can become either positive imaginary or negative imaginary, depending on the exact direction we move.
All this could be read off our image. And it's pretty to boot!
We have so far explored only six complex functions. Nearly that many again are built in to Complex Explorer, and you can easily add more of your own. Take a look at log(z) or sqrt(z) to see a discontinuity in the phase of a complex function, called a cut. Take a look at Rocchini's example function, whose beautiful visualization inspired me to create Complex Explorer. Take a look at ψ(z) and try to understand how it is related to Γ(z).
Complex Explorer is a simple WinForms app. The core logic for producing an image from a complex function simply iterates over each pixel, computes the corresponding z value, computes the corresponding f(z) value, and maps that value to a color.
public void DrawImage () {
// get the function to evaluate
Function<Complex, Complex> f =
functions[functionList.SelectedIndex].Function;
Bitmap image = new Bitmap(imageBox.Width, imageBox.Height);
// iterate over all image pixels
for (int x = 0; x < imageBox.Width; x++) {
double re = re_min + x * (re_max - re_min) / imageBox.Width;
for (int y = 0; y < imageBox.Height; y++) {
double im = im_max - y * (im_max - im_min) / imageBox.Height;
// form a complex number based on the pixel value
Complex z = new Complex(re, im);
// compute the value of the current complex function
// for that complex number
Complex fz = f(z);
// don't try to plot non-numeric values (e.g. at poles)
if (Double.IsInfinity(fz.Re) || Double.IsNaN(fz.Re) ||
Double.IsInfinity(fz.Im) || Double.IsNaN(fz.Im)) continue;
// convert the complex function value to a HSV color triplet
ColorTriplet hsv = ColorMap.ComplexToHsv(fz);
// convert the HSV color triplet to an RGB color triplet
ColorTriplet rgb = ColorMap.HsvToRgb(hsv);
int r = (int) Math.Truncate(255.0 * rgb.X);
int g = (int) Math.Truncate(255.0 * rgb.Y);
int b = (int) Math.Truncate(255.0 * rgb.Z);
Color color = Color.FromArgb(r, g, b);
// plot the point
image.SetPixel(x, y, color);
}
}
// put the image in the image box control
imageBox.Image = image;
}
The mapping of a complex value to an HSV color triplet is the key operation for our visualization technique. Determining the hue is easy; we just get the phase from the ComplexMath.Arg function and express it as a fraction of 2π. Determining the saturation and value is harder. We need our algorithm to fade to black or white near the desired contour line values, but keep those regions small enough that the figure does not become "washed out" with large regions devoid of color.
public static ColorTriplet ComplexToHsv (Complex z) {
// extract a phase 0 <= t < 2 pi
double t = ComplexMath.Arg(z);
while (t < 0.0) t += TwoPI;
while (t >= TwoPI) t -= TwoPI;
// the hue is determined by the phase
double h = t / TwoPI;
// extract a magnitude m >= 0
double m = ComplexMath.Abs(z);
// map the magnitude logarithmically into the repeating interval 0 < r < 1
// this is essentially where we are between contour lines
double r0 = 0.0;
double r1 = 1.0;
while (m > r1) {
r0 = r1;
r1 = r1 * Math.E;
}
double r = (m - r0) / (r1 - r0);
// this puts contour lines at 0, 1, e, e^2, e^3, ...
// determine saturation and value based on r
// p and q are complementary distances from a contour line
double p = r < 0.5 ? 2.0 * r : 2.0 * (1.0 - r);
double q = 1.0 - p;
// let p1 and q1 approach zero only very close to a contour line;
// otherwise they stay nearly 1
// this keeps the contour lines from getting thick
double p1 = 1 - q * q * q;
double q1 = 1 - p * p * p;
// fix s and v from p1 and q1
double s = 0.4 + 0.6 * p1;
double v = 0.6 + 0.4 * q1;
return (new ColorTriplet() {X = h, Y = s, Z = v} );
}
This mapping is due to Claudio Rocchini. Notice that the variables p and q measure the distance from and to the nearest contour lines. A naive approach would be to map those quantities linearly into s and v, but that produces large regions that are nearly white or nearly black instead of sharply defined contour lines. (Try it for yourself!) Rocchini's trick for avoiding that problem is to use the variables p1 and q1 in place of p and q. p1 and q1 are produced from p and q via a simple function (1 - (1 - x)^3) that preserves ordering (that is, as the input rises from 0 to 1, the output also rises from 0 to 1), but stays closer to one over a larger range (the output is already ~0.9 by the time the input reaches ~0.5). This ensures that our contour lines are thin.
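The effect of Rocchini's sharpening function is easy to see numerically (a Python sketch; the app applies the same formula in C#):

```python
# Rocchini's sharpening map: rises from 0 to 1 but hugs 1 early,
# which keeps contour lines thin instead of producing washed-out bands
sharpen = lambda x: 1 - (1 - x) ** 3

print(sharpen(0.0))   # 0.0
print(sharpen(0.5))   # 0.875 -- already near 1 at the midpoint
print(sharpen(1.0))   # 1.0
```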
The .NET Framework does not provide direct support for HSV color specification, so we need to translate our HSV triplet to an RGB triplet. That computation is standard, and we will not go over it in detail. You can learn about this and other straightforward aspects of Complex Explorer's logic, such as its representation of complex functions and image save functionality, by studying the code.
Complex Explorer uses the Meta.Numerics library for its advanced complex functions like the Gamma function. As an added bonus, Meta.Numerics defines its own Complex type (just introduced in .NET 4.0) and function delegate (introduced in .NET 3.5), enabling Complex Explorer to be compiled with and work on .NET Framework versions all the way back to 2.0.
There are a lot of ways that Complex Explorer could be expanded. It would be nice to have a plug-in mechanism so that new functions could be added without having to change the program's code. It would be nice to be able to zoom in or out on different regions of the complex plane. Complex Explorer is free software under the MS-PL license, and I encourage anyone interested to take it and run with it, adding these and other cool features.
|
http://www.codeproject.com/Articles/80641/Visualizing-Complex-Functions?fid=1571824&df=90&mpp=10&sort=Position&spc=None&select=4203779&tid=3859323
|
CC-MAIN-2015-27
|
refinedweb
| 3,140
| 60.65
|
--- Kevin Atkinson <address@hidden> wrote:
> Please see my post to aspell-user.

OK....turned out to be very easy after all! I've included the full patch, which also produces aspell.exe

> > > Btw. it is possible to get the common lib to compile using the intel 6.0
> > > compiler.
> >
> > It may be. I have never tried. Clean patches gladly excepted.

I'll give it a shot. Probably needs more of this ugly dllimport nonsense.

Ruurd
diff -u /tmp/aspell-0.50/common/iostream.hpp common/iostream.hpp
--- /tmp/aspell-0.50/common/iostream.hpp  2002-07-24 21:56:48.000000000 +0200
+++ common/iostream.hpp  2002-08-27 10:12:37.000000000 +0200
@@ -9,6 +9,12 @@

 #include "fstream.hpp"

+#if defined(__CYGWIN__) || defined (_WIN32)
+#define DLLIMPORT __declspec(dllimport)
+#else
+#define DLLIMPORT
+#endif
+
 namespace acommon {

   // These streams for the time being will be based on stdin, stdout,
@@ -16,9 +22,9 @@
   // functions. It is also safe to assume that modifcations to the
   // state of the standard streams will effect these.

-  extern FStream CIN;
-  extern FStream COUT;
-  extern FStream CERR;
+  extern DLLIMPORT FStream CIN;
+  extern DLLIMPORT FStream COUT;
+  extern DLLIMPORT FStream CERR;

 }
 #endif
Only in common: settings.h
diff -u /tmp/aspell-0.50/common/speller.cpp common/speller.cpp
--- /tmp/aspell-0.50/common/speller.cpp  2002-08-21 10:46:22.000000000 +0200
+++ common/speller.cpp  2002-08-27 09:46:15.000000000 +0200
@@ -7,6 +7,8 @@

 #include "speller.hpp"
 #include "convert.hpp"
 #include "clone_ptr-t.hpp"
+#include "Config.hpp"
+#include "copy_ptr-t.hpp"

 namespace acommon {
|
http://lists.gnu.org/archive/html/aspell-devel/2002-08/msg00029.html
|
CC-MAIN-2016-07
|
refinedweb
| 269
| 56.01
|
oc-observe man page
oc observe — Observe changes to resources and react to them (experimental)
Synopsis
oc observe [Options]
Description
Observe changes to resources and take action on them
This command assists in building scripted reactions to changes that occur in Kubernetes or OpenShift resources. This is frequently referred to as a 'controller' in Kubernetes and acts to ensure particular conditions are maintained. On startup, observe will list all of the resources of a particular type and execute the provided script on each one. Observe watches the server for changes, and will reexecute the script for each update.
Observe works best for problems of the form "for every resource X, make sure Y is true". Some examples of ways observe can be used include:
· Ensure every namespace has a quota or limit range object
· Ensure every service is registered in DNS by making calls to a DNS API
· Send an email alert whenever a node reports 'NotReady'
· Watch for the 'FailedScheduling' event and write an IRC message
· Dynamically provision persistent volumes when a new PVC is created
· Delete pods that have reached successful completion after a period of time.
The simplest pattern is maintaining an invariant on an object - for instance, "every namespace should have an annotation that indicates its owner". If the object is deleted no reaction is necessary. A variation on that pattern is creating another object: "every namespace should have a quota object based on the resources allowed for an owner".
$ cat set_owner.sh
#!/bin/sh
if [[ "$(oc get namespace "$1" --template='{{ .metadata.annotations.owner }}')" == "" ]]; then
oc annotate namespace "$1" owner=bob
fi
$ oc observe namespaces -- ./set_owner.sh
The set_owner.sh script is invoked with a single argument (the namespace name) for each namespace. This simple script ensures that any namespace without the "owner" annotation gets one set, but preserves any existing value.
The next common controller pattern is provisioning - making changes in an external system to match the state of a Kubernetes resource. These scripts need to account for deletions that may take place while the observe command is not running. You can provide the list of known objects via the --names command, which should return a newline-delimited list of names or namespace/name pairs. Your command will be invoked whenever observe checks the latest state on the server - any resources returned by --names that are not found on the server will be passed to your --delete command.
For example, you may wish to ensure that every node that is added to Kubernetes is added to your cluster inventory along with its IP:
$ cat add_to_inventory.sh
#!/bin/sh
echo "$1 $2" >> inventory
sort -u inventory -o inventory
$ cat remove_from_inventory.sh
#!/bin/sh
grep -vE "^$1 " inventory > /tmp/newinventory
mv -f /tmp/newinventory inventory
$ cat known_nodes.sh
#!/bin/sh
touch inventory
cut -f 1-1 -d ' ' inventory
$ oc observe nodes -a '{ .status.addresses[0].address }' \
--names ./known_nodes.sh \
--delete ./remove_from_inventory.sh \
-- ./add_to_inventory.sh
If you stop the observe command and then delete a node, when you launch observe again the contents of inventory will be compared to the list of nodes from the server, and any node in the inventory file that no longer exists will trigger a call to remove_from_inventory.sh with the name of the node.
Important: when handling deletes, the previous state of the object may not be available and only the name/namespace of the object will be passed to your --delete command as arguments (all custom arguments are omitted).
More complicated interactions build on the two examples above - your inventory script could make a call to allocate storage on your infrastructure as a service, or register node names in DNS, or set complex firewalls. The more complex your integration, the more important it is to record enough data in the remote system that you can identify when resources on either side are deleted.
Options
- --all-namespaces=false
If true, list the requested object(s) across all projects. Project in current context is ignored.
- -a, --argument=""
Template for the arguments to be passed to each command in the format defined by --output.
- -d, --delete=""
A command to run when resources are deleted. Specify multiple times to add arguments.
- --exit-after=0
Exit with status code 0 after the provided duration, optional.
- --listen-addr=":11251"
The name of an interface to listen on to expose metrics and health checking.
- --maximum-errors=20
Exit after this many errors have been detected. May be set to -1 for no maximum.
- --names=""
A command that will list all of the currently known names, optional. Specify multiple times to add arguments. Use to get notifications when objects are deleted.
- --no-headers=false
If true, skip printing information about each event prior to executing the command.
- --object-env-var=""
The name of an env var to serialize the object to when calling the command, optional.
- --once=false
If true, exit with a status code 0 after all current objects have been processed.
- --output="jsonpath"
Controls the template type used for the --argument flags. Supported values are gotemplate and jsonpath.
- --print-metrics-on-exit=false
If true, on exit write all metrics to stdout.
- --resync-period=0
When non-zero, periodically reprocess every item from the server as a Sync event. Use to ensure external systems are kept up to date.
- --retry-count=2
The number of times to retry a failing command before continuing.
- --retry-on-exit-code=0
If any command returns this exit code, retry up to --retry-count times.
- --strict-templates=false
If true, return an error on any field or map key that is missing in a template.
- --type-env-var=""
The name of an env var to set with the type of event received ('Sync', 'Updated', 'Deleted', 'Added') to the reaction command or --delete.
- --default-unreachable-toleration-seconds=300
Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
- --docker="unix:///var/run/docker.sock"
docker endpoint
- --docker-tls=false
use TLS to connect to docker
- --docker-tls-ca="ca.pem"
path to trusted CA
- --docker-tls-cert="cert.pem"
path to client certificate
- --docker-tls-key="key.pem"
path to private key
- --httptest.serve=""
if non-empty, httptest.NewServer serves on this address and blocks
- --logtostderr=true
log to standard error instead of files
- --machine_id_file="/etc/machine-id,/var/lib/dbus/machine-id"
Comma-separated list of files to check for machine-id. Use the first one that exists.
- -s, --server=""
The address and port of the Kubernetes API server
- --stderrthreshold=2
logs at or above this threshold go to stderr
- --storage_driver_buffer_duration=0
- -v, --v=0
log level for V logs
- --version=false
Print version information and quit
- --vmodule=
comma-separated list of pattern=N settings for file-filtered logging
Example
# Observe changes to services
oc observe services

# Observe changes to services, including the clusterIP, and invoke a script for each
oc observe services -a '{ .spec.clusterIP }' -- register_dns.sh
See Also
oc(1),
History
June 2016, Ported from the Kubernetes man-doc generator
Referenced By
oc(1).
|
https://www.mankier.com/1/oc-observe
|
CC-MAIN-2019-30
|
refinedweb
| 1,191
| 53.51
|
#include <qdatabrowser.h>
Insert: When the data browser enters insertion mode, it emits the primeInsert() signal, which you can connect to, for example, to pre-populate fields. Call writeFields() to write the user's edits to the cursor's edit buffer, then call insert() to insert the record into the database. The beforeInsert() signal is emitted just before the cursor's edit buffer is inserted into the database; connect to this, for example, to populate fields such as an auto-generated primary key.

Update: For updates, the primeUpdate() signal is emitted when the data browser enters update mode. After calling writeFields(), call update() to update the record, and connect to the beforeUpdate() signal to manipulate the user's data before the update takes place.

Delete: For deletion, the primeDelete() signal is emitted when the data browser enters deletion mode. After calling writeFields(), call del() to delete the record, and connect to the beforeDelete() signal, for example to record an audit of the deleted record.
Definition at line 55 of file qdatabrowser.h.
|
http://qt-x11-free.sourcearchive.com/documentation/3.3.4/classQDataBrowser.html
|
CC-MAIN-2018-22
|
refinedweb
| 173
| 50.57
|
connect − initiate a connection on a socket
#include <sys/types.h>
#include <sys/socket.h>
int connect(int sockfd, const struct sockaddr *serv_addr, socklen_t addrlen);
If the connection or binding succeeds, zero is returned. On error, −1 is returned, and errno is set appropriately.
The following are general socket errors only. There may be other domain-specific error codes.
EBADF
The file descriptor is not a valid index in the descriptor table.
EFAULT
The socket structure address is outside the user’s address space.
ENOTSOCK
The file descriptor is not associated with a socket.
SVr4, 4.4BSD (the connect function first appeared in 4.2BSD). SVr4 documents additional general error codes, including EADDRNOTAVAIL, EINVAL, and EAFNOSUPPORT.
|
http://man.sourcentral.org/RHEL4/2+connect
|
CC-MAIN-2019-26
|
refinedweb
| 116
| 53.98
|
The project is hosted at CodePlex in the Sandcastle Help File Builder project. Go there for the latest release, source code, the issue tracker, and discussion boards.
Using the help file builder provides the following advantages:
<code>
#region
#if/#else/#endif
See the help file supplied with the help file builder for more information on how to use it. A FAQ is included in the help file that should answer most of the common questions and provide solutions to most of the common issues encountered by users of the help file builder.
The top part of the form contains a list box showing the list of assemblies that will be documented. The entries display the assembly name and the XML comments filename. The three buttons to the left of the list allow you to do the following:
See the Project and Namespace Summaries topic for information on setting the project and namespace level summary text and how to limit which namespaces are documented.
NOTE: Only add assemblies and XML comment files that you want documented in the top section. Third-party assemblies, interop assemblies, and other dependent DLLs should be added to the project's Dependencies property. See the Dependencies Property help topic for more information.
Dependencies
The center section of the form contains a property grid that displays the project options. The options are grouped into several categories and are listed alphabetically within them. More information on the properties can be found in The GUI Project Manager section. The bottom section of the form contains an output window that will display the messages from the build process. The View Output in Window option on the Documentation menu will open a resizable window that can be used to more easily view the build log output.
C:\HelpTest\ Solution folder.
|
+-TestAssembly Application project folder.
| |
| +-Bin
| |
| +-Release Location of assembly and comment files.
|
+-Doc Help file builder project location.
|
|
+-Help The output folder for the help file.
|
+-Working The intermediate working folder used
| during the build.
|
+-DLL Dependencies folder for MRefBuilder
| (if needed).
|
+-Output Help file project compilation folder.
+-art
+-html
+-scripts
+-styles
OutputPath
Check the namespaces that you want to appear and uncheck those that you do not want to appear. By default, all namespaces in all assemblies are documented with the exception of the global (unnamed) namespace that sometimes appears and contains private module details.
To edit a namespace summary, select the namespace in the list and edit the comments in the text box at the bottom of the form. The comments can include HTML markup (i.e. bolding, e-mail links, etc). If no summary is defined, a red warning note will appear for the namespace on the root namespaces page and above the namespace content in the help page for the namespace itself. When this form is opened, it scans the documentation assemblies for new namespaces and adds them to the list if it finds any. If you build a help file without updating the namespace list, any unknown namespaces will appear in the help file but they will also contain a red warning message indicating that the namespace summary is missing.
Namespaces are never removed from the list even if the namespace no longer appears in one of the documented assemblies. You can remove old namespaces from the list by selecting them and clicking the Delete button.
WorkingPath
By specifying the dependent assemblies or the folders containing them, the build process can create a folder containing the dependencies for MRefBuilder to use. Paths to the assemblies can be absolute or relative. Relative paths are assumed to be relative to the project folder.
In addition to file and folder dependencies, you can also select assemblies from the GAC. This is useful in situations where an assembly only appears in the GAC and does not have an easily located copy elsewhere on disk. Entries selected from the GAC are prefixed by the identifier "GAC:" and show the fully qualified name rather than a file path. At build-time, the GAC is queried and the necessary assemblies are copied from it to the dependencies working folder.
Be aware that if an option is selected that produces a website, the output folder will be cleared of all of its current content before the web site content is copied to it. When producing a help file alone, the output folder is not cleared. When producing a website, the following additional files are copied to the root of the output folder.
The value of the CopyrightText property is treated as plain text. It will be HTML encoded where necessary to resolve issues related to the ampersand character and the XML parser. In addition, you can encode special characters such as the copyright symbol using an escaped hex value (i.e. © = \xA9).
Preliminary
Guid
MemberName
HashedMemberName
CSharp
VisualBasic
CPlusPlus
JSharp
All
Standard
None
AdditionalContent
AdditionalContentResourceDirectory
FilesToInclude
RootPageFileName
RootPageTOCName
<see>
ExcludeItems
ContentPlacement
The list box on the left lists the additional content. The property grid on the right lists the properties for the selected entry. The buttons below the list box allow you to:
DestinationPath
The help file builder scans each HTML file for several tags and specially formatted comments that allow you to define the title and sort order of the table of content entries as well as which one should be the default topic. The tags can be added to the files manually or you can use the Preview option to visually arrange the items and set the default topic. See the help file for more details.
Messages are written to the log file indicating how the link was resolved. If no matches are found, a message appears in the log stating that the identifier could not be found and it will be rendered in bold rather than as a link in the help file as is the case with the second example. If a single best match is found, the log message indicates the fully qualified name that was matched and the tag is converted to a link to the associated page in the help file. If multiple matches are found, the log will include a list of all fully qualified names that contained the identifier and the first entry found will be used as the target of the hyperlink.
<pre>
source
region
lang
SourcePath: Styles\presentation.css
DestPath: Styles\
CultureInfo.Name
How the files are encoded is very important if they contain extended characters. To ensure that the help file builder and the Sandcastle tools properly interpret the encoding within the files, it is best to save the files such that they contain byte order marks at the start of the file for Unicode encoded formats as well as an XML header tag that specifies the correct encoding. In the absence of byte order marks, the encoding in the XML header tag ensures that the file is still interpreted correctly. The supplied default language resource files contain examples of this.
When using entities to represent special characters in the XML language resource files or in the header text, copyright text, etc., use the numeric form rather than the named form, as the XML parser will not recognize named entities and will throw an exception. For example, if you specify &Auml; (Latin capital letter A with diaeresis) an exception will be generated. To fix it, use the numeric form instead (&#196;). This also applies to symbols such as the copyright symbol in the copyright text: instead of &copy;, you should use &#169; to get the copyright symbol.
HeaderText
FeedbackEMailAddress
Once a build has finished, you can use the Documentation | View Output in Window menu option or toolbar button to view the build process output in a resizable window which makes it easier to see more information. Selecting the Documentation | View Help File menu option or toolbar button will allow you to view the resulting help file after a successful build. The help file and the log file can be found in the folder specified in the project's OutputPath property.
The help file builder project can be added to the solution file so that it can be checked into source control or opened from within Visual Studio. I like to add it as a solution item by right clicking on the solution name in the Solution Explorer, selecting Add | Existing Item, and then selecting the help file builder project. It is then added to a Solution Items folder in your solution.
You can also have Visual Studio open the help file builder project using the GUI tool rather than its default text editor. To do so, right click on the help file builder project and select Open With.... Click Add to add a new program to the list. Enter the path to SandcastleBuilderGUI.exe for your system and enter something like "Sandcastle Help File Builder" for the friendly name. Click OK to save it, and then click Set as Default to make it the default tool for opening the help file builder projects. Click OK to save it. Now, whenever you double-click the help file builder project, it will open in the GUI tool automatically.
IF
IF "$(ConfigurationName)"=="Debug" Goto Exit
"C:\Program Files\EWSoftware\Sandcastle Help File Builder\SandcastleBuilderConsole.exe" $(SolutionDir)Doc\TestProject.shfb
:Exit
In a solution with multiple projects that are documented by the same help file builder project, the post-build event should be defined on the last project built by Visual Studio. If the projects are documented individually, you can place a post-build event on each one.
It is also possible to specify project option overrides via the command line and to use a response file to contain a complex list of projects and option overrides to build one or more help files. See the Console Mode Builder topic in the supplied help file for details.
For a list of planned future enhancements and to make suggestions of your own, see the Issue Tracker at the Sandcastle Help File Builder's project website.
CodeBlockComponent
PostTransformComponent
ComponentConfigurations
ShowMissing*
ShowMissingComponent
VersionInfoComponent
%DXROOT%
%WINDIR%
The OutputPath property is still a string. If relative, it is always relative to the project folder and thus should not point at the prior location if the project is saved in a new folder. For similar reasons, the DestinationPath property of additional content items has also been left as a string.
All fully qualified paths in the affected properties in projects created by prior versions of the help file builder will become relative paths automatically when opened in the latest version. If you need a fixed path, expand the property and set the IsFixedPath property to true so that it is saved as an absolute rather than a relative path.
IsFixedPath
#Else If
#End If
#End Region
System.Collections.Generic
PresentationStyle
footer
CleanIntermediates
placement
alignment
logoFile
Help1xAndHelp2x
Help1xAnd2xAndWebsite
HelpFileFormat
For a full list of all changes in prior versions see the help file supplied with the application.
This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)
<?xml version="1.0"?>
<doc>
<assembly>
<name>YourAssemblyNameHere</name>
</assembly>
</doc>
Document*
csharpbird78 wrote:intellisense xml file
=======================================
Last part of the log file
=======================================
Saving topic : Help2x.hxc
Info: Saving topic : Help2x.HxF
Info: Saving topic : Lava.Core.HxT
Info: Saving topic : Help2x_A.HxK
Info: Saving topic : Help2x_K.HxK
Info: Saving topic : Help2x_F.HxK
Info: Saving topic : Help2x_N.HxK
Info: Saving topic : Help2x_S.HxK
Info: Saving topic : Help2x_B.HxK
Error HXC3031: A group of keywords for a single Help link or KTable exceeds 4,096 bytes.
Error HXC3031: A group of keywords for a single Help link or KTable exceeds 4,096 bytes.
Info: Number of Topics: 8835.
Info: Number of URLs: 8835.
Info: Processing complete.
Info: Number of errors: 2.
Info: Number of warnings: 6.
BUILD FAILED: Unexpected error in last build step. See output above for details.
=======================================
Exception that crashes the application
=======================================
System.Runtime.Remoting.RemotingException was unhandled
Message="Object '/2d1ce3b2_0f06_438c_a409_110ce0ebe0e4/ugedioyhbzvazsmk2mv8zrki_3.rem' has been disconnected or does not exist at the server."
Source="SandcastleBuilder.Utils"
StackTrace:
at SandcastleBuilder.Utils.Gac.AssemblyLoader.get_AppDomain()
at SandcastleBuilder.Utils.Gac.AssemblyLoader.ReleaseAssemblyLoader()
at SandcastleBuilder.Utils.BuildProcess.Build()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
|
https://www.codeproject.com/Articles/15176/Sandcastle-Help-File-Builder?msg=1754442
|
CC-MAIN-2019-43
|
refinedweb
| 2,091
| 55.03
|
Hi folks.
I have read this article:
And I tried to understand the documentation but I still struggle to create this simple chart. Let’s see if someone knows how to do it.
I have a Pandas dataframe with several columns (col1,col2,…), in each of these columns I have numbers, and I have the date in the index. I don’t understand how to use a time index whose type is DateTime ns so I created a column specifically for the Dates, it is not the best solution but it is something that may work. (This is not the main question so let’s forget this)
import plotly.express as px
import pandas as pd

a = [1, 2]
b = [3, 5]
c = [2018, 2019]
df = pd.DataFrame({"col1": a, "col2": b, "Dates": c})
As you can see, we have a simple data structure.
What I want is having the names of the columns shown in the x axis (“col1”, “col2”), and in the y axis I want:
1.- In the first screen, corresponding to Date 2018, the values of col1 and col2 are shown as bars; the origin of each bar is 0 (always) and the length of the bar is the number in the first row of the two columns (in this case it should be 1 and 3).
2.- After clicking on the "Play" button I want the bars to change their length to the next row. So now the data shown will be:
x labels: Doesn’t change, they are still col1 and col2.
y labels: Doesn’t change, they are simply a range of values.
bar: Origin is again 0, but length has changed to 2 and 5.
I tried this code:
fig = px.bar(df, x = ["col1", "col2"], animation_frame=df.Dates)
But this is what I see:
As you can see:
- The slider below is correct, it starts at 2018 and ends at 2019, that is fine.
- But the origin of the bars on the y axis is not 0, is -0.4.
- When you click on the play button they simply disappear because they don't start from 0 at every different timestamp.
- In the x axis I don’t see the names of the columns.
This is basically what I want:
Is it possible to create this chart?
Thanks
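For what it's worth, a hedged sketch of one common approach: reshape the frame to long form with pandas.melt so the column names become a plottable field of their own, then animate on Dates. Only the reshaping is demonstrated executably here; the plotly call is shown in comments since the exact figure options depend on your data range:

```python
import pandas as pd

a = [1, 2]
b = [3, 5]
c = [2018, 2019]
df = pd.DataFrame({"col1": a, "col2": b, "Dates": c})

# Reshape to long form: one row per (Date, column) pair, so the
# column names can be mapped to the x axis directly.
long_df = df.melt(id_vars="Dates", var_name="column", value_name="value")
print(long_df)

# With the data in long form, the bar animation could then be built as:
#   import plotly.express as px
#   fig = px.bar(long_df, x="column", y="value",
#                animation_frame="Dates", range_y=[0, 6])
# Pinning range_y keeps the axis fixed so bars start at 0 in every frame.
```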
|
https://community.plotly.com/t/how-to-create-a-simple-bar-animation-with-column-name-in-x-axis-and-column-data-in-y-axis/41007
|
CC-MAIN-2021-10
|
refinedweb
| 387
| 78.08
|
Functions are a basic building block for writing C/C++ programs. Breaking a program up into separate functions, each of which performs a particular task, makes it easier to develop and debug a program.
There are several advantages of using functions in program development. Functions allow for breaking down the program into discrete units. Programs that use functions are easier to design, program, debug and maintain. It is possible to perform separate compilation of functions. Functions can return data via arguments and can return a value. Functions have local variables plus have access to global variables. The general structure of a function is as follows:
storage_class function_return_type function_name(arguments)
declaration of argument types;
{
    local_variable declarations;
    body_of_the_function;
}
The components are:
**storage_class** - This is an optional item that indicates the storage class of a function. If this is not present, the default storage class type is **extern**. The only other allowable storage class is **static**. Neither **auto** nor **register** are valid storage class types for functions.
**function_return_type** - This tells the type of data item that will be returned by this function. The function can only return one value and it will be of this type. All standard C data types, plus constructed data types, plus 'void' are allowed.
**function_name** - The name of the function, which follows the rules previously stated for variable names. This name must be unique within the program.
**arguments** - These are optional. Some functions do not require arguments to be passed to them, so the argument list is empty, getchar(), for example. Arguments are separated by commas with a maximum of 16 arguments. The argument list must be enclosed in parentheses with no semi-colon following the function header. With a traditional or K & R **C** compiler the data type of the arguments is listed separately from the actual argument list. With an ANSI **C** or **C++** compiler the data type of each argument is listed in the argument list.
**declaration of argument types** - For each argument passed to the function, its data type must be declared. The declaration must occur before the opening curly brace of the function. Each list of arguments of a specific type must end with a semi-colon. All variable declarations are treated as local or **auto** class variables. This style of argument type declaration is used only with traditional or K & R style **C** compilers.
**{** - This marks the beginning of the function.
**local_variable declarations** - Declare any variables needed to accomplish the task of the function. These are **auto** class variables by default, are visible only to the function, and disappear when the function passes control back to the calling function. Other storage class type variables may be declared at this location.
**body_of_the_function** - C and C++ statements that perform the task of the function, which can also include calls to other C or C++ functions, assembler routines, Pascal procedures, or FORTRAN subroutines.
**}** - The closing curly brace indicates the end of the function. This forces a return value of zero for the **function_return_type** specified unless a prior return statement has explicitly stated a value that is to be returned.
#include <iostream.h>

int main()
{
    int age;
    int getInteger( char [], int, int );

    age = getInteger( "Enter your age: ", 21, 50 );
    cout << "Glad to hear you are " << age << " years old." << endl;
    return 0;
}

int getInteger( char prompt[], int min, int max )
{
    int temp, valid = 0;

    do
    {
        cout << prompt;
        cin >> temp;
        if( temp >= min && temp <= max )
            valid = 1;
        else
        {
            cout << "Input must be between " << min << " and "
                 << max << ". Try Again!" << endl;
            valid = 0;
        }
    } while( !valid );
    return temp;
}
Notice that in the above example the prototype of the function to be called
int getInteger( char [], int, int );
is in the function that will call that function. The prototype must appear before the first call to the function. In the function prototype, only the data types of the arguments need be present, not the actual argument names as appears in the function.
The return statement allows a function to return a value of the stated data item type. This statement immediately pushes a value onto the return stack and causes control to move to the ending curly brace, }, of the function, which returns control back to the calling function. Without a return statement a function implicitly returns a value of zero for the data type for which the function was typed. The general form of the return statement is:
return(value);
#include <iostream.h>

int main()
{
    int ch, type, chkletter();

    cout << "\nPress any key followed by RETURN:";
    cin >> ch;
    type = chkletter( ch );
    switch(type)
    {
        case 0: cout << "\nNon alpha";
                break;
        case 1: cout << "\nUppercase alpha";
                break;
        case 2: cout << "\nLowercase alpha";
                break;
    }
    return 0;
}

int chkletter( int c)
{
    if(c >= 'A' && c <= 'Z')
        return( 1 );
    if(c >= 'a' && c <= 'z')
        return( 2 );
}
Arguments can be constants or variables holding values. The default method is that arguments are passed by value. Passing by value means that only a copy of the value held in the argument is brought into the locally declared argument within the function. Passing by value prevents the function from altering the original variable’s value in the calling function.
int main()
{
    . . .
    x = add(10,20);
}

int add( int a, int b)
{
    return(a+b);
}
C and C++ support calling functions and passing arguments by reference. Passing arguments by reference means passing the actual address of a variable so that the called function can affect data stored in the original variable. To pass the address of a variable requires that the address-of operator, &, be used on the calling side. The address passed is then received in a pointer type data item. Pointer is a data type just as int and float are data types. Pointer type variables are intended to hold memory addresses. These memory addresses represent the locations in computer memory where data values are stored. To look at the values at those addresses, the value-at-the-address operator, *, must be used to dereference the pointer holding the memory address and obtain the value stored at that memory address.
#include <stdio.h>

int main()
{
    int x, y;
    void swap( int *, int *);

    x = 10;
    y = 20;
    swap( &x, &y );
    printf("%d %d", x, y);
    return 0;
}

void swap(int *a, int *b)
{
    int temp;

    temp = *a;  // store the value at the address held
                // in pointer a
    *a = *b;    // store the value at the address held
                // in pointer b into the value at the
                // address held in pointer a
    *b = temp;  // store the value held in temp into
                // the value at the address held in
                // pointer b
}
The main() function can have arguments passed to it from the command line. Three arguments can be passed to the main() function: argc, which gives the number of arguments on the command line; argv, which holds the actual arguments from the command line; and envp, which holds the current settings for any environment block variables (this is an optional argument and is usually not included). What is the command line? The operating system has a task running that reads the command line associated with the operating system prompt. The command line is anything from just after the operating system prompt up to and including the first newline character. Anything typed on the command line can be passed to a C, C++ or assembly language program.
#include <iostream.h>

int main( int argc, char *argv[], char *envp[] )
{
    for( int i = 0; i < argc; i++ )
        cout << argv[i] << endl;
    return 0;
}
Notice that there are two arrays passed to the main() function, char *argv[] and char *envp[]. These arguments are declared as arrays of pointers to character type data. The concept of pointers will be discussed in a later chapter but for now assume that these arguments hold lists of strings.
Another improvement to functions in C++ is that you can specify the default values for the arguments when you provide a prototype for a function. For example, if you are defining a function named create_window that sets up a window (a rectangular region) in a graphics display and fills it with a background color, you may opt to specify default values for the window’s location, size, and background color, as follows:
// A function with default argument values
// Assume that Window is a user-defined type
Window create_window(int x = 0, int y = 0,
                     int width = 100, int height = 50,
                     int bgpixel = 0 );
With create_window declared this way, you can use any of the following calls to create new windows:
Window w;

// The following is the same as: create_window(0,0,100,50,0);
w = create_window();

// This is the same as: create_window(100,0,100,50,0);
w = create_window(100);

// Equivalent to: create_window(30,20,100,50,0);
w = create_window(30, 20);
As you can see from the examples, it is impossible to give a nondefault value for the height argument without specifying the values for x, y, and width as well, because height comes after them and the compiler can only match arguments by position. In other words, the first argument you specify in a call to create_window always matches x, the second one matches y, and so on. Thus, you can leave only trailing arguments unspecified.
Using the ellipsis, ..., with C++ function prototypes means that the function can be specified with an unknown number and type of parameters. This feature can be used to suppress parameter type checking and to allow flexibility in the interface to the function. C++ allows functions to be declared with an unspecified number of arguments. Ellipsis marks are used to indicate this, as follows:
return_type function_name( ... )
The function printf(), from header stdio.h, is declared as
int printf( char *, ... );
Calls to printf() must have at least one argument, namely a string; beyond this, the additional arguments are unspecified both in type and in number. Argument checking is turned off when a function is declared to have an unspecified number of arguments. It is therefore recommended that this capability be avoided unless it is absolutely necessary. Header stdarg.h contains a set of macros for accessing unspecified arguments. The reader is urged to study the macros in this header file.
Inline functions are like preprocessor macros, because the compiler substitutes the entire function body for each inline function call. The inline functions are provided to support efficient implementation of OOP techniques in C++. Because the OOP approach requires extensive use of member functions, the overhead of function calls can hurt the performance of a program. For smaller functions, you can use the inline specifier to avoid the overhead of function calls. On the surface, inline functions look like preprocessor macros, but the two differ in a crucial aspect. Unlike the treatment of macros, the compiler treats inline functions as true functions. To see how this can be an important factor, consider the following example. Suppose you have defined a macro named multiply as follows:
#define multiply(x,y) (x*y)
If you were to use this macro as follows:
x = multiply( 4+1, 6);
By straightforward substitution of the multiply macro, the preprocessor will transform the right-hand side of this statement into the following code:
x = (4+1*6);
This evaluates to 10 instead of the result of multiplying (4+1) and 6, which should have been 30. Of course, you know that the solution is to use parentheses around the macro arguments, but consider what happens when you define an inline function exactly as you defined the macro:
#include <iostream.h>

// Define inline function to multiply two integers
inline int multiply ( int x, int y )
{
    return( x * y );
}

// an overloaded version that multiplies two doubles
inline double multiply( double x, double y )
{
    return( x * y );
}

int main()
{
    cout << "Product of 5 and 6 " << multiply( 4+1, 6 );
    cout << "Product of 3.1 and 10.0 " << multiply( 3.0+.1, 10.0 );
    return 0;
}
When you compile and run this program, it correctly produces the following output:
Product of 5 and 6 = 30
Product of 3.1 and 10.0 = 31.000000
As you can see from this example, inline functions never have the kind of errors that plague ill-defined macros. Additionally, because inline functions are true functions, you can overload them and rely on the compiler to use the correct function based on the argument types. Because the body of an inline function is duplicated wherever that function is called, you should use inline functions only when the functions are small in size. In addition, any looping construct that appears within an inline function will cause the compiler to force the function to not be inline. Most compilers will generate a warning to the effect that the function is being treated as a non-inline function.
C normally passes arguments by value. This means that when you call a function with some arguments, the values of the arguments are copied to a special area of memory known as the stack. The function uses these copies for its operation. To see the effect of call by value, consider the following code:
void twice( int a )
{
    a *= 2;
}
.
.
int x = 5;

// call the "twice" function
twice( x );
printf( "x = %d\n", x );
You will find that this program prints 5 as the value of x, not 10, even though the function twice multiplies its argument by 2. This is because the function twice receives a copy of x and whatever changes it makes to that copy are lost on return from the function. In C, the only way you can change the value of a variable through a function is by explicitly passing the address of the variable to the function. For example, to double the value of a variable, you can write the function twice as follows:
void twice( int *a )
{
    *a *= 2;
}
.
.
int x = 5;

// call the "twice" function
twice( &x );
printf( "x = %d\n", x );
This time, the program prints 10 as the result. Thus, you can pass pointers to alter variables through a function call, but the syntax is messy. In the function, you have to dereference the argument by using the * operator. C++ provides a way of passing arguments by reference by introducing the concept of a reference, which is the idea of defining an alias or alternative name for any instance of data. The syntax is to append an ampersand ( &) to the name of the data type. For example, if you have the following:
int i = 5;
int *p_i = &i;  // a pointer to int initialized to point to i
int &r_i = i;   // a reference to the int variable i
then you can use r_i anywhere you would use i or *p_i. In fact, if you write this:
r_i += 10; // adds 10 to i
i will change to 15, because r_i is simply another name for i. Using reference types, you can rewrite the function named twice to multiply an integer by 2 in a much simpler manner:
void twice( int& a )
{
    a *= 2;
}
.
.
int x = 5;

// call the "twice" function
twice( x );
cout << "x = " << x;
As expected, the program prints 10 as the result, but it looks a lot simpler than trying to accomplish the same task using pointers. Another reason for passing arguments by reference is that when structures or classes are passed by value, there is the overhead of copying objects to and from the stack. Passing a reference to an object avoids this unnecessary copying and allows an efficient implementation of OOP.
C++ provides the ability to overload functions. Function overloading is a type of polymorphism and is one way of allowing the programming environment to be dynamically extended. In C++, two or more functions can share the same name. Therefore, a program could have several functions to perform the absolute value function with all of them named abs. The functions are distinguished from each other by have the types of their arguments differ or by having the number of their arguments differ or both. Because these functions share the same name they are said to be overloaded. The compiler will automatically select the correct version to call based upon the number and/or type of arguments used to call the function.
#include <iostream.h>
//
// prototype functions
//
int abs( int );
long abs( long );
float abs( float );
double abs( double );

int main()
{
    int intValue;
    long longValue;
    float floatValue;
    double doubleValue;
    //
    // ask for values
    //
    cout << "\nEnter a negative integer value: ";
    cin >> intValue;
    cout << "\nEnter a negative long integer value: ";
    cin >> longValue;
    cout << "\nEnter a negative floating point value: ";
    cin >> floatValue;
    cout << "\nEnter a negative double floating point value: ";
    cin >> doubleValue;

    cout << "\nAbsolute values are: " << endl;
    cout << "\t Integer: " << abs( intValue ) << endl;
    cout << "\t Long: " << abs( longValue ) << endl;
    cout << "\t Floating Point: " << abs( floatValue ) << endl;
    cout << "\t Double Floating Point: " << abs( doubleValue ) << endl;
    return 0;
}

int abs( int x )
{
    return (x < 0 ? (-1 * x ) : x);
}

long abs( long x )
{
    return (x < 0 ? (-1L * x ) : x);
}

float abs( float x )
{
    return (x < 0 ? (-1 * x ) : x);
}

double abs( double x )
{
    return (x < 0 ? ((double)-1 * x ) : x);
}
This program defines four functions called abs(). With function overloading, a single name can be used to describe a general class of action. Unlike in C, there is no need for four differently named functions, one for each data type to be handled. In C++, the compiler determines which function is appropriate to perform the task. This is a rudimentary form of polymorphism, which is simply one interface representing multiple methods or functions.
Remember the last time you worked with unfamiliar code? It can seem like it takes forever to understand how to change it and even longer to see the potential impact. What if you had a picture that shows how the code is organized and gives you more information about how changes might affect it?
Good news! In Visual Studio 11 Beta, you can visualize your code and understand its relationships. You can create, read, and edit dependency graphs faster and easier. Here’s an example:
Note: You can create graphs with Visual Studio 11 Ultimate. You can read and edit graphs with Visual Studio 11 Premium and Professional. To download the Visual Studio 11 Beta ALM virtual machine, which includes the Ultimate version, and hands-on-labs, see Brian Keller’s post here.
This blog post covers the following scenarios and uses the .NET Pet Shop 4.0 sample application in its examples:
To magnify the examples, just click them.
To get an overview of the code, follow these steps:
The first time you generate a graph, it might take a little while. Visual Studio builds the solution, analyzes the binaries produced from each project, and indexes relevant details to generate future graphs faster. If you don’t need a graph for the entire solution, you can also speed up this process by visualizing only those parts that you care about or narrowing the scope of your solution. To learn more, see Focusing on the Details and How to: Visualize Code by Generating Dependency Graphs.
This example shows the top-level assemblies in the sample solution. Everything outside the solution, like platform dependencies, is found in Externals:
To save the graph:
- Open the shortcut menu for the graph surface, choose Move <graphname.dgml> into, and then choose Solution Items.
- When the Save File As box appears, name the graph document, and save it.
Visual Studio saves the graph to the Solution Items folder.
Browsing Dependency Graphs
The resulting graph can seem large and overwhelming. You have several ways to explore the graph:
- To zoom in and out, rotate the mouse wheel, or drag the slider in the upper left corner of the graph.
- To pan the graph, drag the graph surface in any direction.
- To fit the entire graph into the window, double-click the graph surface, or click the zoom-to-fit button under the zoom slider.
For other ways to explore graphs, see How to: Browse and Rearrange Dependency Graphs. To see more mouse and keyboard gestures, create a blank graph, and then choose the Help links on the graph:
Examining Items and Relationships
A rectangle, or node, represents an item on the graph. An icon identifies the item’s type. To see what the icons mean, use the Legend:
Some items contain other items; for example, assemblies contain namespaces, which contain types, and so on. To expand these containers, or groups, move the mouse pointer over the left part of a node. If an expand icon appears, choose it.
To expand and collapse groups, use these keyboard shortcuts:
- To expand a selected group, use the plus key (+).
- To expand successive levels of groups, use the asterisk key (*).
- To collapse a selected group, use the minus key (-).
If an expanded group seems too big, you can drag its contents to rearrange them. The layout of the surrounding nodes and links adjusts automatically:
Layout Before Rearranging
Layout After Rearranging
Arrows, or links, between nodes represent relationships. If there are multiple relationships between two nodes, a single aggregate link combines all those relationships into one link.
To learn more about a relationship between two nodes:
- Select nodes that you want to focus on.
- Open the shortcut menu for the selection, choose Select, and then choose Hide Unselected to hide everything else. The graph is just a one-way view of the code, so you can edit the graph without changing the code.
- Move the mouse pointer on top of a link. This shows a tooltip with details about the link and also arrows that let you move between the source and target nodes. Use these arrows when the graph has too many items and you want to see which nodes are connected.
The following example shows two assemblies. The tooltip tells you that the aggregate link represents multiple relationships: Calls, References, and Return. The denser the link, the more relationships it represents:
To see all the links between the items in these assemblies, make sure the graph shows all cross-group links:
Expand the assemblies and namespaces to see the links between their contents:
Now, suppose you only care about a specific area of code because you have to fix a bug or update some functionality. You want to know how that change affects other parts of the code. For example, maybe you want to update the CreditCardInfo constructor to accept an authorization code. There are several ways to find that code:
- If the CreditCardInfo method is on the graph, use the graph search tool to find it. Then, follow the links to find items that depend on that method.
- If the CreditCardInfo method isn’t on the graph, use Solution Explorer to find the CreditCardInfo method and items that depend on that method. Select those items in Solution Explorer, and then drag them to an existing graph or create a new graph.
Finding Items on Graphs
To find any items on the graph, use CTRL+F to open search:
- Type the item's name, and then press ENTER. The first matching item appears selected on the graph. Note: By default, search includes items in collapsed groups. However, if an item is in a collapsed group that was never expanded, the item might not be found. Make sure you have expanded all the groups at least once.
- To see the next match, press F3 or choose the Find Next arrow. To see all matching items, open the drop-down list, and choose Select All.
The following matches appear selected on the graph:
To see the CreditCardInfo method definition, select it on the graph, and use F12 to open the code editor. Now, suppose you want to find other code that has dependencies on this method. To see items that have dependencies on a selected item, make sure that you can see the cross-group links for selected items: on the graph toolbar, open the cross-group links list, and choose Show Cross-Group Links on Selected Nodes.
When you select the CreditCardInfo method, a link appears from PetShopWeb.dll and points at CreditCardInfo, showing that something in PetShopWeb.dll calls CreditCardInfo:
To find the methods in PetShopWeb.dll that call CreditCardInfo, expand PetShopWeb.dll incrementally and follow the links:
Finding Items in the Solution
To start from the solution, search or browse for the CreditCardInfo method in Solution Explorer. If you search, you find the following:
When you select the CreditCardInfo method, Visual Studio shows the method’s definition in the code editor:
Without leaving Solution Explorer, you can find what calls CreditCardInfo method, what CreditCardInfo calls, or what uses it. Open the shortcut menu for the CreditCardInfo method, and choose Is Called By:
You see that two items call CreditCardInfo. Select each item, and use F12 to see their definitions. You can create a new dependency graph that shows only these items and their relationships with CreditCardInfo:
which produces the following graph:
To see the missing children in these groups, choose Refetch Children on each group. Visual Studio shows them on the graph:
To add more related items to the graph, open the shortcut menu for a node, and then choose the related items that you want to show.
Communicating the Changes
Now, suppose you come across dependencies on the graph that shouldn’t exist. Suppose you want to propose changes to these dependencies and use the graph to discuss your suggestions with your team. You can edit the graph several ways:
- Add new undefined nodes, linked nodes, comments, or groups. To do this, choose the node where you want to add these items. When the floating menu appears, choose the corresponding task.
- Add new types, members, and also their parent containers to the graph. To do this, select the types and members in Solution Explorer, and then drag the items to the graph. To include the parent containers, press and hold the CTRL key while you drag the items. Or, you can open the Create a new graph document… list on the Solution Explorer toolbar (make sure the dependency graph is visible), and choose Add to Active Dependency Graph with Ancestors.
- Rename nodes by editing their labels.
- Delete nodes and links, or retrieve hidden ones on the graph.
- Hide nodes or show hidden nodes more easily.
- Organize nodes into groups.
- Change the styles and appearances of nodes and links.
To learn more about how to edit dependency graphs, see How to: Edit and Customize Dependency Graphs.
To let us know what you think:
- For questions or discussion: Visual Studio Visualization & Modeling Forum
- For feature suggestions: Visual Studio User Voice (Visual Studio Ultimate)
- For bugs: Microsoft Connect
Nice. It is possible to get a graph of "basic blocks"? Would help immensely when working with large legacy code base.
Amazing. It'll make it much easier to refactor and combine used one time classes into a single C# file to depopulate assemblies used only one place.
Ahh the color…it hurts my eyes….stop it please. Didn't you guys get the memo….Dev11 means using gray.
Now I have to go to the eye doctor….please remove all color and only use gray….like you're supposed to be doing to be Metro!
Very useful thing. Thank you for this post!
@Tobias: Hi Tobias, thanks for your feedback. Which legacy code are you working with?
@Tom and Aviw: Thanks for the kudos!
@Allen: Visual Studio isn't all gray. 🙂
I have:

def f(x, y): return x**2 + y**2
def g(x, y): return x**3 + y**3
def h(x, y): return x**4 + y**4

def J(x, y):
    return f(x, y) * g(x, y) * h(x, y)

myFunctions = [f, g, h]

print(J(2, 2)) # 4096

J currently hard-codes calls to f, g, and h. How can I programmatically define J from the list myFunctions, so that it multiplies the results of all the functions in the list?
A function can accept arbitrarily many arguments using *, like this:

def J(*args):

This will store all of J's arguments in the tuple args. That tuple can then be unpacked back into multiple arguments to call other functions, like this:
def J(*args): return f(*args) * g(*args)
This solves the problem of the number of arguments changing. So now let's handle the fact that there can be arbitrarily many functions. First we need to call each function in your list. We can do that by iterating over them and calling each one with ():
def J(*args): return [func(*args) for func in myFunctions]
This will return a list of the functions' return values. So all we need now is to get the product of a collection:
from functools import reduce
from operator import mul

def product(numbers):
    return reduce(mul, numbers, 1)

def J(*args):
    return product(func(*args) for func in myFunctions)
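Putting it all together, here is a self-contained sketch of the approach, using the functions from the question:

```python
from functools import reduce
from operator import mul

def f(x, y): return x**2 + y**2
def g(x, y): return x**3 + y**3
def h(x, y): return x**4 + y**4

myFunctions = [f, g, h]

def product(numbers):
    # Fold the collection with multiplication, starting from 1
    return reduce(mul, numbers, 1)

def J(*args):
    # Call every function in the list with the same arguments
    # and multiply the results together
    return product(func(*args) for func in myFunctions)

print(J(2, 2))  # 8 * 16 * 32 = 4096
```

Note that adding a fourth function to myFunctions, or switching every function to take three arguments, requires no change to J at all.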
Fraser, posted December 19, 2013 (edited):

Good evening,

To start, sorry if this has been posted/asked before; I have been searching for an answer for months now. I have created a number/letter string generator for work (some call it a password generator, but it's much more than that). Currently I have a few If statements that filter out bad letter combinations, i.e.:

If StringRegExp($string, "c**k", 0) = 1 Then
    generate()
ElseIf StringRegExp($string, "d**k", 0) = 1 Then
    generate()
Else
    GUICtrlSetData($label1, $string)
EndIf

My question is: is there any way to combine multiple search criteria in one If statement?

Thank you.

Fraser
";
Spark has two JavaScript files that need to be incorporated into your build in order for Spark's behavior to work.
spark-core-prerender.js - This file detects if JavaScript is loaded and also sets up the type loader. It needs to execute its code before the page is rendered and therefore needs to be imported in the head of the document.
import sparkCorePrerender from "@sparkdesignsystem/spark-core/spark-core-prerender";
spark-core.js - This file contains the bulk of Spark's behavior and can be loaded after the page is rendered; this is best done before the closing body tag.
import sparkCore from "@sparkdesignsystem/spark-core/spark-core";
There are also ES5 versions if preferred. They're located in @sparkdesignsystem/spark-core/es5.
Init the Spark Core JS, passing in a config object (optional).
sparkCorePrerender({ //config, see below });
See below for available configuration options:
Import the post-render ES6 setup file in your JS build. This is best done before the closing body tag. This will bring all the Spark-Core JS into your build. There is also an ES5 version if preferred. It's located in @sparkdesignsystem/spark-core/es5.
import sparkCore from "@sparkdesignsystem/spark-core/spark-core";
Init the Spark Core JS, passing in a config object (optional).
sparkCore({ //config, see below });
See below for available configuration options:
sprk-u-JavaScript
You'll need to import the spark-core-angular NgModule in your main app.module.ts file and add it to the NgModule imports array.
import { SparkCoreAngularModule } from "@sparkdesignsystem/spark-core-angular";
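A minimal sketch of what that registration might look like (the root module and component names here are assumptions for illustration, not part of the Spark docs):

```typescript
import { NgModule } from "@angular/core";
import { BrowserModule } from "@angular/platform-browser";
import { SparkCoreAngularModule } from "@sparkdesignsystem/spark-core-angular";

import { AppComponent } from "./app.component"; // hypothetical root component

@NgModule({
  declarations: [AppComponent],
  // Adding SparkCoreAngularModule to the imports array makes Spark's
  // components and directives available throughout the app
  imports: [BrowserModule, SparkCoreAngularModule],
  bootstrap: [AppComponent]
})
export class AppModule {}
```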
🌼 Introduction
Hello, reader! 🐱
As you may know, configuring Webpack can be a frustrating task. Despite having good documentation, this bundler isn't a comfortable horse to ride for a few reasons.
The Webpack team is working hard and developing it relatively quickly, which is a good thing. However, it is overwhelming for a new developer to learn everything at once. Tutorials get outdated, some plugins break, and the examples you find can be confusing. Sometimes you get stuck on something trivial and google a lot, only to finally find a short message in a GitHub issue that helps.
There is a lack of introductory articles about Webpack and how it works, people rush straight to tools like create-react-app or vue-cli, but one sometimes needs to write some simple plain JavaScript and SASS with no frameworks or any fancy stuff.
This guide will walk step by step through Webpack configuration for ES6, SASS, and images/fonts without any framework. It should be enough to start using Webpack for most simple websites, or to use as a platform for further learning. Although this guide requires some prior knowledge about web development and JavaScript, it may be useful for someone. At least I would have been happy to meet something like this when I started with Webpack!
🎈 Our goal
We will be using Webpack to bundle our JavaScript, styles, images, and fonts files together into one dist folder.
Webpack will produce 1 bundled JavaScript file and 1 bundled CSS file. You can simply add them in your HTML file like that (of course you should change path to dist folder if needed):
<link rel="stylesheet" href="dist/bundle.css"> <script src="dist/bundle.js"></script>
And you are good to go 🍹
You can look at the finished example from this guide: 🔗link.
Note: I updated dependencies recently. This guide applies to the latest Webpack 5, but config keeps working for Webpack 4 in case you need it!
1. Install Webpack
We use npm. The $ npm init command creates a package.json file in the project folder, where we will keep our JavaScript dependencies. Then we can install Webpack itself with $ npm i --save-dev webpack webpack-cli.
2. Create entry point file
Webpack starts its job from a single JavaScript file, which is called the entry point. Create index.js in the javascript folder. You can write some simple code here, like console.log('Hi'), to ensure it works.
3. Create webpack.config.js
... in the project folder. Here is where all ✨ magic happens.
// Webpack uses this to work with directories
const path = require('path');

// This is the main configuration object.
// Here, you write different options and tell Webpack what to do
module.exports = {

  // Path to your entry point. From this file Webpack will begin its work
  entry: './src/javascript/index.js',

  // Path and filename of your result bundle.
  // Webpack will bundle all JavaScript into this file
  output: {
    path: path.resolve(__dirname, 'dist'),
    publicPath: '',
    filename: 'bundle.js'
  },

  // Default mode for Webpack is production.
  // Depending on mode Webpack will apply different things
  // on the final bundle. For now, we don't need production's JavaScript
  // minifying and other things, so let's set mode to development
  mode: 'development'
};
4. Add npm script in package.json to run Webpack
To run Webpack, we have to use an npm script with the simple command webpack, passing our configuration file as the --config option. Our package.json should look like this for now:
{ "scripts": { "build": "webpack --config webpack.config.js" }, "devDependencies": { "webpack": "^4.29.6", "webpack-cli": "^3.2.3" } }
5. Run Webpack
With that basic setup, you can run the $ npm run build command. Webpack will look up our entry file, resolve all imported module dependencies inside it, and bundle everything into a single .js file in the dist folder. In the console, you should see something like this:
If you add <script src="dist/bundle.js"></script> into your HTML file, you should see Hi in the browser console!
🔬 Loaders
Great! We have standard JavaScript bundled. But what if we want to use all cool features from ES6 (and beyond) and preserve browser compatibility? How should we tell Webpack to transform (transpile) our ES6 code to browser-compatible code?
That is where Webpack loaders come into play. Loaders are one of the main features of Webpack. They apply certain transformations to our code.
Let's add a new option, module.rules, to the webpack.config.js file. In this option, we will tell Webpack how exactly it should transform different types of files.
entry: /* ... */,
output: /* ... */,
module: {
  rules: [
  ]
}
For JavaScript files, we will use:
1. babel-loader
Babel is currently the best JavaScript transpiler out there. We will tell Webpack to use it to transform our modern JavaScript code to browser-compatible JavaScript code before bundling it.
Babel-loader does exactly that. Let's install it:
$ npm i --save-dev babel-loader @babel/core @babel/preset-env
Now we are going to add rule about JavaScript files:
rules: [
  {
    test: /\.js$/,
    exclude: /(node_modules)/,
    use: {
      loader: 'babel-loader',
      options: {
        presets: ['@babel/preset-env']
      }
    }
  }
]
- test is a regular expression for the file extensions we are going to transform. In our case, it's JavaScript files.
- exclude is a regular expression that tells Webpack which paths should be ignored when transforming modules. That means we won't transform imported vendor libraries from npm if we import them in the future.
- use is the main rule's option. Here we set the loader, which is going to be applied to files that match the test regexp (JavaScript files in this case).
- options can vary depending on the loader. In this case, we set default presets for Babel to decide which ES6 features it should transform and which not. It is a separate topic on its own, and you can dive into it if you are interested, but it's safe to keep it like this for now.
Now you can place ES6 code inside your JavaScript modules safely!
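For instance, a module like the following (a made-up example, not part of the guide's project) uses arrow functions, default parameters, template literals, and destructuring, all of which @babel/preset-env will rewrite into browser-compatible JavaScript during the build:

```javascript
// A hypothetical src/javascript/greet.js module
const greet = (name = "world") => `Hello, ${name}!`;

// Array destructuring with a rest element is also transpiled
const [first, ...rest] = ["Webpack", "Babel", "PostCSS"];

console.log(greet(first)); // Hello, Webpack!
console.log(rest.length);  // 2
```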
2. sass-loader
Time to work with styles. Usually, we don't want to write plain CSS. Very often, we use SASS preprocessor. We transform SASS to CSS and then apply auto prefixing and minifying. It's a kind of "default" approach to CSS. Let's tell Webpack to do exactly that.
Let's say we import our main SASS file sass/styles.scss in our javascripts/index.js entry point.
import '../sass/styles.scss';
But for now, Webpack has no idea how to handle .scss files or any files except .js. We need to add proper loaders so Webpack could resolve those files:
$ npm i --save-dev sass sass-loader postcss-loader css-loader
We can add a new rule for SASS files and tell Webpack what to do with them:
rules: [
  {
    test: /\.js$/,
    /* ... */
  },
  {
    // Apply rule for .sass, .scss or .css files
    test: /\.(sa|sc|c)ss$/,

    // Set loaders to transform files.
    // Loaders are applied from right to left(!)
    // The first loader will be applied after the others
    use: [
      {
        // This loader resolves url() and @imports inside CSS
        loader: "css-loader",
      },
      {
        // Then we apply postCSS fixes like autoprefixer and minifying
        loader: "postcss-loader"
      },
      {
        // First we transform SASS to standard CSS
        loader: "sass-loader",
        options: {
          implementation: require("sass")
        }
      }
    ]
  }
]
Note an important thing about Webpack here: it can chain multiple loaders, and they are applied one by one from the last to the first in the use array.
Now when Webpack meets import 'file.scss'; in code, it knows what to do!
PostCSS
How do we tell postcss-loader which transformations it must apply? We create a separate config file, postcss.config.js, and use the postcss plugins that we need for our styles. You may find minifying and autoprefixing to be the most basic and useful plugins to make sure CSS is ready for your real website.
First, install those postcss plugins:
$ npm i --save-dev autoprefixer cssnano.
Second, add them to postcss.config.js file like that:
module.exports = {
  plugins: [
    require('autoprefixer'),
    require('cssnano'),
    // More postCSS modules here if needed
  ]
}
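As a rough illustration of what these two plugins do (the exact output depends on the plugin versions and your browserslist config, so treat this as a sketch): autoprefixer adds vendor prefixes, and cssnano strips whitespace and shortens values:

```css
/* Input */
a {
  user-select: none;
  transition: color 0.5s;
}

/* Possible output after autoprefixer + cssnano (version-dependent) */
a{-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;transition:color .5s}
```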
You can dive into PostCSS deeper and find more plugins that suit your workflow or project requirement.
After all that CSS setup only one thing left. Webpack will resolve your .scss imports, transform them, and... What's next? It won't magically create a single .css file with your styles bundled; we have to tell Webpack to do that. But this task is out of loaders' capabilities. We have to use Webpack's plugin for that.
🔌 Plugins
Their purpose is to do anything else that loaders can't. If we need to extract all that transformed CSS into a separate "bundle" file, we have to use a plugin. And there is a special one for our case: MiniCssExtractPlugin:
$ npm i --save-dev mini-css-extract-plugin
We can import plugins separately right at the start of the webpack.config.js file:
const MiniCssExtractPlugin = require("mini-css-extract-plugin");
After our module.rules array where we set loaders, add a new plugins section where we activate our plugins with options:
module: {
  rules: [
    /* ... */
  ]
},
plugins: [
  new MiniCssExtractPlugin({
    filename: "bundle.css"
  })
]
Now we can chain this plugin into our CSS loaders:
{
  test: /\.(sa|sc|c)ss$/,
  use: [
    {
      // After all CSS loaders, we use a plugin to do its work.
      // It gets all transformed CSS and extracts it into a separate
      // single bundled file
      loader: MiniCssExtractPlugin.loader
    },
    {
      loader: "css-loader",
    },
    /* ... Other loaders ... */
  ]
}
Done! If you followed along, you can run the $ npm run build command and find a bundle.css file in your dist folder. The general setup should now look like this:
Webpack has tons of plugins for different purposes. You can explore them at your need in official documentation.
🔬 More loaders: images and fonts
At this point, you should have a grasp of the basics of how Webpack works. But we are not done yet. Most websites need some assets: images and fonts that we set through our CSS. Webpack can resolve a background-image: url(...) line thanks to css-loader, but it has no idea what to do if the URL points to a .png or .jpg file.
We need a new loader to handle files inside CSS or to be able to import them right in JavaScript. And here it is:
file-loader
Install it with $ npm i --save-dev file-loader and add a new rule to our webpack.config.js:
rules: [
  {
    test: /\.js$/,
    /* ... */
  },
  {
    test: /\.(sa|sc|c)ss$/,
    /* ... */
  },
  {
    // Now we apply a rule for images
    test: /\.(png|jpe?g|gif|svg)$/,
    use: [
      {
        // Using file-loader for these files
        loader: "file-loader",
        // In options we can set different things like format
        // and directory to save
        options: {
          outputPath: 'images'
        }
      }
    ]
  }
]
Now if you use some image like this inside your CSS:
body { background-image: url('../images/cat.jpg'); }
Webpack will resolve it successfully. You will find your image with a hashed name inside dist/images folder. And inside bundle.css you will find something like this:
body { background-image: url(images/e1d5874c81ec7d690e1de0cadb0d3b8b.jpg); }
As you can see, Webpack is very intelligent — it correctly resolves the path of your url relatively to the dist folder!
You can add a rule for fonts as well and resolve them similarly to images; change outputPath to a fonts folder for consistency:
rules: [
  {
    test: /\.js$/,
    /* ... */
  },
  {
    test: /\.(sa|sc|c)ss$/,
    /* ... */
  },
  {
    test: /\.(png|jpe?g|gif|svg)$/,
    /* ... */
  },
  {
    // Apply rule for font files
    test: /\.(woff|woff2|ttf|otf|eot)$/,
    use: [
      {
        // Using file-loader too
        loader: "file-loader",
        options: {
          outputPath: 'fonts'
        }
      }
    ]
  }
]
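With that rule in place, a @font-face declaration in your SASS can point at a local font file, and Webpack will copy the file into dist/fonts and rewrite the URL (the font name and paths below are hypothetical, for illustration only):

```css
/* Hypothetical src/sass/_fonts.scss */
@font-face {
  font-family: "MyWebFont";
  src: url("../fonts/mywebfont.woff2") format("woff2"),
       url("../fonts/mywebfont.woff") format("woff");
  font-weight: 400;
}
```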
🏆 Wrapping up
That's it! A simple Webpack configuration for a classic website. We covered the concepts of entry point, loaders, and plugins and how Webpack transforms and bundles your files.
Of course, this is quite a straightforward config aimed to understand a general idea about Webpack. There are many things to add if you need them: source mapping, hot reloading, setting up JavaScript framework, and all other stuff that Webpack can do, but I feel those things are out of the scope of this guide.
If you struggle or want to learn more, I encourage you to check Webpack official documentation. Happy bundling!
Discussion (42)
Hey Anton,
I am having one issue when I run build I am getting this error :
Any ideas why this is happening?
Hey Robin!
Well, it's basically what it says: the sass dependency was probably outdated for sass-loader, since the article is 2 years old!
But no worries, I updated all dependencies in the repository (including Webpack, it works now for the latest Webpack 5!), and fixed some typos and wording in the article. You can try to check out the repo and run the code again if you still need it :)
Hi,
fix this one please:
Thanks, I just noticed what was wrong! Fixed.
Hi Anton, thanks for this article, very helpful! however when I try it in the production mode I get "You did not set any plugins, parser, or stringifier. Right now, PostCSS does nothing. Pick plugins for your case on postcss.parts/ and use them in postcss.config.js.". I installed postcss-loader, created postcss.config.js file the same as you did but it's not working. Any idea what could be the issue?
Hi!
It's because in postcss.config.js there is a check for process.env.NODE_ENV variable. Even if you set Webpack mode to production it won't automatically change Node environment variable.
The simplest way to configure this is to install cross-env package:
Then just add another npm script in package.json for production mode:
Now when you run npm run build-production, the process.env.NODE_ENV variable will be production and the postcss.config.js check is going to work:
From Webpack documentation:
Many thanks! Now it works :)
@Anton, Big big thank you!!! You've saved my nerves 😊 I had a problem with paths where webpack puts assets after building project. Now you made it absolutely clear!!! THANK YOU BRO! Big plus to your karma 😉
Thank you! Glad it helped! 😊👍
Loved this article, it's concise and up to date.
The most confusing part for me is Webpack using a .js file for an entry point, instead of an .html.
Speaking of which... there should be an HTML loader setup of some sort, so my .html content is packed into the bundle too, right?
Gonna go search the documentation for it. Cheers!
Correct, there is a html plugin that will automatically create html from template and insert your JS bundle there.
Hey Anton ,
I am very new to JS and everything written here is clearly understandable to me , thank you for this.
though I am getting an exception in the build: module not found for third-party JavaScript libraries.
dev-to-uploads.s3.amazonaws.com/i/...
Thanks, man.
It's hard to say what exactly is wrong with your build, as I can't see your gulp or Webpack config file and what you are actually doing there.
But the error message is pretty clear: you are missing the jquery and slick-carousel modules. Maybe you declare them globally, like window.jQuery, as is usual for WordPress websites, but Webpack doesn't know about them. You might need to install them via npm or expose them via expose-loader.
stackoverflow.com/questions/290801...
Check this, it might be your case
Hi Anton,
i am using babel-loader..
this is my gulpfile
var gulp = require('gulp'),
settings = require('./settings'),
webpack = require('webpack'),
browserSync = require('browser-sync').create(),
postcss = require('gulp-postcss'),
rgba = require('postcss-hexrgba'),
autoprefixer = require('autoprefixer'),
cssvars = require('postcss-simple-vars'),
nested = require('postcss-nested'),
cssImport = require('postcss-import'),
mixins = require('postcss-mixins'),
colorFunctions = require('postcss-color-function');
gulp.task('styles', function() {
return gulp.src(settings.themeLocation + 'css/style.css')
.pipe(postcss([cssImport, mixins, cssvars, nested, rgba, colorFunctions, autoprefixer]))
.on('error', (error) => console.log(error.toString()))
.pipe(gulp.dest(settings.themeLocation));
});
gulp.task('scripts', function(callback) {
webpack(require('./webpack.config.js'), function(err, stats) {
if (err) {
console.log(err.toString());
}
console.log(stats.toString());
callback();
});
});
gulp.task('watch', function() {
browserSync.init({
notify: false,
proxy: settings.urlToPreview,
ghostMode: false
});
gulp.watch('../**/*.php', function() {
browserSync.reload();
});
gulp.watch(settings.themeLocation + 'css/**/*.css', gulp.parallel('waitForStyles'));
gulp.watch([settings.themeLocation + 'js/modules/*.js', settings.themeLocation + 'js/scripts.js'], gulp.parallel('waitForScripts'));
});
gulp.task('waitForStyles', gulp.series('styles', function() {
return gulp.src(settings.themeLocation + 'style.css')
.pipe(browserSync.stream());
}))
gulp.task('waitForScripts', gulp.series('scripts', function(cb) {
browserSync.reload();
cb()
}))
and following is my webpack configuration:
const path = require('path'),
settings = require('./settings');
module.exports = {
entry: {
App: settings.themeLocation + "js/scripts.js"
},
output: {
path: path.resolve(__dirname, settings.themeLocation + "js"),
filename: "scripts-bundled.js"
},
module: {
rules: [
{
test: /\.js$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader',
options: {
presets: ['@babel/preset-env']
}
}
}
]
},
mode: 'development'
}
Following is how I am importing the 3rd-party libraries:
import $ from 'jquery';
import slick from 'slick-carousel';
I tried to use the plugin configuration in the webpack settings also, but no luck.
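One common culprit here (an assumption about this setup, not something confirmed in the thread) is that slick-carousel expects a global `$`/`jQuery` rather than a module import. webpack's ProvidePlugin can shim that; a minimal config fragment:

```javascript
// webpack.config.js (fragment) — ProvidePlugin rewrites free references to
// $ and jQuery inside any module into require('jquery'), which legacy
// plugins such as slick-carousel often depend on.
const webpack = require('webpack');

module.exports = {
  plugins: [
    new webpack.ProvidePlugin({
      $: 'jquery',
      jQuery: 'jquery'
    })
  ]
};
```

This fragment would be merged into the existing entry/output/module settings rather than replacing them.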
Hi Anton,
I added the following lines to my webpack configuration, with the absolute path of my node_modules, and now it's able to build:
resolve: {
  modules: [
    '__dirname','/Users/rajat.c.singhaccenture.com/Local\ Sites/cp-unique-convent-school/app/public/build-setup/node_modules'
  ]
}
Thank you, this is the only tutorial I have found that really helped me understand and get the basics of webpack working! However, in case any other beginners are reading, I want to mention that I would not have been able to get past the first section if I hadn't just come from another tutorial that told me to initialize with "npm init -y". When I tried without the -y, a bunch of confusing questions that I didn't know how to answer came up. With -y, everything worked perfectly.
Yes, a good point! That's one way to do it.
You can just press Enter on each of those questions to accept defaults!
Glad you liked the guide :)
Hi Anton,
I wrote the exact same code and I have an error that I can't fix:
"Module build failed (from ./node_modules/mini-css-extract-plugin/dist/loader.js):
ModuleParseError: Module parse failed: Unexpected token (1:0)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See webpack.js.org/concepts#loaders"
Do you have an idea of what's happened ?
Thanks for your article!!
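For anyone else hitting this: the "Unexpected token (1:0)" from mini-css-extract-plugin's loader usually means the CSS file reached webpack with no matching rule, so it fell through to the default JS parser. A hedged sketch of the rule that is typically needed (assuming css-loader is also installed; adapt to your actual config):

```javascript
// webpack.config.js (fragment) — route .css files through css-loader and
// then extract them to a real file, instead of letting webpack's default
// JavaScript parser choke on them.
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [MiniCssExtractPlugin.loader, 'css-loader']
      }
    ]
  },
  plugins: [new MiniCssExtractPlugin({ filename: 'bundle.css' })]
};
```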
Good post. Thanks
Simply amazing guide Anton!! Really, very simple and understandable. I already followed you before I even finished reading.
Just one pro-tip. In English, if you are talking about objects (non-living things) use "it" instead of "he/she". Simply, use "it" when talking about everything except humans or animals. So, webpack == "it"
Cheers!
Glad you enjoyed the guide!
Oh, sorry! Good to know! I'm not a native speaker, but I try to improve my English constantly, so thanks 😊
Great tutorial, thanks Anton! It would really help if you could let us see your full webpack.config.js though. I'm getting a weird error, TypeError: this.getOptions is not a function, from sass-loader. I'm pretty sure it's because something's in the wrong place in my config.
My problem was due to package versions. On npm install I was getting these warnings:
Installing webpack 5.0.0 made them go away and got the build running correctly.
this tutorial is super amazing, thanks for the help.
Very nice and well documented!
Thank you, this is the best thing I've found on webpack for beginners.
Incredibly helpful Anton, I really appreciate the step-by-step explanation of why we're adding lines.
Thanks for the post - Webpack can be super tricky to setup, and this is a great step-by-step tutorial for it. Thanks!
great stuff!
Remember to install webpack globally, and use webpack-dev-server too for more convenience: npm install webpack webpack-cli webpack-dev-server -g
This was great - very useful, thanks.
I haven't read the article yet, just title of it....but I guess today it will save my nerves for setting up Webpack 😊 I have troubles with images/fonts in dev/prod modes. Thanks a lot in advance!
Thanks Anton for the clear and lucid guide. You just saved my day.
Hey Anton,
I love the explanation, great work. I am having one issue though: when I run $ npm run build, the bundle.css file isn't created in my dist folder.
Any ideas?
Great Tutorial.
I use the webpack-dev-server plugin.
Please, can anyone show what I have to write in the config so that the dev server serves a
PHP file (index.php) instead of index.html?
This was fabulous!
How to configure multi-page application with webpack4?
Single bundle solutions are harder to fit within a performance budget, particularly for large applications
cool, but where should I place index.html file in project?
Right in the root, where webpack.config.js is placed.
This was really useful in helping me wrap my head around a basic setup for webpack. Thanks so much for putting this article out!
Love this tutorial! It so easy :D
I hope there is another tutorial Webpack 4 & Bootstrap setup XD
Is there a way to set up webpack with a watcher, so that anytime you edit a scss or js file it automatically runs the sass/babel/postcss tasks?
Of course there is, for example you can use webpack-dev-server for that. But that is way out of scope of this guide and this is more advanced stuff.
webpack.js.org/concepts/hot-module...
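For completeness, the two simplest options look roughly like this (a sketch, not a drop-in config; the devServer option names follow webpack-dev-server v4, and older versions used contentBase instead of static):

```javascript
// webpack.config.js (fragment) — 'watch: true' re-runs every configured
// loader (sass/babel/postcss) on each file save; webpack-dev-server
// additionally serves the output and reloads the browser.
module.exports = {
  watch: true,
  devServer: {
    static: './dist', // directory to serve (v4 name; 'contentBase' pre-v4)
    hot: true         // enable hot module replacement
  }
};
```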
Simon Marlow wrote:
> Ian Lynagh wrote:
>>
>> HEAD validates for me both with and without your workaround. I have:
>>
>> $ gcc --version
>> i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465)
>>
>> Won't the workaround break when Apple ships gcc 4.3?
>
> Yes, it would break if a future Apple gcc follows the standard gcc in
> this regard. So for that reason I propose we use
>
> #if defined(__GNUC_GNU_INLINE__) && !defined(__APPLE__)
> #  if defined(KEEP_INLINES)
> #    define EXTERN_INLINE inline
> #  else
> #    define EXTERN_INLINE extern inline __attribute__((gnu_inline))
> #  endif
> #else
> #  if defined(KEEP_INLINES)
> #    define EXTERN_INLINE
> #  else
> #    define EXTERN_INLINE INLINE_HEADER
> #  endif
> #endif
>
> on the grounds that the !__GNUC_GNU_INLINE__ case is perfectly safe to
> use, except that it might end up duplicating a little code. This will
> be safe even if Apple changes their mind in the future. Could someone
> try this version?

validates fine here.

thanks,
d~
VM GRE Poor performance
Hi,
I have Openstack Juno with Neutron (GRE) running on Ubuntu 14.04. I have allocated one VM on each of two different compute nodes. When I run iperf between the two compute nodes I get close to 1 Gbps (they are connected using 1 Gbps NICs). I have tried to change the MTU size to 1450 as recommended in: (...) (...) (...)
I have set up the DHCP agent to set the MTU on the VMs. This is how dhcp_agent.ini looks:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
external_network_bridge = br-ex
ovs_use_veth = True
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
and dnsmasq-neutron.conf: dhcp-option-force=26,1450
I have also tried to disable GRO on the VM using: ethtool -K eth0 gro off. However, the performance between VMs allocated on different compute nodes is very poor (close to 100 Mbps). If I run multiple TCP sessions I get around 200 Mbps max (sum).
I have tried to find out where the mistake or bottleneck is, but I could not find it. Can anyone give me some advice on finding the bottleneck? Or has anyone faced this problem before?
Thanks
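As a sanity check on the 1450 figure, the encapsulation arithmetic can be sketched in a few lines (the overhead values are the commonly cited ones, not measured from this deployment):

```javascript
// GRE tunneling shrinks the usable inner MTU: the outer IPv4 header and
// the GRE header must both fit inside the 1500-byte physical MTU.
const physicalMtu = 1500;
const outerIpv4 = 20; // outer IP header added per tunneled packet
const greBase = 4;    // minimal GRE header (larger with key/seq options)

const maxInnerMtu = physicalMtu - outerIpv4 - greBase;
console.log(maxInnerMtu); // 1476 — Neutron's 1450 leaves extra headroom
```

Since an MTU mismatch mostly shows up as fragmentation overhead, a drop from ~1 Gbps to ~100 Mbps may have an additional cause; one commonly reported fix for GRE on OpenStack is disabling GRO on the compute hosts' physical NICs, not only inside the VM.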
Cases of BP and M&S
Task 1
Blame can be adjusted by mitigation or aggravation; Praise can be adjusted by attenuation and amplification.
The Combined Code on Corporate Governance sets out standards of best practice and includes high-level principles and detailed provisions. ‘Comply or explain' allows a company to choose whether to follow the code's provisions. Although M&S have a moral responsibility to separate the roles of chairman and chief executive according to the Combined Code, there is no legal requirement. If companies have not complied with the Code's provisions, they need only provide an explanation (Financial Reporting Council, no date). Sir Stuart Rose chose not to follow the Combined Code because he is more comfortable with ‘comply or explain'. Rose has not done anything wrong according to the "comply-or-explain" approach, so the blame can be adjusted by mitigation.
In the UK, 95% of companies separate the CEO and chairman positions. This suggests that although the majority of companies follow the Combined Code, 5% of companies do not (Kakabadse, A.P. & Kakabadse, Nada K., 2006). According to Booz Allen Hamilton data, separating the roles of Chairman and CEO would reduce investors' returns (Chuck, Rob & Edward, 2004). Therefore, it is possible that Rose didn't follow the Combined Code because he wanted to increase the return to shareholders.
M&S promoted Sir Stuart Rose to be chairman and chief executive, a decision that was supported by four shareholders within the company. It can be surmised that he had done a good job as Chief Executive, and that this resulted in his promotion to the dual role of Chairman and CEO. Rose joined M&S in 1972 and spent ten years in charge. He successfully rehabilitated M&S with a series of key changes, such as stopping the use of ‘chubby' models. M&S subsequently became the second most profitable retailer in the world because of him (BBC news, 2007). With such heavy success under his tenure, no one wanted him to leave the company, and hence he was promoted, largely due to his past performance.
James Madison stated that "The accumulation of all powers in the same hands, whether of one, a few or many, and whether hereditary, self-appointed or elective, may justly be pronounced the very definition of tyranny" (Elizabeth, 1995, p.130). Concerning consequences, Sir Stuart Rose would gain too much power on becoming chairman and chief executive, and tyranny could emerge at M&S. In addition, M&S haven't complied with the Code's provisions. A possible future result is that other companies may follow M&S and breach best practice by degrees. They won't take responsibility to uphold corporate ethics in the UK.
Task 2
M&S chose to promote Sir Stuart Rose to be company chairman and chief executive, but Rose informed the second-biggest shareholder, L&G, only one hour before announcing it. Rose wanted all shareholders to support the decision, but the fact that major shareholders including L&G were informed with so little notice could be seen as culpable behaviour by the company. It could be surmised that Rose chose to act in this manner because he didn't want to give the shareholders enough time to influence the decision, as there was a risk of them rejecting it. Shareholders have the right to participate, ask questions and vote at meetings to discuss key decisions such as this (Freshfields Bruckhaus Deringer, 2008). Sir Stuart Rose provided limited time for the shareholders to participate. He made many excuses for this behaviour, such as claiming they were afraid the details could leak to the media, and so delayed telling shareholders (Timesonline, 2008).
Sir Stuart Rose hasn't complied with the Combined Code which can be seen as culpable. The fact that he became both the Chairman and Chief Executive could have negative implications as there is too much power placed onto one man. All companies should be ethical as they have moral responsibility to uphold corporate ethics. Smith and Johnson (1996) stated that “morally irresponsible behaviour may be condoned as long as it does not break the law.” (Smith & Johnson, 1996). M&S is a company that could be seen as being morally irresponsible as it didn't follow the Combined Code, although they have not broken the law. Other companies may follow suit resulting in a negative impact on society. This might aggravate blameworthiness towards Rose for the consequences that may arise due to the company's decisions to ignore the Combined Code in the first place. However, Sir Rose has helped to train his future successor and has transferred some of his current responsibilities. Sir Rose deserves praise for this as this would allow success of the current regime to continue at M&S even when he steps down. This might mitigate some of the blame directed towards him.
Sir Stuart Rose failed to give the company shareholders an in-depth or clear explanation of the reasons behind combining the roles of Chairman and CEO. The investors were simply told that the decision had gained a lot of support, although they weren't told who or where the support was coming from. Sir Rose has a responsibility to explain clearly. However, he wrote letters to the investors spelling out the plan, which is laudable.
Task 3
“An act is morally right if the net benefits over costs are greatest for the majority. Also, the greatest good for the greatest number must result from this act.”(Joseph, 2009, p.105). Lord Browne cut costs because he wanted to maximize BP's profit. Thus, BP could maximize benefits for its shareholders. He also wanted to maximize benefit for society: cutting costs can provide cheaper oil prices to customers. According to the above theory, Lord Browne's act is morally right.
“Kant is praised or blamed for actions within our control. Kant insisted that, where the moral evaluation of our actions was concerned, consequences did not matter.”(Charles, 1997) The CSB found BP's drum and stack were unsafe and that the company failed to replace blowdown drums and stacks with a flare. Although Lord Browne had a responsibility to do this appropriately and well, he ignored it and instead chose a less expensive option (Timesonline, 2007). Based on Kant's theory, Lord Browne's action might be blamed.
“The view that equates morality with self-interest is referred to as egoism. The act is morally right if it best promotes an agent's interests.”(William, 2008, p.45) Texas City employees knew that their workplace was unsafe, yet they never informed anyone about it and just continued to work. This is because they considered only their own self-interest rather than that of others. Texas City is a small city where it is difficult to find a job for anyone who leaves BP. Hence, the workers didn't pressure BP to protect all employees' safety, in case they lost their jobs as a result.
Virtue ethics awards ‘internal goods'. One refinery manager refused BP permission to cut costs because he believed that cutting cost would cause an unsafe working environment for his employees. Although his refusal could result in him losing his job, he cared little about it as he only wanted his employees to be safe. He deserves praise for putting the safety of his workers ahead of his own job security.
According to OSHA standards, employers should provide a sufficient training programme for their employees. The training programme should teach employees about the health effects of the chemicals they work with and what they can do to protect themselves (OSHA, 1970). However, Lord Browne ordered the training courses to be cut. The CSB found that the employees were undertrained, increasing the risk of accidents (Timesonline, 2007). Lord Browne deserves blame for this.
Employers have responsibilities to make sure their employees have and use safe tools and equipment and to properly maintain the equipment (OSHA, 1970). However, the CSB found that BP lacked supervisory oversight. BP's managers failed to ensure their workers were using safe tools. The failure of the alarm system and the lack of other indicators and automatic safety devices also caused the accident (Timesonline, 2007). Thus, we can see that managers didn't maintain the equipment properly.
Task 4
The senior managers who work at Toledo stated that ‘cost savings and production targets didn't override safety concerns'. However, the long series of accidents that have occurred in the past tells another story. For example, in 1890, an accident killed 3 people when workers unloaded crude oil from a railcar; in 1987 and 2000, a big fire and an explosion respectively killed 2 people; in 2005, an explosion at BP's Texas City refinery resulted in the death of 15 workers, with 170 injured. The frequency of accidents over this timescale suggests that safety levels for the workers were not very high and were not improving. This weakens the senior managers' statement that their plans did not override safety concerns, and it is possible that these accidents could largely have been avoided had safety levels been higher.
Toledo hourly-paid workers say that “BP's production was a higher priority than process safety.” The BP managers have also admitted that production had been their priority rather than safety when interviewed. The utilitarian ethic condemns people who don't promote the greatest human welfare (William, 1999, p.50). BP's managers care about their production, rather than employees' welfare.
Lord Browne stated that he would ‘reform the company' despite saying that he would retire and leave BP on 31st December 2008 (Heather, 2006). However, he did not fulfil his promise before he left BP. He tried to justify retiring early by saying that it was because he had nothing to do with the Baker report. Analysts contend that Lord Browne left because of longstanding company policy. BP's safety systems still need to be improved.
The CSB concluded that cost-cutting was the most important reason for the high frequency of accidents: BP had not provided enough resources to ensure process safety. BP "strongly disagreed" with the contents of the CSB report, particularly with many of the findings and conclusions. However, despite the claims of innocence from BP, it is a fact that BP reduced the training budget by half from 1998 to 2004. Furthermore, BP also cut the training staff from 30 people in 1997 to just 8 in 2004 (Xinhua, 2006). This again severely weakens BP's arguments, since it is likely that a lack of training for employees was a major factor in so many of the accidents.
To monitor the safety of the company, BP's managers relied solely on "personal injury rate statistics". However, personal injury rate statistics do not provide an accurate representation of process safety or of the number of worker injuries, since they do not take into account contractor workers' injuries. Hence, the figures are very misleading, and likely to underestimate the true number of injuries and overestimate the safety of the working environment. The accuracy of personal injury forms is also debatable, since they use a check-box format completed by the worker and checked off against safety policy and procedural requirements, even when those requirements were not met. Hence, BP needs to readdress how it monitors the safety of the work area so as to make the necessary improvements.
Task 5
Regulations and legislation use legal restrictions to control people's behaviour through penalties. However, under current legislation, the penalties are weak. Thomas and Patricia claimed that government regulation is impotent and ineffectual, since light penalties are unlikely to strike any fear into people who commit violations (Thomas & Patricia, 1996, p.440). For example, a fine of US$ 7,000 can be seen as a measly sum for a violation involving a high probability that serious injury or death could result, where the employer knew (or should have known) of the hazard (OSHA, 1970). Thus, the government should make changes to the legislation and impose far heavier penalties. Only then will people, including Lord Browne (BP), care to improve current company standards.
In the case of M&S, although they have a moral responsibility to separate the roles of Chairman and CEO according to the Combined Code, there is no legal requirement to do so (Financial Reporting Council, no date). Jack and Elizabeth stated that “when everyone is responsible, no one takes responsibility unless everyone feels personally responsible for the day-to-day maintenance of standards.” (Jack & Elizabeth, 1992, p.166). In an attempt to prevent other companies from following suit and ignoring the Combined Code, the government should impose more legislation with heavy penalties, where it is a legal requirement for all companies to separate the roles of Chairman and CEO. William claimed that “a regulatory approach is that standards would be legally enforceable.” If the organizations don't meet these standards, they could be forced to shut down (William, 1999, p.405).
“Virtues are dispositions that our moral principles require us to develop.” It is correlated with utilitarianism, rights, justice and caring. Virtue ethic condemns person who is dishonest or only reluctantly told the truth out of fear (William, 1999, p.132-136). An example of this is Rose's late announcement to his shareholders of his new combined role. Although it can be surmised that this was due to not wanting to give enough time for his shareholders to influence the decision, he made the excuse of not wanting details to be leaked to the media for the lateness of his announcement (Timesonline, 2008). This is a clear example of dishonesty. Virtue ethics also condemns those that are ruthless (William, 1999, p.134). An example of ruthless behavior was demonstrated by Lord Browne, who only focused on business profits by cutting costs, while ignoring the safety of his employees in doing so.
Therefore, virtue ethics is an important requirement for any organization to achieve success, as it requires people to break bad habits and to develop good ones. This is essential to allow people to make correct decisions in the future, to prevent corruption and to help achieve goals. Joseph stated that “virtue ethics adds an important dimension to rules and consequentiality ethics by contributing a different perspective for understanding and executing stakeholder management.” (Joseph, 2009, p.114). Thus, virtue ethics is what we need to make us behave well, and encourages a supportive culture.
References
Books
Elizabeth, M. (1995) Business ethics at work. USA : Cambridge University Press.
Jack, M. & Elizabeth, V. (1992) Business ethics in a new Europe. London: Kluwer Academic.
Joseph W. (2009) Business Ethics: A stakeholder and Issues Management Approach with cases.5th edn. Mason, Ohio: South-Western Cengage Learning.
Smith, K. & Johnson, P. (1996) Business Ethics and Business Behaviour. Boston, MA: International Thomson Business Press.
Thomas Donaldson & Patricia H. Werhane (1996) Ethical issues in business: a philosophical approach. 5th edn. Upper Saddle River, N.J.: Prentice-Hall.
William H. Shaw. (1999) Business ethics. 3rd edn. Belmont, CA: Wadsworth Pub.
William H. Shaw (2008) Business Ethic. 6th edn. United States: Thomson Wadsworth.
Government Publications
United States Department of Labor Occupational Safety and Health Administration (OSHA) Worker Rights Under the Occupational Safety and Health Act 1970
Articles
Austin, C. About website. Available at: (Accessed: 12 January 2010).
BBC News (2007) Available at: (Accessed: 3 January 2010).
Carl, M. (2007) Timesonline website. Available at: (Accessed: 8 January 2010).
Charles D. Kay. (1997) Wofford website. Available at: (Accessed: 9 January 2010).
Chuck, L., Rob, S. & Edward, T. (2004) ‘The World's Most Prominent Temp Workers', CEO Succession 2004, pp.10. Booz Allen Hamilton [online]. Available at: (Accessed: 15 January 2010).
Financial Reporting Council. Available at: (Accessed: 3 January 2010).
Freshfields Bruckhaus Deringer LLP (2008) ‘Implementing the shareholders' rights directive in the UK', pp.2. Freshfields[online]. Available at: (Accessed: 6 January 2010).
Heather, T. (2006) The New York Times website. Available at: (Accessed: 13 January 2010).
Kakabadse, A.P. & Kakabadse, Nada K. (2006) ‘Chairman and Chief Executive Officer (CEO): That Sacred and Secret Relationship', Management Development, pp.10. Cranfield [online]. Available at: (Accessed: 5 January 2010).
Sue, N. Business Ethics Qfinance website. Available at: (Accessed: 7 January 2010).
Xinhua (2006) People website. Available at: (Accessed: 15 January 2010).
Evan Phoenix on Rubinius - VM Internals Interview
Whereas other alternative Ruby implementations such as JRuby, XRuby, Gardens Point Ruby.NET, and IronRuby target existing VMs or, like Ruby 1.x, are written in C, Rubinius uses another approach, taking ideas from Smalltalk VMs, particularly Squeak Smalltalk. Squeak is written in a subset of itself. This subset, called Slang, is basically C with a Smalltalk syntax and a few restrictions. One of the goals of Rubinius is to use this approach throughout the project as well. The language for this is called Garnet (formerly called "Cuby"), and work on it is under way. Evan explains the current status:
It's still something I plan on doing sooner rather than later. There ended up being a lot of issues we wanted to tackle first, and we haven't yet got back to working on Garnet (the new name for Cuby). There were no particular problems yet, but I'm sure we'll find some.

Currently, basic pieces of the VM are written in C, also using an approach from Smalltalk. Evan explains the idea behind primitives:
Garnet looks like ruby at first glance, but the semantics of what things mean have been rewired.
For example, in garnet code 'd = c.to_ref' appears to call a method called to_ref on c, but garnet will translate that into 'd = &c', which is C code. One way to think about it is as a really advanced C preprocessor. It tries to map as much as it can to C constructs. The idea is something that looks like ruby, but behaves like C.
They're small chunks of C code that can be called from Ruby. They're used to implement things that really cannot be done in Ruby. A great example is the ability to allocate an object. On the backend, this interacts with the garbage collector to find enough memory to allocate the specific object in. That operation cannot be in Ruby; it's an operation at the bottom of the stack, thus a primitive operation.

To show how this actually looks, here is an example of a Rubinius primitive:
Rubinius primitives are exactly the same as Smalltalk's primitives. If a method is assigned a primitive operation number, then when the method is run, the primitive is invoked instead of the normal ruby code. If the primitive fails (the primitive itself indicates that it has failed), then the ruby code is run, as fallback behavior. This ruby code can do things like raise an exception about why it failed, convert the arguments and try again, or any number of things.
def fixnum_size(_ = fixnum)
  <<-CODE
  stack_push(I2N(sizeof(int)));
  CODE
end

Evan runs us through the code:
The primitives and instructions use a kind of funny format to make maintaining them easier. All the operations are just Ruby methods, where the body is a string that contains C code. At build time, these files are run and some code calls each method, collecting the C code and then spitting it out to a file which is #include'd into other C code. One of the primary reasons for this is that primitives and instructions are wrapped in a huge switch statement in C, which is a pain to maintain by hand.
Also, it gives us some preprocessor capabilities. For example, here you see that we've put a little magic sauce in the argument definition of the fixnum_size primitive. The point of that code is to automatically register that the C code "POP(self, FIXNUM_P)" should be run before the code in the method. The code that runs the methods and outputs C code will notice that and write it out properly. We use this form to make the primitives easier to write.
I should note that not all primitives use this form currently. Soon, we'll be doing an audit and switching out all of them to use it.
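The build step described here — plain methods whose string bodies are C code, collected at build time into one big switch — can be sketched generically. This is an illustrative analogue in JavaScript, not the actual Rubinius build code, reusing the fixnum_size snippet from the example above:

```javascript
// Each "primitive" is a name mapped to a chunk of C code; the generator
// numbers them and emits the switch statement that would be #include'd
// into the VM's C source.
const primitives = {
  fixnum_size: 'stack_push(I2N(sizeof(int)));'
};

function emitPrimitiveSwitch(prims) {
  const cases = Object.entries(prims).map(
    ([name, body], i) => `    case ${i}: /* ${name} */ ${body} break;`
  );
  return ['switch (prim) {', ...cases, '}'].join('\n');
}

console.log(emitPrimitiveSwitch(primitives));
```

Generating the switch from one table keeps the case numbering and the per-primitive bodies in a single place, which is exactly the maintenance pain the interview describes avoiding.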
As can be seen, the borders between Ruby and C code are fuzzy and are going to shift in the future, as more of Garnet becomes available. Further up the stack, Ruby is used for implementing standard libraries.
The core VM (which is really just the opcodes) is currently written in C, as are the primitives. The primitives are the first thing I want to use Garnet on. Both garbage collectors are also written in C (though the amount of code they consume is small).
Other than that, everything is in Ruby. Everything from parsing the interpreter command line arguments (things like rubinius -d -v, which turn on debugging and warnings) to all the methods on String. The complete runtime environment can easily be manipulated, as it's just Ruby.
Rubinius has made great strides in the past. To keep this up and to get more developers to start coding or testers/users to start banging away at the existing VM, some more information about the project is needed.
At the moment, the IRC channel #rubinius is the best source for information (besides, of course, the source). Some historic digests of this channel can be found online too. Evan details the plans to improve the transparency of the project:
We're really working hard to make the process more transparent. Currently, the IRC channel is the primary way we communicate, and we're actually working to integrate IRC logs into. We also encourage people to use the forums on to ask questions. The forum has RSS feeds set up and most of the devs watch them for discussions (though I have to admit, this hasn't really caught on yet).
Any suggestions people have about making the project more transparent, I'm all ears. We can always use people writing specs. Once we have a complete spec suite, the rest falls in place pretty easily.
Watch out for part two of this interview, which will feature a more technical look behind the scenes of the new debugging implementation, GC and ObjectSpace, and threading.
In this article we are going to explore building a simple native Android application that utilizes the Chatter REST API within the Salesforce Platform. To accomplish this, we will use the Salesforce Mobile SDK 2.1, which acts as a wrapper for low-level HTTP functions, allowing us to easily handle OAuth and subsequent REST API calls. The TemplateApp provided in the SDK will serve as our starting point.
Getting Set Up
I’m using IntelliJ IDEA for this tutorial, this is the IDE that Android Studio is based on. If you’re already using Android Studio, there will be no appreciable difference in workflow as we proceed; Eclipse users are good to go. Once you have your IDE setup we can go about installing the Salesforce Mobile SDK 2.1 (see link in paragraph above). Salesforce.com recommends a Node.js based installation using the node package manager. We will go an alternate route; instead we are going to clone the repo from Github [Page 16].
Once you have your basic environment setup, go to, and sign up for your Developer Edition (DE) account. For the purposes of this example, I recommend signing up for a Developer Edition even if you already have an account. This ensures you get a clean environment with the latest features enabled. Then, navigate to to log into your developer account.
After you’ve completed your registration, follow the instructions in the Mobile SDK Guide for creating a Connected App [Page 13]. For the purposes of this tutorial you only need to fill out the required fields.
The Callback URL provided for OAuth does not have to be a valid URL; it only has to match what the app expects in this field. You can use any custom prefix, such as sfdc://.
Important: For a native app you MUST include “Perform requests on your behalf at any time (refresh_token)” in your selected OAuth scopes, or the server will reject you, and nobody likes rejection. The Mobile SDK Guide glosses over this point; for more details see: [ ]
When you’re done, you should be shown a page that contains your Consumer Key and Secret among other things.
Now that we’ve taken care of things on the server side, let’s shift our focus over to setting up our phone app.
First, we’re going to start a new Project in IntelliJ; make sure you choose Application Module and not Gradle: Android Application Module, as the way the project will be structured doesn’t play nice with the Gradle build system.
Name it whatever you want, but be sure to uncheck the box that says Create “Hello World!” Activity, as we won’t be needing that. Now that you’ve created your project, go to File -> Import Module…
Navigate to the directory where you cloned the Mobile SDK repo, expand the native directory and you should see a project named “SalesforceSDK” with an IntelliJ logo next to it.
Select it and hit ok. On the next screen, make sure the option to import from external model is selected, and that the Eclipse list item is highlighted. Click next, and then click next again on the following screen without making any changes. When you reach the final screen, Check the box next to SalesforceSDK and then click finish. IntelliJ will now import the Eclipse project (Salesforce SDK) in your project as a module.
The Salesforce Mobile SDK is now yours to command… almost. Go to File -> Project Structure… and select ‘Facets’ under ‘Project Settings’. Choose the facet that has Salesforce SDK in parentheses and make sure the Library module box is checked. Now select the other facet, open the Packaging tab, and make sure the Enable manifest merging box is checked.
Next, select ‘Modules’ from the ‘Project Settings‘ list, then select the SalesforceSDK module. Under the dependencies tab there should be an item with red text; right-click on it and remove it. From there, click on <your module name>; under the dependencies tab click the green ‘+’, select ‘Module Dependency…’, Salesforce SDK should be your only option, click ‘Ok’. Now select ‘Apply’ in the Project Structure window and then click ‘Ok’.
Making the calls
Create a file named bootconfig.xml in res/values/; the content of that file should be as follows:
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="remoteAccessConsumerKey">YOUR CONSUMER KEY</string>
    <string name="oauthRedirectURI">YOUR REDIRECT URI</string>
    <string-array name="oauthScopes">
        <item>chatter_api</item>
    </string-array>
    <string name="androidPushNotificationClientId"></string>
</resources>
Remember the connected app we created earlier? That’s where you will find the consumer key and redirect (callback) uri.
For the curious, despite the fact that we specified refresh_token in our OAuth Scopes server-side, we don’t need to define it here. The reasoning behind this is that this scope is always required to access the platform from a native app, so the Mobile SDK includes it automatically.
Next, make sure your strings.xml file looks something like this:
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="account_type">com.salesforce.samples.templateapp.login</string>
    <string name="app_name"><b>Template</b></string>
    <string name="app_package">com.salesforce.samples.templateapp</string>
    <string name="api_version">v30.0</string>
</resources>
The above values should be unique to your app.
Now create a class named KeyImpl that implements KeyInterface.
public class KeyImpl implements KeyInterface {

    @Override
    public String getKey(String name) {
        return Encryptor.hash(name + "12s9adpahk;n12-97sdainkasd=012",
                name + "12kl0dsakj4-cxh1qewkjasdol8");
    }
}
Once you have done this, create an activity with a corresponding layout that extends SalesforceActivity (referred to below as TutorialActivity), and then create the following Application subclass:
public class TutorialApp extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        SalesforceSDKManager.initNative(getApplicationContext(),
                new KeyImpl(), TutorialActivity.class);
    }
}
This is our application entry point where we initialize the SalesforceSDKManager.
From the Salesforce Mobile SDK Developer Guide:
“The top-level SalesforceSDKManager class implements passcode functionality for apps that use passcodes, and fills in the blanks for those that don’t. It also sets the stage for login, cleans up after logout, and provides a special event watcher that informs your app when a system-level account is deleted. OAuth protocols are handled automatically with internal classes.”
For this tutorial, our corresponding layout is as follows:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:id="@+id/result_text"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Fetch"
        android:onClick="onFetchClick" />

</LinearLayout>
As you can see above, for the purposes of this tutorial, we’re keeping things really simple. A button to make the call and a text box to display it.
Before we get into implementation details, I want to mention a critical point about the manifest file.
Since we have a new application entry point, in our case TutorialApp, we need to let the system know by declaring it in the manifest (the android:name attribute on the <application> element).
If you miss this step, your application will crash with a runtime error complaining that you haven’t made the appropriate call to SalesforceSDKManager.init().
Once this is complete, your application should run without a problem.
Now, we’re going to write the functions to make the actual request. The following methods will go in the activity created above.
private void sendRequest(RestRequest restRequest) {
    client.sendAsync(restRequest, new RestClient.AsyncRequestCallback() {

        @Override
        public void onSuccess(RestRequest request, RestResponse result) {
            try {
                // Do something with the JSON result.
                println(result); // Use our helper function to print our JSON response.
            } catch (Exception e) {
                e.printStackTrace();
            }
            EventsObservable.get().notifyEvent(EventsObservable.EventType.RenditionComplete);
        }

        @Override
        public void onError(Exception e) {
            e.printStackTrace();
            EventsObservable.get().notifyEvent(EventsObservable.EventType.RenditionComplete);
        }
    });
}
The code above will execute a RestRequest object we pass to it and return the results asynchronously.
Let’s construct a RestRequest object for the client to execute:
First, we need a couple of helper methods to help construct our HttpEntities
private Map<String, Object> parseFieldMap(String jsonText) {
    String fieldsString = jsonText;
    if (fieldsString.length() == 0) {
        return null;
    }
    try {
        JSONObject fieldsJson = new JSONObject(fieldsString);
        Map<String, Object> fields = new HashMap<String, Object>();
        JSONArray names = fieldsJson.names();
        for (int i = 0; i < names.length(); i++) {
            String name = (String) names.get(i);
            fields.put(name, fieldsJson.get(name));
        }
        return fields;
    } catch (Exception e) {
        Log.e("ERROR", "Could not build request");
        e.printStackTrace();
        return null;
    }
}

private HttpEntity getParamsEntity(String requestParamsText) throws UnsupportedEncodingException {
    Map<String, Object> params = parseFieldMap(requestParamsText);
    if (params == null) {
        params = new HashMap<String, Object>();
    }
    List<NameValuePair> paramsList = new ArrayList<NameValuePair>();
    for (Map.Entry<String, Object> param : params.entrySet()) {
        paramsList.add(new BasicNameValuePair(param.getKey(), (String) param.getValue()));
    }
    return new UrlEncodedFormEntity(paramsList);
}
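The UrlEncodedFormEntity built above produces a standard application/x-www-form-urlencoded request body. Here is a self-contained sketch of that same encoding using only the JDK, for readers who want to see what ends up on the wire (the class and method names here are illustrative, not part of the Mobile SDK):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FormEncodeDemo {

    // Encode key/value pairs the same way UrlEncodedFormEntity does.
    static String formEncode(Map<String, String> params) throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> params = new LinkedHashMap<String, String>();
        params.put("text", "hello world"); // spaces become '+' in form encoding
        params.put("page", "1");
        System.out.println(formEncode(params)); // text=hello+world&page=1
    }
}
```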
Next, we define the method that will generate our desired RestRequest object.
private RestRequest generateRequest(String httpMethod, String resource, String jsonPayloadString) {
    RestRequest request = null;
    if (jsonPayloadString == null) {
        jsonPayloadString = "";
    }
    // The IDE might highlight this line as having an error. This is a bug; the code will compile just fine.
    String url = String.format("/services/data/%s/" + resource, getString(R.string.api_version));
    try {
        HttpEntity paramsEntity = getParamsEntity(jsonPayloadString);
        RestRequest.RestMethod method = RestRequest.RestMethod.valueOf(httpMethod.toUpperCase());
        request = new RestRequest(method, url, paramsEntity);
        return request;
    } catch (UnsupportedEncodingException e) {
        Log.e("ERROR", "Could not build request");
        e.printStackTrace();
    }
    return request;
}
Lastly, the helper method that prints our result. This isn’t strictly necessary, but is included here for completeness.
/**
 * Helper method to print an object in the result_text field.
 *
 * @param object the object to print
 */
private void println(Object object) {
    if (resultText == null)
        return;
    StringBuffer sb = new StringBuffer(resultText.getText());
    String text;
    if (object == null) {
        text = "null";
    } else {
        text = object.toString();
    }
    sb.append(text).append("\n");
    resultText.setText(sb);
}
Usage:
Remember that button we created earlier in our layout? Now we’re going to give it some life.
public void onFetchClick(View view) {
    RestRequest feedRequest = generateRequest("GET",
            "chatter/feeds/news/me/feed-items", null);
    sendRequest(feedRequest);
}
Now, run your app and if all goes well you will be asked to put in your Salesforce credentials. You will then be asked to authorize the app you just created; click allow and you will then see your app.
When you press the button it should dump all of the JSON data from our request into our textfield. It’s not pretty, but it gets the job done and illustrates the concepts.
Now you can take this data, bind it to a ListView or use it to make other requests such as a ‘like’ or ‘comment’.
Here’s a fully working sample app for reference:
Note:
I’d recommend using a library for parsing JSON, such as Gson or Jackson. We didn’t do that here for the sake of understanding and continuity, as it’s closer to everything you will see in the Salesforce documentation and examples.
Resources
Refer to these documents to see much of what you can do.
- Salesforce Developer Platform – Mobile Resources
- Chatter REST API Cheat Sheet (pdf)
- Chatter REST API documentation
Have fun!
Hi,
I’ve managed to get your code working, but I want to modify it according to our business process. I have been working on this for days now but I’m still nowhere near completion. Here’s the problem: once I run the app, it should not redirect to the Salesforce login page yet. It should first go to an Activity where I put a button. Once that is clicked, then it should proceed to the login page. I can’t seem to get this part working. Hope you can help.
Thanks.
Source: http://www.javacodegeeks.com/2014/06/interfacing-salesforce-with-android.html
The Samba-Bugzilla – Bug 10247
vfs_streams_xattr fails to list streams on FreeBSD server
Last modified: 2013-11-19 12:44:23 UTC
Created attachment 9364 [details]
Possible bug fix
When creating a file on a share with streams support via vfs_streams_xattr, files stored by an OS X client do not display their properties and resource fork in the Finder.
Debugging shows that the problem is related to vfs_streams_xattr because vfs_streams_depot works.
Steps to show behaviour on windows:
Use a share like this:
[test]
comment = test
browseable = yes
writable = yes
case sensitive = yes
path = /data/test
nt acl support = no
vfs objects = zfsacl streams_xattr
store dos attributes = yes
ea support = yes
Using the windows client, create a file test.txt, then
* Storing an alternate stream (echo >test.txt:stream1) works
* Retrieving the alternate stream (type test.txt:stream1) works
* Listing the alternate streams of test.txt with a tool like stream explorer fails. This is also the reason why the OSX client fails, it seems to list the streams before it tries to access them.
Debugging:
The problem seems to be the function bsd_attr_list() in lib/replace/xattr.c. It calls extattr_get_file() for both system and user extattr namespace. The first call for the system namespace fails with EPERM if the user is not root, making bsd_attr_list() fail.
Attached patch to ignore EPERM on system namespace seems to fix the problem for me.
looks reasonable, but i thought of the following approach also:
when iterating through the extattr namespaces in bsd_attr_list(), skip the EXTATTR_NAMESPACE_SYSTEM namespace when we are not running as root. This would also reduce some overhead.
this is fixed with 374b2cfde74e0c61f4b2da724b30d0e430596092 in master, which can (and should) be cherry-picked to 4.0 and 4.1.
Patch in master - looks good to me. Should apply cleanly to 4.1.next, 4.0.next, 3.6.next.
Jeremy.
Works for me, thanks!
Pushed to autobuild-v4-1-test and autobuild-v4-0-test.
Does not apply to current v3-6-test.
Is it very important for 3.6?
for people with *BSD servers and OS X clients it's a nasty bug, but it is not a security problem, so 3.6, which is in security-fixes-only mode, does not need this.
(In reply to comment #6)
> for people with *BSD servers and OS X clients it's a nasty bug but it is not a
> security problem, so 3.6, which is security-fixes-only-mode does not need this.
There will be one last maintenance release of Samba 3.6.
So if it's worth it and someone provides a patch...
Created attachment 9410 [details]
git-am fix for 3.6.next
Back-port for 3.6.next.
Jeremy.
(In reply to comment #8)
> Created attachment 9410 [details]
> git-am fix for 3.6.next
>
> Back-port for 3.6.next.
>
> Jeremy.
Thanks a lot, Jeremy!
Pushed to v3-6-test.
This fix breaks build in case --enable-uid-wrapper or --enable-nss-wrapper is enabled.
The fix is trivial though, see the attachment.
Created attachment 9438 [details]
Expose geteuid function call
Source: https://bugzilla.samba.org/show_bug.cgi?id=10247
This worked before SP1 was installed.
using NUnit.Framework;

namespace Test
{
    public class Adder
    {
        public int Add(int n1, int n2)
        {
            return n1 + n2;
        }
    }
}

namespace AdderTest
{
    using Test;

    [TestFixture]
    public class Class1
    {
        [Test]
        public void AddTest1()
        {
            var add = new Adder();
            var res = add.Add(2, 3);
            Assert.AreEqual(5, res);
        }
    }
}
This command still works after SP1 installed
vstest.console.exe /usevsixextensions:true /framework:framework45 /platform:x86 AdderTest.dll
This one doesn't and it fails on every PC in our team where SP1 is installed.
vstest.console.exe /enablecodecoverage /usevsixextensions:true /framework:framework45 /platform:x86 AdderTest.dll
Error: The active Test Run was aborted because the execution process exited unexpectedly. Check the execution process logs for more information. If the logs are not enabled, then enable the logs and try again.
The logs show the following error:
V, 10664, 11, 2016/02/02, 15:00:09.114, 2115392201142, vstest.console.exe, TestRunRequest:SendTestRunMessage: Starting.
I, 10664, 11, 2016/02/02, 15:00:09.116, 2115392205435, vstest.console.exe, TestRunRequest:SendTestRunMessage: Completed.
E, 10664, 10, 2016/02/02, 15:00:23.722, 2115441528953, vstest.console.exe, TAEF Engine Execution: [HRESULT: 0x800706BA] Failed to create the test host process for out of process test execution. (The test host process failed to run with exit code 0xc0000005. Failed to set up communication with the test host process. (The connection attempt timed out.))
It appears SP1 installed all-new vstest executables and DLLs, and also seems to have installed the TAEF components, although I am using Windows 7.
Using NUnit 2.6 and the VS NUnit test runner extension (also tried NUnit 3.0 with its test runner; still broken).
We are using VSTest because our code is a combination of C++/C# and 64-bit components. We need unit and coverage tests.
Update:
Used VS 2015 to write an IntelliTest; that fails the same way when running coverage.
This is a regression in VS 2015 Update 1. Wex.Communication.dll was calling GetModuleFileNameExW while its global variables were being initialized. On Windows 7 while collecting code coverage, this API caused Wex.Communication.dll to be unloaded. When GetModuleFileNameExW returned, the process crashed because the DLL was unloaded. I've fixed this to wait until after the DLL has fully loaded to call GetModuleFileNameExW, which prevents the crash. (I'm a Software Engineer who works on TAEF at Microsoft.) The fix will ship in a future VS 2015 update.
To work around this, disable TAEF by setting the registry value named "Value" to 0 in the HKEY_CURRENT_USER\SOFTWARE\Microsoft\VisualStudio\14.0_Config\FeatureFlags\TestingTools\UnitTesting\Taef key. (VSTest has two implementations in VS 2015. When TAEF is disabled, the old implementation from VS 2013 is used. The TAEF-based implementation is new in VS 2015.)
See this link
The entry by mmanela on Sep 1, 2015
Setting the registry key mentioned in this link resolved the issue. Microsoft told me this was a workaround and not a fix.
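For reference, the workaround described above can be captured in a .reg file (the key path is taken from the answer above; apply at your own risk and remove it once the fixed VS update ships):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\VisualStudio\14.0_Config\FeatureFlags\TestingTools\UnitTesting\Taef]
"Value"=dword:00000000
```

Setting "Value" to 0 disables the TAEF-based test host, so vstest.console.exe falls back to the VS 2013 implementation.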
User contributions licensed under CC BY-SA 3.0
Source: https://windows-hexerror.linestarve.com/q/so35163843-vstestcode-coverage-broken-after-sp1-for-visual-studio-2015-installed
Overview.
A shell is a program that repeatedly: prompts the user for a command, reads the command, and executes it.
Your task in this 2-person group assignment is to write your own xyshell program, where x and y are the first initials of the people in your group.
(For more information about job control -- which you don't have to do except to support starting a program in the background -- watch this short 5 minute video.)
Details.
This project is to write an object-oriented C++ shell program. Each non-trivial object in the project should be written as a class. Your driver program for the project should be no more complicated than this:
#include "XYShell.h"

int main() {
   XYShell myShell;
   myShell.run();
}

One possible breakdown of the functionality is as follows:
To execute system programs, your shell should use the fork(), waitpid(), and execve() system calls: The fork() call clones the current process to create a child process that is a complete copy of its parent. The child process should then use execve() to execute the user's command.
If no ampersand has been given on the command-line, the parent should use waitpid() to wait until the child terminates; otherwise (an ampersand was given, indicating the command was to be run in the background), it should return to the top of its loop, prompt the user, and await their next command. You may also find the sched_yield() system call to be useful in synchronizing the parent and child processes.
You should read the UNIX manual pages (section 2) for the details of the various system calls listed above, especially what header files must be #include-d to use them. There are also may be WWW tutorials/examples available that you may find useful.
To illustrate:

    $ ./xyshell

should run your shell, which then displays its prompt and awaits a user command:

    /home/vtn2/proj/shell/$

When the user types a command, the program should perform that command and then prompt for the next one:

    /home/vtn2/proj/shell$ ps
      PID TTY          TIME CMD
     2208 pts/2    00:00:00 bash
     2428 pts/2    00:00:00 xyshell
     2485 pts/2    00:00:00 ps
    /home/vtn2/proj/shell/$

If the user types a command with an ampersand, the program should not wait before prompting for the next command:

    /home/vtn2/proj/shell$ ps &
    /home/vtn2/proj/shell$   PID TTY          TIME CMD
     2208 pts/2    00:00:00 bash
     2428 pts/2    00:00:00 xyshell
     2485 pts/2    00:00:00 ps
Those doing the course for honors:
Design and build an Environment class that captures your shell's environment, so that you can run X-Window programs. When your child process invokes execve(), send your Environment class a message that returns a char** vector containing the enviroment, and pass that vector as the third argument to execve().
Plan of Action.
0. Divide the work between you and your partner. One person should be responsible for the CommandLine class and the other should be responsible for the Path and Prompt classes. You should aim to have these classes done at the end of the first week. You should work together the second week to build your XYShell class integrating the other classes.
1. The person building CommandLine should read up on argc and argv (these are covered in C++: An Introduction to Computing, most C books, and a number of on-line sites), as well as the system calls that are useful for this class. Then implement the class.
2. The person building the Path class should read up on the system calls it uses. Then implement the class. Then do the same for the Prompt class.
3. Together, read up on fork(), execve(), and waitpid(). Design the algorithm for your run() method; then build your XYShell class (replacing X and Y with your initials). cd, exit, and pwd are the only built-in shell commands your shell needs to recognize.
Feel free to discuss the project, the Unix manual pages, and the system calls with myself or your classmates. You are not to look at anyone else's source code.
Your program should be reasonably efficient in terms of its use of both time and space. It should also be fully documented, with an opening (header) comment that gives all of the usual information (who, what, where, when, and why), descriptive identifiers, judicious use of white space, and in-line comments explaining any 'tricky' parts. Write hospitable code -- code that someone else is comfortable reading. Indent perfectly. Beauty counts!
Turn in:
Use the program script to create a file in which you exercise your program as necessary to show its functionality (e.g., ls, cd .., pwd, ls -a, ls -l /home/cs/, pwd, ps -ax &, ..., followed by exit). Include at least one invalid command to show that your program handles such commands gracefully.
Submit 3 things to
/home/cs/232/current/<yourid>/homework04/:
Only one of the partners in the group needs to turn in their submission, but make sure your login-ids and names are in the grading.txt file and in a comment in each submitted file.
Due date: Wednesday, Mar. 7, 11:59:59.999 p.m.
Source: https://cs.calvin.edu/courses/cs/232/assignments/homework04/homework.html
- Factories and the Recurring Need to Copy Objects
- Linked Java Structures
- Running the Supplied Code
Linked Java Structures
Linked structures represent one of those areas of programming that strike terror into the hearts of many developers! This of course harks back to the bad old days when procedural languages (such as C and Pascal) were the only development option. One company I worked for even went so far as to issue a blanket ban on the use of pointers in any of its products! So great was the fear of overwriting pointers or accidentally de-referencing null pointers.
In Java, there really aren't such things as pointers in an explicit sense. However, when you want to use linked Java data structures, you get as close to pointers as is possible with this language. For this reason, pointers (or links) provide a powerful tool in the hands of those who understand them.
Continuing the analogy with documents, let's create a simple node type that represents an individual document as illustrated in Listing 10.
public class ALinkedNode {
    private String descriptor;
    private int value;
    private ALinkedNode linkToNext;

    public ALinkedNode() {
        descriptor = null;
        value = 0;
        linkToNext = null;
    }

    public ALinkedNode(String newDescriptor, int newValue, ALinkedNode link) {
        descriptor = newDescriptor;
        value = newValue;
        linkToNext = link;
    }

    public void setDescriptor(String newDescriptor) {
        descriptor = newDescriptor;
    }

    public void setValue(int newValue) {
        value = newValue;
    }

    public void setLink(ALinkedNode newLink) {
        linkToNext = newLink;
    }

    public String getDescriptor() {
        return descriptor;
    }

    public int getValue() {
        return value;
    }

    public ALinkedNode getNext() {
        return linkToNext;
    }
}
Listing 10 A Simple Linked Node Class
Every instance of the ALinkedNode class has the following private data members:
- String descriptor;
- int value;
- ALinkedNode linkToNext;
An instance of the ALinkedNode class could represent a single document, such as an invoice. Each object of ALinkedNode has a textual descriptor, a value data member, and a pointer to the next instance of ALinkedNode in the list.
So, the invoice document could point to another document that details the work associated with the invoice. In other words, it's once again a simple document management system modeled as a linked list. Let's now see a class that uses the linked node class; let's look at a linked list class in Listing 11.
public class ALinkedList {
    private ALinkedNode head;

    public ALinkedList() {
        head = null;
    }

    public void addToTop(String descriptor, int value) {
        head = new ALinkedNode(descriptor, value, head);
    }

    public boolean deleteHeadNode() {
        if (head != null) {
            head = head.getNext();
            return true;
        } else
            return false;
    }

    public void displayList() {
        ALinkedNode position = head;
        System.out.println("Start of linked list:");
        while (position != null) {
            System.out.println(position.getDescriptor() + " " + position.getValue());
            position = position.getNext();
        }
        System.out.println("End of linked list.");
    }

    public boolean isEmpty() {
        return (head == null);
    }

    public static void main(String[] args) {
        ALinkedList list = new ALinkedList();
        list.addToTop("Invoice", 1);
        list.addToTop("Receipt", 2);
        list.addToTop("Statement", 3);
        list.displayList();
        list.deleteHeadNode();
        while (list.deleteHeadNode())
            ;
        list.displayList();
    }
}
Listing 11 A Linked List Class
The code in Listing 11 is fairly straightforward. Each instance of ALinkedList contains a private data member called head that points to the beginning of the linked list. All manipulation of the list class (ALinkedList) makes use of the head data member. Perhaps the most difficult use of head is in the method addToTop():
head = new ALinkedNode(descriptor, value, head);
This line instantiates a new object of the class ALinkedNode and assigns the current value of head to its linkToNext data member. It's instructive to consider the method deleteHeadNode(), which contains the code in Listing 12.
if (head != null) {
    head = head.getNext();
    return true;
} else
    return false;
Listing 12 Reassignment of head
It's perhaps not entirely obvious from Listing 12, but the following line goes to the very heart of Java:
head = head.getNext();
What happens here is that head is updated so that it points to the next item in the list. This means that the original item pointed to by head becomes unreachable. In C or C++, forgetting to free that node at this point would be a memory leak, an error that is often extremely difficult to find; in Java, the garbage collector will at some point reclaim the memory from this object. So, from your perspective, the original entity pointed to by head has simply disappeared.
The other methods in Listing 11 relate to displaying the linked list members and checking whether the list is empty.
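To see the head-reassignment semantics in isolation, here is a minimal, self-contained sketch. The Node class below is a stripped-down stand-in for ALinkedNode (immutable, no setters), not code from the listings above:

```java
public class HeadDemo {

    // Stripped-down node: a descriptor plus a link to the next node.
    static final class Node {
        final String descriptor;
        final Node next;

        Node(String descriptor, Node next) {
            this.descriptor = descriptor;
            this.next = next;
        }
    }

    public static void main(String[] args) {
        Node head = null;
        // Each addToTop-style insertion makes the new node point at the old head.
        head = new Node("Invoice", head);
        head = new Node("Receipt", head);
        head = new Node("Statement", head);

        for (Node p = head; p != null; p = p.next) {
            System.out.println(p.descriptor);
        }

        // deleteHeadNode(): the old head node becomes unreachable,
        // and the garbage collector will eventually reclaim it.
        head = head.next;
        System.out.println("new head: " + head.descriptor);
    }
}
```

Note that the most recently added descriptor prints first, because addToTop prepends rather than appends.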
Source: https://www.informit.com/articles/article.aspx?p=1075258&seqNum=2
Hello,
I found some strange information. I am using the Intel(R) Intrinsics Guide produced by Intel Software.
And I can read that :
Synopsis __m128d _mm_sqrt_sd (__m128d a)
#include "emmintrin.h"
Instruction:
sqrtsd CPUID Feature Flag: SSE2
Description:
Computes the square root of the lower double-precision floating-point value of a.
Operation:
r0 := sqrt(a0)
r1 := a1
Latency & Throughput Information
CPUID(s) Parameters Latency Throughput
0F_03 xmm, xmm 39 39
0F_02 xmm, xmm 38 38
06_2A xmm, xmm 22 22
06_25/2C/1A/1E/1F/2E xmm, xmm 34 30
06_17/1D xmm, xmm 31 25
06_0F xmm, xmm 60 57
06_0E xmm, xmm 58 57
06_0D xmm, xmm 58 57
Ok ! Interesting. But when I read emmintrin.h I saw that :
extern __m128d __ICL_INTRINCC _mm_sqrt_sd(__m128d, __m128d);
How is that possible? A scalar square root for doubles that takes two values...? What is the expected behavior?
Thanks for your answers
Regards
PS : Intel Parallel Studio XE 2013
Intel(R) Intrinsics Guide 2.8
Source: https://community.intel.com/t5/Intel-C-Compiler/Error-on-Documentation-or-in-Intel-Header-SSE-emmintrin-h/td-p/951754
Created on 2012-01-26 21:00 by haypo, last changed 2012-03-02 21:38 by haypo. This issue is now closed.
Attached patch adds an optional format argument to time.time(), time.clock(), time.wallclock(), time.clock_gettime() and time.clock_getres() to get the timestamp in a different format. By default, the float type is still used, but it will be possible to pass format="decimal" to get the timestamp as a decimal.Decimal object.
Some advantages of using decimal.Decimal instead of float:
- no loss of precision during conversion from base 2 to base 10 (converting the float to a string)
- the resolution of the clock is stored in the Decimal object
- for big numbers, Decimal doesn't lose precision
Using Decimal is also motivated by the fact that Python 3.3 now has access to clocks with a resolution of 1 nanosecond: clock_gettime(CLOCK_REALTIME). Well, it doesn't mean that the clock is accurate, but Python should not lose precision just because it uses floating point instead of integers.
About the API: I chose to add a string argument to allow maybe later to support user defined formats, or at least add new builtin formats. For example, someone proposed 128 bits float in the issue #11457.
--
The internal format is:
typedef struct {
time_t seconds;
/* floatpart can be zero */
size_t floatpart;
/* divisor cannot be zero */
size_t divisor;
/* log10 of the clock resolution; the real resolution is 10^resolution,
resolution is in range [-9; 0] */
int resolution;
} _PyTime_t;
I don't know if size_t is big enough to store any "floatpart" value: time.clock() uses a LARGE_INTEGER internally. I tested my patch on Linux 32 and 64 bits, but not yet on Windows.
The internal function encoding a timestamp to Decimal caches the resolution objects (10^resolution, common clock resolutions: 10^-6, 10^-9) in a dict, and a Context object.
I computed that 22 decimal digits should be enough to compute a timestamp of 10,000 years. Extract of my patch:
"Use 12 decimal digits to store 10,000 years in seconds + 9 decimal digits for the floating part in nanoseconds + 1 decimal digit to round correctly"
>>> str(int(3600*24*365.25*10000))
'315576000000'
>>> len(str(int(3600*24*365.25*10000)))
12
--
See also the issue #11457 which is linked to this topic, but not exactly the same because it concerns a low level function (os.stat()).
Windows code (win32_clock) was wrong in time_decimal-2.patch: it is fixed in patch version 3.
Some tests on Windows made me realize that time.time() has a resolution of 1 millisecond (10^-3) and not of a microsecond (10^-6) on Windows! It is time to use GetSystemTimeAsFileTime! => see issue #13845.
Can we pick an API for this functionality that does not follow the worst of design anti-patterns? Constant arguments, varying return type, hidden import, and the list can go on.
What is wrong with simply creating a new module, say "hirestime" with functions called decimal_time(), float_time(), datetime_time() and whatever else you would like. Let's keep the good old 'time' module simple.
Well, creating a separate module is an anti-pattern in itself. calendar vs. time vs. datetime, anyone?
I would instead propose separate functions: decimal_time, decimal_clock... or, if you prefer, time_decimal and so on.
On Fri, Jan 27, 2012 at 5:17 PM, Antoine Pitrou <report@bugs.python.org> wrote:
> Well, creating a separate module is an anti-pattern in itself. calendar vs. time vs. datetime, anyone?
Are you serious? Since the invention of structural programming,
creating a separate module for distinct functionality has been one of
the most powerful design techniques. If I recall correctly, most of
the original GoF patterns were about separating functionality into a
separate module or a separate class. The calendar module is indeed a
historical odd-ball, but what is wrong with time and datetime?
> Are you serious? Since the invention of structural programming,
> creating a separate module for distinct functionality has been one of
> the most powerful design techniques.
Yes, I'm serious, and I don't see what structural programming or design
patterns have to do with it.
And we're not even talking about "distinct functionality", we're talking
about the exact same functionality, except that the return type has
slightly different implementation characteristics. Doing a module split
is foolish.
> Constant arguments
What do you call a constant argument? "float" and "decimal"? You would prefer a constant like time.FLOAT_FORMAT? Or maybe a boolean (decimal=True)?
I chose a string because my first idea was to add a registry to support other formats, maybe user-defined formats, like the one used by Unicode codecs.
If we choose to not support other formats, but only float and decimal, a simpler API can be designed.
Another possible format would be "tuple": (intpart: int, floatpart: int, divisor: int), a low-level type used to "implement" other user-defined types. Using such a tuple, you have all information (clock value and clock resolution) without losing any.
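For illustration, here is a hedged Python sketch of how user code might turn such a (intpart, floatpart, divisor) tuple into higher-level types. The function names are hypothetical, not part of any patch:

```python
from decimal import Decimal

def timestamp_to_float(intpart, floatpart, divisor):
    # May lose precision, exactly as time.time() does today.
    return intpart + floatpart / divisor

def timestamp_to_decimal(intpart, floatpart, divisor):
    # Keeps full precision for power-of-10 divisors; other divisors
    # are subject to the precision of the active decimal context.
    return Decimal(intpart) + Decimal(floatpart) / Decimal(divisor)
```

For example, timestamp_to_decimal(1327960000, 123456, 10**6) yields Decimal('1327960000.123456') with no rounding, which a float cannot represent exactly.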
> varying return type
I agree that it is something uncommon in Python. I know os.listdir(bytes)->bytes and os.listdir(str)->str. I suppose that there are other functions with a different result type depending on the input.
I am not attached to my API, it was just a proposition.
> hidden import
Ah? I wouldn't call it hidden because I don't see how a function can return a decimal.Decimal object without importing it. If you consider that it is surprising (unexpected), it can be documented.
> and the list can go on.
What else?
> What is wrong with simply creating a new module, say "hirestime"
> with functions called decimal_time(), float_time(), datetime_time()
> and whatever else you would like.
Hum, adding a new module would require duplicating code. The idea of adding an argument is also to simplify the implementation: most code is shared. We can still share a lot of code if we choose to add a new function in the time module instead of adding a new argument to existing functions.
> Let's keep the good old 'time' module simple.
What is complex in my patch? It doesn't break backward compatibility and should have a low (or null) overhead in runtime speed if the format is not set.
--
I noticed something surprising in my patch: "t1=time.time("decimal"); t2=time.time("decimal"); t2-t1" returns something bigger than 20 ms... That's because the "import decimal" is done after reading the first clock value, and not before.
Patch version 4, minor update:
- remove the resolution field of _PyTime_t and remove int_log10(): use 1/divisor as the resolution to support divisors different from a power of 10 (e.g. the CPU frequency on Windows)
- inline and remove _PyTime_FromTimespec()
> Another possible format would be "tuple"
Or, I forgot an obvious format: "datetime"!
Version 5:
- add "datetime" and "timespec" formats: datetime.datetime object and (sec: int, nsec: int)
- add timestamp optional format to os.stat(), os.lstat(), os.fstat(), os.fstatat()
- support passing the timestamp format as a keyword: time.time(format="decimal")
I am not really convinced by the usefulness of the "timespec" format, but it was just an example for #11457.
The "datetime" format is surprising for time.clock() and time.wallclock(), these timestamps use an arbitrary start. I suppose that time.clock(format="datetime") and time.wallclock(format="datetime") should raise a ValueError.
On Sun, Jan 29, 2012 at 6:42 PM, STINNER Victor <report@bugs.python.org> wrote:
..
> What do you call a constant argument? "float" and "decimal"?
> You would prefer a constant like time.FLOAT_FORMAT?
> Or maybe a boolean (decimal=True)?
Yes. This was explained on python-dev not so long ago:
The problem is not with the type of the argument (although boolean or
enum argument type is often a symptom pointing to the issue.) The
issue is that an argument that is never given a variable value at the
call site is usually a sign of an awkward API. For example, what
would you prefer:
math('log', x)
or
math.log(x)
?
>
> I chose a string because my first idea was to add a registry to support other format,
> maybe user defined formats, like the one used by Unicode codecs.
With all my respect for MAL, codecs are not my favorite part of the
python library.
One possibility (still awkward IMO) would be to use the return type as
the format specifier. This would at least require the user to import
datetime or decimal before calling time() with corresponding format.
Few users would tolerate I/O delay when they want to get time with
nanosecond precision.
> One possibility (still awkward IMO) would be to use the return type as
> the format specifier.
Yeah, I already thought of this idea. The API would be:
- time.time(format=float)
- time.time(format=decimal.Decimal)
- time.time(format=datetime.datetime)
- time.time(format=?) # for timespec, but I don't think that we need timespec in Python, which is an object-oriented language; we can use better than low-level structures
- os.stat(path, format=decimal.Decimal)
- etc.
I have to write a function checking that obj is decimal.Decimal or datetime.datetime without importing the module. I suppose that it is possible by checking obj's type (it must be a class) and then obj.__module__.
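One way to do such a check, sketched in Python (the helper name is made up): if the caller passed the class object itself, its module is necessarily already imported, so no hidden import is needed:

```python
def is_class_named(obj, module, name):
    # True if obj is the class `module`.`name`, checked without
    # importing that module: we only inspect the class object
    # the caller already holds.
    return (isinstance(obj, type)
            and obj.__module__ == module
            and obj.__name__ == name)
```

Usage: is_class_named(decimal.Decimal, "decimal", "Decimal") is True, while is_class_named(float, "decimal", "Decimal") is False.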
> This would at least require the user to import
> datetime or decimal before calling time() with corresponding
> format.
Another possibility is what I proposed before in the issue #11457: take a callback argument.
The callback prototype would be:
def myformat(seconds, floatpart, divisor):
return ...
Each module can implement its own converter and time can provide some built-in converters (because I don't want to add something related to time in the decimal module, for example).
But I don't really like this idea because it requires deciding the API of the low-level structure of a timestamp (which may change later), and it doesn't really solve the issue of "import decimal" if the converter is in the time module.
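For illustration, here is a possible converter following the prototype above. The UTC epoch handling and the microsecond truncation are my assumptions, not part of the proposal:

```python
import datetime

def myformat(seconds, floatpart, divisor):
    # Hypothetical callback turning the low-level triple into an
    # aware datetime. Truncating to whole microseconds is a choice
    # made for this sketch, not something the proposal specifies.
    epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
    micro = floatpart * 1_000_000 // divisor
    return epoch + datetime.timedelta(seconds=seconds, microseconds=micro)
```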
On Mon, Jan 30, 2012 at 6:15 PM, STINNER Victor <report@bugs.python.org> wrote:
Patch version 6:
- timestamp format is now a type instead of a string, e.g. time.time(int)
- add int and datetime.timedelta formats, remove timespec format
- complete the documentation
- fix integer overflows, convert correctly time_t to PyLong
There are now 5 timestamp formats:
- int
- float
- decimal.Decimal
- datetime.datetime
- datetime.timedelta
I consider the patch as ready to be committed, or at least ready for a review ;-) There is no more FIXME or known limitation. Well, now the most important part is to decide the API and the list of timestamp formats.
The patch should be tested on Linux, FreeBSD and Windows, 32 and 64
bits to check assertions on type sizes:
assert(sizeof(clock_t) <= sizeof(size_t));
assert(sizeof(LONGLONG) <= sizeof(size_t));
assert(sizeof(time_t) <= sizeof(PY_LONG_LONG));
Hum, it looks like _PyTime_AsDecimal() is wrong if ts->divisor is not a power of 10. The exponent must be computed with Context(1), not Context(26). Something simpler can maybe be used; I don't even know the decimal API.
Patch version 7:
- Drop datetime.datetime and datetime.timedelta types
- Conversion to decimal now uses a context with 1 digit to compute
exponent=1/denominator to avoid issue on t.quantize(exponent)
- Rename the "format" argument to "timestamp" in the time module
- Rename _PyTime_AsFormat() to _PyTime_Convert()
- Update the doc
As expected, size_t is too small on Windows 32 bits.
Patch version 8: _PyTime_t uses Py_LONG_LONG if available, instead of size_t, for numerator and denominator.
(Resend patch version 8 without the git diff format to support review on Rietveld.)
Oops, win32_pyclock() was disabled (for tests) in patch version 8. Fixed in version 9.
Patch version 10:
- deprecate os.stat_float_times()
- fix docstring of os.*stat() functions
- add a reference to the PEP
- add a comment to indicate that _PyTime_gettimeofday() ignores
integer overflow
Even if some people dislike the idea of adding datetime.datetime type, here is a patch implementing it (it requires time_decimal-XX.patch). The patch is at least a proof-of-concept that it is possible to change the internal structure without touching the public API.
Example:
$ ./python
>>> import datetime, os, time
>>> open("x", "wb").close(); print(datetime.datetime.now())
2012-02-04 01:17:27.593834
>>> print(os.stat("x", timestamp=datetime.datetime).st_ctime)
2012-02-04 00:17:27.592284+00:00
>>> print(time.time(timestamp=datetime.datetime))
2012-02-04 00:18:21.329012+00:00
>>> time.clock(timestamp=datetime.datetime)
ValueError: clock has an unspecified starting point
>>> print(time.clock_gettime(time.CLOCK_REALTIME, timestamp=datetime.datetime))
2012-02-04 00:21:37.815663+00:00
>>> print(time.clock_gettime(time.CLOCK_MONOTONIC, timestamp=datetime.datetime))
ValueError: clock has an unspecified starting point
As you can see: conversion to datetime.datetime fails with ValueError('clock has an unspecified starting point') for some functions, sometimes depending on the function argument (clock_gettime).
Hum, time_decimal-10.patch contains a debug message:
+ print("la")
- return posix_do_stat(self, args, "O&:stat", STAT, "U:stat", win32_stat_w);
+ return posix_do_stat(self, args, kw, "O&|O:stat", STAT, "U:stat", win32_stat_w);
The second format string should also be updated to "U|O:stat".
Updated patch (version 11).
os.stat().st_birthtime should depend on the timestamp argument.
A timestamp optional argument should also be added to os.wait3() and os.wait4() for the utime and stime fields of the rusage tuple.
I created the issue #13964 to cleanup the API of os.*utime*() functions.
fill_time() should use denominator=1 if the OS doesn't support timestamp with a subsecond resolution. See also issue #13964.
Patch version 12:
* os.stat().st_birthtime uses also the timestamp argument
* Add an optional timestamp argument to os.wait3() and os.wait4(): change type of utime and stime attributes of the resource usage
* os.stat() changes the timestamp resolution depending if nanosecond resolution is available or not
I realized that resource.getrusage() should also be modified. I will maybe do that in another version of the patch, or maybe change resource usage in another patch.
Patch version 13:
- os.utime(path) sets the access and modification time using the current time with a subsecond resolution (e.g. microsecond resolution on Linux)
- os.*utime*() functions use the _PyTime_t type and functions
- add many functions to manipulate timeval, timespec and FILETIME types with _PyTime_t, add _PyTime_SetDenominator() function for that
- coding style: follow PEP 7 rules for braces
So more functions (including os.*utime*()) "accept" Decimal, but using an implicit conversion to float.
Patch version 14:
- rewrite the conversion from float to _PyTime_t: use base 2 with high precision and then simplify the fraction. The conversion from decimal.Decimal uses base 10 and also simplifies the fraction.
- write tests on functions converting _PyTime_t using _testcapi
- add timestamp argument to signal.getitimer(), signal.setitimer(), resource.getrusage()
- signal.sigtimedwait() uses _PyTime_t and now expects a number and no longer a tuple (function added to Python 3.3, so no backward compatibility issue). See also issue #13964
- time.sleep() uses _PyTime_t. See also the issue #13981 (use nanosleep())
- datetime.datetime.now() and datetime.datetime.utcnow() use _PyTime_t
- catch integer overflow on _PyTime_AsTimeval(), _PyTime_AsTimespec() and more
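The "base 2 with high precision, then simplify" conversion described above can be mimicked in pure Python. This is an illustrative sketch (the function name is made up), not the patch's C code:

```python
from fractions import Fraction

def float_to_timestamp(x):
    # float.as_integer_ratio() returns the exact base-2 fraction
    # behind the float; Fraction reduces it to lowest terms,
    # mirroring the "high precision then simplify" approach.
    frac = Fraction(*x.as_integer_ratio())
    return frac.numerator, frac.denominator
```

For example, float_to_timestamp(1.5) gives (3, 2) exactly, while float_to_timestamp(0.1) exposes the underlying binary approximation rather than the decimal 1/10.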
This patch gives you an overview of the whole PEP 410 implementation, but it should not be applied in one shot. It would be better to commit it step by step:
- add _PyTime_t API
- use _PyTime_t for the time module
- use _PyTime_t for the os module
- use _PyTime_t for the more modules
- etc.
We can start by adding the API and use it in the time module, and then rediscuss changes on other modules.
Patch version 15:
- round "correctly"
- datetime.date.fromtimestamp() and datetime.datetime.fromtimestamp() reuse the _PyTime_t API to support decimal.Decimal without loss of precision
- add more tests
(Oops, I attached the wrong patch.)
New try, set the version to 16 to avoid the confusion.
test_time is failing on Windows.
Here is the version 17 of my patch. This version is mostly complete and so can be reviewed. Summary of the patch:
- Add a _PyTime_t structure to store any timestamp in any resolution, universal structure used by all functions manipulating timestamps instead of C double to avoid loss of precision
- Add many functions to create timestamp (set _PyTime_t structure) or to get a timestamp in a specific format (int, float, Decimal, timeval or timespec structure, in milliseconds, etc.)
- Round to nearest with ties going away from zero (rounding method called "ROUND_HALF_UP" in Decimal)
- Functions creating timestamps get a new optional timestamp argument to specify the requested return type, e.g. time.time(timestamp=int) returns an int
- Functions taking timestamp arguments now also support decimal.Decimal
- Raise an OverflowError instead of a ValueError if a timestamp cannot be stored in a C time_t type
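As an aside, the chosen rounding mode is easy to demonstrate with the decimal module itself (a sketch, not part of the patch):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_timestamp(t, digits):
    # ROUND_HALF_UP: round to nearest with ties going away from
    # zero, the mode the patch adopts for timestamp conversions.
    return t.quantize(Decimal(10) ** -digits, rounding=ROUND_HALF_UP)
```

With this mode, both 1.25 and -1.25 round to one digit as 1.3 and -1.3: ties move away from zero in both directions.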
The patch is huge, but as I wrote before, I will split it into smaller parts:
- Add _PyTime_t API
- Use the new API in the time module
- Use the new API in the os module
- etc.
Changes in the version 17 of my patch:
- tested on Linux 32/64 bits, OpenBSD 64 bits, FreeBSD 64 bits, Windows 64 bits
- fix portability issues (for various time_t and C long sizes)
Patch version 18:
- Fix a loss of precision in _PyTime_SetDenominator()
- Add more tests on integer overflow
I also updated the patch adding datetime.datetime support because some people are interested in the type, even though I don't think that it is interesting to add it. datetime.datetime is only usable with time.time(), os.*stat() and time.clock_gettime(), whereas it is incompatible with all other functions.
TODO:
- the conversion from Decimal to _PyTime_t still uses a cast to float and so loses precision
- the PEP must be accepted :-)
The PEP has been rejected, so I close the issue.
http://bugs.python.org/issue13882
Preface: This document was originally written in 2003, before the IRI spec was an RFC. Some of this has since been addressed in the RFC.
Summary: There is a discrepancy between the namespaces and URI specs about which identifiers are equivalent. The only reason this has not caused a problem is that the problematic case (two equivalent but not identical Unicode character sequences being used) has not occurred in practice. Maliciously crafted IRIs could, however, deliberately introduce a bug which could cause a security problem.
Using relationship notation (why not use N3?) to discuss the inconsistencies between some current thinking about IRIs, URIs, and, for example, namespace names.
1. URI identity is shared by all parties. Within a given context (*), there is a single (inverse functional) relationship uri(x, a) between an ASCII string a and the thing x it identifies when taken as a URI.
2. For users of any specification which mentions URIs: when one can prove that two URIs are equivalent by reading the [scheme-independent] specs, one can use one in place of the other. That is, when URI (or IRI) strings are deemed "equivalent" then they must refer to the same object.
3. We should be able to use the same software to parse and compare URIs wherever they are used, eg in namespace names or in hypertext links.
Let us formalize the concepts in the documents we are talking about.
uri(x,a) => A(a)
where A(a) means that it is a sequence of ASCII characters (grounded in ANSI X3.4-1986).
The ANSI spec gives a 1:1 mapping ascii(a, s) from the set A of ASCII character to the set S of septets (integers between 0 and 127 inclusive).
Let sames(s1, s2) be the "strcmp" relation between two strings which are septet for septet identical.
Consider the equivalence relation ea(a1, a2) which we use here to indicate that two uris identify the same thing. It (is symmetric and transitive and) has properties
ea(a1, a2) & uri(t1, a1) => uri(t1, a2)
for all a1, a2 (for some t uri(t, a1) & uri(t, a2)) <=> ea(a1, a2)
A(a) <=> ea(a,a)
Now in fact we are going to deal with the ASCII-encoded septets, for which a similar equivalence holds
es(s1, s2) <=> exists a1, a2 such that ascii(a1, s1) & ascii(a2, s2) & ea(a1, a2)
The URI spec mentions two uses of hexadecimal encoding. Hex encoding relates octet strings to septet strings. When the URI spec was written, the significance of the octets greater than 127 was not defined.
It implies that if you see %HH in a URI you should consider it as an encoding of an octet. There is (at the level of this spec) the notion that the URI is an encoding of a string of octets. Those from 0-127 are considered as representing ASCII characters. There is no assumption about what the others represent. The IRI spec will later take advantage of this.
hexify(s1, s2) is true if the difference, if any, between s1 and s2 is only that one or more characters in s1 are replaced in s2 by their %HH or %hh encoding, and ascii(s2).
ascii(s) => hexify(s, s)
hexify(s, s)
There are another 128 characters in this notional "extended" set, each of which has a hex encoding.
(DanC: hexify(s + c, s + hexify(c))
hexify('A') = '%41'
corollary: hexify(s1, s2) => ascii(s2))
I take hexify to be a subrelation of equality. That is, the URI spec authorizes one to use s2 where you would have used s1. In some cases, such as 7-bit transports like HTTP, you have to. It is important that hexification preserves the identity of the resource.
hexify(t, s1) & hexify(t, s2) => es(s1, s2)
{ for some s, hexify(t1, s) & hexify(t2, s) } <=> et(t1, t2)
Note that equivalence is preserved by the interchange of "%20" with " ", but not by interchange of "%2F" with "/".
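That asymmetry can be sketched in Python: unescaping preserves identity only for octets outside the reserved set. The reserved set below follows RFC 3986, and the helper is purely illustrative:

```python
# Characters whose escaped and unescaped forms identify different
# things (RFC 3986 gen-delims and sub-delims): %2F is NOT "/".
RESERVED = set(":/?#[]@!$&'()*+,;=")

def unhexify(s):
    # Undo %HH escapes except where the escaped octet is reserved
    # or non-ASCII: "%20" becomes " ", but "%2F" stays "%2F".
    hexdigits = "0123456789abcdefABCDEF"
    out, i = [], 0
    while i < len(s):
        if (s[i] == "%" and i + 2 < len(s)
                and s[i + 1] in hexdigits and s[i + 2] in hexdigits):
            ch = chr(int(s[i + 1:i + 3], 16))
            if ord(ch) < 128 and ch not in RESERVED:
                out.append(ch)
                i += 3
                continue
        out.append(s[i])
        i += 1
    return "".join(out)
```

Two strings are then "hexify-equivalent" exactly when they unhexify to the same septet string.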
URI encoding maps octets into URIs
@@ relative
rel(s, b, r) is a many-many relation between ASCII strings, meaning that r is a relative URI reference for s relative to b. The implication of the spec is
rel(s1, b, r) . rel(s2, b, r) => e(s1, s2)
abs(s) <=> forAll b: rel(s, b, s)
UTF-8 [Unicode 3.2] gives us a relation utf8(i, s)
Note by the way that
ascii(s) => utf8(s,s)
utf8(i, s) is true if i is a string of unicode characters, and s is an extended ASCII string of octets, and the relationship is as specified in the utf-8 specification.
sameu(i1, i2)
is true whenever the two unicode strings convey exactly the same series of glyphs and/or control characters. There are strings which are not identical but which are equivalent in this sense.
This says that (basically, with some work on corner cases etc.) there should be a convention that any 8-bit string which is not ASCII but which can be interpreted as a UTF-8 encoding should be interpreted as one.
What does that mean? I take it to mean that you can encode it and de-encode it.
There is a canonicalization function which the IRI spec uses, defined in @@, which allows a particular canonical form
ucan(i,i)
Axioms are that it is a function:
ucan(i, j1) . ucan(i, j2) => strcmp(j1, j2)
for all i: ucan(i, i)
e(s1, s2).
There is a function (not 1:1) which we define as
iri_uri(i, s) <=> for some j, t: ucan(i, j). utf8(j, t). hexify(t, s)
IRIs are defined as the domain of that function, where the range is URIs. An IRI is any unicode string which, when canonicalized and utf-8 encoded and hexified, is a URI.
There is a uri equivalent to every iri. There is NOT an IRI for every 8-bit string t. There is at least one IRI for every URI: itself.
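The ucan → utf8 → hexify pipeline corresponds closely to the standard IRI-to-URI mapping. A Python sketch, assuming Unicode NFC as the canonical form ucan and an approximate safe-character set for hexify:

```python
import unicodedata
from urllib.parse import quote

def iri_to_uri(iri):
    # ucan: NFC normalization stands in for the canonicalization
    # function; utf8: encode to octets; hexify: %HH-encode every
    # octet that is not a safe ASCII character.
    canonical = unicodedata.normalize("NFC", iri)
    octets = canonical.encode("utf-8")
    # quote() leaves unreserved ASCII and the listed "safe"
    # characters alone and percent-encodes everything else,
    # including all non-ASCII octets.
    return quote(octets, safe="!#$%&'()*+,-./:;=?@_~")
```

Note that NFC also collapses decomposed sequences: "e" plus a combining acute accent maps to the same URI as the precomposed "é", which is exactly the equivalence the namespaces spec fails to acknowledge.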
For requirement 2, equivalent IRIs must identify the same thing:
iri_uri(i1, s1). iri_uri(i2, s2). sameu(i1, i2) => e(s1, s2)
The namespaces specification 2.3 talks about identifiers being different. Specifically, "" and "" are different. Let's call these constant strings D1 and D2 for short.
ne(D1, D2)
Now "difference" is something which allows them for example to occur as different attributes in an XML element. It seems to me that this is ne is the negation of e. It is the common understanding of differentness such that two things can't be both different and the same. To make it otherwise would be very confusing and would prevent (3).
ne(s1, s2) => ~e(s1, s2)
Ouch. We have one spec saying that these are different, and another saying that they are the same.
That isn't logically compatible. The whole layering of the different forms of equality described in Tim Bray's draft finding is of the form
e_uri(s,t) => e(s,t)
e_http(s,t) => e_uri(s,t)
and so on. None of the specs until namespaces say "these are different".
So if you accept the requirements above, and you accept any of the equivalences, we have to throw out that part of XML namespaces.
In general there are three ways of operating:
1. Ignore the equivalences, like the namespace spec. This causes a bug if anyone uses two identifiers which are different strings but equivalent. The only practical way of doing that is to make any non-canonical IRIs or URIs illegal. This means IRIs cannot be used except in their trivial URI form.
2. Transmit in any form, receiver makes right. The receiver must compare equivalence-aware or must canonicalize before internal use (which has the same effect).
3. Make IRIs be just unicode strings. Scratch the axiom that hexifying leaves a valid and equivalent IRI. Allow the hexified forms to be used to identify quite different things, in IRIs. Allow IRIs to be converted into URIs, but do NOT allow any place where URIs and IRIs can be used interchangeably. This works toward a DanC-proposed world of unicode character string comparison. It does not allow a smooth transition for existing browsers etc. which mix URIs and IRIs.
There are NOT very many actual uses of D1 and D2, because there aren't really any motivations for making them.
- This is why we haven't had a big problem recently.
There ARE motivations for using (non-URI) IRIs. People are in fact using them, though maybe not for namespaces yet.
- This is why endorsing IRIs forces us to fix this.
There ARE lots of applications which canonicalize URIs in various ways.
There IS software which compares namespaces character-for-character.
There are NOT many if any uses of different IRIs or different URIs for the same namespace.
We should continue the recommendation not to use URIs or IRIs which are equivalent but arbitrarily different strings. The easiest way of ensuring this is to use a canonical form. We can therefore deprecate the transmission or use of non-canonical forms.
We should switch as soon as possible to canonicalizing IRIs in all applications before comparison (or using equivalence-aware comparisons). The Namespaces spec should change to say when things are the same. The constraint in XML that attributes cannot occur twice should be made more complicated. It should say that you can't have two occurrences of the same attribute name, or two attributes which are equivalent in any way, leaving, I regret, some fuzziness. For example, you can't use the xhtml1.0 and xml1.1 namespaces in the same document to put two src attributes on an image! They are not even the same namespace, but clearly they are equivalent at the application level. It should be clear that the fact that strings are different is not a guarantee that the namespaces are different. The parser just isn't expected to spot this. But I think the parser ought to be allowed to consistently canonicalize. That makes life much easier for the application. DanC wanted to be able to do strcmp, and he can if the parser canonicalizes.
We should then in a few years be able to relax the constraint on not transmitting multiple different forms.
We need a very good IRI canonicalization test suite.
We should formalize with names the various functions above, and make sure there are good working coded implementations of them in the major languages. A standard API will help. URI working group stuff.
timbl
2003/04
context
The foundational architecture of the web is that there is a global context common to all publicly published documents, in which each URI is agreed by everyone to identify the same thing. In practice, of course, things break and people are confused and misled. Those making formal systems often restrict the scope of data to that in which this ideal approximation can be taken to hold in practice as well as in theory.
The fact that the use of URIs varies with time (sad but true) (we are NOT talking about living documents or concepts whose representations change here, but really reuse of the same URI for a totally different concept) means that to model things over a relatively long time one might want to model the time-varying nature:
u(x, s, t)
This time modelling can be done and has been done in many ways, but is not addressed here.
http://www.w3.org/2003/04/iri
David Daney <ddaney@avtrex.com> writes:
> Richard Sandiford wrote:
>> David Daney <ddaney@avtrex.com> writes:
>>> Ralf Baechle wrote:
>>>> On Wed, Jun 11, 2008 at 10:04:25AM -0700, David Daney wrote:
>>>>
>>>>> The third operand to 'ins' must be a constant int, not a register.
>>>>>
>>>>> Signed-off-by: David Daney <ddaney@avtrex.com>
>>>>> ---
>>>>> include/asm-mips/bitops.h | 6 +++---
>>>>> 1 files changed, 3 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/include/asm-mips/bitops.h b/include/asm-mips/bitops.h
>>>>> index 6427247..9a7274b 100644
>>>>> --- a/include/asm-mips/bitops.h
>>>>> +++ b/include/asm-mips/bitops.h
>>>>> @@ -82,7 +82,7 @@ static inline void set_bit(unsigned long nr, volatile
>>>>> unsigned long *addr)
>>>>> "2: b 1b \n"
>>>>> " .previous \n"
>>>>> : "=&r" (temp), "=m" (*m)
>>>>> - : "ir" (bit), "m" (*m), "r" (~0));
>>>>> + : "i" (bit), "m" (*m), "r" (~0));
>>>>> #endif /* CONFIG_CPU_MIPSR2 */
>>>>> } else if (cpu_has_llsc) {
>>>>> __asm__ __volatile__(
>>>> An old trick to get gcc to do the right thing. Basically at the stage when
>>>> gcc is verifying the constraints it may not yet know that it can optimize
>>>> things into an "i" argument, so compilation may fail if "r" isn't in the
>>>> constraints. However we happen to know that due to the way the code is
>>>> written gcc will always be able to make use of the "i" constraint so no
>>>> code using "r" should ever be created.
>>>>
>>>> The trick is a bit ugly; I think it was used first in asm-i386/io.h ages
>>>> ago
>>>> and I would be happy if we could get rid of it without creating new
>>>> problems.
>>>> Maybe a gcc hacker here can tell more?
>>> It is not nice to lie to GCC.
>>>
>>> CCing GCC and Richard in hopes that a wider audience may shed some light on
>>> the issue.
>>
>> You _might_ be able to use "i#r" instead of "ri", but I wouldn't
>> really recommend it. Even if it works now, I don't think there's
>> any guarantee it will in future.
>>
>> There are tricks you could pull to detect the problem at compile time
>> rather than assembly time, but that's probably not a big win. And again,
>> I wouldn't recommend them.
>>
>> I'm not saying anything you don't know here, but if the argument is
>> always a syntactic constant, the safest bet would be to apply David's
>> patch and also convert the function into a macro. I notice some other
>> ports use macros rather than inline functions here. I assume you've
>> deliberately rejected macros as being too ugly though.
>
> I am still a little unclear on this.
>
> To restate the question:
>
> static inline void f(unsigned nr, unsigned *p)
> {
> unsigned short bit = nr & 5;
>
> if (__builtin_constant_p(bit)) {
> __asm__ __volatile__ (" foo %0, %1" : "=m" (*p) : "i" (bit));
> }
> else {
> // Do something else.
> }
> }
> .
> .
> .
> f(3, some_pointer);
> .
> .
> .
>
> Among the versions of GCC that can build the current kernel, will any
> fail on this code because the "i" constraint cannot be matched when
> expanded to RTL?
Someone will point this out if I don't, so for avoidance of doubt:
this needs to be always_inline. It also isn't guaranteed to work
with "bit" being a separate statement. I'm not truly sure it's
guaranteed to work even with:
__asm__ __volatile__ (" foo %0, %1" : "=m" (*p) : "i" (nr & 5));
but I think we'd try hard to make sure it does.
I think Maciej said that 3.2 was the minimum current version.
Even with those two issues sorted out, I don't think you can
rely on this sort of thing with compilers that used RTL inlining.
(always_inline does go back to 3.2, in case you're wondering.)
Richard
https://www.linux-mips.org/archives/linux-mips/2008-06/msg00137.html
One of the powerful capabilities in WMI is allowing authenticated users and applications to perform management tasks on a remote computer through DCOM. This is particularly useful in the fan-out scenario where developers can write applications to monitor a group of workstations and servers from a single machine using WMI.
For example, a developer who wants to write an application to collect the free disk space on all computers in an organization can use one of the WMI client APIs – WMI COM API, System.Management in .NET, WMI Scripting API, or WMI PowerShell cmdlets – to collect the information from all the machines. Among these four client APIs, WMI COM API offers developers the greatest control of the way a WMI remote connection is established and closed.
In this blog, I will talk about three different ways of connecting to a remote machine using WMI to perform multiple WMI operations, and their performance differences. Suppose I have ten machines and I need to periodically poll the status of the hard drives on each machine using WMI. I can choose one of the following approaches to write my application:
1) Connecting using Explicit Credential
My application calls IWbemLocator::ConnectServer method to establish a DCOM connection with each remote machine using an explicit credential I specified. The credential could be the local administrator of the remote machine. The application polls the hard drive status through a WMI provider and then closes the connection. The connect-poll-close process is repeated continuously to get the fresh status from all remote machines.
2) Connecting using Default Credential
Like (1) above, my application connects to each remote machine, polls the hard drive status through a WMI provider, and closes the connection. The only difference in this approach is that no explicit credentials are specified when calling the ConnectServer method. The application uses the default credential, which could be a domain user that has administrative access to the remote machines.
3) Reusing WMI Connection
My application calls the ConnectServer method to connect to the remote machines either with explicit credential or default credential specified, and then it polls the status of the hard drives through a WMI provider. Instead of closing the connection, my application keeps it open for subsequent polls by holding on to the reference to the IWbemServices object.
Let’s find out what the differences are for these three scenarios in terms of WMI throughput [1]. I set up ten machines with a variety of Windows OS installed. Each machine is joined to the same domain and has the same hardware configuration [2]. Instead of polling the hard drive status, my WMI application repeatedly connects to each of the ten machines, and performs the same WMI object query operation on a test WMI provider. To minimize the variation due to the provider processing time, my test WMI provider always returns the same set of results for the query I use on any given machine. Figure 1 shows the average throughput of the client-side OS platforms on which my WMI application is run.
Figure 1: Fan out Remoting Throughput using WMI
Of course the absolute values of the throughput may vary depending on different factors, such as the payload of the query results, the network latency, the processor speed, etc. However, there are a couple of interesting observations in this fan-out experiment:
a) Of all three client-side OS platforms I tested, if my application uses the same WMI connection throughout the life time of the WMI operations (which is approach 3 mentioned above), the throughput measured is almost 3 times greater than if the application opens and closes the connection for each WMI operation. This is due to fewer authentication requests that the client-side OS needs to process.
b) On Windows Server 2008 only, my WMI application using default credential has throughput nearly 6 times higher compared to my application using explicit credential. Moreover, the lsass.exe process consumes considerable CPU cycles when connecting to the remote machines using explicit credential. The performance delta is partly due to a security change made in Windows Vista and Windows Server 2008.
c) On Windows Server 2008 with explicit credential, I am able to improve the throughput of my application to the level comparable to the throughput on the other two client-side OS platforms by changing the way I pass the explicit credential to my test code. Here is the pseudo-code of my change:
1. Call LogonUser function with an explicit credential
2. Call ImpersonateLoggedOnUser function with the token returned in (1)
3. Call ConnectServer to connect to remote machines using default credential
4. Perform the WMI object query
5. Close the connection
Best of all, the change keeps the lsass.exe process running like cool mint.
Andy Cheung [MSFT]
PS: Thanks to Vivek and Sunil for the help on this post.
[1] The unit of measurement for throughput I use in this blog is operation per second. An operation consists of connecting to a remote server, querying WMI objects, parsing the result set, and closing the connection. In the “Reusing WMI Connection” approach, the cost of establishing and closing a connection is incurred only once.
[2] All machines are Pentium 4, 2.4GHz, 1GB RAM. The network is Gigabit. All client-side OS platforms are 64-bit.
Is there a way to "hold" the connection when using VBscript and/or PowerShell? Thx…
In VBScript, you can reuse the SWbemLocator object to achieve higher performance when connecting to the same remote WMI namespace. If you are using WMI Powershell Cmdlets, you cannot "hold" the connection as WMI Cmdlets do not expose any references to the connection. In the managed code world, you can reuse the ManagementScope object. HTH
I encountered problems with LSASS CPU usage under Windows 2008, when running remote WMI queries from a dotnet application.
Unfortunately, reusing a connected ManagementScope object did not help. The LSASS CPU usage rises at every call of ManagementObjectSearcher::Get().
Was this LSASS performance problem ever fixed for Windows 2008?
https://blogs.msdn.microsoft.com/wmi/2009/06/26/wmi-improving-your-wmi-application-performance-in-fan-out-scenario/
interpretAsHandler :: (forall t. Event t a -> Event t b) -> AddHandler a -> AddHandler b
Simple way to write a single event handler with functional reactive programming.
Building event networks with input/output
After having read all about
Events and
Behaviors,
you want to hook them up to an existing event-based framework,
like
wxHaskell or
Gtk2Hs.
How do you do that?
This Reactive.Banana.Frameworks module allows you to obtain input events from external sources, and it allows you to perform output in reaction to events.
In contrast, the functions from Reactive.Banana.Model allow you to express the output events in terms of the input events. This expression is called an event graph.
An event network is an event graph together with inputs and outputs.
To build an event network,
describe the inputs, outputs and event graph in the
NetworkDescription monad
and use the
compile function to obtain an event network from that.
To activate an event network, use the
actuate function.
The network will register its input event handlers and start producing output.
A typical setup looks like this:
main = do
    -- initialize your GUI framework
    window <- newWindow ...
    -- describe the event network
    let networkDescription :: forall t. NetworkDescription t ()
        networkDescription = do
            -- input: obtain Event from functions that register event handlers
            emouse    <- fromAddHandler $ registerMouseEvent window
            ekeyboard <- fromAddHandler $ registerKeyEvent window
            -- input: obtain Behavior from changes
            btext     <- fromChanges "" $ registerTextChange editBox
            -- input: obtain Behavior from mutable data by polling
            bdie      <- fromPoll $ randomRIO (1,6)
            -- express event graph
            let behavior1 = accumB ...
                ...
                event15   = union event13 event14
            -- output: animate some event occurrences
            reactimate $ fmap print event15
            reactimate $ fmap drawCircle eventCircle
    -- compile network description into a network
    network <- compile networkDescription
    -- register handlers and start producing outputs
    actuate network
In short: describe the inputs, outputs and event graph in a NetworkDescription, compile it, and actuate the resulting network.
type AddHandler a = (a -> IO ()) -> IO (IO ())
A value of type
AddHandler a is just a facility for registering
callback functions, also known as event handlers.
The type is a bit mysterious; it works like this:
do unregisterMyHandler <- addHandler myHandler
The argument is an event handler that will be registered. The return value is an action that unregisters this very event handler again.
fromPoll :: IO a -> NetworkDescription t (Behavior t a)
Input,
obtain a
Behavior by frequently polling mutable data, like the current time.
The resulting
Behavior will be updated whenever the event
network processes an input event.
This function is occasionally useful, but
the recommended way to obtain
Behaviors is by using
fromChanges.
Ideally, the argument IO action just polls a mutable variable, it should not perform expensive computations. Neither should its side effects affect the event network significantly.
reactimate :: Event t (IO ()) -> NetworkDescription t ()
Output.
Execute the
IO action whenever the event occurs.
Note: If two events occur very close to each other,
there is no guarantee that the
reactimates for one
event will have finished before the ones for the next event start executing.
This does not affect the values of events and behaviors,
it only means that the
reactimate for different events may interleave.
Fortunately, this is a very rare occurrence, and only happens if
- you call an event handler from inside
reactimate,
- or you use concurrency.
In these cases, the
reactimates follow the control flow
of your event-based framework.
initial :: Behavior t a -> NetworkDescription t a
changes :: Behavior t a -> NetworkDescription t (Event t a)
Output,
observe when a
Behavior changes.
Strictly speaking, a
Behavior denotes a value that varies *continuously* in time,
so there is no well-defined event which indicates when the behavior changes.
Still, for reasons of efficiency, the library provides a way to observe
changes when the behavior is a step function, for instance as
created by
stepper. There are no formal guarantees,
but the idea is that
changes (stepper x e) = return (calm e)
actuate :: EventNetwork -> IO ()
Actuate an event network. The inputs will register their event handlers, so that the network starts to produce outputs in response to input events.
pause :: EventNetwork -> IO ()
Pause an event network. Immediately stop producing output and unregister all event handlers for inputs. Hence, the network stops responding to input events, but its state will be preserved.
You can resume the network with
actuate.
Note: You can stop a network even while it is processing events,
i.e. you can use
pause as an argument to
reactimate.
The network will not stop immediately though, only after
the current event has been processed completely.
Utilities
This section collects a few convenience functions for unusual use cases. For instance:
- The event-based framework you want to hook into is poorly designed
- You have to write your own event loop and roll a little event framework
newAddHandler :: IO (AddHandler a, a -> IO ())
Build a facility to register and unregister event handlers.
newEvent :: NetworkDescription t (Event t a, a -> IO ())
Build an
Event together with an
IO action that can
fire occurrences of this event. Variant of
newAddHandler.
This function is mainly useful for passing callback functions
inside a
reactimate.
http://hackage.haskell.org/package/reactive-banana-0.5.0.1/docs/Reactive-Banana-Frameworks.html
Introduction:
- This program will show you how to list all the Emirp numbers between the given starting and ending limits.
- The inputs required are two numbers from two text fields and a Submit button
Requirements are:
- Please find following link for program requirements.
“ _______________________________________”
Creating the Emirp Number application:
- The first page of the Web application looks complicated.
- So, let us delete the files which are already given and we will create new files as per our requirements.
- Before deleting anything, let us first understand the directory structure of Struts 2 applications.
- You have been given the following:
- A project named Web Application1
- JSP pages go in the Web Pages folder, action classes go in the Source Packages folder, and the configuration that connects these pages is placed in the Configuration Files folder.
- Now the web application is empty and will run nothing. So to begin from scratch, let us create a folder named “jsp” inside the web pages folder. And create “first.jsp” by selecting new Java server page document.
- As the page is JSP (that is Java Server Page = HTML+ Java), we can embed HTML code in JSP page.
- The requirements here are of a Label to display instructions for entering two numbers, two text fields and a submit button.
- Thus the first.jsp code will be:
// first.jsp
<%--
Document   : first
Created on : Nov 14, 2014, 12:49:46 PM
Author     : Infinity
--%>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="s" uri="/struts-tags"%>
<html>
<head>
<title>Emirp Number</title>
</head>
<body>
<s:form action="click">
<s:label value="Enter the starting and ending limits:"/>
<s:textfield name="number1" label="Starting limit"/>
<s:textfield name="number2" label="Ending limit"/>
<s:submit value="Submit"/>
</s:form>
</body>
</html>
- The same form can be created by HTML tags instead of struts tags.
- After running this jsp, following output can be seen:
- On button click, the page is transferred to “click” action page which is still unavailable. So error message will appear.
- We now need a result page which displays the retrieved emirp number on submit button click. So create another jsp page named as “next.jsp” inside jsp folder.
- Add the following statements to the result page to display the emirp numbers:
<%--
Document   : next
Created on : Nov 14, 2014, 1:19:42 PM
Author     : Infinity
--%>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="s" uri="/struts-tags"%>
<html>
<head>
<title>Answer Page</title>
</head>
<body>
<h1>
Emirp Numbers from <s:property value="number1"/> to <s:property value="number2"/> are :
<s:property value="answer"/>
</h1>
</body>
</html>
- Now to create action pages, create a folder named jsp_action inside source packages. The folder here is known as package.
- Create action class “emirp_check” inside “jsp_action” package as seen in previous tutorial.
- The variables here are "number1", "number2", "flag", "flag1", "answer", "c" and "c1" (check the name attributes of the text fields in our form).
- Here :
- Answer variable: stores all the emirp numbers
- Flag variable: Boolean variable used to determine whether the number is prime or not
- Flag1 variable: Boolean variable used to determine whether the reversed number is prime or not
- Counter c variable: integer variable used to count the divisors in the prime check
- Counter c1 variable: integer variable used to count the divisors in the prime check for the reversed number.

Logic for Emirp Numbers:
- So the required getter and setter methods are as follows:

public int getNumber1() { return number1; }
public void setNumber1(int number1) { this.number1 = number1; }
public int getNumber2() { return number2; }
public void setNumber2(int number2) { this.number2 = number2; }
public String getAnswer() { return answer; }
public void setAnswer(String answer) { this.answer = answer; }
- Now, on button click this class will be executed. We need following methods:
- check(): returns the string "success". This method in turn calls the other methods: first determining whether each number in the range is prime, then reversing it, and then determining whether the reversed number is prime or not.
- check_prime(): this method returns true (Boolean) if the number is prime. The number is passed as an integer argument to this function
- rev(): this method returns the integral reverse of the number passed to it as an argument.
- So the action class (emirp_check.java) will be as follows:
//emirp_check.java
package jsp_action;

import com.opensymphony.xwork2.ActionSupport;

public class emirp_check extends ActionSupport {

    private int number1, number2, c = 0, c1 = 0;
    private String answer = "";
    private boolean flag, flag1;

    public int getNumber1() { return number1; }
    public void setNumber1(int number1) { this.number1 = number1; }
    public int getNumber2() { return number2; }
    public void setNumber2(int number2) { this.number2 = number2; }
    public String getAnswer() { return answer; }
    public void setAnswer(String answer) { this.answer = answer; }

    public String check() {
        int value_temp;
        for(int i = number1; i <= number2; i++) {
            flag = check_prime(i);
            if(flag) {
                value_temp = rev(i);
                flag1 = check_prime(value_temp);
                if(flag1) {
                    answer += " " + String.valueOf(i);
                }
            }
        }
        return "success";
    }

    boolean check_prime(int x) {
        c = 0;
        for(int i = 1; i <= x; i++) {
            if(x % i == 0) {
                c++;
            }
        }
        if(c == 2) {
            return true;
        } else {
            return false;
        }
    }

    int rev(int i) {
        int rev = 0;
        while(i != 0) {
            rev = rev * 10;
            rev = rev + i % 10;
            i = i / 10;
        }
        return rev;
    }
}
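The two helper methods can be sanity-checked on their own, outside the Struts container. Below is a minimal standalone sketch; the class name EmirpDemo and the 10-to-40 range are just for illustration, while the method bodies follow the tutorial's logic.

```java
// EmirpDemo.java - standalone check of the tutorial's emirp logic (no Struts needed)
public class EmirpDemo {

    // true when x has exactly two divisors, i.e. x is prime
    static boolean checkPrime(int x) {
        int count = 0;
        for (int i = 1; i <= x; i++) {
            if (x % i == 0) count++;
        }
        return count == 2;
    }

    // integral reverse of i, e.g. 37 -> 73
    static int rev(int i) {
        int r = 0;
        while (i != 0) {
            r = r * 10 + i % 10;
            i = i / 10;
        }
        return r;
    }

    public static void main(String[] args) {
        StringBuilder answer = new StringBuilder();
        for (int i = 10; i <= 40; i++) {
            // same condition as check(): the number and its reverse are both prime
            if (checkPrime(i) && checkPrime(rev(i))) {
                answer.append(" ").append(i);
            }
        }
        System.out.println("Emirp numbers from 10 to 40 are:" + answer);
    }
}
```

Running it prints 11, 13, 17, 31 and 37 for that range, which is exactly what the action class would accumulate in its answer property.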
- Now the action must be registered in the struts.xml configuration file, which maps the action name to the action class and its result pages:
- In our example, struts.xml can written as:
<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
    "http://struts.apache.org/dtds/struts-2.0.dtd">
<struts>
    <!-- Configuration for the default package. -->
    <package name="default" extends="struts-default">
        <action name="click" class="jsp_action.emirp_check" method="check">
            <result name="success">/jsp/next.jsp</result>
        </action>
    </package>
</struts>
Figure : first successful run of application
- Enter two numbers in the text fields and click on the button. The action class is executed and returns "success"; from the <result> tag, the page mapped to "success" is executed. That is, the result page (next.jsp) is displayed.
Figure : enter two numbers for list of emirp numbers
Figure : Result page (next.jsp) – displays all the emirp numbers.
http://www.wideskills.com/struts/program-to-list-emirp-numbers-list-entered-user
Introduction: Tweeting Weather Station Instructables
- Intel Edison with Arduino Breakout Board × 1
- MQ2 Combustible Gas Sensor × 1
- YL-83 Rain Sensor × 1
- SL-HS-220 Temperature & Humidity Sensor × 1
- Resistors, 32K and 4.7K × 2
- Wood Sheet, A4 size (can later be cut to size) × 2
- Metal Standoff, 1 inch × 3
Step 2: Electrical Design.
Sensors.
Attachments.
Step 6: Twitter Configuration.
Second Prize in the
Intel® IoT Invitational
Participated in the
Time Contest
Participated in the
Epilog Contest VII
24 Comments
6 years ago
Alternative for Edison?
Reply 6 years ago
Instead of recommending another board, I'll give you a couple of reasons why I used the Edison for this project:
1. The Edison runs Yocto Linux with development support for Arduino IDE, Eclipse (C, C++, Python). In other words it has a broad range of compatible programming languages.
2. The Intel Edison XDK also has support for HTML5 and NodeJS. This means that you can create a website/web-based app that can display the weather from your TWIST. It increases the functionality and adaptability of TWIST by integrating it with other websites apart from Twitter.
3. The availability of multiple programming languages and web-based functionality makes it easier to create APIs for TWIST and other projects too.
4. In the future I plan on using the XDK to develop an API for TWIST.
5. The Intel Edison Dashboard also allows you to display the sensor data in the form of graphs and charts.
6. You can incorporate the tiny Edison module (excluding the breakout board) on your custom PCBs without losing the WiFi or Bluetooth functionality. This also helps with large-scale manufacturing or mass production.
Reply 6 years ago
Of course, You could use a Raspi for this purpose. But the code in this Instructable will not be compatible with the Raspi.
Reply 6 years ago
Is it possible to build this project with an Arduino Uno or any other Arduino board instead of the Intel Edison? Pls help?
Reply 6 years ago
The Raspberry Pi or a Particle Photon should do well.
6 years ago
Hi Jonathan, may I have your help? I am also doing a project using the Edison board. I used the ReadAnalogVoltage example to test, but when I send the voltage to Twitter it only shows one time and only displays 0 V.
Reply 6 years ago
You should store the analog value of the sensor's output in a variable. And then send the value of that variable to Twitter.
For example:
float X = analogRead(A0); // store the reading, then append X to the tweet string
Reply 6 years ago
Are there any other errors?
6 years ago
Why won't this compile? I'm trying to incorporate the DHT11 sensor:
#include <Twitter.h>
#include <SPI.h>
#include "DHT.h"
#include <Ethernet.h>
#undef int() //inorder to make the stdlib.h work
#include <Stdlib.h>
float thermsen = A0;
float humidsen = A1;
int rainsen = 2;
int indicator = 13;
#define Humidity
#define DHTPIN 3
#define DHTTYPE DHT11
DHT dht(DHTPIN, DHTTYPE);
#define MQ_PIN (2) //define which analog input channel you are going to use
#define RL_VALUE (4
char stringMsg;
void setup() {
Serial.begin(115200);
pinMode(A0, INPUT);
pinMode(A1, INPUT);
pinMode(2, INPUT);
pinMode(13, OUTPUT);
/**********************************************************/");
dht.begin();
}
void loop()
{ dht11();
float h;
float t;
float f;
//Humidity
stringMsg += "Humidity:";
char dht11humidity[10];
dtostrf(h,1,2,dht11humidity);
stringMsg += dht11humidity;
stringMsg += "%RH";
//Temperature Celsius
stringMsg += " Temperature:";
char dht11celsius[10];
dtostrf(t,1,2,dht11celsius);
stringMsg += dht11celsius;
stringMsg += "°C";
//Temperature Celsius
stringMsg += " Temperature:";
char dht11farhen[10];
dtostrf(f,1,2,dht11farhen);
stringMsg += dht11farhen;
stringMsg += "°F";}
//Manual Type-casting for sensor readings
char *dtostrf (double val, signed char width, unsigned char prec, char *sout)
{
char fmt[100];
sprintf(fmt, "%%%d.%df", width, prec);
sprintf(sout, fmt, val);
return sout;
}
{
tweetMessage();
delay(3000);
}
//Manual Type-casting for sensor readings
char *dtostrf (double val, signed char width, unsigned char prec, char *sout)
{
char fmt[100];
sprintf(fmt, "%%%d.%df", width, prec);
sprintf(sout, fmt, val);
return sout;
}
//Twitter Message
void tweetMessage() {
Twitter twitter("4310592022-mfWjQTbSIQy9lFni7EZxcp93d1JRzyR4vtWrmno"); //Twitter Token
humidity();
float humid;
//Twitter message
String stringMsg = "Humidity:";
char tmp[10];
dtostrf(humid, 1, 2, tmp);
stringMsg += tmp;
stringMsg += "%RH";
temp_now();
int Temperature;
//Twitter message
stringMsg += " Temperature:";
char nowtemp[10];
dtostrf(Temperature, 1, 0, nowtemp);
stringMsg += nowtemp;
stringMsg += "°C";
MQ2printval();
float MQ2tweet;
//Twitter message
stringMsg += " CO level:";
char nowMQ2[10];
dtostrf(MQ2tweet, 1, 2, nowMQ2);
stringMsg += nowMQ2;
stringMsg += "ppm";
if (digitalRead(2) == HIGH)
{
stringMsg += " Rain Alert";
}
stringMsg += " #Betatesting #raintest #IOTweatherstation #Mumbai #Bandra #CarterRoad ";
//Convert our message to a character array //Twiiter Character Limit. Converts/limits message to 140 characters.
char msg[140];
stringMsg.toCharArray(msg, 140);
//Tweet that sucker!
if (twitter.post(msg))
{
int status = twitter.wait();
if (status == 200)
{
Serial.println("OK.");
Serial.println("Message Tweeted");
}
else
{ //Connection Test
Serial.print("failed : code ");
Serial.println("Message not Tweeted");
Serial.println(status);
}
}
else
{
Serial.println("connection failed.");
Serial.println("Message not Tweeted");
}
digitalWrite(13, HIGH); // LED Indicator Feedback Code Working.
}
/********Serial print MQ2 Value****************/
void MQ2printval()
{
float MQ2tweet = (MQGetGasPercentage(MQRead(MQ_PIN) / Ro, GAS_LPG));");
}
/****************** MQResistanceCalculation ****************************************
Input: raw_adc - raw value read from adc, which represents the voltage
Output: the calculated sensor resistance
Remarks: The sensor and the load resistor forms a voltage divider. Given the voltage
across the load resistor and its resistance, the resistance of the sensor
could be derived.
************************************************************************************/
float MQ])));
}
/******Humidity***********/
void humidity()
{
int humidSensorValue = analogRead(A1);
// Convert the analog reading (which goes from 0 - 1023) to a voltage (0 - 5V):
// Multiply by 1000 so that map() function can work properly. map() does not count numbers after the decimal.
float humidvoltage = humidSensorValue * (5.0 / 1023.0) * 1000;
// When Humidity is 20%RH Voltage is 660mV. When Humidity is 95%RH Voltage is 3135mV.
float humid = map(humidvoltage, 660, 3135, 20, 95);
// print out the value of humidity:
Serial.print("Humidity:");
Serial.print(humid);
Serial.println();
}
/**************Temperature*************/
void temp_now()
{
float tempSensorValue = analogRead(A0);
// Convert the analog reading (which goes from 0 - 1023) to a voltage (0 - 5V):
float voltage = tempSensorValue * (5.0 / 1023.0);
// print out the value you read:
//Serial.print("Voltage(V)= ");
//Serial.print(voltage);
//Serial.println();
int Temperature;
if ((voltage > 2.521) && (voltage < 2.585))
{
Temperature = 0;
}
if ((voltage > 2.585) && (voltage < 2.648))
{
Temperature = 1 ;
}
if ((voltage > 2.648) && (voltage < 2.711))
{
Temperature = 2;
}
if ((voltage > 2.711) && (voltage < 2.773))
{
Temperature = 3;
;
}
if ((voltage > 2.773) && (voltage < 2.834))
{
Temperature = 4;
}
if ((voltage > 2.834) && (voltage < 2.894))
{
Temperature = 5;
}
if ((voltage > 2.894) && (voltage < 2.95))
{
Temperature = 6;
}
if ((voltage > 2.95) && (voltage < 3.01))
{
Temperature = 7;
}
if ((voltage > 3.01) && (voltage < 3.07))
{
Temperature = 8;
}
if ((voltage > 3.07) && (voltage < 3.13))
{
Temperature = 9;
}
if ((voltage > 3.13) && (voltage < 3.18))
{
Temperature = 10;
}
if ((voltage > 3.18) && (voltage < 3.24))
{
Temperature = 11;
}
if ((voltage > 3.24) && (voltage < 3.29))
{
Temperature = 12;
}
if ((voltage > 3.29) && (voltage < 3.34))
{
Temperature = 13;
}
if ((voltage > 3.34) && (voltage < 3.40))
{
Temperature = 14;
}
if ((voltage > 3.40) && (voltage < 3.45))
{
Temperature = 15;
}
if ((voltage > 3.45) && (voltage < 3.5))
{
Temperature = 16;
}
if ((voltage > 3.5) && (voltage < 3.54))
{
Temperature = 17;
}
if ((voltage > 3.54) && (voltage < 3.59))
{
Temperature = 18;
}
if ((voltage > 3.59) && (voltage < 3.64))
{
Temperature = 19;
}
if ((voltage > 3.64) && (voltage < 3.68))
{
Temperature = 20;
}
if ((voltage > 3.68) && (voltage < 3.72))
{
Temperature = 21;
}
if ((voltage > 3.72) && (voltage < 3.76))
{
Temperature = 22;
}
if ((voltage > 3.76) && (voltage < 3.81))
{
Temperature = 23;
}
if ((voltage > 3.81) && (voltage < 3.85))
{
Temperature = 24;
}
if ((voltage > 3.85) && (voltage < 3.88))
{
Temperature = 25;
}
if ((voltage > 3.88) && (voltage < 3.92))
{
Temperature = 26;
}
if ((voltage > 3.92) && (voltage < 3.96))
{
Temperature = 27;
}
if ((voltage > 3.96) && (voltage < 3.99))
{
Temperature = 28;
}
if ((voltage > 3.99) && (voltage < 4.03))
{
Temperature = 29;
}
if ((voltage > 4.03) && (voltage < 4.06))
{
Temperature = 30;
}
if ((voltage > 4.06) && (voltage < 4.09))
{
Temperature = 31;
}
if ((voltage > 4.09) && (voltage < 4.12))
{
Temperature = 32;
}
if ((voltage > 4.12) && (voltage < 4.15))
{
Temperature = 33;
}
if ((voltage > 4.15) && (voltage < 4.18))
{
Temperature = 34;
}
if ((voltage > 4.18) && (voltage < 4.21))
{
Temperature = 35;
}
if ((voltage > 4.21) && (voltage < 4.24))
{
Temperature = 36;
}
if ((voltage > 4.24) && (voltage < 4.26))
{
Temperature = 37;
}
if ((voltage > 4.26) && (voltage < 4.29))
{
Temperature = 38;
}
if ((voltage > 4.29) && (voltage < 4.31))
{
Temperature = 39;
}
if ((voltage > 4.31) && (voltage < 4.34))
{
Temperature = 40;
}
if ((voltage > 4.34) && (voltage < 4.36))
{
Temperature = 41;
}
if ((voltage > 4.36) && (voltage < 4.38))
{
Temperature = 42;
}
if ((voltage > 4.38) && (voltage < 4.4))
{
Temperature = 43;
}
if ((voltage > 4.4) && (voltage < 4.42))
{
Temperature = 44;
}
if ((voltage > 4.42) && (voltage < 4.44))
{
Temperature = 45;
}
if ((voltage > 4.44) && (voltage < 4.46))
{
Temperature = 46;
}
if ((voltage > 4.46) && (voltage < 4.48))
{
Temperature = 47;
}
if ((voltage > 4.48) && (voltage < 4.5))
{
Temperature = 48;
}
if ((voltage > 4.5) && (voltage < 4.51))
{
Temperature = 49;
}
if ((voltage > 4.51) && (voltage < 4.52))
{
Temperature = 50;
}
Serial.print("Temperature:");
Serial.print(Temperature);
Serial.println();
}
void dht11() {
//");
Serial.print("Temperature: ");
Serial.print(t);
Serial.print(" *C ");
Serial.print(f);
Serial.print(" *F\t");
Serial.print("Heat index: ");
Serial.print(hic);
Serial.print(" *C ");
Serial.print(hif);
Serial.println(" *F");
}
Reply 6 years ago
What is the error you are getting?
6 years ago
Awesome weather station. And thanks for including the CAD file.
Reply 6 years ago
Follow me for more awesome Instructables!
Reply 6 years ago
Do contribute the changes you make to the CAD files(if any) to the TWIST repository.
6 years ago
hey, I was just wondering would it be possible to use a DHT11 instead of a sl-hs-220 sensor?
Reply 6 years ago
Hey I just completed the code & schematics for the DHT11 sensor and I have added it to the repository. You can check it out on step 8 by clicking on its Completed status. If you have any queries let me know.
Reply 6 years ago
TWIST is an open source platform with a public repository. This means anyone can contribute code and schematics for different sensors.
The DHT11 is compatible and can be used instead of the SL-HS-220. I have added this to the Sensor Repository and its current status is Proposed. I will start working on the code and schematics for the DHT11 sensor. You can also contribute your code/schematics (if any) to the sensor repository. Step 8 will show you the repository status of the sensor.
6 years ago
Hmmm. Waterproof?
Or only until the first rain? The rain sensor assembly will probably need water to drain onto it in order to detect =)
Reply 6 years ago
I used silicone glue as a sealant around all the sensors. I applied the glue from the bottom of the faceplate so that it doesn't ruin the aesthetics of the faceplate. The silicone glue should prevent water or rain from contacting any of the other electronics, including the Edison.
6 years ago
Awesome GIF and great use of HTML!! =D
Reply 6 years ago
Here's a sweet Instructable by Nodcah which helped me with the GIF, table and all the other HTML in my Instructable:
https://www.instructables.com/TWIST-DIY-Tweeting-Weather-Station/
#include <iostream>
#include <vector>
#include <stack>
Hi!
I am studying stacks and queues in the Standard Template Library.
One way to instantiate a stack is to create it with an underlying container, in this case a vector.
Can anyone give me an example of how to use this?
I mean, how can I use the member functions of the vector container in the following code?
What is the difference between a simple stack of integers and a stack of integers contained in a vector?
Thanks
using namespace std;
int main()
{
stack<int,vector<int>> mystack; //vector ???????
mystack.push(2);
mystack.push(8);
cout<<mystack.top();
return 0;
}
Originally Posted by Jose M
how can I use the function members of the container vector in the next code?
The second parameter of a stack definition allows you to define which container the stack is supposed to use internally. The default is a deque and you can change it to for example a vector as you suggest.
But this doesn't influence the stack interface, it will remain the same. A stack is a stack. But the properties of the stack will reflect that of the internal container. For example a deque and a vector handles memory differently when they grow/shrink and the stack will adopt the respective behaviour.
I recommend The C++ Standard Library, second edition, by Josuttis as a very good reference for topics like this.
ok thanks... I undertand now
http://forums.codeguru.com/showthread.php?543421-STL-stack-queue&p=2147089
Hi everybody!
Did you guys install the latest Xcode 9.3 from Apple and try to compile your projects with SAP Cloud Platform SDK components? I did... and got:
Module compiled with Swift 4.0.2 cannot be imported in Swift 4.1
import SAPFoundation
There is a new build/patch available at SAP Store that is compatible with Xcode 9.3. You should download SAP Cloud Platform SDK for iOS (Xcode 9.3) 2.0 SP01 PL05
Regards
JK
PS: With same patch you can work on Xcode 9.4 beta too.
Thank you, Jitendra! You saved my work!
The updated version works fine. Just to mention for others: you will need to remove and re-add the SDK components in the Embedded Binaries section of Xcode, and then do a clean build after that.
https://answers.sap.com/questions/482806/sap-cloud-platform-sdk-stop-working-after-latest-x.html
This is a situation that might arise in a programmer's everyday project activities. It also tests someone's grip on recursion, and at the same time it is a pretty small problem. By all means, it serves as a great programming interview question and, to the best of my knowledge, has been asked at many big companies, among others.
It is also something many people have stumbled over on this forum.
So to do a bit of revision, a permutation of a string is basically achieved by reordering the position of elements. It is also famously known that given a string of length n, there are n! (factorial) permutations i.e. if we have a string of length 3 i.e. “xyz” here is a list of all permutations
xyz xzy yxz yzx zxy zyx
As you can see, there are no more possibilities; these are a total of 6 permutations, i.e. 3! = 3×2×1 = 6.
Also please bear in mind that permutation is different from a combination i.e. combination usually says like given 4 players, how many ways are there to pick a team of 3 players. This would be combination in which ORDER of elements does not matter. Two combinations always differ by at least one element. Whereas no such restriction applies to permutations in which ORDER matters.
So getting back, our problem says, if we are given “xyz” we are required to print all permutations as the case above shows.
Looking at the outputs, can you devise an algorithm? Any ideas?
Like I said before, this sounds like a problem that can be done naturally with recursion. Well, until now we haven’t seen a relationship or a recursive pattern.
Let’s take a closer look, given a string of 3 elements i.e. “xyz”, how did you go about printing all permutations on paper?
Here is how we went,
“x followed by all permutations of y and z” (yz and zy)
“y followed by all permutations of x and z” (xz and zx)
And finally “z followed by all permutations of x and y” (xy and yx)
So to break down the problem, we fix one element in the initial position and try to change the rest of them for all possible situations. To achieve this, we call the same function recursively again, with one less element since the first one’s possibilities are already figured out.
To do this in code practically, we have a function called permutation. It takes two understandable arguments, a string (char array in c) and its length (excluding the terminating null).
However, there is one more parameter that we will need. Because we have to move ahead in the array with every call i.e. first we fix first element, then second and so on. So this basically means we are going to increase the starting index of array.
Now we are passing a character array, so each time we have to add another parameter called “starting or current index” of the permutations to be explored in the current call.
If this index is 0, for the example above it means x is fixed and we generate further permutations of y and z, when this is 1, it means first two places are fixed and it is the third and onwards positions that need to be moved.
Next we get inside our actual function. It has two main parts.
The first part prints a permutation when it is ready. We will get to this in a minute.
The second part is the meat of the function. It runs a loop from our current index to size of array. In each iteration of that loop it swaps the current index and loop variable. Right at that point it calls the permutation function again recursively to generate all further permutations of this swapped state. Once that call is done, it swaps back the variables to original state. This is required because for e.g. after we are done with generating for x (yz and zy) we want to start from the original state again i.e. original state was xyz so now we move y to the first place by swapping it with x, so y followed by xz and then zx would be the sequence. Code follows
#include <iostream>
using namespace std;

void swap(char *fir, char *sec)
{
    char temp = *fir;
    *fir = *sec;
    *sec = temp;
}

/* arr is the string, curr is the current index to start permuting from,
   and size is the length of arr */
void permutation(char *arr, int curr, int size)
{
    if (curr == size - 1) {
        for (int a = 0; a < size; a++)
            cout << arr[a] << "\t";
        cout << endl;
    } else {
        for (int i = curr; i < size; i++) {
            swap(&arr[curr], &arr[i]);
            permutation(arr, curr + 1, size);
            swap(&arr[curr], &arr[i]);
        }
    }
}

int main()
{
    char str[] = "abcd";
    permutation(str, 0, sizeof(str) - 1);
    return 0;
}
Following is the output of the above code, run with the string “abcd”.
There was no special reason for making swap a separate function that takes the array elements as pointers; it just keeps the main loop tidy and understandable. Feel free to inline the three assignments for each of the two swaps if you are not comfortable with the pointer manipulation.
As you can see, each new call to permutation increases the current index by 1, so eventually it reaches one less than the size of the string. This is our base case, as there is only one element left and nothing to swap it with. All previous positions have already received a separate recursive call. It is at this point that we print the entire permutation from index 0 up to size. This part is at the beginning of the function, under the if block.
The running time of this algorithm is of course on the order of n!. Simple analysis shows why: a loop runs from the current index to the end of the string, and for each element there is a recursive call on a portion of the string that is one element shorter with every successive call. Counting the O(n) work to print each of the n! orderings, the total work is O(n · n!).
The standard C++ STL also provides a built-in function, next_permutation, for the same task (generating the next permutation), though it works on iterator ranges such as those of a vector. It is pretty handy, so a sample of its usage follows.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

vector<int> vec;

void print()
{
    //for(int i=0; i<vec.size(); i++)
    //    cout << vec[i] << " ";
    for (vector<int>::iterator it = vec.begin(); it != vec.end(); it++)
        cout << *it << "\t";
    cout << endl;
}

int main()
{
    for (int i = 1; i < 5; i++) {
        vec.push_back(i);
        cout << vec[i - 1] << "\t";
    }
    cout << endl;

    while (next_permutation(vec.begin(), vec.end())) {
        print();
        // do some processing on vec
    }
    return 0;
}
And here is the output of the above STL usage. We inserted the numbers 1, 2, 3, 4 and generated all of their permutations.
The etl-unicode collection (available from <URL:> and <URL:>) includes fonts that can display many Unicode characters; they can also be used by ps-print and ps-mule to print Unicode characters.

Another cause of this for specific characters is fonts which have a missing glyph and no default character.  This is known to occur for character number 160 (no-break space) in some fonts, such as Lucida, but Emacs sets the display table for the unibyte and Latin-1 version of this character to display a space.

** Under X11, some characters appear improperly aligned in their lines.

You may have bad X11 fonts; try installing the intlfonts distribution or the etl-unicode collection (see the previous entry).

** Certain fonts make each line take one pixel more than it "should".

This is because these fonts contain characters a little taller than the font's nominal height.  Emacs needs to make sure that lines do not overlap.

** Loading fonts is very slow.

You might be getting scalable fonts instead of precomputed bitmaps.  Known scalable font directories are "Type1" and "Speedo".  A font directory contains scalable fonts if it contains the file "fonts.scale".  If this is so, re-order your X windows font path to put the scalable font directories last.  See the documentation of `xset' for details.

With some X servers, it may be necessary to take the scalable font directories out of your path entirely, at least for Emacs 19.26.  Changes in the future may make this unnecessary.

** Font Lock displays portions of the buffer in incorrect faces.

By far the most frequent cause of this is a parenthesis `(' or a brace `{' in column zero.  Font Lock assumes that such a paren is outside of any comment or string.
This is of course not true in general, but the vast majority of well-formatted program source files don't have such parens, and therefore this assumption is used to allow optimizations in Font Lock's syntactical analysis.  These optimizations avoid some pathological cases where jit-lock, the Just-in-Time fontification introduced with Emacs 21.1, could significantly slow down scrolling through the buffer, especially scrolling backwards, and also jumping to the end of a very large buffer.

Beginning with version 22.1, a parenthesis or a brace in column zero is highlighted in bold-red face if it is inside a string or a comment, to indicate that it could interfere with Font Lock (and also with indentation) and should be moved or escaped with a backslash.

If you don't use large buffers, or have a very fast machine which makes the delays insignificant, you can avoid the incorrect fontification by setting the variable `font-lock-beginning-of-syntax-function' to a nil value.  (This must be done _after_ turning on Font Lock.)  Another alternative is to avoid a paren in column zero.  For example, in a Lisp string you could precede the paren with a backslash.

** With certain fonts, when the cursor appears on a character, the character doesn't appear--you get a solid box instead.

One user on a Linux-based GNU system reported that this problem went away with installation of a new X server.  The failing server was XFree86 3.1.1.  XFree86 3.1.2 works.

** Characters are displayed as empty boxes or with wrong font under X.

This can occur when two different versions of FontConfig are used.  For example, XFree86 4.3.0 has one version and Gnome usually comes with a newer version.  Emacs compiled with --with-gtk will then use the newer version.  In most cases the problem can be temporarily fixed by stopping the application that has the error (it can be Emacs or any other application), removing ~/.fonts.cache-1, and then starting the application again.
If removing ~/.fonts.cache-1 and restarting doesn't help, the application with the problem must be recompiled with the same version of FontConfig as the rest of the system uses.  For KDE, it is sufficient to recompile Qt.

** Emacs pauses for several seconds when changing the default font.

This has been reported for fvwm 2.2.5 and the window manager of KDE 2.1.  The reason for the pause is Xt waiting for a ConfigureNotify event from the window manager, which the window manager doesn't send.  Xt stops waiting after a default timeout of usually 5 seconds.

A workaround for this is to add something like

emacs.waitForWM: false

to your X resources.  Alternatively, add `(wait-for-wm . nil)' to a frame's parameter list, like this:

  (modify-frame-parameters nil '((wait-for-wm . nil)))

(this should go into your `.emacs' file).

** Underlines appear at the wrong position.

This is caused by fonts having a wrong UNDERLINE_POSITION property.  Examples are the font 7x13 on XFree prior to version 4.1, or the jmk neep font from the Debian xfonts-jmk package.  To circumvent this problem, set x-use-underline-position-properties to nil in your `.emacs'.  To see what value of UNDERLINE_POSITION is defined by the font, type `xlsfonts -lll FONT' and look at the font's UNDERLINE_POSITION property.

** When using Exceed, fonts sometimes appear too tall.

When the display is set to an Exceed X-server and fonts are specified (either explicitly with the -fn option or implicitly with X resources) then the fonts may appear "too tall".  The actual character sizes are correct but there is too much vertical spacing between rows, which gives the appearance of "double spacing".  To prevent this, turn off Exceed's "automatic font substitution" feature (in the font part of the configuration window).

* Internationalization problems

** M-{ does not work on a Spanish PC keyboard.

Many Spanish keyboards seem to ignore that combination.  Emacs can't do anything about it.
** Characters from the mule-unicode charsets aren't displayed under X.

XFree86 4 contains many fonts in iso10646-1 encoding which have minimal character repertoires (whereas the encoding part of the font name is meant to be a reasonable indication of the repertoire according to the XLFD spec).  Emacs may choose one of these to display characters from the mule-unicode charsets and then typically won't be able to find the glyphs to display many characters.  (Check with C-u C-x = .)  To avoid this, you may need to use a fontset which sets the font for the mule-unicode sets explicitly.  E.g. to use GNU unifont, include in the fontset spec:

** The UTF-8/16/7 coding systems don't encode CJK (Far Eastern) characters.

Emacs directly supports the Unicode BMP whose code points are in the ranges 0000-33ff and e000-ffff, and indirectly supports the parts of CJK characters belonging to these legacy charsets:

  GB2312, Big5, JISX0208, JISX0212, JISX0213-1, JISX0213-2, KSC5601

The latter support is done in Utf-Translate-Cjk mode (turned on by default).  Which Unicode CJK characters are decoded into which Emacs charset is decided by the current language environment.  For instance, in Chinese-GB, most of them are decoded into chinese-gb2312.

If you read UTF-8 data with code points outside these ranges, the characters appear in the buffer as raw bytes of the original UTF-8 (composed into a single quasi-character) and they will be written back correctly as UTF-8, assuming you don't break the composed sequences.  If you read such characters from UTF-16 or UTF-7 data, they are substituted with the Unicode `replacement character', and you lose information.

** Mule-UCS loads very slowly.

Changes to Emacs internals interact badly with Mule-UCS's `un-define' library, which is the usual interface to Mule-UCS.  Apply the following patch to Mule-UCS 0.84 and rebuild it.  That will help, though loading will still be slower than in Emacs 20.
(Some distributions, such as Debian, may already have applied such a patch.)

--- lisp/un-define.el	6 Mar 2001 22:41:38 -0000	1.30
+++ lisp/un-define.el	19 Apr 2002 18:34:26 -0000
@@ -610,13 +624,21 @@ by calling post-read-conversion and pre-
     (mapcar
      (lambda (x)
-	)))
+	(if (fboundp 'register-char-codings)
+	    ;; Mule 5, where we don't need the eol-type specified and
+	    ;; register-char-codings may be very slow for these coding
+	    ;; system definitions.
+	    (let ((y (cadr x)))
+	      (mucs-define-coding-system
+	       (car x) (nth 1 y) (nth 2 y)
+	       (nth 3 y) (nth 4 y) (nth 5 y)))
+	  )))
   `((utf-8
      (utf-8-unix
       ?u "UTF-8 coding system"

Note that Emacs has native support for Unicode, roughly equivalent to Mule-UCS's, so you may not need it.

** Mule-UCS compilation problem.

Old versions of Emacs and XEmacs byte-compile the form `(progn progn ...)' the same way as `(progn ...)', but Emacs version 21.3 and later process that form just as the interpreter does, that is, as a `progn' variable reference.  Apply the following patch to Mule-UCS 0.84 to make it compile with the latest Emacs.

--- mucs-ccl.el	2 Sep 2005 00:42:23 -0000	1.1.1.1
+++ mucs-ccl.el	2 Sep 2005 01:31:51 -0000	1.3
@@ -639,10 +639,14 @@
       )))
+  ;; The only way the function is used in this package is included
+  ;; in `mucs-package-definition-end-hook' value, where it must
+  ;; return (possibly empty) *list* of forms.  Do this.  Do not rely
+  ;; on byte compiler to remove extra `progn's in `(progn ...)'
+  ;; form.
+  `((setq mucs-ccl-facility-alist
+	  (quote ,mucs-ccl-facility-alist))
+    ,@result)))

 ;;; Add hook for embedding translation informations to a package.
 (add-hook 'mucs-package-definition-end-hook

** Accented ISO-8859-1 characters are displayed as | or _.

Try other font set sizes (S-mouse-1).  If the problem persists with other sizes as well, your text is corrupted, probably through software that is not 8-bit clean.
If the problem goes away with another font size, it's probably because some fonts pretend to be ISO-8859-1 fonts when they are really ASCII fonts.  In particular the schumacher-clean fonts have this bug in some versions of X.

To see what glyphs are included in a font, use `xfd', like this:

  xfd -fn -schumacher-clean-medium-r-normal--12-120-75-75-c-60-iso8859-1

If this shows only ASCII glyphs, the font is indeed the source of the problem.  The solution is to remove the corresponding lines from the appropriate `fonts.alias' file, then run `mkfontdir' in that directory, and then run `xset fp rehash'.

** The `oc-unicode' package doesn't work with Emacs 21.

This package tries to define more private charsets than there are free slots now.  The current built-in Unicode support is actually more flexible.  (Use option `utf-translate-cjk-mode' if you need CJK support.)  Files encoded as emacs-mule using oc-unicode aren't generally read correctly by Emacs 21.

** After a while, Emacs slips into unibyte mode.

The VM mail package, which is not part of Emacs, sometimes does

  (standard-display-european t)

That should be changed to

  (standard-display-european 1 t)

* X runtime problems

** X keyboard problems

*** You "lose characters" after typing Compose Character key.

This is because the Compose Character key is defined as the keysym Multi_key, and Emacs (seeing that) does the proper X11 character-composition processing.  If you don't want your Compose key to do that, you can redefine it with xmodmap.  For example, here's one way to turn it into a Meta key:

  xmodmap -e "keysym Multi_key = Meta_L"

If all users at your site of a particular keyboard prefer Meta to Compose, you can make the remapping happen automatically by adding the xmodmap command to the xdm setup script for that display.

*** Using X Windows, control-shift-leftbutton makes Emacs hang.

Use the shell command `xset bc' to make the old X Menu package work.

*** C-SPC fails to work on Fedora GNU/Linux (or with fcitx input method).
Fedora Core 4 steals the C-SPC key by default for the `iiimx' program, which is the input method for some languages.  It blocks Emacs users from using the C-SPC key for `set-mark-command'.  One solution is to remove `<Ctrl>space' from the `Iiimx' file, which can be found in the `/usr/lib/X11/app-defaults' directory.  However, that requires root access.  Another is to specify `Emacs*useXIM: false' in your X resources.  Another is to build Emacs with the `--without-xim' configure option.

The same problem happens on any other system if you are using fcitx (a Chinese input method), which by default uses C-SPC for toggling.  If you want to use fcitx with Emacs, you have two choices: toggle fcitx with another key (e.g. C-\) by modifying ~/.fcitx/config, or get accustomed to using C-@ for `set-mark-command'.

*** M-SPC seems to be ignored as input.

See if your X server is set up to use this as a command for character composition.

*** The S-C-t key combination doesn't get passed to Emacs on X.

This happens because some X configurations assign the Ctrl-Shift-t combination the same meaning as the Multi_key.  The offending definition is in the file `...lib/X11/locale/iso8859-1/Compose'; there might be other similar combinations which are grabbed by X for similar purposes.  We think that this can be countermanded with the `xmodmap' utility, if you want to be able to bind one of these key sequences within Emacs.

*** Under X, C-v and/or other keys don't work.

These may have been intercepted by your window manager.  In particular, AfterStep 1.6 is reported to steal C-v in its default configuration.  Various Meta keys are also likely to be taken by the configuration of the `feel'.  See the WM's documentation for how to change this.

*** Clicking C-mouse-2 in the scroll bar doesn't split the window.

This currently doesn't work with scroll-bar widgets (and we don't know a good way of implementing it with widgets).
If Emacs is configured --without-toolkit-scroll-bars, C-mouse-2 on the scroll bar does work.

*** Inability to send an Alt-modified key, when Emacs is communicating directly with an X server.

If you have tried to bind an Alt-modified key as a command, and it does not work to type the command, the first thing you should check is whether the key is getting through to Emacs.  To do this, type C-h c followed by the Alt-modified key.  C-h c should say what kind of event it read.  If it says it read an Alt-modified key, then make sure you have made the key binding correctly.

If C-h c reports an event that doesn't have the Alt modifier, it may be because your X server has no key for the Alt modifier.  The X server that comes from MIT does not set up the Alt modifier by default.  If your keyboard has keys named Alt, you can enable them as follows:

  xmodmap -e 'add mod2 = Alt_L'
  xmodmap -e 'add mod2 = Alt_R'

If the keyboard has just one key named Alt, then only one of those commands is needed.  The modifier `mod2' is a reasonable choice if you are using an unmodified MIT version of X.  Otherwise, choose any modifier bit not otherwise used.

If your keyboard does not have keys named Alt, you can use some other keys.  Use the keysym command in xmodmap to turn a function key (or some other 'spare' key) into Alt_L or into Alt_R, and then use the commands shown above to make them modifier keys.

Note that if you have Alt keys but no Meta keys, Emacs translates Alt into Meta.  This is because of the great importance of Meta in Emacs.

** Window-manager and toolkit-related problems

*** Gnome: Emacs receives input directly from the keyboard, bypassing XIM.

This seems to happen when gnome-settings-daemon version 2.12 or later is running.  If gnome-settings-daemon is not running, Emacs receives input through XIM without any problem.  Furthermore, this seems only to happen in *.UTF-8 locales; zh_CN.GB2312 and zh_CN.GBK locales, for example, work fine.
A bug report has been filed in the Gnome bugzilla:

*** Gnome: Emacs' xterm-mouse-mode doesn't work on the Gnome terminal.

A symptom of this bug is that double-clicks insert a control sequence into the buffer.  The reason this happens is an apparent incompatibility of the Gnome terminal with Xterm, which also affects other programs using the Xterm mouse interface.  A problem report has been filed.

*** KDE: When running on KDE, colors or fonts are not as specified for Emacs, or messed up.

For example, you could see the background you set for Emacs only in the empty portions of the Emacs display, while characters have some other background.

This happens because KDE's defaults apply its color and font definitions even to applications that weren't compiled for KDE.  The solution is to uncheck the "Apply fonts and colors to non-KDE apps" option in Preferences->Look&Feel->Style (KDE 2).  In KDE 3, this option is in the "Colors" section, rather than "Style".

Alternatively, if you do want the KDE defaults to apply to other applications, but not to Emacs, you could modify the file `Emacs.ad' (should be in the `/usr/share/apps/kdisplay/app-defaults/' directory) so that it doesn't set the default background and foreground only for Emacs.  For example, make sure the following resources are either not present or commented out:

  Emacs.default.attributeForeground
  Emacs.default.attributeBackground
  Emacs*Foreground
  Emacs*Background

***.

*** CDE: Frames may cover dialogs they created when using CDE.

This can happen if you have "Allow Primary Windows On Top" enabled, which seems to be the default in the Common Desktop Environment.  To change it, go into "Desktop Controls" -> "Window Style Manager" and uncheck "Allow Primary Windows On Top".

*** Xaw3d: When using Xaw3d scroll bars without arrows, the very first mouse click in a scroll bar might be ignored by the scroll bar widget.  This is probably a bug in Xaw3d; when Xaw3d is compiled with arrows, the problem disappears.
*** Xaw: There are known binary incompatibilities between Xaw, Xaw3d, neXtaw, XawM and the few other derivatives of Xaw.  So when you compile with one of these, it may not work to dynamically link with another one.  For example, strange problems, such as Emacs exiting when you type "C-x 1", were reported when Emacs compiled with Xaw3d and libXaw was used with neXtaw at run time.  The solution is to rebuild Emacs with the toolkit version you actually want to use, or set LD_PRELOAD to preload the same toolkit version you built Emacs with.

*** Open Motif: Problems with file dialogs in Emacs built with Open Motif.

When Emacs 21 is built with Open Motif 2.1, it can happen that the graphical file dialog boxes do not work properly.  The "OK", "Filter" and "Cancel" buttons do not respond to mouse clicks.  Dragging the file dialog window usually causes the buttons to work again.

The solution is to use LessTif instead.  LessTif is a free replacement for Motif.  See the file INSTALL for information on how to do this.  Another workaround is not to use the mouse to trigger file prompts, but to use the keyboard.  This way, you will be prompted for a file in the minibuffer instead of a graphical file dialog.

*** LessTif: Problems in Emacs built with LessTif.

The problems seem to depend on the version of LessTif and the Motif emulation for which it is set up.  Only the Motif 1.2 emulation seems to be stable enough in LessTif.  LessTif 0.92-17's Motif 1.2 emulation seems to work okay on FreeBSD.  On GNU/Linux systems, lesstif-0.92.6 configured with "./configure --enable-build-12 --enable-default-12" is reported to be the most successful.  The binary GNU/Linux package lesstif-devel-0.92.0-1.i386.rpm was reported to have problems with menu placement.

On some systems, even with Motif 1.2 emulation, Emacs occasionally locks up, grabbing all mouse and keyboard events.  We still don't know what causes these problems; they are not reproducible by Emacs developers.
*** Motif: The Motif version of Emacs paints the screen a solid color.

This has been observed to result from the following X resource:

  Emacs*default.attributeFont: -*-courier-medium-r-*-*-*-140-*-*-*-*-iso8859-*

That the resource has this effect indicates a bug in something, but we do not yet know what.  If it is an Emacs bug, we hope someone can explain what the bug is so we can fix it.  In the mean time, removing the resource prevents the problem.

** General X problems

*** Redisplay using X11 is much slower than previous Emacs versions.

We've noticed that certain X servers draw the text much slower when scroll bars are on the left.  We don't know why this happens.  If this happens to you, you can work around it by putting the scroll bars on the right (as they were in Emacs 19).  Here's how to do this:

  (set-scroll-bar-mode 'right)

If you're not sure whether (or how much) this problem affects you, try that and see how much difference it makes.  To set things back to normal, do

  (set-scroll-bar-mode 'left)

*** Error messages about undefined colors on X.

The messages might say something like this:

  Unable to load color "grey95"

(typically, in the `*Messages*' buffer), or something like this:

  Error while displaying tooltip: (error Undefined color lightyellow)

These problems could happen if some other X program has used up too many colors of the X palette, leaving Emacs with insufficient system resources to load all the colors it needs.  A solution is to exit the offending X programs before starting Emacs.

"undefined color" messages can also occur if the RgbPath entry in the X configuration file is incorrect, or the rgb.txt file is not where X expects to find it.

*** Improving performance with slow X connections.
There are several ways to improve this performance, any subset of which can be carried out at the same time:

1) If you don't need X Input Methods (XIM) for entering text in some language you use, you can improve performance on WAN links by using the X resource useXIM to turn off use of XIM.  This does not affect the use of Emacs' own input methods, which are part of the Leim package.

2) If the connection is very slow, you might also want to consider switching off scroll bars, menu bar, and tool bar.  Adding the following forms to your .emacs file will accomplish that, but only after the initial frame is displayed:

  (scroll-bar-mode -1)
  (menu-bar-mode -1)
  (tool-bar-mode -1)

For still quicker startup, put these X resources in your .Xdefaults file:

  Emacs.verticalScrollBars: off
  Emacs.menuBar: off
  Emacs.toolBar: off

3) Use ssh to forward the X connection, and enable compression on this forwarded X connection (ssh -XC remotehostname emacs ...).

4) Use lbxproxy on the remote end of the connection.  This is an interface to the low bandwidth X extension in most modern X servers, which improves performance dramatically, at the slight expense of correctness of the X protocol.  lbxproxy achieves the performance gain by grouping several X requests in one TCP packet and sending them off together, instead of requiring a round-trip for each X request in a separate packet.  The switches that seem to work best for Emacs are:

  -noatomsfile -nowinattr -cheaterrors -cheatevents

Note that the -nograbcmap option is known to cause problems.  For more about lbxproxy, see:

5) If copying and killing is slow, try to disable the interaction with the native system's clipboard by adding these lines to your .emacs file:

  (setq interprogram-cut-function nil)
  (setq interprogram-paste-function nil)

*** Emacs gives the error, Couldn't find per display information.

This can result if the X server runs out of memory because Emacs uses a large number of fonts.
On systems where this happens, C-h h is likely to cause it.  We do not know of a way to prevent the problem.

*** Emacs does not notice when you release the mouse.

There are reports that this happened with (some) Microsoft mice and that replacing the mouse made it stop.

*** You can't select from submenus (in the X toolkit version).

On certain systems, mouse-tracking and selection in top-level menus works properly with the X toolkit, but neither of them works when you bring up a submenu (such as Bookmarks or Compare or Apply Patch, in the Files menu).  This works on most systems.

There is speculation that the failure is due to bugs in old versions of X toolkit libraries, but no one really knows.  If someone debugs this and finds the precise cause, perhaps a workaround can be found.

*** An error message such as `X protocol error: BadMatch (invalid parameter attributes) on protocol request 93'.

This comes from having an invalid X resource, such as

  emacs*Cursor: black

(which is invalid because it specifies a color name for something that isn't a color).  The fix is to correct your X resources.

*** Slow startup on X11R6 with X windows.

If Emacs takes two minutes to start up on X11R6, see if your X resources specify any Adobe fonts.  That causes the type-1 font renderer to start up, even if the font you asked for is not a type-1 font.  One way to avoid this problem is to eliminate the type-1 fonts from your font path, like this:

  xset -fp /usr/X11R6/lib/X11/fonts/Type1/

*** Pull-down menus appear in the wrong place, in the toolkit version of Emacs.

An X resource of this form can cause the problem:

  Emacs*geometry: 80x55+0+0

This resource is supposed to apply, and does apply, to the menus individually as well as to Emacs frames.  If that is not what you want, rewrite the resource.

To check thoroughly for such resource specifications, use `xrdb -query' to see what resources the X server records, and also look at the user's ~/.Xdefaults and ~/.Xdefaults-* files.
*** Emacs running under X Windows does not handle mouse clicks.

*** `emacs -geometry 80x20' finds a file named `80x20'.

One cause of such problems is having (setq term-file-prefix nil) in your .emacs file.  Another cause is a bad value of EMACSLOADPATH in the environment.

*** Emacs fails to get default settings from X Windows server.

The X library in X11R4 has a bug; it interchanges the 2nd and 3rd arguments to XGetDefaults.  Define the macro XBACKWARDS in config.h to tell Emacs to compensate for this.  I don't believe there is any way Emacs can determine for itself whether this problem is present on a given system.

*** X Windows doesn't work if DISPLAY uses a hostname.

People have reported kernel bugs in certain systems that cause Emacs not to work with X Windows if DISPLAY is set using a host name.  But the problem does not occur if DISPLAY is set to `unix:0.0'.  I think the bug has to do with SIGIO or FIONREAD.

You may be able to compensate for the bug by doing (set-input-mode nil nil).  However, that has the disadvantage of turning off interrupts, so that you are unable to quit out of a Lisp program by typing C-g.  The easy way to do this is to put

  (setq x-sigio-bug t)

in your site-init.el file.

* Runtime problems on character terminals

** Emacs spontaneously displays "I-search: " at the bottom of the screen.

This means that Control-S/Control-Q (XON/XOFF) "flow control" is being used.  C-s/C-q flow control is bad for Emacs editors because it takes away C-s and C-q as user commands.  Since editors do not output long streams of text without user commands, there is no need for a user-issuable "stop output" command in an editor; therefore, a properly designed flow control mechanism would transmit all possible input characters without interference.  Designing such a mechanism is easy, for a person with at least half a brain.
There are three possible reasons why flow control could be taking place:

  1) Terminal has not been told to disable flow control
  2) Insufficient padding for the terminal in use
  3) Some sort of terminal concentrator or line switch is responsible

First of all, many terminals have a set-up mode which controls whether they generate XON/XOFF flow control characters.  This must be set to "no XON/XOFF" in order for Emacs to work.  Sometimes there is an escape sequence that the computer can send to turn flow control off and on.  If so, perhaps the termcap `ti' string should turn flow control off, and the `te' string should turn it on.

Once the terminal has been told "no flow control", you may find it needs more padding.  The amount of padding Emacs sends is controlled by the termcap entry for the terminal in use, and by the output baud rate as known by the kernel.  The shell command `stty' will print your output baud rate; `stty' with suitable arguments will set it if it is wrong.  Setting to a higher speed causes increased padding.  If the results are wrong for the correct speed, there is probably a problem in the termcap entry.  You must speak to a local Unix wizard to fix this.  Perhaps you are just using the wrong terminal type.

For terminals that lack a "no flow control" mode, sometimes just giving lots of padding will prevent actual generation of flow control codes.  You might as well try it.

If you are really unlucky, your terminal is connected to the computer through a concentrator which sends XON/XOFF flow control to the computer, or it insists on sending flow control itself no matter how much padding you give it.  Unless you can figure out how to turn flow control off on this concentrator (again, refer to your local wizard), you are screwed!  You should have the terminal or concentrator replaced with a properly designed one.  In the mean time, some drastic measures can make Emacs semi-work.

You can make Emacs ignore C-s and C-q and let the operating system handle them.
To do this on a per-session basis, just type M-x enable-flow-control RET.  You will see a message that C-\ and C-^ are now translated to C-s and C-q.  (Use the same command M-x enable-flow-control to turn *off* this special mode.  It toggles flow control handling.)

If C-\ and C-^ are inconvenient for you (for example, if one of them is the escape character of your terminal concentrator), you can choose other characters by setting the variables flow-control-c-s-replacement and flow-control-c-q-replacement.  But choose carefully, since all other control characters are already used by Emacs.

IMPORTANT: if you type C-s by accident while flow control is enabled, Emacs output will freeze, and you will have to remember to type C-q in order to continue.

If you work in an environment where a majority of terminals of a certain type are flow control hobbled, you can use the function `enable-flow-control-on' to turn on this flow control avoidance scheme automatically.  Here is an example:

  (enable-flow-control-on "vt200" "vt300" "vt101" "vt131")

If this isn't quite correct (e.g. you have a mixture of flow-control hobbled and good vt200 terminals), you can still run enable-flow-control manually.

I have no intention of ever redesigning the Emacs command set for the assumption that terminals use C-s/C-q flow control.  XON/XOFF flow control technique is a bad design, and terminals that need it are bad merchandise and should not be purchased.  Now that X is becoming widespread, XON/XOFF seems to be on the way out.  If you can get some use out of GNU Emacs on inferior terminals, more power to you, but I will not make Emacs worse for properly designed systems for the sake of inferior systems.

** Control-S and Control-Q commands are ignored completely.

For some reason, your system is using brain-damaged C-s/C-q flow control despite Emacs's attempts to turn it off.  Perhaps your terminal is connected to the computer through a concentrator that wants to use flow control.
You should first try to tell the concentrator not to use flow control. If you succeed in this, try making the terminal work without flow control, as described in the preceding section.

If that line of approach is not successful, map some other characters into C-s and C-q using keyboard-translate-table. The example above shows how to do this with C-^ and C-\.

** Screen is updated wrong, but only on one kind of terminal.

This could mean that the termcap entry you are using for that terminal is wrong, or it could mean that Emacs has a bug handling the combination of features specified for that terminal.

The first step in tracking this down is to record what characters Emacs is sending to the terminal. Execute the Lisp expression

  (open-termscript "./emacs-script")

to make Emacs write all terminal output into the file ~/emacs-script as well; then do what makes the screen update wrong, and look at the file and decode the characters using the manual for the terminal. There are several possibilities:

1) The characters sent are correct, according to the terminal manual. In this case, there is no obvious bug in Emacs, and most likely you need more padding, or possibly the terminal manual is wrong.

2) The characters sent are incorrect, due to an obscure aspect of the terminal behavior not described in an obvious way by termcap. This case is hard. It will be necessary to think of a way for Emacs to distinguish between terminals with this kind of behavior and other terminals that behave subtly differently but are classified the same by termcap; or else find an algorithm for Emacs to use that avoids the difference. Such changes must be tested on many kinds of terminals.

3) The termcap entry is wrong. See the file etc/TERMS for information on changes that are known to be needed in commonly used termcap entries for certain terminals.

4) The characters sent are incorrect, and clearly cannot be right for any terminal with the termcap entry you were using.
This is unambiguously an Emacs bug, and can probably be fixed in termcap.c, tparam.c, term.c, scroll.c, cm.c or dispnew.c.

** Control-S and Control-Q commands are ignored completely on a net connection.

Some versions of rlogin (and possibly telnet) do not pass flow control characters to the remote system to which they connect. On such systems, Emacs on the remote system cannot disable flow control on the local system. One way to cure this is to disable flow control on the local host (the one running rlogin, not rlogind) using the stty command, before starting the rlogin process.

If none of these methods work, the best solution is to type M-x enable-flow-control at the beginning of your Emacs session, or if you expect the problem to continue, add a line such as the following to your .emacs (on the host running rlogind):

  (enable-flow-control-on "vt200" "vt300" "vt101" "vt131")

See the entry about spontaneous display of I-search (above) for more info.

** Output from Control-V is slow.

On many bit-map terminals, scrolling operations are fairly slow. Often the termcap entry for the type of terminal in use fails to inform Emacs of this. The two lines at the bottom of the screen before a Control-V command are supposed to appear at the top after the Control-V command. If Emacs thinks scrolling the lines is fast, it will scroll them to the top of the screen.

If scrolling is slow but Emacs thinks it is fast, the usual reason is that the termcap entry for the terminal you are using does not specify any padding time for the `al' and `dl' strings. Emacs concludes that these operations take only as much time as it takes to send the commands at whatever line speed you are using. You must fix the termcap entry to specify, for the `al' and `dl', as much time as the operations really take.

Currently Emacs thinks in terms of serial lines which send characters at a fixed rate, so that any operation which takes time for the terminal to execute must also be padded. With bit-map terminals operated across networks, often the network provides some sort of flow control so that padding is never needed no matter how slow an operation is.
You must still specify a padding time if you want Emacs to realize that the operation takes a long time. This will cause padding characters to be sent unnecessarily, but they do not really cost much. They will be transmitted while the scrolling is happening and then discarded quickly by the terminal.

Most bit-map terminals provide commands for inserting or deleting multiple lines at once. Define the `AL' and `DL' strings in the termcap entry to say how to do these things, and you will have fast output without wasted padding characters. These strings should each contain a single %-spec saying how to send the number of lines to be scrolled. These %-specs are like those in the termcap `cm' string.

You should also define the `IC' and `DC' strings if your terminal has a command to insert or delete multiple characters. These take the number of positions to insert or delete as an argument.

A `cs' string to set the scrolling region will reduce the amount of motion you see on the screen when part of the screen is scrolled.

** You type Control-H (Backspace) expecting to delete characters.

Put `stty dec' in your .login file and your problems will disappear after a day or two.

The choice of Backspace for erasure was based on confusion, caused by the fact that backspacing causes erasure (later, when you type another character) on most display terminals. But it is a mistake. Deletion of text is not the same thing as backspacing followed by failure to overprint. I do not wish to propagate this confusion by conforming to it.

For this reason, I believe `stty dec' is the right mode to use, and I have designed Emacs to go with that. If there were a thousand other control characters, I would define Control-h to delete as well; but there are not very many other control characters, and I think that providing the most mnemonic possible Help character is more important than adapting to people who don't use `stty dec'.
If you are obstinate about confusing buggy overprinting with deletion, you can redefine Backspace in your .emacs file:

  (global-set-key "\b" 'delete-backward-char)

You can probably access help-command via f1.

** Colors are not available on a tty or in xterm.

Emacs 21 supports colors on character terminals and terminal emulators, but this support relies on the terminfo or termcap database entry to specify that the display supports color. Emacs looks at the "Co" capability for the terminal to find out how many colors are supported; it should be non-zero to activate the color support within Emacs. (Most color terminals support 8 or 16 colors.) If your system uses terminfo, the name of the capability equivalent to "Co" is "colors".

In addition to the "Co" capability, Emacs needs the "op" (for ``original pair'') capability, which tells how to switch the terminal back to the default foreground and background colors. Emacs will not use colors if this capability is not defined. If your terminal entry doesn't provide such a capability, try using the ANSI standard escape sequence \E[00m (that is, define a new termcap/terminfo entry and make it use your current terminal's entry plus \E[00m for the "op" capability).

Finally, the "NC" capability (terminfo name: "ncv") tells Emacs which attributes cannot be used with colors. Setting this capability incorrectly might have the effect of disabling colors; try setting this capability to `0' (zero) and see if that helps.

Emacs uses the database entry for the terminal whose name is the value of the environment variable TERM. With `xterm', a common terminal entry that supports color is `xterm-color', so setting TERM's value to `xterm-color' might activate the color support on an xterm-compatible emulator.

Beginning with version 22.1, Emacs supports the --color command-line option which may be used to force Emacs to use one of a few popular modes for getting colors on a tty.
For example, --color=ansi8 sets up for using the ANSI-standard escape sequences that support 8 colors.

Some modes do not use colors unless you turn on the Font-lock mode. Some people have long ago set their `~/.emacs' files to turn on Font-lock on X only, so they won't see colors on a tty. The recommended way of turning on Font-lock is by typing "M-x global-font-lock-mode RET" or by customizing the variable `global-font-lock-mode'.

* Runtime problems specific to individual Unix variants

** GNU/Linux

*** GNU/Linux: Process output is corrupted.

There is a bug in Linux kernel 2.6.10 PTYs that can cause Emacs to read corrupted process output.

*** GNU/Linux: Remote access to CVS with SSH causes file corruption.

If you access a remote CVS repository via SSH, files may be corrupted due to bad interaction between CVS, SSH, and libc. To fix the problem, save the following script into a file, make it executable, and set the CVS_RSH environment variable to the file name of the script:

  #!/bin/bash
  exec 2> >(exec cat >&2 2>/dev/null)
  exec ssh "$@"

*** GNU/Linux: On Linux-based GNU systems using libc versions 5.4.19 through 5.4.22, Emacs crashes at startup with a segmentation fault.

This problem happens if libc defines the symbol __malloc_initialized. One known solution is to upgrade to a newer libc version. 5.4.33 is known to work.

*** GNU/Linux: After upgrading to a newer version of Emacs, the Meta key stops working.

This was reported to happen on a GNU/Linux system distributed by Mandrake. The reason is that the previous version of Emacs was modified by Mandrake to make the Alt key act as the Meta key, on a keyboard where the Windows key is the one which produces the Meta modifier. A user who started using a newer version of Emacs, which was not hacked by Mandrake, expected the Alt key to continue to act as Meta, and was astonished when that didn't happen.

The solution is to find out what key on your keyboard produces the Meta modifier, and use that key instead.
Try all of the keys to the left and to the right of the space bar, together with the `x' key, and see which combination produces "M-x" in the echo area. You can also use the `xmodmap' utility to show all the keys which produce a Meta modifier:

  xmodmap -pk | egrep -i "meta|alt"

A more convenient way of finding out which keys produce a Meta modifier is to use the `xkbprint' utility, if it's available on your system:

  xkbprint 0:0 /tmp/k.ps

This produces a PostScript file `/tmp/k.ps' with a picture of your keyboard; printing that file on a PostScript printer will show what keys can serve as Meta. The `xkeycaps' utility also shows a visual representation of the current keyboard settings, and allows you to modify them.

*** GNU/Linux: slow startup on Linux-based GNU systems.

People using systems based on the Linux kernel sometimes report that startup takes 10 to 15 seconds longer than `usual'. This is because Emacs looks up the host name when it starts. Normally, this takes negligible time; the extra delay is due to improper system configuration. This problem can occur for both networked and non-networked machines.

Here is how to fix the configuration. It requires being root.

**** Networked Case.

First, make sure the files `/etc/hosts' and `/etc/host.conf' both exist. The first line in the `/etc/hosts' file should look like this (replace HOSTNAME with your host name):

  127.0.0.1 HOSTNAME

Also make sure that the `/etc/host.conf' file contains the following lines:

  order hosts, bind
  multi on

Any changes, permanent and temporary, to the host name should be indicated in the `/etc/hosts' file, since it acts as a limited local database of addresses and names (e.g., some SLIP connections dynamically allocate IP addresses).

**** Non-Networked Case.

The solution described in the networked case applies here as well. However, if you never intend to network your machine, you can use a simpler solution: create an empty `/etc/host.conf' file.
The command `touch /etc/host.conf' suffices to create the file. The `/etc/hosts' file is not necessary with this approach.

*** GNU/Linux: The cursor on the Linux console does not look the way you want.

If you want a blinking underscore as your Emacs cursor, change the "cvvis" capability to send the "\E[?25h\E[?0c" command.

*** GNU/Linux: Error messages `internal facep []' happen on GNU/Linux systems.

There is a report that replacing libc.so.5.0.9 with libc.so.5.2.16 caused this to start happening. People are not sure why, but the problem seems unlikely to be in Emacs itself. Some suspect that it is actually Xlib which won't work with libc.so.5.2.16. Using the old library version is a workaround.

** Mac OS X

*** Mac OS X (Carbon): Environment variables from dotfiles are ignored.

When Emacs is started from the Dock or the Finder rather than from a shell, your shell dotfiles are not read, so environment variables set there are not seen by Emacs.

*** Mac OS X (Carbon): Process output truncated when using ptys.

There appears to be a problem with the implementation of ptys on Mac OS X that causes process output to be truncated. To avoid this, leave process-connection-type set to its default value of nil.

*** Mac OS X 10.3.9 (Carbon): QuickTime 7.0.4 updater breaks build.

On the above environment, the build fails at the link stage with a message like "Undefined symbols: _HICopyAccessibilityActionDescription referenced from QuickTime expected to be defined in Carbon". A workaround is to use the QuickTime 7.0.1 reinstaller.

** FreeBSD

*** FreeBSD 2.1.5: useless symbolic links remain in /tmp or other directories that have the +t bit.

This is because of a kernel bug in FreeBSD 2.1.5 (fixed in 2.2). Emacs uses symbolic links to implement file locks. In a directory with the +t bit, the directory owner becomes the owner of the symbolic link, so that it cannot be removed by anyone else. If you don't like those useless links, you can stop Emacs from using file locks by adding

  #undef CLASH_DETECTION

to config.h.

*** FreeBSD: Getting a Meta key on the console.

By default, neither Alt nor any other key acts as a Meta key on FreeBSD, but this can be changed using kbdcontrol(1).
Dump the current keymap to a file with the command

  $ kbdcontrol -d >emacs.kbd

Edit emacs.kbd, and give the key you want to be the Meta key the definition `meta'. For instance, if your keyboard has a ``Windows'' key with scan code 105, change the line for scan code 105 in emacs.kbd to look like this:

  105   meta   meta   meta   meta   meta   meta   meta   meta    O

to make the Windows key the Meta key. Load the new keymap with

  $ kbdcontrol -l emacs.kbd

** HP-UX

*** HP/UX: Shell mode gives the message, "`tty`: Ambiguous".

christos@theory.tn.cornell.edu says: The problem is that in your .cshrc you have something that tries to execute `tty`. If you are not running the shell on a real tty then tty will print "not a tty". Csh expects one word in some places, but tty is giving it back 3. The solution is to add a pair of quotes around `tty` to make it a single word:

  if (`tty` == "/dev/console")

should be changed to:

  if ("`tty`" == "/dev/console")

Even better, move things that set up terminal sections out of .cshrc and into .login.

*** HP/UX: `Pid xxx killed due to text modification or page I/O error'.

On HP/UX, you can get that error when the Emacs executable is on an NFS file system. HP/UX responds this way if it tries to swap in a page and does not get a response from the server within a timeout whose default value is just ten seconds. If this happens to you, extend the timeout period.

*** HP/UX: The right Alt key works wrong on German HP keyboards (and perhaps other non-English HP keyboards too).

This is because HP-UX defines the modifiers wrong in X. Here is a shell script to fix the problem; be sure that it is run after VUE configures the X server:

  xmodmap 2> /dev/null - << EOF
  keysym Alt_L = Meta_L
  keysym Alt_R = Meta_R
  EOF

*** HP/UX: "Cannot find callback list" messages from dialog boxes in Emacs built with Motif.

This problem resulted from a bug in GCC 2.4.5. Newer GCC versions such as 2.7.0 fix the problem.

*** HP/UX: Emacs does not recognize the AltGr key.
To fix this, set up a file ~/.dt/sessions/sessionetc with executable rights, containing this text:

  --------------------------------
  xmodmap - << EOF
  clear mod1
  keysym Mode_switch = NoSymbol
  add mod1 = Meta_L
  keysym Meta_R = Mode_switch
  add mod2 = Mode_switch
  EOF
  --------------------------------

*** HP/UX 11.0: Emacs makes HP/UX 11.0 crash.

This is a bug in HP-UX; HP-UX patch PHKL_16260 is said to fix it.

** AIX

*** AIX: Trouble using ptys.

People often install the pty devices on AIX incorrectly. Use `smit pty' to reinstall them properly.

*** AIXterm: Your Delete key sends a Backspace to the terminal.

The solution is to include in your .Xdefaults the lines:

  *aixterm.Translations: #override <Key>BackSpace: string(0x7f)
  aixterm*ttyModes: erase ^?

This makes your Backspace key send DEL (ASCII 127).

*** AIX: If linking fails because libXbsd isn't found, check if you are compiling with the system's `cc' and CFLAGS containing `-O5'. If so, you have hit a compiler bug. Please make sure to re-configure Emacs so that it isn't compiled with `-O5'.

*** AIX 4.3.x or 4.4: Compiling fails.

This could happen if you use /bin/c89 as your compiler, instead of the default `cc'. /bin/c89 treats certain warnings, such as benign redefinitions of macros, as errors, and fails the build. A solution is to use the default compiler `cc'.

*** AIX 4: Some programs fail when run in a Shell buffer with an error message like

  No terminfo entry for "unknown"

On AIX, many terminal type definitions are not installed by default. `unknown' is one of them. Install the "Special Generic Terminal Definitions" to make them defined.

** Solaris

We list bugs in current versions here. Solaris 2.x and 4.x are covered in the section on legacy systems.

*** On Solaris, C-x doesn't get through to Emacs when you use the console.

This is a Solaris feature (at least on Intel x86 CPUs). Type C-r C-r C-t to toggle whether C-x gets through to Emacs.

*** Problem with remote X server on Suns.

On a Sun, running Emacs on one machine with the X server on another may not work if you have used the unshared system libraries. This is because the unshared libraries fail to use YP for host name lookup.
As a result, the host name you specify may not be recognized.

*** Solaris 2.6: Emacs crashes with SIGBUS or SIGSEGV on Solaris after you delete a frame.

We suspect that this is a bug in the X libraries provided by Sun. There is a report that one of these patches fixes the bug and makes the problem stop:

  105216-01 105393-01 105518-01 105621-01 105665-01 105615-02 105216-02
  105667-01 105401-08 105615-03 105621-02 105686-02 105736-01 105755-03
  106033-01 105379-01 105786-01 105181-04 105379-03 105786-04 105845-01
  105284-05 105669-02 105837-01 105837-02 105558-01 106125-02 105407-01

Another person using a newer system (kernel patch level Generic_105181-06) suspects that the bug was fixed by one of these more recent patches:

  106040-07 SunOS 5.6: X Input & Output Method patch
  106222-01 OpenWindows 3.6: filemgr (ff.core) fixes
  105284-12 Motif 1.2.7: sparc Runtime library patch

*** Solaris 7 or 8: Emacs reports a BadAtom error (from X).

This happens when Emacs was built on some other version of Solaris. Rebuild it on Solaris 8.

*** When using M-x dbx with the SparcWorks debugger, the `up' and `down' commands do not move the arrow in Emacs.

You can fix this by adding the following line to `~/.dbxinit':

  dbxenv output_short_file_name off

*** On Solaris, CTRL-t is ignored by Emacs when you use the fr.ISO-8859-15 locale (and maybe other related locales).

You can fix this by editing the file /usr/openwin/lib/locale/iso8859-15/Compose. Near the bottom there is a line that reads:

  Ctrl<t> <quotedbl> <Y> : "\276"  threequarters

that should read:

  Ctrl<T> <quotedbl> <Y> : "\276"  threequarters

Note the lower case <t>. Changing this line should make C-t work.

** Irix

*** Irix 6.5: Emacs crashes on the SGI R10K, when compiled with GCC.

This seems to be fixed in GCC 2.95.

*** Irix: Trouble using ptys, or running out of ptys.

The program mkpts (which may be in `/usr/adm' or `/usr/sbin') needs to be set-UID to root, or non-root programs like Emacs will not be able to allocate ptys reliably.
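The mkpts fix above can be sketched as a short root script. This is only an illustration: the default path below is a guess, so check both `/usr/adm' and `/usr/sbin' on your system before running it.

```shell
#!/bin/sh
# Sketch: give mkpts the set-UID root bit so that non-root programs
# (such as Emacs) can allocate ptys reliably.  Run as root.
MKPTS=${MKPTS:-/usr/sbin/mkpts}    # assumed location; adjust as needed
if [ -f "$MKPTS" ]; then
  chown root "$MKPTS" && chmod 4755 "$MKPTS"   # leading 4 = set-UID bit
  ls -l "$MKPTS"
else
  echo "mkpts not found at $MKPTS"
fi
```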
* Runtime problems specific to MS-Windows

** Windows 95 and networking.

To support server sockets, Emacs 22.1 loads ws2_32.dll. If this file is missing, all Emacs networking features are disabled. Old versions of Windows 95 may not have the required DLL. To use Emacs's networking features on Windows 95, you must install the "Windows Socket 2" update available from Microsoft's support Web site.

** Emacs exits with "X protocol error" when run with an X server for MS-Windows.

A certain X server for Windows had a bug which caused this. Supposedly the newer 32-bit version of this server doesn't have the problem.

** Known problems with the MS-Windows port of Emacs 22.1.

Using create-fontset-from-ascii-font or the --font startup parameter with a Chinese, Japanese or Korean font leads to display problems. Use a Latin-only font as your default font. If you want control over which font is used to display Chinese, Japanese or Korean characters, use create-fontset-from-fontset-spec to define a fontset.

Frames are not refreshed while the File or Font dialog or a pop-up menu is displayed. This also means help text for pop-up menus is not displayed at all. This is because message handling under Windows is synchronous, so we cannot handle repaint (or any other) messages while waiting for a system function to return the result of the dialog or pop-up menu interaction.

Windows 95 and Windows NT up to version 4.0 do not support help text for menus. Help text is only available in later versions of Windows.

There are problems with display if mouse-tracking is enabled and the mouse is moved off a frame, over another frame then back over the first frame. A workaround is to click the left mouse button inside the frame after moving back into it.

Some minor flickering still persists during mouse-tracking, although not as severely as in 21.1.

An inactive cursor remains in an active window after the Window Manager driven switch of the focus, until a key is pressed.
Windows input methods are not recognized by Emacs. However, some of these input methods cause the keyboard to send characters encoded in the appropriate coding system (e.g., ISO 8859-1 for Latin-1 characters, ISO 8859-8 for Hebrew characters, etc.). To make these input methods work with Emacs, set the keyboard coding system to the appropriate value after you activate the Windows input method. For example, if you activate the Hebrew input method, type this:

  C-x RET k hebrew-iso-8bit RET

(Emacs ought to recognize the Windows language-change event and set up the appropriate keyboard encoding automatically, but it doesn't do that yet.) In addition, to use these Windows input methods, you should set your "Language for non-Unicode programs" (on Windows XP, this is on the Advanced tab of Regional Settings) to the language of the input method.

To bind keys that produce non-ASCII characters with modifiers, you must specify raw byte codes. For instance, if you want to bind META-a-grave to a command, you need to specify this in your `~/.emacs':

  (global-set-key [?\M-\340] ...)

The above example is for the Latin-1 environment where the byte code of the encoded a-grave is 340 octal. For other environments, use the encoding appropriate to that environment.

The %b specifier for format-time-string does not produce abbreviated month names with consistent widths for some locales on some versions of Windows.

** Typing Alt and Shift together has strange effects on MS-Windows.

This combination of keys is a command to change keyboard layout. If you proceed to type another non-modifier key before you let go of Alt and Shift, the Alt and Shift act as modifiers in the usual way. A more permanent workaround is to change it to another key combination, or disable it in the keyboard control panel.
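The two settings above can be combined in `~/.emacs'. This is only a sketch: the coding system is the Hebrew example from above, and the command name bound to the key is made up for illustration.

  ;; What `C-x RET k hebrew-iso-8bit RET' does, in Lisp form:
  (set-keyboard-coding-system 'hebrew-iso-8bit)
  ;; Bind Meta + a-grave (byte code 340 octal in a Latin-1 environment)
  ;; to a hypothetical command:
  (global-set-key [?\M-\340] 'my-insert-a-grave)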
** Cygwin build of Emacs hangs after rebasing Cygwin DLLs.

Usually, on Cygwin, one needs to rebase the DLLs if an application aborts with a message like this:

  C:\cygwin\bin\python.exe: *** unable to remap C:\cygwin\bin\cygssl.dll
  to same address as parent(0xDF0000) != 0xE00000

However, since Cygwin DLL 1.5.17 was released, after such rebasing, Emacs hangs. This was reported to happen for Emacs 21.2 and also for the pretest of Emacs 22.1 on Cygwin.

** Interrupting the Cygwin port of Bash from Emacs doesn't work.

Cygwin 1.x builds of the ported Bash cannot be interrupted from the MS-Windows version of Emacs. This is due to some change in the Bash port or in the Cygwin library which apparently makes Bash ignore the keyboard interrupt event sent by Emacs to Bash. (Older Cygwin ports of Bash, up to b20.1, did receive SIGINT from Emacs.)

** Accessing remote files with ange-ftp hangs the MS-Windows version of Emacs.

If the FTP client is the Cygwin port of GNU `ftp', this appears to be due to some bug in the Cygwin DLL or some incompatibility between it and the implementation of asynchronous subprocesses in the Windows port of Emacs. Specifically, some parts of the FTP server responses are not flushed out, apparently due to buffering issues, which confuses ange-ftp.

The solution is to downgrade to an older version of the Cygwin DLL (version 1.3.2 was reported to solve the problem), or use the stock Windows FTP client, usually found in the `C:\WINDOWS' or `C:\WINNT' directory. To force ange-ftp to use the stock Windows client, set the variable `ange-ftp-ftp-program-name' to the absolute file name of the client's executable. For example:

  (setq ange-ftp-ftp-program-name "c:/windows/")

If you want to stick with the Cygwin FTP client, you can work around this problem by putting this in your `.emacs' file:

  (setq ange-ftp-ftp-program-args '("-i" "-n" "-g" "-v" "--prompt" ""))

** lpr commands don't work on MS-Windows with some cheap printers.
This problem may also strike other platforms, but the solution is likely to be a global one, and not Emacs specific. Many cheap inkjet, and even some cheap laser printers, do not print plain text anymore; they will only print through graphical printer drivers. A workaround on MS-Windows is to use Windows' basic built-in editor to print (this is possibly the only useful purpose it has):

  (setq printer-name "")         ;; notepad takes the default
  (setq lpr-command "notepad")   ;; notepad
  (setq lpr-switches nil)        ;; not needed
  (setq lpr-printer-switch "/P") ;; run notepad as batch printer

** Antivirus software interacts badly with the MS-Windows version of Emacs.

The usual manifestation of these problems is that subprocesses don't work or even wedge the entire system. In particular, "M-x shell RET" was reported to fail to work. But other commands also sometimes don't work when an antivirus package is installed. The solution is to switch the antivirus software to a less aggressive mode (e.g., disable the ``auto-protect'' feature), or even uninstall or disable it entirely.

** Pressing the mouse button on MS-Windows does not give a mouse-2 event.

This is usually a problem with the mouse driver. Because most Windows programs do not do anything useful with the middle mouse button, many mouse drivers allow you to define the wheel press to do something different. Some drivers do not even have the option to generate a middle button press. In such cases, setting the wheel press to "scroll" sometimes works if you press the button twice. Trying a generic mouse driver might help.

** Scrolling the mouse wheel on MS-Windows always scrolls the top window.

This is another common problem with mouse drivers. Instead of generating scroll events, some mouse drivers try to fake scroll bar movement. But they are not intelligent enough to handle multiple scroll bars within a frame. Trying a generic mouse driver might help.
** Mail sent through Microsoft Exchange in some encodings appears to be mangled and is not seen correctly in Rmail or Gnus.

We don't know exactly what happens, but it isn't an Emacs problem in cases we've seen.

** On MS-Windows, you cannot use the right-hand ALT key and the left-hand CTRL key together to type a Control-Meta character.

This is a consequence of a misfeature beyond Emacs's control. Under Windows, the AltGr key on international keyboards generates key events with the modifiers Right-Alt and Left-Ctrl. Since Emacs cannot distinguish AltGr from an explicit Right-Alt and Left-Ctrl combination, whenever it sees Right-Alt and Left-Ctrl it assumes that AltGr has been pressed. The variable `w32-recognize-altgr' can be set to nil to tell Emacs that AltGr is really Ctrl and Alt.

** Under some X-servers running on MS-Windows, Emacs's display is incorrect.

The symptoms are that Emacs does not completely erase blank areas of the screen during scrolling or some other screen operations (e.g., selective display or when killing a region). M-x recenter will cause the screen to be completely redisplayed and the "extra" characters will disappear.

This is known to occur under Exceed 6, and possibly earlier versions as well; it is reportedly solved in version 6.2.0.16 and later. The problem lies in the X-server settings.

There are reports that you can solve the problem with Exceed by running `Xconfig' from within NT, choosing "X selection", then un-checking the boxes "auto-copy X selection" and "auto-paste to X selection".

If this does not work, please inform bug-gnu-emacs@gnu.org. Then please call support for your X-server and see if you can get a fix. If you do, please send it to bug-gnu-emacs@gnu.org so we can list it here.

* Build-time problems

** Configuration

*** The `configure' script doesn't find the jpeg library.
There are reports that this happens on some systems because the linker by default only looks for shared libraries, but the jpeg distribution by default only installs a nonshared version of the library, `libjpeg.a'. If this is the problem, you can configure the jpeg library with the `--enable-shared' option and then rebuild libjpeg. This produces a shared version of libjpeg, which you need to install. Finally, rerun the Emacs configure script, which should now find the jpeg library. Alternatively, modify the generated src/Makefile to link the .a file explicitly, and edit src/config.h to define HAVE_JPEG.

*** `configure' warns ``accepted by the compiler, rejected by the preprocessor''.

This indicates a mismatch between the C compiler and preprocessor that configure is using. For example, on Solaris 10 trying to use CC=/opt/SUNWspro/bin/cc (the Sun Studio compiler) together with CPP=/usr/ccs/lib/cpp can result in errors of this form (you may also see the error ``"/usr/include/sys/isa_defs.h", line 500: undefined control''). The solution is to tell configure to use the correct C preprocessor for your C compiler.

*** Building Emacs segfaults when the Linux kernel's Exec-shield is enabled.

You can check the Exec-shield state like this:

  cat /proc/sys/kernel/exec-shield

When Exec-shield is enabled, building Emacs will segfault during the execution of this command:

  ./temacs --batch --load loadup [dump|bootstrap]

To work around this problem, it is necessary to temporarily disable Exec-shield while building Emacs, or, on x86, to use the `setarch' command when running temacs like this:

  setarch i386 ./temacs --batch --load loadup [dump|bootstrap]

Dumping may fail even if you turn off Exec-shield. In this case, use the -R option to the setarch command:

  setarch i386 -R ./temacs --batch --load loadup [dump|bootstrap]

or

  setarch i386 -R make bootstrap

*** Fatal signal in the command temacs -l loadup inc dump.

This command is the final stage of building Emacs. It is run by the Makefile in the src subdirectory, or by build.com on VMS. It has been known to get fatal errors due to insufficient swapping space available on the machine.
On 68000s, it has also happened because of bugs in the subroutine `alloca'. Verify that `alloca' works right, even for large blocks (many pages).

*** test-distrib says that the distribution has been clobbered.
*** or, temacs prints "Command key out of range 0-127".
*** or, temacs runs and dumps emacs, but emacs totally fails to work.
*** or, temacs gets errors dumping emacs.

This can be because the .elc files have been garbled. Do not be fooled by the fact that most of a .elc file is text: these are binary files and can contain all 256 byte values.

In particular `shar' cannot be used for transmitting GNU Emacs. It typically truncates "lines". What appear to be "lines" in a binary file can of course be of any length. Even once `shar' itself is made to work correctly, `sh' discards null characters when unpacking the shell archive.

I have also seen character \177 changed into \377. I do not know what transfer means caused this problem. Various network file transfer programs are suspected of clobbering the high bit.

If you have a copy of Emacs that has been damaged in its nonprinting characters, you can fix them:

  1) Record the names of all the .elc files.
  2) Delete all the .elc files.
  3) Recompile alloc.c with a value of PURESIZE twice as large.
     (See puresize.h.)  You might as well save the old alloc.o.
  4) Remake emacs.  It should work now.
  5) Running emacs, do Meta-x byte-compile-file repeatedly to recreate
     all the .elc files that used to exist.  You may need to increase
     the value of the variable max-lisp-eval-depth to succeed in
     running the compiler interpreted on certain .el files.  400 was
     sufficient as of last report.
  6) Reinstall the old alloc.o (undoing changes to alloc.c if any) and
     remake temacs.
  7) Remake emacs.  It should work now, with valid .elc files.

*** temacs prints "Pure Lisp storage exhausted".

This means that the Lisp code loaded from the .elc and .el files during temacs -l loadup inc dump took up more space than was allocated.
This could be caused by 1) adding code to the preloaded Lisp files 2) adding more preloaded files in loadup.el 3) having a site-init.el or site-load.el which loads files. Note that ANY site-init.el or site-load.el is nonstandard; if you have received Emacs from some other site and it contains a site-init.el or site-load.el file, consider deleting that file. 4) getting the wrong .el or .elc files (not from the directory you expected). 5) deleting some .elc files that are supposed to exist. This would cause the source files (.el files) to be loaded instead. They take up more room, so you lose. 6) a bug in the Emacs distribution which underestimates the space required. If the need for more space is legitimate, change the definition of PURESIZE in puresize.h. But in some of the cases listed above, this problem is a consequence of something else that is wrong. Be sure to check and fix the real problem. *** Linux: Emacs crashes when dumping itself on Mac PPC running Yellow Dog GNU/Linux. The crashes happen inside the function Fmake_symbol; here's a typical C backtrace printed by GDB: 0x190c0c0 in Fmake_symbol () (gdb) where #0 0x190c0c0 in Fmake_symbol () #1 0x1942ca4 in init_obarray () #2 0x18b3500 in main () #3 0x114371c in __libc_start_main (argc=5, argv=0x7ffff5b4, envp=0x7ffff5cc, This could happen because GCC version 2.95 and later changed the base of the load address to 0x10000000. Emacs needs to be told about this, but we currently cannot do that automatically, because that breaks other versions of GNU/Linux on the MacPPC. Until we find a way to distinguish between the Yellow Dog and the other varieties of GNU/Linux systems on the PPC, you will have to manually uncomment the following section near the end of the file src/m/macppc.h in the Emacs distribution: */ Remove the "#if 0" and "#endif" directives which surround this, save the file, and then reconfigure and rebuild Emacs. The dumping process should now succeed. 
*** OpenBSD 4.0 macppc: Segfault during dumping.

The build aborts with signal 11 when the command `./temacs --batch --load loadup bootstrap' tries to load files.el. A workaround seems to be to reduce the level of compiler optimization used during the build (from -O2 to -O1). It is possible this is an OpenBSD GCC problem specific to the macppc architecture, possibly only occurring with older versions of GCC (e.g. 3.3.5).

** Installation

*** Installing Emacs gets an error running `install-info'.

You need to install a recent version of Texinfo; that package supplies the `install-info' command.

*** Installing to a directory with spaces in the name fails.

For example, this happens if you call configure with a directory-related option whose value contains spaces, e.g. --enable-locallisppath='/path/with\ spaces'. Using directory paths with spaces is not supported at this time: you must re-configure without using spaces.

*** Installing to a directory with non-ASCII characters in the name fails.

Installation may fail, or the Emacs executable may not start correctly, if a directory name containing non-ASCII characters is used as a `configure' argument (e.g. `--prefix'). The problem can also occur if a non-ASCII directory is specified in the EMACSLOADPATH envvar.

*** Emacs binary is not in executable format, and cannot be run.

This was reported to happen when Emacs is built in a directory mounted via NFS, for some combinations of NFS client and NFS server. Usually, the file `emacs' produced in these cases is full of binary null characters, and the `file' utility says:

emacs: ASCII text, with no line terminators

We don't know what exactly causes this failure. A work-around is to build Emacs in a directory on a local disk.

*** The dumped Emacs crashes when run, trying to write pure data.

Two causes have been seen for such problems.

1) On a system where getpagesize is not a system call, it is defined as a macro. If the definition (in both unexec.c and malloc.c) is wrong, it can cause problems like this.
You might be able to find the correct value in the man page for a.out (5).

2) Some systems allocate variables declared static among the initialized variables. Emacs makes all initialized variables pure (read-only) after dumping, so a static variable allocated among them will crash Emacs as soon as it is written to.

*** SunOS 4

SunOS 4.1.4 stopped shipping on Sep 30 1998.

**** SunOS: You get linker errors

ld: Undefined symbol
_get_wmShellWidgetClass
_get_applicationShellWidgetClass

**** Sun 4.0.x: M-x shell persistently reports "Process shell exited abnormally with code 1".

This happened on Suns as a result of what is said to be a bug in SunOS version 4.0.x. The only fix was to reboot the machine.

**** SunOS 4.1.1 and SunOS 4.1.3: Mail is lost when sent to local aliases.

Many Emacs mail user agents (VM and Rmail, for instance) use the sendmail.el library. This library can arrange for mail to be delivered by passing messages to the /usr/lib/sendmail (usually) program. In doing so, it passes the '-t' flag to sendmail, which means that the name of the recipient of the message is not on the command line and, therefore, that sendmail must parse the message to obtain the destination address.

There is a bug in the SunOS 4.1.1 and SunOS 4.1.3 versions of sendmail. In short, when given the -t flag, the SunOS sendmail won't recognize non-local (i.e. NIS) aliases. It has been reported that the Solaris 2.x versions of sendmail do not have this bug. For those using SunOS 4.1, the best fix is to install sendmail V8 or IDA sendmail (which have other advantages over the regular sendmail as well). At the time of this writing, these official versions are available:

Sendmail V8, in /ucb/sendmail:
sendmail.8.6.9.base.tar.Z (the base system source & documentation)
sendmail.8.6.9.cf.tar.Z (configuration files)
sendmail.8.6.9.misc.tar.Z (miscellaneous support programs)
sendmail.8.6.9.xdoc.tar.Z (extended documentation, with postscript)

IDA sendmail, on vixen.cso.uiuc.edu in /pub:
sendmail-5.67b+IDA-1.5.tar.gz

**** SunOS 4: You get the error ld: Undefined symbol __lib_version.
This is the result of using cc or gcc with the shared library meant for acc (the Sunpro compiler). Check your LD_LIBRARY_PATH and delete /usr/lang/SC2.0.1 or some similar directory. **** SunOS 4.1.3: Emacs unpredictably crashes in _yp_dobind_soft. This happens if you configure Emacs specifying just `sparc-sun-sunos4' on a system that is version 4.1.3. You must specify the precise version number (or let configure figure out the configuration, which it can do perfectly well for SunOS). **** Sunos 4.1.3: Emacs gets hung shortly after startup. We think this is due to a bug in Sunos. The word is that one of these Sunos patches fixes the bug: We don't know which of these patches really matter. If you find out which ones, please inform bug-gnu-emacs@gnu.org. **** SunOS 4: Emacs processes keep going after you kill the X server (or log out, if you logged in using X). Someone reported that recompiling with GCC 2.7.0 fixed this problem. The fix to this is to install patch 100573 for OpenWindows 3.0 or link libXmu statically. ****. *** Apollo Domain **** Shell mode ignores interrupts on Apollo Domain. You may find that M-x shell prints the following message: Warning: no access to tty; thus no job control in this shell... This can happen if there are not enough ptys on your system. Here is how to make more of them. % cd /dev % ls pty* # shows how many pty's you have. I had 8, named pty0 to pty7) % /etc/crpty 8 # creates eight new pty's ***. The compiler was reported to crash while compiling syntax.c with the following message: cc: Internal compiler error: program cc1obj got fatal signal 11 To work around this, replace the macros UPDATE_SYNTAX_TABLE_FORWARD, INC_BOTH, and INC_FROM with functions. To this end, first define 3 functions, one each for every macro. 
Here's an example:

static int
update_syntax_table_forward (int from)
{
  return (UPDATE_SYNTAX_TABLE_FORWARD (from));
}

Then replace all references to UPDATE_SYNTAX_TABLE_FORWARD in syntax.c with a call to the function update_syntax_table_forward.

*** Solaris 2.x

**** Strange results from format %d in a few cases, on a Sun.

Sun compiler version SC3.0 has been found to miscompile part of editfns.c. The workaround is to compile with some other compiler such as GCC.

**** On Solaris, Emacs dumps core if lisp-complete-symbol is called.

If you compile Emacs with the -fast or -xO4 option with version 3.0.2 of the Sun C compiler, Emacs dumps core when lisp-complete-symbol is called. The problem does not happen if you compile with GCC.

**** On Solaris, Emacs crashes if you use (display-time).

This can happen if you are using GCC 2.7.2.3 (or earlier) on Solaris 2.6 (or later); this does not work without patching. To run GCC 2.7.2.3 on Solaris 2.6 or later, you must patch fixinc.svr4 and reinstall GCC from scratch as described in the Solaris FAQ <>. A better fix is to upgrade to GCC 2.8.1 or later.

**** Solaris 2.x: Emacs dumps core when built with Motif.

The Solaris Motif libraries are buggy, at least up through Solaris 2.5.1. Install the current Motif runtime library patch appropriate for your host. (Make sure the patch is current; some older patch versions still have the bug.) You should install the other patches recommended by Sun for your host, too. You can obtain Sun patches from; look for files with names ending in `.PatchReport' to see which patches are currently recommended for your host.

On Solaris 2.6, Emacs is said to work with Motif when Solaris patch 105284-12 is installed, but fail when 105284-15 is installed. 105284-18 might fix it again.

**** Solaris 2.6 and 7: the Compose key does not work.

This is a bug in Motif in Solaris. Supposedly it has been fixed for the next major release of Solaris.
However, if someone with Sun support complains to Sun about the bug, they may release a patch. If you do this, mention Sun bug #4188711. One workaround is to use a locale that allows non-ASCII characters. For example, before invoking emacs, set the LC_ALL environment variable to "en_US" (American English). The directory /usr/lib/locale lists the supported locales; any locale other than "C" or "POSIX" should do. pen@lysator.liu.se says (Feb 1998) that the Compose key does work if you link with the MIT X11 libraries instead of the Solaris X11 libraries. *** HP/UX versions before 11.0 HP/UX 9 was end-of-lifed in December 1998. HP/UX 10 was end-of-lifed in May 1999. **** HP/UX 9: Emacs. *** HP/UX 10: Large file support is disabled. See the comments in src/s/hpux10.h. *** HP/UX: Emacs is slow using X11R5. This happens if you use the MIT versions of the X libraries--it doesn't run as fast as HP's version. People sometimes use the version because they see the HP version doesn't have the libraries libXaw.a,. So far it appears that running `tset' triggers this problem (when TERM is vt100, at least). If you do not run `tset', then Emacs displays. Try defining BROKEN_FIONREAD in your config.h file. If this solves the problem, please send a bug report to tell us this is needed; be sure to say exactly what type of machine and system you are using. **** SVr4: After running emacs once, subsequent invocations crash. Some versions of SVR4 have a serious bug in the implementation of the mmap () system call in the kernel; this causes emacs to run correctly the first time, and then crash when run a second time. Contact your vendor and ask for the mmap bug fix; in the mean time, you may be able to work around the problem by adding a line to your operating system description file (whose name is reported by the configure script) that reads: #define SYSTEM_MALLOC.. **** UnixWare 2.1: Error 12 (virtual memory exceeded) when dumping Emacs. 
Paul Abrahams (abrahams@acm.org) reports that with the installed virtual memory settings for UnixWare 2.1.2, an Error 12 occurs during the "make" that builds Emacs, when running temacs to dump emacs. That error indicates that the per-process virtual memory limit has been exceeded. The default limit is probably 32MB. Raising the virtual memory limit to 40MB should make it possible to finish building Emacs. You can do this with the command `ulimit' (sh) or `limit' (csh). But you have to be root to do it. According to Martin Sohnius, you can also retune this in the kernel: # /etc/conf/bin/idtune SDATLIM 33554432 ## soft data size limit # /etc/conf/bin/idtune HDATLIM 33554432 ## hard " # /etc/conf/bin/idtune SVMMSIZE unlimited ## soft process size limit # /etc/conf/bin/idtune HVMMSIZE unlimited ## hard " # /etc/conf/bin/idbuild -B `perl -de 0' just hangs when executed in an Emacs subshell. The fault lies with Perl (indirectly with Windows NT/95). The problem is that the Perl debugger explicitly opens a connection to "CON", which is the DOS/NT equivalent of "/dev/tty", for interacting with the user. On Unix, this is okay, because Emacs (or the shell?) creates a pseudo-tty so that /dev/tty is really the pipe Emacs is using to communicate with the subprocess. On NT, this fails because CON always refers to the handle for the relevant console (approximately equivalent to a tty), and cannot be redirected to refer to the pipe Emacs assigned to the subprocess as stdin. A workaround is to modify perldb.pl to use STDIN/STDOUT instead of CON. For Perl 4: *** PERL/LIB/PERLDB.PL.orig Wed May 26 08:24:18 1993 --- PERL/LIB/PERLDB.PL Mon Jul 01 15:28:16 1996 *************** *** 68,74 **** $rcfile=".perldb"; } else { ! $console = "con"; $rcfile="perldb.ini"; } --- 68,74 ---- $rcfile=".perldb"; } else { ! 
$console = ""; $rcfile="perldb.ini"; } For Perl 5: *** perl/5.001/lib/perl5db.pl.orig Sun Jun 04 21:13:40 1995 --- perl/5.001/lib/perl5db.pl Mon Jul 01 17:00:08 1996 *************** *** 22,28 **** $rcfile=".perldb"; } elsif (-e "con") { ! $console = "con"; $rcfile="perldb.ini"; } else { --- 22,28 ---- $rcfile=".perldb"; } elsif (-e "con") { ! $console = ""; $rcfile="perldb.ini"; } else { *** MS-Windows 95: Alt-f6 does not get through to Emacs. This character seems to be trapped by the kernel in Windows 95. You can enter M-f6 by typing ESC f6. *** MS-Windows 95/98/ME: subprocesses do not terminate properly. This is a limitation of the Operating System, and can cause problems when shutting down Windows. Ensure that all subprocesses are exited cleanly before exiting Emacs. For more details, see the FAQ at. *** MS-Windows 95/98/ME: crashes when Emacs invokes non-existent programs. When a program you are trying to run is not found on the PATH, Windows might respond by crashing or locking up your system. In particular, this has been reported when trying to compile a Java program in JDEE when javac.exe is installed, but not on the system PATH. ** MS-DOS *** When compiling with DJGPP on MS-Windows NT, "config msdos" fails. If the error message is "VDM has been already loaded", this is because Windows has a program called `redir.exe' that is incompatible with a program by the same name supplied with DJGPP, which is used by config.bat. To resolve this, move the DJGPP's `bin' subdirectory to the front of your PATH environment variable. *** When compiling with DJGPP on MS-Windows 95, Make fails for some targets like make-docfile. This can happen if long file name support (the setting of environment variable LFN) when Emacs distribution was unpacked and during compilation are not the same. See the MSDOG section of INSTALL for the explanation of how to avoid this problem. 
*** Emacs compiled with DJGPP complains at startup: "Wrong type of argument: internal-facep, msdos-menu-active-face" This can happen if you define an environment variable `TERM'. Emacs on MSDOS uses an internal terminal emulator which is disabled if the value of `TERM' is anything but the string "internal". Emacs then works as if its terminal were a dumb glass teletype that doesn't support faces. To work around this, arrange for `TERM' to be undefined when Emacs runs. The best way to do that is to add an [emacs] section to the DJGPP.ENV file which defines an empty value for `TERM'; this way, only Emacs gets the empty value, while the rest of your system works as before. *** MS-DOS: Emacs crashes at startup. Some users report that Emacs 19.29 requires dpmi memory management, and crashes on startup if the system does not have it. We don't yet know why this happens--perhaps these machines don't have enough real memory, or perhaps something is wrong in Emacs or the compiler. However, arranging to use dpmi support is a workaround. You can find out if you have a dpmi host by running go32 without arguments; it will tell you if it uses dpmi memory. For more information about dpmi memory, consult the djgpp FAQ. (djgpp is the GNU C compiler as packaged for MSDOS.) Compiling Emacs under MSDOS is extremely sensitive for proper memory configuration. If you experience problems during compilation, consider removing some or all memory resident programs (notably disk caches) and make sure that your memory managers are properly configured. See the djgpp faq for configuration hints. *** Emacs compiled with DJGPP for MS-DOS/MS-Windows cannot access files in the directory with the special name `dev' under the root of any drive, e.g. `c:/dev'. This is an unfortunate side-effect of the support for Unix-style device names such as /dev/null in the DJGPP runtime library. A work-around is to rename the problem directory to another name. 
*** MS-DOS+DJGPP: Problems on MS-DOG if DJGPP v2.0 is used to compile Emacs. There are two DJGPP library bugs which cause problems: * Running `shell-command' (or `compile', or `grep') you get `Searching for program: permission denied (EACCES), c:/command.com'; * After you shell to DOS, Ctrl-Break kills Emacs. To work around these bugs, you can use two files in the msdos subdirectory: `is_exec.c' and `sigaction.c'. Compile them and link them into the Emacs executable `temacs'; then they will replace the incorrect library functions. *** MS-DOS: Emacs compiled for MSDOS cannot find some Lisp files, or other run-time support files, when long filename support is enabled. Usually, this problem will manifest itself when Emacs exits immediately after flashing the startup screen, because it cannot find the Lisp files it needs to load at startup. Redirect Emacs stdout and stderr to a file to see the error message printed by Emacs. Another manifestation of this problem is that Emacs is unable to load the support for editing program sources in languages such as C and Lisp. This can happen if the Emacs distribution was unzipped without LFN support, thus causing long filenames to be truncated to the first 6 characters and a numeric tail that Windows 95 normally attaches to it. You should unzip the files again with a utility that supports long filenames (such as djtar from DJGPP or InfoZip's UnZip program compiled with DJGPP v2). The MSDOG section of the file INSTALL explains this issue in more detail. Another possible reason for such failures is that Emacs compiled for MSDOS is used on Windows NT, where long file names are not supported by this version of Emacs, but the distribution was unpacked by an unzip program that preserved the long file names instead of truncating them to DOS 8+3 limits. To be useful on NT, the MSDOS port of Emacs must be unzipped by a DOS utility, so that long file names are properly truncated. 
** Archaic window managers and toolkits *** OpenLook: Under OpenLook, the Emacs window disappears when you type M-q. Some versions of the Open Look window manager interpret M-q as a quit command for whatever window you are typing at. If you want to use Emacs with that window manager, you should try to configure the window manager to use some other command. You can disable the shortcut keys entirely by adding this line to ~/.OWdefaults: OpenWindows.WindowMenuAccelerators: False **** twm: A position you specified in .Xdefaults is ignored, using twm. twm normally ignores "program-specified" positions. You can tell it to obey them with this command in your `.twmrc' file: UsePPosition "on" #allow clients to request a position ** Bugs related to old DEC hardware *** The Compose key on a DEC keyboard does not work as Meta key. This shell command should fix it: xmodmap -e 'keycode 0xb1 = Meta_L' *** Keyboard input gets confused after a beep when using a DECserver as a concentrator. This problem seems to be a matter of configuring the DECserver to use 7 bit characters rather than 8 bit characters. * Build problems on legacy systems ** BSD/386 1.0: --with-x-toolkit option configures wrong. This problem is due to bugs in the shell in version 1.0 of BSD/386. The workaround is to edit the configure file to use some other shell, such as bash. ** Digital Unix 4.0: Emacs fails to build, giving error message Invalid dimension for the charset-ID 160 This is due to a bug or an installation problem in GCC 2.8.0. Installing a more recent version of GCC fixes the problem. ** Digital Unix 4.0: Failure in unexec while dumping emacs. This problem manifests itself as an error message unexec: Bad address, writing data section to ... 
The user suspects that this happened because his X libraries were built for an older system version, ./configure --x-includes=/usr/include --x-libraries=/usr/shlib `ffpa_used' or `start_float' is undefined, this probably indicates that you have compiled some libraries, such as the X libraries, with a floating point option other than the default. It's not terribly hard to make this work with small changes in crt0.c together with linking with Fcrt1.o, Wcrt1.o or Mcrt1.o. However, the easiest approach is to build Xlib with the default floating point option: -fsoft. ** SunOS: Undefined symbols _dlopen, _dlsym and/or _dlclose. If you see undefined symbols _dlopen, _dlsym, or _dlclose when linking with -lX11, compile and link against the file mit/util/misc/dlsym.c in the MIT X11R5 distribution. Alternatively, link temacs using shared libraries with s/sunos4shr.h. (This doesn't work if you use the X toolkit.) If you get the additional error that the linker could not find lib_version.o, try extracting it from X11/usr/lib/X11/libvim.a in X11R4, then use it in the link. ** SunOS4, DGUX 5.4.2: --with-x-toolkit version crashes when used with shared libraries. On some systems, including Sunos 4 and DGUX 5.4.2 and perhaps others, unexec doesn't work properly with the shared library for the X toolkit. You might be able to work around this by using a nonshared libXt.a library. The real fix is to upgrade the various versions of unexec and/or ralloc. We think this has been fixed on Sunos 4 and Solaris in version 19.29. ** HPUX 10.20: Emacs crashes during dumping on the HPPA machine. This seems to be due to a GCC bug; it is fixed in GCC 2.8.1. ** VMS: Compilation errors on VMS. You will get warnings when compiling on VMS because there are variable names longer than 32 (or whatever it is) characters. This is not an error. Ignore it. VAX C does not support #if defined(foo). Uses of this construct were removed, but some may have crept back in. They must be rewritten. 
There is a bug in the C compiler which fails to sign extend characters in conditional expressions. The bug is: char c = -1, d = 1; int i; i = d ? c : d; The result is i == 255; the fix is to typecast the char in the conditional expression as an (int). Known occurrences of such constructs in Emacs have been fixed. ** Vax C compiler bugs affecting Emacs. You may get one of these problems compiling Emacs: foo.c line nnn: compiler error: no table entry for op STASG foo.c: fatal error in /lib/ccom These are due to bugs in the C compiler; the code is valid C. Unfortunately, the bugs are unpredictable: the same construct may compile properly or trigger one of these bugs, depending on what else is in the source file being compiled. Even changes in header files that should not affect the file being compiled can affect whether the bug happens. In addition, sometimes files that compile correctly on one machine get this bug on another machine. As a result, it is hard for me to make sure this bug will not affect you. I have attempted to find and alter these constructs, but more can always appear. However, I can tell you how to deal with it if it should happen. The bug comes from having an indexed reference to an array of Lisp_Objects, as an argument in a function call: Lisp_Object *args; ... ... foo (5, args[i], ...)... putting the argument into a temporary variable first, as in Lisp_Object *args; Lisp_Object tem; ... tem = args[i]; ... foo (r, tem, ...)... causes the problem to go away. The `contents' field of a Lisp vector is an array of Lisp_Objects, so you may see the problem happening with indexed references to that. ** 68000 C compiler problems Various 68000 compilers have different problems. These are some that have been observed. *** Using value of assignment expression on union type loses. This means that x = y = z; or foo (x = z); does not work if x is of type Lisp_Object. *** "cannot reclaim" error. This means that an expression is too complicated. 
You get the correct line number in the error message. The code must be rewritten with simpler expressions.

*** XCONS, XSTRING, etc. macros produce incorrect code.

If temacs fails to run at all, this may be the cause. Compile this test program and look at the assembler code:

struct foo { char x; unsigned int y : 24; };

lose (arg)
     struct foo arg;
{
  test ((int *) arg.y);
}

If the code is incorrect, your compiler has this problem. In the XCONS, etc., macros in lisp.h you must replace (a).u.val with ((a).u.val + coercedummy) where coercedummy is declared as int.

This problem will not happen if the m-...h file for your type of machine defines NO_UNION_TYPE. That is the recommended setting now.

*** C compilers lose on returning unions.

I hear that some C compilers cannot handle returning a union type. Most of the functions in GNU Emacs return type Lisp_Object, which is defined as a union on some rare architectures. This problem will not happen if the m-...h file for your type of machine defines NO_UNION_TYPE.
Jeremy Kloth wrote:

[snip]

> Most libraries that get moved into the core are not as large as PyXML and to
> top it off only part of PyXML was moved into the core.

That is true. I don't quite see how it therefore follows we should use this scheme, however.

> This is probably the
> root of most of the problems. However, PyXML had dibs on the top-level "xml"
> before Python core did, so this deal was struck to keep developers for both
> sides happy.

Okay, if that is how it happened, but I'm not happy. :)

[snip]

> Another issue that is solved by the current arrangement is support for
> multiple Python versions with the same code base. For example, if you
> develop an application for 2.3 that uses the "core" xml library, you can
> easily support Python 2.0, 2.1 and 2.2 by telling those users to simply
> install PyXML 0.8.2 and it will work the same (given you use only those
> features of Python available in the lowest version you wish to support).
> This is a big plus for end users.

No, I don't think the current arrangement helps here at all; in fact it contributes to the confusion. If PyXML had its own top level namespace I could *still* easily support multiple Python versions by telling users to install PyXML. In fact, it'll be far more obvious to developers and users what's going on, as their Python core library isn't magically upgraded out from under them. Or even weirder, you upgrade your Python but you also have PyXML installed for it (or install it later), and now it may be magically *degraded*. One has to look hard to determine whether one should install anything at all in the current situation. If I develop for Python 2.3, which has more features in its XML core libraries, I have to somehow find out that some version of PyXML has versions of the code that, if installed, may make my code work with older versions.

> Now as I understand it, you (or users you are speaking for) are having
> problems with PyXML installations or lack thereof. If so, please file bug
> reports!
> We cannot fix what we don't know is broke.

No, I have witnessed several developers using this software not understanding the arrangement, which is obscure and confusing. In addition there are indeed users who get confused as well. Dependency management would be a lot more straightforward for everyone if PyXML had its own top level namespace.

Regards,

Martijn
Rule #1: Destructure your props
One of my favorite ES6 features is destructuring. It makes assigning object properties to variables feel like much less of a chore. Let’s take a look at an example.
Say we have a dog that we want to display as a div with a class named after its breed. Inside the div is a sentence that notes the dog’s color and tells us if it’s a good dog or bad dog.
class Dog extends Component {
  render () {
    return <div className={this.props.breed}>My {this.props.color} dog is {this.props.isGoodBoy ? "good" : "bad"}</div>;
  }
}
That technically does everything we want, but it just seems like quite a big block of code for what really is only three variables and one HTML tag.
We can break it out by assigning all of the properties of props to local variables.
let breed = this.props.breed;
let color = this.props.color;
let isGoodBoy = this.props.isGoodBoy;
Using ES6, we can put it in one clean statement like this:
let { breed, color, isGoodBoy } = this.props;
To keep everything clean, we put our ternary operator (more on that later) in its own variable as well, and voila.
class Dog extends Component {
  render () {
    let { breed, color, isGoodBoy } = this.props;
    let identifier = isGoodBoy ? "good" : "bad";
    return <div className={breed}>My {color} dog is {identifier}</div>;
  }
}
Much easier to read.
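Destructuring is not specific to this.props; it works on any plain object. Here is a minimal runnable sketch of the same idea in plain JavaScript (the dog object and its values are illustrative stand-ins for a props object):

```javascript
// Stand-in for a React props object; the field names mirror the Dog example.
const props = { breed: "corgi", color: "brown", isGoodBoy: true };

// Pull all three fields into local variables in a single statement.
const { breed, color, isGoodBoy } = props;

// Keep the ternary in its own variable, as in the component above.
const identifier = isGoodBoy ? "good" : "bad";

console.log(`My ${color} ${breed} is ${identifier}`);
// → My brown corgi is good
```

The destructuring statement is exactly equivalent to the three separate `let x = props.x;` assignments, just in one readable line.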
Rule #2: One tag, one line
Now, we’ve all had that moment where we want to take our entire function and make it a mash of operators and tiny parameter names to make some uglified, superfast, unreadable utility function. However, when you’re making a stateless Component in React, you can fairly easily do the same thing while remaining clean.
class Dog extends Component {
  render () {
    let { breed, color, goodOrBad } = this.props;
    return <div className={breed}>My {color} dog is {goodOrBad}</div>;
  }
}
vs.
let Dog = ({ breed, color, goodOrBad }) => <div className={breed}>My {color} dog is {goodOrBad}</div>;
If all you’re doing is making a basic element and placing properties in an HTML tag, then don’t worry about making such a big deal of all the functions and wrappers to get an entirely separate class going. One line of code will do.
You can even get creative with some ES6 spread functions if you pass an object for your properties. Including the special children property in the object will automatically put the string between the open and close tag.
let propertiesList = {
  className: "my-favorite-component",
  id: "myFav",
  children: "Hello world!"
};

let SimpleDiv = props => <div {...props} />;

let jsxVersion = <SimpleDiv {...propertiesList} />;
When to use the spread function:
- No ternary operators required
- Only passing HTML tag attributes and content
- Can be used repeatedly
When not to use the spread function:
- Dynamic properties
- Array or object properties are required
- A render that would require nested tags
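Outside of JSX, the spread idea boils down to ES6 object spread: every key/value pair of the source object is copied onto the target, which is exactly what `{...props}` does for a tag's attributes. A small runnable sketch (the withExtras name is illustrative):

```javascript
// The props object from the example above, minus the JSX-specific parts.
const propertiesList = {
  className: "my-favorite-component",
  id: "myFav"
};

// Spreading copies every key/value pair; later keys win on conflicts.
const withExtras = { ...propertiesList, title: "Hello world!" };

console.log(withExtras.className); // → my-favorite-component
console.log(withExtras.title);     // → Hello world!
```

This is also why the "when not to use" list above matters: spread copies everything blindly, so dynamic or nested values are better passed explicitly.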
Rule #3: The rule of 3’sRule #3: The rule of 3’s
If you have three or more properties, then put them on their own line both in the instance and in the render function.
This would be fine to have just one line of properties:
class GalleryImage extends Component {
  render () {
    let { imgSrc, title } = this.props;
    return (
      <figure>
        <img src={imgSrc} alt={title} />
        <figcaption>
          <p>Title: {title}</p>
        </figcaption>
      </figure>
    );
  }
}
But consider this:
class GalleryImage extends Component { render () { let { imgSrc, title, artist, clas, thumbnail, breakpoint } = this.props; return ( <figure className={clas}> <picture> <source media={`(min-width: ${breakpoint})`} ); } }
Or the render:
<GalleryImage imgSrc="./src/img/vangogh2.jpg" title="Starry Night" artist="Van Gogh" clas="portrait" thumbnail="./src/img/thumb/vangogh2.gif" breakpoint={320} />
It can get to be too much of a code block to read. Drop each property to the next line for a clean, readable look:
let {
  imgSrc,
  title,
  artist,
  clas,
  thumbnail,
  breakpoint
} = this.props;
and:
<GalleryImage
  imgSrc="./src/img/vangogh2.jpg"
  title="Starry Night"
  artist="Van Gogh"
  clas="landscape"
  thumbnail="./src/img/thumb/vangogh2.gif"
  breakpoint={320}
/>
Rule #4: Too many properties?
Property management is tricky at any level, but with ES6 destructuring and React’s state-based approach, there are quite a few ways to clean up the look of a lot of properties.
Let’s say we’re making a mapping application that has a list of saved addresses and a GPS coordinate for your current location.
The current user information of position and proximity to favorite address should be in the parent Component of App like this:
class App extends Component {
  constructor (props) {
    super(props);
    this.state = {
      userLat: 0,
      userLon: 0,
      isNearFavoriteAddress: false
    };
  }
}
So, when we make an address and we want it to note how close you are to the address, we’re passing at least two properties from App.
In App’s render():

<Address
  ... // Information about the address
  currentLat={this.state.userLat}
  currentLon={this.state.userLon}
/>
In the render function for Address Component:
render () {
  let {
    houseNumber, streetName, streetDirection,
    city, state, zip, lat, lon,
    currentLat, currentLon
  } = this.props;
  return (
    ...
  );
}
Already, you can see how this is getting unwieldy. If we take the two sets of information and break them out into their own objects, it becomes much more manageable.
In our App’s constructor():

this.state = {
  userPos: {
    lat: 0,
    lon: 0
  },
  isNearFavoriteAddress: false
};
At some point before App’s render():

let addressList = [];
addressList.push({
  houseNumber: "1234",
  streetName: "Street Rd",
  streetDirection: "N",
  city: "City",
  state: "ST",
  zip: "12345",
  lat: "019782309834",
  lon: "023845075757"
});
In App’s render():

<Address addressInfo={addressList[0]} userPos={this.state.userPos} />
In the render function for Address Component:

render () {
  let { addressInfo, userPos } = this.props;
  let {
    houseNumber, streetName, streetDirection,
    city, state, zip, lat, lon
  } = addressInfo;
  return (
    ...
  );
}
Much, much cleaner. React also has some great ways to ensure that object properties exist and are of a certain type using PropTypes, something we don’t normally get in JavaScript, and a great OOP practice anyway.
Rule #5: Dynamic renders – Mapping out arrays
Quite often in HTML, we’re writing the same basic pieces of code over and over, just with a few key distinctions. This is why React was created in the first place. You make an object with properties that return a complex, dynamic HTML block, without having to write each part of it repeatedly.
JavaScript already has a great way to do lists of like information: arrays!
React uses the .map() function to lay out arrays in order, using one parameter from the arrays as a key.
render () {
  let pokemon = [ "Pikachu", "Squirtle", "Bulbasaur", "Charizard" ];
  return (
    <ul>
      {pokemon.map(name => <li key={name}>{name}</li>)}
    </ul>
  );
}
You can even use our handy-dandy spread functions to throw a whole list of parameters in by an object using Object.keys() (keeping in mind that we still need a key).
render () {
  let pokemon = {
    "Pikachu": { type: "Electric", level: 10 },
    "Squirtle": { type: "Water", level: 10 },
    "Bulbasaur": { type: "Grass", level: 10 },
    "Charizard": { type: "Fire", level: 10 }
  };
  return (
    <ul>
      {Object.keys(pokemon).map(name =>
        <Pokemon key={name} {...pokemon[name]} />
      )}
    </ul>
  );
}
Rule #6: Dynamic renders – React ternary operators
In React, you can use operators to do a conditional render just like a variable declaration. In Rule #1, we looked at this for stating whether our dog was good or bad. It’s not entirely necessary to create an entire line of code to decide a one-word difference in a sentence, but when it gets to be large code blocks, it’s difficult to find those little ?‘s and :‘s.
class SearchResult extends Component {
  render () {
    let { results } = this.props;
    return (
      <section className="search-results">
        {results.length > 0 &&
          results.map(index => <Result key={index} {...results[index]} />)
        }
        {results.length === 0 &&
          <div className="no-results">No results</div>
        }
      </section>
    );
  }
}
Or, in true ternary fashion:

class SearchResult extends Component {
  render () {
    let { results } = this.props;
    return (
      <section className="search-results">
        {results.length > 0
          ? results.map(index => <Result key={index} {...results[index]} />)
          : <div className="no-results">No results</div>
        }
      </section>
    );
  }
}
Even with our tidy result mapping, you can see how the brackets are already nesting quite densely. Now, imagine if our render had more than just one line. It can pretty quickly get unreadable. Consider an alternative:
class SearchResult extends Component {
  render () {
    let { results } = this.props;
    let outputJSX;
    if (results.length > 0) {
      outputJSX = (
        <Fragment>
          {results.map(index => <Result key={index} {...results[index]} />)}
        </Fragment>
      );
    } else {
      outputJSX = <div className="no-results">No results</div>;
    }
    return <section className="search-results">{outputJSX}</section>;
  }
}
Ultimately, the code length is about the same, but there is one key distinction: with the first example, we’re rapidly switching back and forth between two different syntaxes, making visual parsing taxing and difficult, whereas the second is simply plain JavaScript with value assignments in one, consistent language and a one-line function return in another.
The rule of thumb in this situation is that if the JavaScript you’re putting into your JSX object is more than two words (e.g. object.property), it should be done before the return call.
Wrap up
The combination of syntax can get messy, and these are the most obvious situations where I saw my code going off the rails. Here are the basic concepts that these all come from and can be applied to any situation that wasn’t covered here:
- Use ES6 features. Seriously. There are a lot of fantastic features that can make your job easier, faster, and much less manual.
- Only write JSX on the right side of an = or a return.
- Sometimes you need JavaScript in your JSX. If your JavaScript doesn’t fit on one line (like a .map() function or ternary operator), then it should be done beforehand.
- If your code starts looking like (<{`${()}`} />), then you’ve probably gone too far. Take the lowest level outside the current statement and do it before this one.
I think this example:

let Dog = (breed, color, goodOrBad) => <div className={breed}>My {color} dog is {goodOrBad}</div>;

should be:

let Dog = ({ breed, color, goodOrBad }) => <div className={breed}>My {color} dog is {goodOrBad}</div>;

The first example isn’t destructuring props.
Rule #5 could go one step further by using Object.entries instead of Object.keys. Combined with destructuring in parameters, you can get a nice Object.entries(obj).map(([key, val]) => This value is {val}).
Very nice! I haven’t seen this one before, I’m going to have to start using it.
Nice article! I definitely recommend using prettier to help with some of the formatting tips you listed.
Side note: in the last example, I think you should be using outputJSX in the return function instead of just output.
Cheers!
Instead of “let”, use “const” when destructuring. You should use “let” when you are going to change the value, which isn’t the case here.
I’d really recommend using const instead of let. This would prevent reassigning variables, which goes against immutable data concepts a little, I think. If the variable won’t be reassigned, it should really just be const, as that’s its purpose.
Not sure why you would use let instead of const when destructuring your props. Props are not allowed to be mutated, and I would have my variable declarations reinforce that by using const.
Hi Daniel,
I agree with Dwayne.
It’s worth mentioning in any article about React code style that Prettier has quickly become the de facto JavaScript “styleguide”.
In a practical sense, it speeds up development by allowing you to code in a “Wild West” style, and then magically cleaning that code with each save.
Check it out!
Please use const by default and only use let when it is necessary.
Source: https://css-tricks.com/react-code-style-guide/
Deleting unused Django media files
Handling files in Django is pretty easy: you can add them to a model with only a line of code (for a brush-up on Django models, you can check out our article on handling data in web frameworks), and the framework will handle everything for you – validations, uploading, type checking. Even serving them takes very little effort.
However, there is one thing that Django no longer does starting with version 1.3: automatically deleting files from a model when the instance is deleted.
There are good reasons for which this decision was made: in certain cases (such as rolled-back transactions or cases when a file was being referenced from multiple models) this behaviour was prone to data loss.
Nowadays, almost everyone uses AWS S3, or Google Cloud Storage, or MS Azure, or one of the many cloud-based existing solutions for storing media files without all the hassle and without having to worry that you will one day run out of space. So why even care about the fact that Django doesn’t delete files that are not used anymore? Well, first off, not everyone uses “the cloud” as a storage space for their files (maybe for security concerns or maybe just because they don’t want to). Secondly, those who do use cloud-based storage know that even though theoretically there is no size limit, the costs can become quite large by not deleting unused files.
So let’s dive right in and see what the possible solutions are for removing those nasty unused files.
1. Creating a custom management command
This first solution is actually the one being suggested in the Django documentation (see link above). This involves writing a custom management command which goes through the media files tree and checks, for each file, whether it is still being referenced from the database. Once all has been written and tested, you can schedule the command to run on a regular basis, using cron or celery.
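For illustration, scheduling with Celery beat might look like the sketch below. Treat everything here as an assumption: the `app` instance, the task path, and the management command name "delete_unused_media" are all hypothetical placeholders, and a one-line crontab entry invoking manage.py works just as well.

```python
# celery.py (sketch; assumes a configured Celery instance named `app`
# and that the custom management command is registered as "delete_unused_media")
from celery.schedules import crontab
from django.core.management import call_command

@app.task
def cleanup_media():
    # run the custom cleanup command from inside a Celery worker
    call_command("delete_unused_media")

app.conf.beat_schedule = {
    "delete-unused-media-nightly": {
        "task": "myproject.celery.cleanup_media",   # hypothetical dotted path
        "schedule": crontab(hour=3, minute=0),      # every night at 03:00
    },
}
```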
The algorithm is quite simple and consists of four steps:
- We search for references to media files in the database — these will be stored in a set.
- We recursively create another set which comprises all physical files in the MEDIA_ROOT directory.
- The difference between these sets represents files that are physically present, but are not referenced from the database — these are the files we will delete.
- In order for our cleanup to be complete, we traverse once again recursively and delete all empty directories.
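Steps 1–3 boil down to plain set arithmetic, which can be sketched without Django at all. The function and file names below are illustrative, not part of the article's command:

```python
import os
import tempfile

def orphaned_files(media_root, db_files):
    """Return files on disk under media_root that are not referenced in
    db_files (paths relative to media_root, as Django stores them)."""
    physical = set()
    for root, _dirs, files in os.walk(media_root):
        for name in files:
            full = os.path.join(root, name)
            physical.add(os.path.relpath(full, media_root))
    # the set difference is exactly the deletable leftovers
    return physical - set(db_files)

# demo on a throwaway directory tree
with tempfile.TemporaryDirectory() as media_root:
    os.makedirs(os.path.join(media_root, "avatars"))
    for rel in ("avatars/kept.png", "avatars/orphan.png"):
        open(os.path.join(media_root, rel), "w").close()
    print(orphaned_files(media_root, {"avatars/kept.png"}))
```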
Now let’s see the code in action:
import os

from django.core.management.base import BaseCommand
from django.apps import apps
from django.db.models import Q
from django.conf import settings
from django.db.models import FileField


class Command(BaseCommand):
    help = "This command deletes all media files from the MEDIA_ROOT directory which are no longer referenced by any of the models from installed_apps"

    def handle(self, *args, **options):
        all_models = apps.get_models()
        physical_files = set()
        db_files = set()

        # Get all files from the database
        for model in all_models:
            file_fields = []
            filters = Q()
            for f_ in model._meta.fields:
                if isinstance(f_, FileField):
                    file_fields.append(f_.name)
                    is_null = {'{}__isnull'.format(f_.name): True}
                    is_empty = {'{}__exact'.format(f_.name): ''}
                    filters &= Q(**is_null) | Q(**is_empty)
            # only retrieve the models which have non-empty, non-null file fields
            if file_fields:
                files = model.objects.exclude(filters) \
                    .values_list(*file_fields, flat=True).distinct()
                db_files.update(files)

        # Get all files from the MEDIA_ROOT, recursively
        media_root = getattr(settings, 'MEDIA_ROOT', None)
        if media_root is not None:
            for relative_root, dirs, files in os.walk(media_root):
                for file_ in files:
                    # Compute the relative file path to the media directory,
                    # so it can be compared to the values from the db
                    relative_file = os.path.join(
                        os.path.relpath(relative_root, media_root), file_)
                    physical_files.add(relative_file)

        # Compute the difference and delete those files
        deletables = physical_files - db_files
        if deletables:
            for file_ in deletables:
                os.remove(os.path.join(media_root, file_))

            # Bottom-up - delete all empty folders
            for relative_root, dirs, files in os.walk(media_root, topdown=False):
                for dir_ in dirs:
                    if not os.listdir(os.path.join(relative_root, dir_)):
                        os.rmdir(os.path.join(relative_root, dir_))
2. Using signals
This is my favourite way of doing it, because it provides more control than the previous solution. We have used Django signals before and wrote about it on this blog. However, in regards of using signals for deleting unused media files, the comparison (with advantages and disadvantages) will be left for the end of this article.
There are two cases in which we will want to delete a file:
- When the model instance to which the file belongs is deleted – here we can simply use the post_delete signal, which will ensure that the instance has already been deleted from the database successfully. The code for this part is pretty straightforward:
from django.db.models import FileField
from django.db.models.signals import post_delete, post_save, pre_save
from django.dispatch.dispatcher import receiver

LOCAL_APPS = [
    'my_app1',
    'my_app2',
    '...'
]

def delete_files(files_list):
    for file_ in files_list:
        if file_ and hasattr(file_, 'storage') and hasattr(file_, 'path'):
            # this accounts for different file storages (e.g. when using django-storages)
            storage_, path_ = file_.storage, file_.path
            storage_.delete(path_)

@receiver(post_delete)
def handle_files_on_delete(sender, instance, **kwargs):
    # presumably you want this behavior only for your apps,
    # in which case you will have to specify them
    is_valid_app = sender._meta.app_label in LOCAL_APPS
    if is_valid_app:
        delete_files([getattr(instance, field_.name, None)
                      for field_ in sender._meta.fields
                      if isinstance(field_, FileField)])
- When a file is being replaced – in this case we must delete the old file and keep the new one if everything is successful. The simplest way to do it would be in the pre_save signal, when we can recover the value of the old file from the database. However, if any errors appear during the instance save, the file will be forever lost. So we have to do it in the post_save signal, once we know that everything is fine and that the instance was successfully saved in the database. But this also has a big caveat, since in the post_save signal we no longer have access to the previous values of the file field, meaning we no longer know which file(s) to delete. The final solution is to use the pre_save method to memorise the old value, and to actually perform the deletion in the post_save method. We will use a temporary cache on the model to keep the old values:
@receiver(pre_save)
def set_instance_cache(sender, instance, **kwargs):
    # prevent errors when loading files from fixtures
    from_fixture = 'raw' in kwargs and kwargs['raw']
    is_valid_app = sender._meta.app_label in LOCAL_APPS
    if is_valid_app and not from_fixture:
        # retrieve the old instance from the database to get old file values
        # for Django 1.8+, you can use the *refresh_from_db* method
        old_instance = sender.objects.filter(pk=instance.id).first()
        if old_instance is not None:
            # for each FileField, we will keep the original value inside an ephemeral `cache`
            instance.files_cache = {
                field_.name: getattr(old_instance, field_.name, None)
                for field_ in sender._meta.fields
                if isinstance(field_, FileField)
            }

@receiver(post_save)
def handle_files_on_update(sender, instance, **kwargs):
    if hasattr(instance, 'files_cache') and instance.files_cache:
        deletables = []
        for field_name in instance.files_cache:
            old_file_value = instance.files_cache[field_name]
            new_file_value = getattr(instance, field_name, None)
            # only delete the files that have changed
            if old_file_value and old_file_value != new_file_value:
                deletables.append(old_file_value)
        delete_files(deletables)
        instance.files_cache = {field_name: getattr(instance, field_name, None)
                                for field_name in instance.files_cache}
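Stripped of the signal machinery, the compare step reduces to a small pure function. The field names and paths below are made up for demonstration:

```python
def changed_files(old_values, new_values):
    """Return the old file paths that are safe to delete after a save:
    the field had a value before, and it now points somewhere else."""
    deletables = []
    for field, old in old_values.items():
        new = new_values.get(field)
        # empty/None old values and unchanged fields are skipped
        if old and old != new:
            deletables.append(old)
    return deletables

# hypothetical FileField values cached in pre_save vs. present in post_save
old = {"avatar": "avatars/a.png", "cover": "", "doc": "docs/cv.pdf"}
new = {"avatar": "avatars/b.png", "cover": "covers/c.jpg", "doc": "docs/cv.pdf"}
print(changed_files(old, new))  # ['avatars/a.png']
```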
In case you are wondering “Why hasn’t anyone made a library out of this?”, they actually did. In fact, you can find several solutions which delete a file once it is no longer used, such as django-cleanup. It is up to you to decide what is best for your project.
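If you go the library route, enabling django-cleanup is typically just an app registration. The snippet below is a sketch: treat the exact dotted path as an assumption and check the project’s README for your version.

```python
# settings.py (sketch; verify against the django-cleanup README)
INSTALLED_APPS = [
    # ... your apps ...
    'django_cleanup.apps.CleanupConfig',  # conventionally listed last
]
```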
Comparison
There are other ways to delete orphan files with Django which are not presented in this article. For example, if you know for sure you will only have so little file fields in your project, you may want to choose a more individualistic approach. Or, perhaps, you want to counter some of the effects that deleting these files has and move them to a temporary storage before permanently deleting them.
For now, let’s see how these methods work compared to each other, by enumerating their pluses and minuses:
Management command
+ A custom management command will only run every so often, and it can do so asynchronously. Hence, this solution can result in overall better performance, since it doesn’t intervene in the request-response cycle.
+ If executed manually — meaning not inside a cron job — this could help prevent the loss of files caused by migrations/transactions.
+ By checking everything in the database, we make sure that no file is deleted if at least one mention to it exists (this takes care of the problem with multiple references).
– Customizing management commands is slightly more difficult, and passing those arguments to a cron job is rather ugly.
– If the command is not executed often enough, you could still run into storage size problems.
– This does not take care of different storage spaces (at least not in the current implementation).
– The command’s running time depends on the database size and on the media folder size, which can quickly become problematic as they grow.
Signals
+ Having everything implemented through signals allows for a high degree of control over the files being replaced: you can easily add extra logic which helps decide whether a file should be deleted or not (e.g. based on user account type).
+ This solution integrates nicely into the application flow and it doesn’t take too long to run, since everything is done on the spot. At the same time, it is much easier to implement for most programmers, who are already accustomed to using Django signals.
– Even though we took care to handle file replacement in the post_save signal, there is still the possibility to have some errors after this signal is handled, which would result in an ‘unsuccessful’ save.
– It does not account for multiple references to the same file (though, unless you are directly modifying your database, this should never happen).
Conclusion
Before trying to implement a mechanism for deleting unused media files, you should always:
- consider why the Django team decided to remove this feature in the first place
- check if the solution you choose doesn’t introduce more problems than it solves
- ensure no unwanted data-loss is possible (test your code!)
It is up to you to see whether or not you need this behaviour and what is the best way for you to implement it. In this article, we are happy to have presented you with some possible solutions, and hopefully this will be of help to some of our readers. In the meantime, we would love to hear your opinions/questions/suggestions, so don’t hesitate to contact us using the comment section. Feel free to check out our other Python articles, including our guidelines for solving Django migration conflicts.
Source: https://www.algotech.solutions/blog/python/deleting-unused-django-media-files/
CodeRush 17.1.9 is now available, adding support for source code & XAML formatting, Microsoft Fakes support, and we have improved the unused member de-emphasis experience.
The Code Formatting feature now includes new (beta) abilities to configure Line Breaks. Two new options pages have been added:
Editor | C# | Formatting | Blank Lines — enables you to configure the number of blank lines around and within the following code elements:
Editor | C# | Formatting | Braces — enables you to configure the line breaks around and within the following code blocks:
Code Formatting styles are applied using the Code Cleanup feature.
We have added an ability to normalize whitespace inside XML comments.
CodeRush Test Runner now supports the Microsoft Fakes isolation framework.
It helps you mock the code you are testing by replacing parts of your application with the small pieces of code under the control of your tests.
Note: the Microsoft Fakes framework is available only in the Enterprise edition of Visual Studio.

We have also improved the unused member de-emphasis experience so that member de-emphasis is temporarily disabled when the caret is inside the member.
You can always download CodeRush for Roslyn from the Visual Studio Marketplace.
If you’re already using CodeRush for Roslyn and enjoying it, please share a review with other Visual Studio developers!
Another 30 days, another release of CodeRush for Roslyn. This release brings improvements in .NET Core 2.0 support, code formatting, code analysis, code providers, and templates.
The Unit Test Runner now runs .NET Core 2.0 tests and can calculate Code Coverage in Portable PDB debug-symbols-format projects. Also, the Unit Test Runner gets a performance boost when running .NET Core tests.
We have added a new code formatting feature that controls spacing options in your code. Now you can add or omit spaces around virtually every part of code, with more formatting options available than what you get with Visual Studio alone. Spacing style code formatting rules are applied with the Code Cleanup feature.
We have also improved XAML formatting, adding the ability to format markup extensions. You can now control the following options:
This release includes a preview of a new feature we’ve been working on, NullReferenceException Analysis. This feature identifies unprotected code that may unintentionally raise NullReferenceExceptions.
Null-reference exceptions are often thrown in unexpected, edge-case scenarios, and can easily remain undetected even after publishing your software. These code defects are challenging to find and are often discovered by customers after the application has shipped.
CodeRush for Roslyn now identifies code that may be prone to throwing a NullReferenceException. You can turn this feature on for C# (it is disabled by default) on the Editor | All Languages | Static Code Analysis options page:
Turn this feature on and let us know what you think.
We have added a new Add XML Comments code provider, which lets you instantly add new XML doc comments to members.
Templates for creating methods are now available inside members in C# (for C# version 7 and above) to create local functions:
For more information on using templates to create methods and write code quickly, see the M for Methods video in the CodeRush Feature of the Week series.
This release of CodeRush for Roslyn, our Visual Studio productivity boosting add-in, includes enhancements to code coverage, code analysis, and more. This release also includes a preview release of project-wide code cleanup. You can now enable or disable unused member highlighting in the CodeRush options dialog (Editor | All Languages | Static Code Analysis | Highlight unused members).
Unused members can be safely deleted without changing program behavior.
The Use string.Format refactoring is now available on interpolated strings.
You can now run Code Cleanup for the entire project. Simply right-click the project you want to clean in the Solution Explorer and select Cleanup Project from the context menu.
As CodeRush cleans the project, a window shows progress.
If code cleanup is cancelled, all code files will remain unchanged.
Reminder: This feature is in a preview state, and may break or change the behavior of your code. If you find Code Cleanup yields unexpected results, please let us know. You can globally undo the entire operation using the Visual Studio’s Undo action (available when any document changed by code cleanup is open).
When we started working on CodeRush for Roslyn, we realized we needed a convenient way to deliver updates to our customers.
We knew that Visual Studio already provided a great tool for extension developers - the Extensions Gallery on the Visual Studio Marketplace. This public repository makes it easy to deliver polished updates to the world, but we also wanted internal developers using CodeRush for Roslyn to have the same experience in their daily work. We wanted to make it as easy for them to participate in beta testing and getting new beta updates as it is for customers to get new releases of CodeRush: Automatically.
Fortunately the Visual Studio Extensions Gallery lets us add a private source for extensions, which can be used to deliver updates for a selected group of beta testers.
I’d like to share our experiences with the Private Gallery and show how we found it to be an effective tool in creating better products for customers.
When you are working in an agile environment, it is crucial to have a rapid update cadence and a stable channel to reliably deliver those updates to testers, internal users, and customers.
In building our products at DevExpress, we use an internal build farm that executes the entirety of the test and build process automatically, running on dedicated hardware. This allows developers to remotely generate new builds in a matter of minutes. Once a build is ready, we can distribute that build using the Private Gallery. This combination of dedicated build farm plus effortless & precisely-targeted distribution allows us to easily provide updates for our beta testers daily, or more frequently if needed.
This workflow is indeed fast with barely a hit on resources, but what about quality? Since our developers are constantly pushing code out to our development branch through version control, we must take preventative steps to ensure that new code doesn’t break existing features before giving it to our testers. And since CodeRush is an extension to Visual Studio, our beta testers are also using it to create production code. So every beta we ship has to be solid. For us, we found having a comprehensive suite of unit and integration test cases goes a long way in ensuring solid builds. If one test case fails, the build farm doesn’t allow an upload to the Private Gallery.
So what does a comprehensive testing suite look like? For us, that means about 36,000 unit tests, which thoroughly covers about 75% of our code. We also have nearly 500 full-blown integration tests, which start instances of Visual Studio and ensure CodeRush for Roslyn properly loads, MEF composition binds as expected, and integrated features are operating correctly.
Having a comprehensive suite of tests is essential to being able to deliver high-quality daily builds to your testers. It is so important, that it has changed how we work. For example, developers on our team are not allowed to check in any new features without also checking in supporting test cases to prove the code works as expected (and that it doesn’t work in unexpected ways). And developers are not allowed to submit bug fixes without also submitting new test cases proving that the fix works.
So we definitely consider our test suite to be a valuable company asset, one that every developer continues to invest in as we move forward.
The first step in setting up a Private Gallery is to specify which URLs Visual Studio will use to download details about any updated extensions.
To do this, open the Options window using the Tools | Options menu, then navigate to Environment | Extensions and Updates options page. To the right of the Additional Extension Galleries list box, click the Add button, then specify name and URL for the Private Gallery:
In the example above, we’ve named our gallery “Custom Gallery” and entered the gallery’s address as the URL.
Click OK to close the Options dialog.
Now, if you bring up the Extensions and Updates dialog, and open the Online category, Visual Studio will show our new Private Gallery:
Unfortunately, if you try to use it right now, Visual Studio will likely complain about its inability to connect to the remote server. Don’t worry. This is expected at this point because we also need to setup a server to supply an Atom Feed file for the extensions we want to distribute.
The Atom Feed file is an XML file containing a list of all the extensions available in the gallery. Each entry in the list includes essential information, such as extension ID, version details, author, etc. Official documentation on Atom Feed files already exists in the Atom Feed for a Private Gallery article, so we won’t dive into the details and structure here. However, we will show you how we dealt with the hardest part: generating and updating this file.
Check out the code in this project which updates the Atom Feed file each time you build and publish your extension. It is a console application expecting two command line parameters: the VSIX file path and the destination folder to be served by an HTTP(s) server. For example, you might add a call like this to the UpdateAtomFeed application inside your build script:
UpdateAtomFeed.exe "C:\Builds\TestExtension.vsix" "C:\GalleryServer\wwwroot"
The UpdateAtomFeed application copies the VSIX file to the specified server folder, then extracts the VSIX information and creates a custom.xml Atom Feed file (or updates an existing Atom Feed file if found).
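As an aside, peeking inside a package from a script is straightforward, because a VSIX is just a zip archive with an extension.vsixmanifest at its root. The sketch below (not part of UpdateAtomFeed; the helper name and manifest content are invented) fabricates a tiny package in memory and reads its manifest back:

```python
import io
import zipfile

def read_vsix_manifest(vsix_bytes):
    """Return the raw extension.vsixmanifest text from a VSIX package
    (a VSIX is a zip archive; the manifest sits at the archive root)."""
    with zipfile.ZipFile(io.BytesIO(vsix_bytes)) as zf:
        return zf.read("extension.vsixmanifest").decode("utf-8")

# build a minimal fake VSIX in memory for demonstration
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("extension.vsixmanifest", '<PackageManifest Version="2.0.0" />')
print(read_vsix_manifest(buf.getvalue()))
```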
Both the Atom Feed file and your VSIX files need to be hosted on a server available to your network, so let’s create a small .NET Core server application that does just that.
Start by creating a new .NET Core console application and name it “GalleryServer”. Then add the following NuGet dependencies:
Next, add the following code which creates and configures our static file server:
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.StaticFiles;

namespace GalleryServer
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            var provider = new FileExtensionContentTypeProvider();
            provider.Mappings[".vsix"] = "application/vsix";
            app.UseStaticFiles(new StaticFileOptions() { ContentTypeProvider = provider });
        }

        public static void Main(string[] args)
        {
            string path = Directory.GetCurrentDirectory();
            string wwwRootPath = Path.Combine(path, "wwwroot");
            new WebHostBuilder()
                .UseKestrel()
                .UseWebRoot(wwwRootPath)
                .UseContentRoot(wwwRootPath)
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }
}
This console app listens for incoming HTTP requests and starts serving static files from the wwwroot folder:
Note that in the code we have also specified a FileExtensionContentTypeProvider and registered the .vsix extension mapping as an application/vsix MIME type. This is necessary to serve VSIX files, because VSIX is not one of the standard file types.
Now when you open Tools and Extensions, Visual Studio should reveal our new test extension:
Nice. And now we can use our Private Gallery to easily install & update new builds of this extension.
Of course, we also need to allow connections from outside our local host, so the gallery becomes available to users outside our network. On Windows we can use IIS to accomplish this.
For us, the benefits of agile development are amplified when the edit/compile/test/build/distribute/feedback cycles are short. Thanks to Visual Studio’s Private Gallery options, along with our build farm and comprehensive test cases to ensure product quality, we were able to shrink that typical feedback loop (which can lumber on for days, distracting valuable resources) down to a cycle measured in just a few hours and with very little impact on existing resources.
Full source code and samples are here.
Written by Alex Zakharov & Mark Miller
This month’s release of CodeRush for Roslyn adds new code formatting options, expands the capabilities of the source code spell checker, improves the ForEach to Linq refactoring, and ports over a useful organizational feature from CodeRush Classic.
New formatting rules let you specify exactly how you would like to align code when it needs to wrap across multiple lines. Just specify the right margin column, and then decide what to wrap and how those wrapped lines will line up.
Check out the before & after previews in the screenshot above to get a sense of how cool this is.
This is a great feature, and an easy way to improve the code readability. You can create custom wrapping rules for a wide range of expressions and initializers.
Formatting options are applied through the Format Document rule, which can be included as part of CodeRush’s Code Cleanup feature. You can specify which rules to apply on the Code Cleanup options page:
The CodeRush Spell Checker gets an update in this release, with several new capabilities.
We have also added a new toolbar button to toggle the Spell Checker on and off.
Now you can set your preferred order for member modifier keywords (“static”, “public”, “virtual”, etc.). CodeRush for Roslyn features will maintain this order when adding or removing modifiers. For instance, you can set public and private modifiers to be placed before the static modifier. This feature is available for C# only, as Visual Studio automatically sorts modifiers in Visual Basic.
We are also synchronizing the following Code Style settings with similar code style settings found in Visual Studio:
If you change any of these settings in Visual Studio, the corresponding settings in CodeRush will be updated to match your changes. If you subsequently change the CodeRush settings, that will not alter the Visual Studio settings.
And we have added the new Attribute List Code Style & Code Cleanup rule, allowing you to combine two or more attributes when they are applied to a single member.
See the before & after previews in the screenshot above to see an example.
We have improved the ForEach to Linq refactoring so it generates cleaner code in more complex cases. The following Linq functions are now nicely supported by the query generator:
We have ported the Move Member to Region feature from CodeRush Classic. You can now create regions and move members to them easily. Simply click a member icon, select Move To Region and choose the target region.
Another 30 days, another CodeRush for Roslyn release. In this update we port a popular feature for XPO developers, add code generation options, add commands to extend the selection outwards by sibling nodes, and increase the speed of Tab to Next Reference. See the What’s New list for complete details.
As always, you can download CodeRush for Roslyn from the Visual Studio Marketplace.
If you’re already using CodeRush for Roslyn and you’re enjoying it, please share your review with other VS developers.
Also, be sure to check out our CodeRush Feature of the Week series on YouTube, where Rory Becker and I dive into CodeRush features in detail. Here’s a link to the first in the series, where you can see the Tab to Next Reference feature we tuned in this release:
Another sub-30-day sprint, another release. Here’s what’s new in this version of CodeRush for Roslyn:
We widened the accessibility of many refactorings and code providers, making them available when the caret is anywhere on the line containing the relevant code you want to change. Providers with broader accessibility include:
In this release we have excluded the PhantomJS library from the install, allowing you to specify the path to your preferred PhantomJS library. If the PhantomJS library is not found, the Jasmine Test Runner prompts you to install it through npm or NuGet.
You can download and install the latest version of CodeRush for Roslyn from the Visual Studio Marketplace. Give it a try and let us know what you think.
Another sprint, another release of CodeRush for Roslyn. You might have noticed we’ve stepped up our update frequency, with sprint/release cycles occurring in 30 day intervals.
Here’s what’s new in this release:
This release introduces the Declare menu, a quick and easy way to add needed code to your types.
These declarations are now available:
You get the idea. To try out the new Declare features, just position the caret inside the class you want to modify and press Alt+Insert.
Version 16.2.8 introduces the Clipboard History list, providing complete visual access to your ten most-recent clipboard operations. Instantly paste with the Ctrl+number shortcut. Filter through the list just by entering the text you’re looking for. Ctrl+Shift+V brings up the Clipboard History list:
The following features have gained new capabilities to more deeply support the latest language features, including:
You can get the latest what’s new list and a list of corrected issues here.
We encourage you to download CodeRush for Roslyn and give it a try.
As always, we thank you for your continued support and feedback. Let us know what we can do to make your coding experience even better.
Today Team CodeRush completed another 45-day sprint, releasing version 16.2.7 of CodeRush for Roslyn. This shiny new version includes C# 7 support, new refactorings, and code declaration tools.
We have updated a number of features to support the new C# 7.0 language spec. The Expand\Compress Ternary refactorings, the Smart Return (‘r’ template), and our Declare providers have all been updated to exploit new language features.
You can now choose one of three styles for placing new namespace reference imports/using declarations: place new imports/using references at the top of the file, place new imports/using statements inside the active namespace, or use fully qualified type names.
XAML developers get two new refactorings: Convert Nested Element to Attribute and Convert Attribute to Nested Element. These refactorings move an attribute from a XAML tag down to become a child element (and vice versa, moving a child element up to become an attribute of the parent tag). These refactorings can improve XAML readability when attributes are excessively complicated or when the nesting level gets excessively deep.
Also, for the C# & VB code behind the XAML, we’ve added two new code providers to make it easier to declare properties in classes that implement the INotifyPropertyChanged interface (more details below).
In addition to the new C# 7.0 upgrades mentioned above, we have improved and broadened a number of existing code gen/mod features, including:
To simplify the installation experience across multiple versions of Visual Studio, we now have a single CodeRush for Roslyn installer designed to install CodeRush to all supported IDE versions at once, available through the DevExpress Download Manager.
You can get the latest version of CodeRush for Roslyn from the Visual Studio Marketplace.
If you’re an existing customer, we thank you for your business. We are working hard to keep CodeRush fast & lean while we continue to ship powerful and intelligent features every 45 days.
If you’re thinking about giving CodeRush a try, we encourage you to install CodeRush today. We expect you’ll find it a powerful addition to Visual Studio, and one that’s easy to use.
As always, let us know what you think!
Team CodeRush continues our 45-day sprint/release cycle, and we have some treats for you in this update.
This is a brand new feature designed to help you quickly name new symbols (members, variables, and parameters).
The Naming Assistant window automatically opens when you start typing a new symbol name. The suggestion list is filtered as you type, and you can use a subset of the letters in the symbol you want to filter quickly. For example, in the screencast below, typing “mb” filters the suggestions to only show “modelBuilder”.
Code Metrics were a popular feature in CodeRush Classic, and we’re pleased to announce the feature has been ported to CodeRush for Roslyn.
Code Metrics reveal the complexity of each member, using one of three popular metrics (Cyclomatic Complexity, Maintenance Complexity, or Line Count).
You can enable the feature on the Editor\All Languages\Code Metrics options page:
Once enabled, metrics will appear to the left of the member declaration:
You can select a different metric with the mouse, by clicking the metric number and using the menu:
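Of the three metrics, cyclomatic complexity is the simplest to approximate by hand: count a function's decision points and add one. A rough sketch of the idea in Python (not CodeRush's implementation, which works on C#/VB code):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity of a piece of Python source:
    one plus the number of decision points (branches, loops, boolean
    operators, exception handlers)."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.IfExp, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes two extra decision points
            decisions += len(node.values) - 1
    return decisions + 1
```

A straight-line function scores 1; every extra branch or loop pushes the number up, which is why the metric is a quick proxy for how hard a member is to test.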
The CodeRush Decompiler gets some improvements. Anonymous methods now appear inline, nullable types are listed with the “?” modifier (e.g., “int?” instead of “Nullable<int>”), and XML documentation comments now appear in decompiled code.
The References Window now lets you jump quickly among references. When the window is open, press F8 to jump to the next item and press Shift+F8 to jump to the previous item. Also, you can now use the Jump To dialog to navigate through all currently open files.
Two new refactorings:
Now, when duplicating items in collection initializers and in parameter lists, commas are added automatically if needed.
Two improvements to make it easier to understand what’s happening in your code:
We also addressed a number of reported issues. As always, we welcome your feedback.
You can download the latest version of CodeRush for Roslyn from the Visual Studio Marketplace. Give it a try and let us know what you think.
Source: https://community.devexpress.com/blogs/markmiller/default.aspx
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
kanban attribute : How to change the color
Hi folks,
I was wondering how it is possible to change the background color of a kanban item in the same way as for tree, calendar, etc. I tried this:
<kanban position="attributes">
    <attribute name="colors">green:membership_state='none';red:membership_state</attribute>
</kanban>
How is it possible to condition the color? Where can I find an example of this in the modules?
Thanks
Changing the color in a kanban view can be done by setting a class, like this:
<code>
<field name="member_color"/>
.../...
<templates>
.../...
<div t-
.../...
</div>
</code>
I created a functional field:
```python
# In the model's _columns definition:
'member_color': fields.function(_check_color, 'Couleur', type="integer"),

.../...

def _check_color(self, cr, uid, ids, field_name, arg, context):
    res = {}
    for record in self.browse(cr, uid, ids, context):
        color = 0
        if record.membership_state == u'paid':
            color = 4
        elif record.membership_state == u'invoiced':
            color = 3
        elif record.membership_state == u'canceled':
            color = 3
        elif record.membership_state == u'waiting':
            color = 5
        elif record.membership_state == u'old':
            color = 1
        res[record.id] = color
    return res
```
The principle is that the web widget that handles kanban will load the color through its CSS. Check the ".openerp .oe_kanban_view .oe_kanban_color_X" CSS classes in the web module's CSS.

AFAIK the color number is limited to 0-9 because the JS method is designed that way.

Hope it helps
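The if/elif chain in the answer maps membership states to kanban color numbers; the same lookup can be expressed as a plain dictionary (a sketch reusing the answer's color numbers):

```python
# Same state -> color mapping as the functional field above;
# unknown states fall back to color 0.
STATE_COLORS = {
    u'paid': 4,
    u'invoiced': 3,
    u'canceled': 3,
    u'waiting': 5,
    u'old': 1,
}

def member_color(membership_state):
    """Return the kanban color number (0-9) for a membership state."""
    return STATE_COLORS.get(membership_state, 0)
```

This keeps the state-to-color policy in one place, so adding a new state is a one-line change.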
Hi, Any luck? Having the same question. Many thanks!
Source: https://www.odoo.com/forum/help-1/question/kanban-attribute-how-to-change-the-color-24271
|
:
import org.perl.*;
Collection foo = Perl5.unpack(template, string);
--
perl: code of the samurai
I don't know of any projects like this, but ++ to you samurai; I think we ought to do it, and to start us off here's a first run implementation of map, applicable to grep. Its weakness is that the loop iterations and return values must be defined by the user, however I'm not sure yet how to get similar behavior to setting $_ as in Perl. In any case, here goes:
public class Mapper {
// Define an interface that each block will implement. Note that
// we use java.lang.Object here so this will work with any class.
interface MapBlock {
public Object[] run(Object[] list);
}
// Define our map method, which calls the methods declared in the
// MapBlock interface and returns the outcome.
public static Object[] map(MapBlock block, Object[] list) {
Object[] out = block.run(list);
return out;
}
// Define a main to test the idea.
public static void main(String[] args) {
// Test arguments, lowercase 'foo' and 'bar'
Object[] objects = { new String("foo"), new String("bar") };
// Call the map function, passing an anonymous class implementing
// interface MapBlock containing the code we want to run.
Object[] new_list =
map(new MapBlock() {
public Object[] run(Object[] list) {
Object[] out = new Object[list.length];
for (int i = 0; i < list.length; i++) {
String str = (String)list[i];
// Create uppercase versions
out[i] = str.toUpperCase();
}
return out;
}
}, objects); // end of MapBlock, second param objects
// Test to see what our new Object array contains.
System.out.println((String)new_list[0] + (String)new_list[1]);
}
}

Here, new MapBlock() creates a new object that implements the interface MapBlock, which amounts to an is-a relationship. The class itself is anonymous, but it's-a MapBlock. That means we can create a one-off class implementing particular behavior, and because the compiler is aware of the MapBlock interface, it'll let you create a new class that implements it. This is the closest we can get AFAIK to passing a block of code to a function. The problem in this version, though, is that all the functionality must be packed into the anonymous implementation, including the code for iterating through the list, as an interface cannot include implementation. But that means this isn't yet like map.
So, below I try the other option, subclassing another class. In this case new MapBlock creates an anonymous subclass of MapBlock, which means we can inherit a constructor and some implementation behavior. The idea now is to pack the implicit map functions, e.g., iterating over a list and placing the results into an array, into the superclass. The anonymous subclass now only has to implement the abstract method! ; )
Case in point: C programmers that program Perl as though it were C with scalars. Of course they'd be twice as efficient if they realized they could replace all their for(;;) loops with foreach, map and grep, but they concentrate so hard on making Perl feel like C that they miss them completely.
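The same observation holds outside Perl: most index-based for(;;) loops have a higher-order or comprehension equivalent. A quick Python illustration of the map and grep idioms discussed in this thread:

```python
words = ["foo", "bar", "baz"]

# map: transform every element (Perl: map { uc } @words)
upper = [w.upper() for w in words]

# grep: keep only the matching elements (Perl: grep { /^b/ } @words)
b_words = [w for w in words if w.startswith("b")]
```

Both read as a description of the result rather than a recipe of index bookkeeping, which is exactly what the C-style for(;;) loop obscures.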
-sam.
Source: http://www.perlmonks.org/index.pl/jacques?node_id=211105
|
Hi!
I have several scripts using asyncio and aiohttp, which run well at home but not on PA. Although I have both modules installed and execute the scripts with PA 3.6, I get the following errors as if it were a 3.4 or 3.5 interpreter:

1) Coroutines defined by async def some_func() are not recognized as such. Instead, upon calling that function, the interpreter prompts:

A Future or coroutine is required

2) When I decorate the coro with @asyncio.coroutine, the interpreter recognizes it but doesn't understand the await call inside (it works without await):

j = await r.json()
    ^
SyntaxError: invalid syntax

3) The interpreter stumbles over the async with statement wrapping aiohttp commands:

async with aiohttp.ClientSession(headers=headers) as session:
    ^
SyntaxError: invalid syntax
Thank you for your support!
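For context, async def, await, and async with are Python 3.5+ syntax, so errors like these usually mean an older interpreter is actually running the script. A hypothetical guard that fails fast with a clear message:

```python
import sys

def require_python(minimum=(3, 5)):
    """Exit with a clear message when the running interpreter predates
    the async/await syntax (added in Python 3.5)."""
    if sys.version_info < minimum:
        sys.exit("Python %d.%d+ required, but running %d.%d"
                 % (minimum + tuple(sys.version_info[:2])))
    return tuple(sys.version_info[:2])
```

Note that a SyntaxError in the same file fires at compile time, before this guard runs, so it is most useful in a small launcher script that imports the async code only after the check passes (or simply confirm the interpreter with python3 --version).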
Source: https://www.pythonanywhere.com/forums/topic/12821/
|
explain_write_or_die - write to a file descriptor and report errors
#include <libexplain/write.h> void explain_write_or_die(int fildes, const void *data, long data_size);
The explain_write_or_die function is used to call the write(2) system call. On failure an explanation will be printed to stderr, obtained from explain_write(3), and then the process terminates by calling exit(EXIT_FAILURE).

This function is intended to be used in a fashion similar to the following example:

ssize_t result = explain_write_or_die(fildes, data, data_size);

fildes - The fildes, exactly as to be passed to the write(2) system call.
data - The data, exactly as to be passed to the write(2) system call.
data_size - The data_size, exactly as to be passed to the write(2) system call.

Returns: This function only returns on success. On failure, prints an explanation and exits.
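The call-or-die pattern described above is easy to imitate in other languages. A rough Python analogue (a sketch of the same contract, not a binding to the real libexplain):

```python
import os
import sys

def write_or_die(fildes, data):
    """Write data to fildes; on failure, print an explanation to stderr
    and terminate the process, like explain_write_or_die()."""
    try:
        return os.write(fildes, data)
    except OSError as err:
        print("write(%d, ..., %d) failed: %s"
              % (fildes, len(data), err.strerror), file=sys.stderr)
        sys.exit(1)
```

The point of the pattern is that the caller never has to handle the error path: either the write succeeded, or the process has already exited with a human-readable explanation.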
write(2) - write to a file descriptor
explain_write(3) - explain write(2) errors
exit(2) - terminate the calling process
libexplain version 0.19 Copyright (C) 2008 Peter Miller explain_write_or_die(3)
Source: http://huge-man-linux.net/man3/explain_write_or_die.html
|
The following example may clarify some of the suggestions above. (Yes, there is a size limitation, so I have cut the example.) It would be nice if this function were valid code:

I apologise for the messed-up indentation and the duplication above
/Csaba
I will give you $1000 to turn off VB... it's not even a language..

$50 for removing the line continuation mark _

$50 for removing the need for Dim

$100 for the My class. It rocks!

$50 to reduce the bloated VB language. Get rid of crap keywords like Dim and all the unneeded keywords

$50 for running VB.net on everything including Xbox 360. It is not fair that only C# programmers get to have all the fun.
Things to keep
$30 The very verbosity others have complained about. End If/End Function etc. are part of the reason that VB is so readable. You get to the bottom and you know what's ending without having to trace back through an ugly mess of nested indents with curly braces. Unless you're writing VB in Notepad, there's no extra typing anyway.
$30 Case insensitivity, with the IDE keeping the case consistent.
$30 Automatic code formatting. Eliminate as much individual style as possible. If I have to pick up someone else's code and read it, I want to spend my time figuring out what steps they're taking, not stumbling through their wacky indenting style.
$10 Intellisense
Future priorities:
$20 Do even more to improve readability and minimize code style creativity. For instance a lot of coders have different ways of dealing with a line running off the screen or long lists of arguments. The IDE should take care of the wrapping. The more aspects of code style you can eliminate from the coder's control, the better.
$15 Maintain parity with C# features. I can't have C# snobs lording it over me for a whole version until VB catches up (e.g. Iterators). There should be absolutely no reason to use C#, ever, except maybe masochism.
$40 Code that writes code. I'm not so sure I want to go back to the days of Eval, but at least it was something. There ought to be some way to do dynamically generated functionality. It would fill a lot of gaps.
$25 More high-level ways to safely handle complex multi-threading scenarios.
Why do people dislike VB so much? Isn't the purpose of it to make things as easy as possible for the programmer?
Wow, that's so awesome, I love how people are engaging! You are helping us do our job better, thanks for doing that!
Keep posting so that we can get as many opinions as possible and identify what are really the most important features for the VB community!
$30 - Implicit casting
$40 - Case insensitivity
$30 - Great IntelliSense in VS2008
===
$30 - Maintain parity with C#
$50 - Portability to Linux :)
$20 - Do something to VB reputation
$50:
Instead of getting rid of the underscore, perhaps leave it up to the editor to add it automatically when I end a line with comma or ampersand etc. So I second the idea above to leave it up to the IDE to do the code formatting when splitting up across lines.
Automatic properties - I would have expected them in VB long time ago
I like it how it is right now. I'd pocket the $200 bucks.
I would give $100 to have VB check to see if my system had the required components for a new project I'm considering working with, and then retrieve them if needed for whatever system I was on. Strictly for developers' systems.
$5 ability to kill My infrastructure for Class Library projects without manually editing project files.
$10 C# iterators
$20 better array and collection initialyzers
Dim a as String()()={{"a","b"},{"c"}}
Dim b as Point()={(x1,y1,z1),(x2,y2,z2)}
$10 keyboard shortcut to show types/namespaces list to write some static method call
$10 for less bright ASP.Net color scheme. (I swear I saw it the first time I installed Beta2!)
$20 I don't know how it can be done, but... underscore!
$25 better anonymous methods, lambdas and expression trees support. And it would be really cool if I could get expression tree from my existing function and modify it
$50 - leave current features alone (except to add or fix functionality)
$20 - make the TextFieldParser class part of the BCL
$30 - add "Treat Consecutive Delimiters as One" to the TextFieldParser class (think Excel)
$40 - ability to use types declared in the current project to My.Settings
$60 - show available shadows/overloads/overrides on ALL available methods/properties within context when keyword is typed (similar to overrides keyword) instead of having to lookup the signature of the item you are trying to shadow/overload/override.
I would give $100 for a more comprehensive combo box that would do columns, column sorting along with type ahead feature.
I would spend $100 on making things simpler. For example, I tried to use VB Express 2005 to create an .EXE file for our internal use. Where did the created .EXE file go? The IDE was little help. And, no, it didn't actually produce an .EXE file but deploy and manifest files. I finally (sort of) figured it out, but with little help from the VB IDE. Another example, the process of using databases seems to have changed in a major way. Using the help features hasn't produced much of actual help, though apparently there's a powerful new super helpful feature called the Binding Component. A clue! (I hope.) Because the data samples of the 101 Code Samples are installed somewhere, but how to actually access them isn't clear ... Sigh.
Source: http://blogs.msdn.com/b/vbteam/archive/2007/07/30/if-i-gave-you-200-to-spend-on-vb-how-would-you-spend-it.aspx?PageIndex=2
|
Test::Reporter - sends test results to cpan-testers@perl.org
use Test::Reporter;
my $reporter = Test::Reporter->new();
$reporter->grade('pass');
$reporter->distribution('Mail-Freshmeat-1.20');
$reporter->send() || die $reporter->errstr();

# or

my $reporter = Test::Reporter->new();
$reporter->grade('fail');
$reporter->distribution('Mail-Freshmeat-1.20');
$reporter->comments('output of a failed make test goes here...');
$reporter->edit_comments(); # if you want to edit comments in an editor
$reporter->send('afoxson@cpan.org') || die $reporter->errstr();

# or

my $reporter = Test::Reporter->new(

Test::Reporter has wide support for various perl5's and platforms. For further information visit the links below:
CPAN Testers reports (new site)
CPAN Testers reports (old site)
The new CPAN Testers Wiki (thanks Barbie!)
The cpan-testers mailing list
Test::Reporter itself--as a project--also has several links for your visiting enjoyment:
Test::Reporter's master project page
Discussion group for Test::Reporter
The Wiki for Test::Reporter
Test::Reporter's public git source code repository.
Test::Reporter on CPAN
UNFORTUNATELY, WE ARE UNABLE TO ACCEPT TICKETS FILED WITH RT.
Please file all bug reports and enhancement requests at our Google Code issue tracker. Thank you for your support and understanding.
If you happen to--for some strange reason--be looking for primordial versions of Test::Reporter, you can almost certainly find them at the above 2 links.
Optional. Gets or sets the e-mail address that the reports will be sent to. By default, this is set to cpan-testers@perl.org. You shouldn't need this unless the CPAN Testers change the e-mail address to send reports to.
Optional. Gets or sets the comments on the test report. This is most commonly used for distributions that did not pass a 'make test'.
Optional. Gets or sets the value that will turn debugging on or off. Debug messages are sent to STDERR. 1 for on, 0 for off. Debugging generates very verbose output and is useful mainly for finding bugs in Test::Reporter itself.
Optional. Defaults to the current working directory. This method specifies the directory that write() writes test report files to.
Gets or sets the name of the distribution you're working on, for example Foo-Bar-0.01. There are no restrictions on what can be put here.
Optional. This is optional if you don't intend to use Test::Reporter to send reports via e-mail; see 'send' below for more information.
Optional. Gets or sets the e-mail address of the individual submitting the test report, i.e. "afoxson@pobox.com (Adam Foxson)". This is mostly of use to testers running under Windows, since Test::Reporter will usually figure this out automatically. Alternatively, you can use the MAILADDRESS environmental variable to accomplish the same.
Gets or sets the success or failure of the distributions's 'make test' result. This must be one of:
grade    meaning
-----    -------
pass     all tests passed
fail     one or more tests failed
na       distribution will not work on this platform
unknown  distribution did not include tests
Optional. If you have MailTools installed and you want to have it behave in a non-default manner, parameters that you give this method will be passed directly to the constructor of Mail::Mailer. See Mail::Mailer and Mail::Send for details.
Returns an automatically generated Message ID. This Message ID will later be included as an outgoing mail header in the test report e-mail. This was included to conform to local mail policies at perl.org. This method courtesy of Email::MessageID.
Optional. Gets or sets the mail exchangers that will be used to send the test reports. If you override the default values make sure you pass in a reference to an array. By default, this contains the MX's known at the time of release for perl.org. If you do not have Mail::Send installed (thus using the Net::SMTP interface) and do have Net::DNS installed it will dynamically retrieve the latest MX's. You really shouldn't need to use this unless the hardcoded MX's have become wrong and you don't have Net::DNS installed.
This constructor returns a Test::Reporter object. It will optionally accept named parameters for: mx, address, grade, distribution, from, comments, via, timeout, debug, dir, perl_version, and transport.

The send() method sends the test report to cpan-testers@perl.org and cc's the e-mail to the specified recipients, if any. If you do specify recipients to be cc'd and you do not have Mail::Send installed, be sure that you use the author's @cpan.org address, otherwise they will not be delivered. You must check errstr() on a send() in order to be guaranteed delivery. Technically, this is optional, as you may use Test::Reporter to only obtain the 'subject' and 'report' without sending an e-mail at all, although that would be unusual.
Returns the subject line of a report, i.e. "PASS Mail-Freshmeat-1.20 Darwin 6.0". 'grade' and 'distribution' must first be specified before calling this method.
Optional. Gets or sets the timeout value for the submission of test reports. Default is 120 seconds.
Optional. Gets or sets the transport method. If you do not specify a transport, one will be selected automatically on your behalf: If you're on Windows, Net::SMTP will be selected, if you're not on Windows, Net::SMTP will be selected unless Mail::Send is installed, in which case Mail::Send is used.
At the moment, this must be one of either 'Net::SMTP', or 'Mail::Send'. Support for authenticated SMTP may soon be possibly added as well.
If you specify 'Mail::Send' as a transport, you can add an additional argument in the form of an array reference which will be passed to the constructor of the lower-level Mail::Mailer. This can be used to great effect for all manner of fun and enjoyment. ;-)
This is not designed to be an extensible platform upon which to build transport plugins. That functionality is planned for the next-generation release of Test::Reporter, which will reside in the CPAN::Testers namespace.
Optional. Gets or sets the value that will be appended to X-Reported-Via, generally this is useful for distributions that use Test::Reporter to report test results. This would be something like "CPANPLUS 0.036".
These methods are used in situations where you test on a machine that has port 25 blocked and there is no local MTA:
my $reporter;
$reporter = Test::Reporter->new()
    ->read('pass.Test-Reporter-1.16.i686-linux.2.2.16.1046685296.14961.rpt')
    ->send() || die $reporter->errstr();
# wrap in an opendir if you've a lot to submit
write() also accepts an optional filehandle argument:
my $fh;
open $fh, '>-';         # create a STDOUT filehandle object
$reporter->write($fh);  # prints the report to STDOUT
If you specify recipients to be cc'd while using send() (and you do not have Mail::Send installed) be sure that you use the author's @cpan.org address otherwise they may not be delivered, since the perl.org MX's are unlikely to relay for anything other than perl.org and cpan.org.
This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself.
This is optional. If it's installed Test::Reporter will try even harder at guessing your mail domain.
This is optional. If it's installed Test::Reporter will dynamically retrieve the mail exchangers for perl.org, instead of relying on the MX's known at the time of this release.
This is optional. If it's installed Test::Reporter will use Mail::Send instead of Net::SMTP.
Adam J. Foxson <afoxson@pobox.com> and Richard Soderberg <rsod@cpan.org>, with much deserved credit to Kirrily "Skud" Robert <skud@cpan.org>, and Kurt Starsinic <Kurt.Starsinic@isinet.com> for predecessor versions (CPAN::Test::Reporter, and cpantest respectively).
Source: http://search.cpan.org/~fhoxh/Test-Reporter-1.38/lib/Test/Reporter.pm
|
Subject: Re: [boost] [review] [sort] Sort library review manager results
From: Steven Ross (spreadsort_at_[hidden])
Date: 2014-12-01 06:40:29
Francisco,
On Sun Nov 30 2014 at 11:32:10 AM Francisco José Tapia <fjtapia_at_[hidden]>
wrote:
> Hi,
>
>
> If you are interested in sort methods, I have a parallel implementation
> of intro-sort and merge-sort.
>
> The results of these implementations are similar to those obtained by the
> parallel implementations of GCC (using OpenMP) and TBB (which doesn't have
> a parallel stable sort). In the attached text file you can see the results
> on my computer.
>
Thanks for providing your results.
Some questions about these results:
1) what random number generation function did you use?
2) Did you try the worst-case ordering for introsort?
3) Why would someone want to use your library instead of TBB? (having used
OpenMP, I know why people might not like OpenMP) I'd like at least one
statement from someone else on this user group that they'd like to use this
library and why.
4) Can you do something to fix the performance for already-sorted data? TBB
appears to have an optimization for that surprisingly common case.
5) Your stable_sort appears unacceptably slow single-threaded vs the
standard gcc version. You probably should fix that or abandon it.
6) Have you compared memory usage to make sure all variants are comparable
for the same number of threads?
>
> My implementation doesn't use OpenMP or TBB. It uses a concurrent stack
> implemented with a vector and a spin-lock done with atomic variables.
> It will be compatible with any C++11 compiler
>
Can you make it work as efficiently without C++11? Not everyone has a
C++11 compiler just yet.
> The algorithms run well, but they are pending a detailed adjustment in
> order to improve the speed.
>
> If you are interested, please tell me in which namespace they must be
> included. Right now they are part of the new version of my library
> countertree and are in its namespace. I will need two weeks to do it
> because I am a teacher and in these days I am buried in a mountain of
> exams.
>
No rush. I'll let you know once we finalize on a namespace.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2014/12/218046.php
|
JEP 286: Local-Variable Type Inference

var path = Path.of(fileName); var fileStream = new FileInputStream(path); var bytes = Files.readAllBytes(path);

The variable's type is inferred based on the type of the initializer. If there is no initializer, the initializer is the
null literal, or the type of the initializer is not one that can be normalized to a suitable denotable type (these include intersection types and some capture types), or if the initializer is a poly expression that requires a target type (lambda, method ref, implicit array initializer), then the declaration is rejected.
We may additionally consider
val or
let as a synonym for
final var. (In any case, locals declared with
var will continue to be eligible for effectively-final analysis.)
The identifier
var will not be made into a keyword; instead it will be a reserved type name. This means that code that uses
var as a variable, method, or package name will not be affected; code that uses
var as a class or interface name will be affected (but these names violate the naming conventions.)
Excluding locals with no initializers eliminates "action at a distance" inference errors, and only excludes a small portion of locals in typical programs.
Excluding RHS expressions whose type is not denotable would simplify the feature and reduce risk. However, excluding all non-denotable types is likely to be too strict; analysis of real codebases show that capture types (and to a lesser degree, anonymous class types) show up with some frequency. Anonymous class types are easily normalized to a denotable type. For example, for
var runnable = new Runnable() { ... }
we normalize the type of
runnable to
Runnable, even though inference produces the sharper (and non-denotable) type
Foo$23.
Similarly, for capture types
Foo<CAP>, we can often normalize these to a wildcard type
Foo<?>. These techniques dramatically reduce the number of cases where inference would otherwise fail.
Alternatives
We could continue to require manifest declaration of local variable types.
We could support diamond on the LHS of an assignment; this would address a subset of the cases addressed by
var.
The design described above incorporates several decisions about scope, syntax, and non-denotable types; alternatives for those choices which were also considered are documented here.
Scope Choices
There are several other ways we could have scoped this feature. One, which we considered, was restricting the feature to effectively final locals (
val only).
Whether or not to have a second form for immutable locals (
val,
let) is a tradeoff of additional ceremony for additional capture of design intent. We already have effectively-immutable analysis for lambda and inner class capture, and the majority of local variables are already effectively immutable. Some people like that
var and
val are so similar, so that the difference recedes into the background when reading code, while others find them distractingly similar. Similarly, some like that
var and
let are clearly different, while others find the difference distracting. (If we are to support new forms, they should have equal syntactic weight (both
val and
let qualify), so that laziness is less likely to entice users to omit the additional declaration of immutability.)
auto (the C++ keyword) is a viable choice, but Java developers are more likely to have experience with JavaScript, C#, or Scala than they are with C++, so we do not gain much by emulating C++ here.
Using
const or
final seems initially attractive because it doesn't involve new keywords. However, going in this direction effectively closes the door on ever doing inference for mutable locals. Using
def has the same defect.
The Go syntax (a different kind of assignment operator) seems pretty un-Javaish.
Non-Denotable Types
We have several choices as to what to do with nondenotable types (null types, anonymous class types, capture types, intersection types.) We could reject them (requiring a manifest type), accept them as inferred types, or try to "detune" them to denotable types.
Arguments for rejecting them include:
Risk reduction. There are many known corner cases with weird types such as captures and intersections in both the spec and the compiler; by allowing variables that have these types, they are more likely to be used, activate corner cases, and cause user frustration. (We are working on cleaning these up, but this is a longer-term activity.)
Expressibility-preserving. By rejecting non-denotable types, every program with
var has a simple local transformation to a program without
var.
Arguments for accepting them include:
We already infer these types in chained calls, so it is not like our programs are free of these types anyway, or that the compiler need not deal with them.
Capture types arise in situations when you might think that a capture type is not needed (such as
var x = m(), where
m() returns
Foo<?>); rejecting them may lead to user frustration.
While we were initially drawn to the "reject them" approach, we found that there were a significant class of cases involving capture variables that users would ultimately find to be mystifying restrictions. For example, when inferring
var c = Class.forName("com.foo.Bar")
inference produces a capture type
Class<CAP>, even though the type of this expression is "obviously"
Class<?>. So we chose to pursue an "uncapture" strategy where capture variables could be converted to wildcards (this strategy has applications elsewhere as well). There are many situations where capture types would otherwise "pollute" the result, for which this technique was effective.
Similarly, we normalize anonymous class types to their (first) supertype. We make no attempts to normalize intersection or union types. The largest remaining category where we cannot infer a sensible result is when the initializer is null.
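To make the normalization behavior above concrete, here is a minimal sketch (assuming a Java 10+ compiler; the variable names are illustrative only):

```java
public class VarDemo {
    public static void main(String[] args) throws Exception {
        var message = "hello";          // inferred as String
        var length = message.length();  // inferred as int

        // Class.forName declares a return type of Class<?>; under var, the
        // capture type Class<CAP> is "uncaptured" back to the wildcard Class<?>.
        var c = Class.forName("java.lang.String");
        System.out.println(c.getName() + " " + length); // prints "java.lang.String 5"
    }
}
```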
Risks and Assumptions
Risk: Because Java already does significant type inference on the RHS (lambda formals, generic method type arguments, diamond), there is a risk that attempting to use var with an initializer that itself requires a target type will simply fail. Risk: Inferring non-denotable types might press on already-fragile paths in the specification and compiler.
We've mitigated this by normalizing most non-denotable types, and rejecting the remainder.
Risk: source incompatibilities (someone may have used "var" as a type name.)
Mitigated with reserved type names; names like "var" and "val" do not conform to the naming conventions for types, and therefore are unlikely to be used as types. The names "var" and "val" are commonly used as identifiers; we continue to allow this.
Risk: reduced readability, surprises when refactoring.
Like any other language feature, local variable type inference can be used to write both clear and unclear code; ultimately the responsibility for writing clear code lies with the user.
|
http://openjdk.java.net/jeps/286
|
bdflush − start, flush, or tune buffer-dirty-flush daemon
#include <sys/kdaemon.h>
int bdflush(int func, long *address);
int bdflush(int func, long data);

If func is negative or 0 and no daemon has been started, then bdflush() enters the daemon code and never returns. If func is 1, some dirty buffers are written to disk. If func is 2 or more and is even (low bit is 0), then address is the address of a long word, and the tuning parameter numbered (func−2)/2 is returned to the caller in that address.
If func is 3 or more and is odd (low bit is 1), then data is a long word, and the kernel sets tuning parameter numbered (func−3)/2 to that value.
The set of parameters, their values, and their valid ranges are defined in the Linux kernel source file fs/buffer.c.
If func is negative or 0 and the daemon successfully starts, bdflush() never returns. Otherwise, the return value is 0 on success and −1 on failure, with errno set to indicate the error.
bdflush() is Linux-specific and should not be used in programs intended to be portable.
fsync(2), sync(2), sync(8), update(8)
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/.
|
http://man.sourcentral.org/slack141/2+bdflush
|
An introduction to Mono
You've probably heard of .NET, and since you're reading a Linux column, you've probably also heard of the Mono Project, which provides a .NET-compatible development platform. Mono has been the cause of a number of discussions regarding copyright, software patents, and other legal issues. So, instead of boring you with political ranting, we're going to concentrate on Mono's technical merits (of which there are many) and show you how you can use it for development. We also present ndiswrapper, which allows you to use some Windows network drivers with Linux, thus enabling you to use a number of previously unsupported network cards.
Mono: a unified development platform
We'll start out by introducing you to Mono (very briefly because you've probably had this pounded down your throat for the past 2 years) then we'll present some code and screenshots that demonstrate the versatility of the Mono project.
A brief history of Mono
The Mono project was conceived in the Summer of 2001 as an Open Source alternative to Microsoft's .NET development platform. Since then, it has come all the way to a 1.0 release among a flurry of controversy from mostly inside the Open Source community itself. Although we will not outline the reasons here, most of the criticism stems from the fact that .NET is Microsoft, and "we" don't like them.
At the end of June, 2004, Mono released their first major version (1.0) and the platform (especially C#) seems to be catching on very quickly. There's already a music player, a new app called Dashboard, an app I'm writing called MonoPlanet, and not to mention an IDE for developing Mono Apps called MonoDevelop and much more.
For users?
A user of a platform that supports Mono will hopefully see better-quality applications, applications that can just be installed (no compiling, not even for packaging), and applications that are secure. The ease for developers (given the proper tools) to write Mono applications will help to bring better applications to Linux (just as soon as Gtk# supports the current Gtk+ library).
Linux advocates have been saying all along that application support is key to its success. With the adoption of Mono, the accuracy of that statement should soon become clear.
For developers?
Mono's main pull for developers is that it is cross-platform and makes writing applications very fast because of its extensive framework. Mono also has the concept of garbage collection. Gone are the days of using malloc() and free() and recording where you allocated memory and making sure you free() it. Java has GC as well, but Java never really caught on as an application language.
Here we will show you how to write a very basic cross-platform application and run it on Windows and Linux. So without further delay, on to the code.
Note: Mono support on Mac OS X is still immature compared to other platforms; we expect this to change now that 1.0 is out the door.
The code: example 1
This code is simple: it takes a command-line argument and prompts the user for a regular expression. It tests the command-line string against the regular expression and prints whether it succeeded or failed.
using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

namespace Volume_27
{
    /// <summary>
    /// Linux.Ars Volume 27 Example 1
    /// </summary>
    class Example1
    {
        static void Main(string[] args)
        {
            if( args.Length == 0 )
            {
                Usage();
                Console.ReadLine();
                return;
            }
            String input = "";
            for(int i = 0; i < args.Length; i++)
                input = input + ((i==0)?"":" ") + args[i];
            Console.Write("Enter a regular expression to test the string \"{0}\": ", input);
            String regex = Console.ReadLine();
            bool matches = Regex.IsMatch( input, regex );
            Console.WriteLine("\"{0}\" DOES{1} Match \"{2}\"", input, (matches)?"":" NOT", regex);
        }

        public static void Usage()
        {
            String ExecPath = Environment.GetCommandLineArgs()[0];
            String Executable = ExecPath.Substring(ExecPath.LastIndexOf(Path.DirectorySeparatorChar) + 1);
            Console.WriteLine("-------------------------------------------------");
            Console.WriteLine("{0} usage:", Executable );
            Console.WriteLine();
            Console.WriteLine("\t{0} [Input String]", Executable);
            Console.WriteLine();
            Console.WriteLine("\t[Input String]: An input string that should be tested against");
            Console.WriteLine("\t                the prompted regular expression");
            Console.WriteLine();
            Console.WriteLine("-------------------------------------------------");
        }
    }
}
As you can see, the actual meat of this application is seven lines of code. Now, when we run this (on Windows) we get this output:
C:\>md5sum v27e1.exe
492f501af39a990a873a008adf82e8c7 *v27e1.exe

C:\>v27e1 Linux.Ars is cool
Enter a regular expression to test the string "Linux.Ars is cool": ^L.*.A*
"Linux.Ars is cool" DOES Match "^L.*.A*"
And now we execute the exact same binary on Linux (the case mismatch is intentional):
forgue@ahab:~$ md5sum v27e1.exe
492f501af39a990a873a008adf82e8c7 v27e1.exe
forgue@ahab:~$ ./v27e1.exe Linux.Ars is cool
Enter a regular expression to test the string "Linux.Ars is cool": ^L.*.C*
"Linux.Ars is cool" DOES Match "^L.*.C*"
The great power of Mono and .NET lies in the ONE line of code:
bool matches = Regex.IsMatch( input, regex );
.NET and Mono are actually a collection of libraries that form a framework which allows you, the programmer, to write just the logic of your application. You can call one line of code to do input validation on a string, which can save you hours of time. Things like input validation, network communication, file reading and writing, text encoding, regular expressions, formatting, XML parsing, LDAP access, remoting, and GUI development are reduced to just a few lines of code compared to possibly hundreds.
|
https://arstechnica.com/information-technology/2004/07/linux-20040715/
|
Tag of ScalaTest
15 Apr 2017
We often want to exclude some test cases. Like JUnit, ScalaTest has an
@Ignore annotation
to exclude a test case from being run. But how can we include or exclude test cases in a more fine-grained way?
ScalaTest provides the feature called Tagging. By using tagging, you can specify which test cases to be run and which are not. You can use this feature in both
FlatSpec and
WordSpec.
For example, assuming you want to attach a tag to some test cases to specify them as slow test, you can create your own tag as follows.
import org.scalatest.Tag object SlowTest extends Tag("com.lewuathe.SlowTest")
You can use the tag in each test cases.
class SomeTest extends WordSpec { "A Class" should { "do heavy task" taggedAs(SlowTest) in { // Doing some heavy test } } }
There might be some cases when you don’t want to run these heavy tests because they take a lot of time. Of course, the best option is to improve the test cases so they finish in a reasonable time, but simply excluding them can be a workaround. Once you have tagged them as
SlowTest, both including and excluding are easy.
Excluding
$ sbt "test-only -- -l com.lewuathe.SlowTest"
Including (just running only test cases tagged as
SlowTest)
$ sbt "test-only -- -n com.lewuathe.SlowTest"
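If the slow tests should be excluded on every plain `sbt test` run, the same flag can be wired into the build. A sketch for sbt, assuming the tag name defined above:

```scala
// build.sbt — exclude SlowTest-tagged cases by default
Test / testOptions += Tests.Argument(TestFrameworks.ScalaTest, "-l", "com.lewuathe.SlowTest")
```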
You can choose arbitrarily which test cases to run by using ScalaTest's tagging.
Thanks.
|
https://www.lewuathe.com/tag-of-scalatest.html
|
We pause one final time at the end of the semester to look at differences that have popped up between C and C++.
As a reminder of the differences we've already covered:
- C uses FILE* instead of ifstream
- C++ allocates with new, but C must use malloc or calloc
Today we'll look at functions, the lack of pass-by-reference, and linked lists. The summary is actually pretty easy:
You cannot overload functions in C. Recall that overloading is the ability to write two functions with the same function name but different function signatures. This is a very useful thing to do. For example, you might want two max functions:
int max(int, int);
int max(int, int, int);
void printArray(int* arr, int N, ostream& out) {
    for( int i = 0; i < N; i++ )
        out << arr[i];
    out << endl;
}

void printArray(int* arr, int N) {
    printArray(arr, N, cout);
}
This is a great use of overloading, but you should note that this is a matter of convenience right now. This doesn't give us more computational power, but rather ease of programming. It might help you write code more quickly (which is important!), but it doesn't make C++ more powerful. It's a feature!
What to do in C? If you want a default print function, well, you need to give it a different name. Some might say this is less than ideal; others might say this leads to fewer bugs since you're forced to give unique names to everything.
void printArrayFile(int* arr, int N, FILE* out) {
    for( int i = 0; i < N; i++ )
        fprintf(out, "%d ", arr[i]);
    fprintf(out, "\n");
}

void printArray(int* arr, int N) {
    for( int i = 0; i < N; i++ )
        printf("%d ", arr[i]);
    printf("\n");
}
This is a huge difference between the two languages. Pass-by-reference was a big feature that came with C++ and changed the paradigm of what programmers can do. Since C does not have this, please don't try to do it. The following is illegal!!!
void add2front(Node* &front, int val);
We'll get back to linked list functions like this shortly. Let's first talk about where this leaves us in C, though. If we can't pass by reference in our function calls, then we cannot directly modify variables that are passed to us, right? Well, right, but also wrong. All this means is that we need to be more explicit with pointers, and we have to use them more often.
Recall our function that found the max and the min of an array of integers. Since it needed to set two variables, it couldn't just return one of them. We used pass-by-reference to set both. It looked like this:
void maxMin(int* nums, int N, int &max, int &min) {
    max = nums[0];
    min = nums[0];
    for( int i = 1; i < N; i++ ) {
        if( nums[i] > max ) max = nums[i];
        if( nums[i] < min ) min = nums[i];
    }
}
C can also have a function like this, but it requires explicit pointers. You've seen all the pieces of this, so this isn't actually new, but perhaps you haven't thought about it this way. We now must do this:
void maxMin(int* nums, int N, int* max, int* min) {
    *max = nums[0];
    *min = nums[0];
    for( int i = 1; i < N; i++ ) {
        if( nums[i] > *max ) *max = nums[i];
        if( nums[i] < *min ) *min = nums[i];
    }
}
The big change is that
max is an explicit pointer! With C++ and pass-by-reference, you as the programmer get to treat it as a normal int, but with C we have to know this is a pointer. This means you can't just say
max = nums[0] to save the first int as the max. It's a pointer, not an int. We thus need that asterisk to follow the pointer to the actual memory location and fill the int with a value:
*max = nums[0]. Remember, pointers are memory addresses, so you have to keep the difference straight in your head as you code.
But if maxMin now needs pointers, how does the caller actually use this function? You need to take the addresses of your variables explicitly when you make the call:
int max, min;
maxMin(nums, N, &max, &min);
Hopefully you now see how the address-of operator and pass-by-reference are related. In C, the caller has to take addresses explicitly to produce the pointers. In C++, the callee defines the function as referencing the given arguments. Here is an entire C program that reads an int array and prints out the max/min:
#include <stdio.h>
#include <stdlib.h>

void maxMin(int* nums, int N, int* max, int* min) {
    *max = nums[0];
    *min = nums[0];
    for( int i = 1; i < N; i++ ) {
        if( nums[i] > *max ) *max = nums[i];
        if( nums[i] < *min ) *min = nums[i];
    }
}

int main() {
    int N;
    printf("How many? ");
    scanf("%d", &N);

    // Create the array
    int* nums = (int*)malloc(N*sizeof(int));
    printf("Enter %d nums: ", N);
    for( int i = 0; i < N; i++ )
        scanf("%d", &nums[i]);

    // Get the max/min
    int max, min;
    maxMin(nums, N, &max, &min);
    printf("max=%d min=%d\n", max, min);
    return 0;
}
Just a syntax change! Structs are actually a C construct. C++ builds on structs with something called
classes, but we ran out of time to introduce those. The change here is syntactic, and frankly a little annoying. You now have to write
struct in front of all your type declarations. For instance:
struct Mid {
    char* name;
    int alpha;
    int OOM;
};

void readMid(struct Mid m) {
    // do stuff
}

int main() {
    // Create a variable of type Mid
    struct Mid a;
    // do stuff...
}
Wherever you put
Mid now needs to be
struct Mid. You can probably now see why the C++ designers wanted to get rid of that.
Fundamentally, there is no real difference between C and C++. We have structs and pointers, and both of these are the same. However, there is a change in the types of functions that you are allowed to write, so manipulating linked lists feels different as a programmer.
Let's look at
add2front, our old friend. The prototype needs to change because it depends on pass-by-reference to always update the front pointer. Let's change that to an explicit pointer, and we now have this comparison:
Three things must change. The first is putting
struct in front of all the Nodes. The second (and biggest) is that pass-by-reference is removed and we just have the pointer to the front of the list. Since we have a new front pointer at the end of this function, it's easiest to then change the return type. The third change is then to actually return the front of the list! It's not a big change, and the caller makes a similar small change:
Hopefully you are more fully realizing that pass-by-reference does not give a programming language more power. We can still do everything we need to do, but our methodology of doing it is what changes. Many people argue over these things, arguing that one paradigm is better than the next, and indeed you might have your own personal preference already.
To conclude this section, I'm including C code to read a linked list from the user (stop at -1) and print it:
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

// Adds to the front!
struct Node* add2front(int val, struct Node* L) {
    struct Node* T = malloc(sizeof(struct Node));
    T->data = val;
    T->next = L;
    return T;
}

// Recursive, prints it backwards!
// Since the list is built backwards, this prints forward!
void printList(struct Node* L) {
    if( L == NULL )
        printf("\n");
    else {
        printList(L->next);
        printf("%d ", L->data);
    }
}

int main() {
    struct Node* mylist = NULL;
    int num;
    printf("Numbers? ");
    scanf("%d", &num);
    while( num != -1 ) {
        mylist = add2front(num, mylist);
        scanf("%d", &num);
    }
    printList(mylist);
    return 0;
}
|
https://www.usna.edu/Users/cs/nchamber/courses/si204/s18/lec/l37/lec.html
|
NAME
curl_global_cleanup - global libcurl cleanup
SYNOPSIS
#include <curl/curl.h>
void curl_global_cleanup(void);
DESCRIPTION
This function releases resources acquired by curl_global_init.
You should call curl_global_cleanup once for each call you make to curl_global_init, after you are done using libcurl. This function is not thread safe: you must not call it when any other thread in the program is running. Because curl_global_cleanup calls functions of other libraries that are similarly thread unsafe, it could conflict with any other thread that uses these other libraries.
See the description in libcurl of global environment requirements for details of how to use this function.
CAUTION
curl_global_cleanup does not block waiting for any libcurl-created threads to terminate (such as threads used for name resolving). If a module containing libcurl is dynamically unloaded while libcurl-created threads are still running then your program may crash or other corruption may occur. We recommend you do not run libcurl from any module that may be unloaded dynamically. This behavior may be addressed in the future.
SEE ALSO
curl_global_init, libcurl, libcurl-thread
|
https://curl.haxx.se/libcurl/c/curl_global_cleanup.html
|
A typical algorithm for VS random number generators is as follows:
Create and initialize stream/streams. Functions vslNewStream, vslNewStreamEx, vslCopyStream, vslCopyStreamState, vslLeapfrogStream, vslSkipAheadStream.
Call one or more RNGs.
Process the output.
Delete the stream or streams with the function vslDeleteStream.
Note
You may reiterate steps 2-3. Random number streams may be generated for different threads.
The following example demonstrates generation of a random stream that is output of basic generator MT19937. The seed is equal to 777. The stream is used to generate 10,000 normally distributed random numbers in blocks of 1,000 random numbers with parameters a = 5 and sigma = 2. Delete the streams after completing the generation. The purpose of the example is to calculate the sample mean for normal distribution with the given parameters.
Example of VS RNG Usage
#include <stdio.h>
#include "mkl_vsl.h"

int main()
{
    double r[1000];  /* buffer for random numbers */
    double s;        /* average */
    VSLStreamStatePtr stream;
    int i, j;

    /* Initializing */
    s = 0.0;
    vslNewStream( &stream, VSL_BRNG_MT19937, 777 );

    /* Generating */
    for ( i=0; i<10; i++ ) {
        vdRngGaussian( VSL_RNG_METHOD_GAUSSIAN_ICDF, stream, 1000, r, 5.0, 2.0 );
        for ( j=0; j<1000; j++ ) {
            s += r[j];
        }
    }
    s /= 10000.0;

    /* Deleting the stream */
    vslDeleteStream( &stream );

    /* Printing results */
    printf( "Sample mean of normal distribution = %f\n", s );
    return 0;
}
Additionally, examples that demonstrate usage of VS random number generators are available in:
${MKL}/examples/vslc/source
|
https://software.intel.com/pt-br/mkl-developer-reference-c-vs-rng-usage-model
|
Doing a new project in Rails, I started using vim when putting it together, then a couple weeks went by, and I never got out of vim!
I used to use GUI-based editors with syntax highlighting, code hints and such - until I found that vim can do all of that! Once I figured out how to make it automatically close all my parens, braces, and brackets, I was sold.
Here’s my favorite .vimrc thing that helped….
Put this in ~/.vimrc:
:inoremap ( ()<ESC>i
:inoremap ) <c-r>=ClosePair(')')<CR>
:inoremap { {}<ESC>i
:inoremap } <c-r>=ClosePair('}')<CR>
:inoremap [ []<ESC>i
:inoremap ] <c-r>=ClosePair(']')<CR>

function ClosePair(char)
  if getline('.')[col('.') - 1] == a:char
    return "<Right>"
  else
    return a:char
  endif
endf
The other thing that helped a lot was konsole. I like to full-screen all my terminal windows. Never fond of having lots of little boxes floating around. So, when in a Rails dev-mood, I open up a tab at the bottom for EACH of these:
- project
- model
- view
- controller
- lang
- database (pg/mysql console)
- Rails/ActiveRecord console
- misc
- log
I just [SHIFT]-arrow back-n-forth between them, each one taking the full screen, and getting my full attention while working.
Cool, vim is getting in 2005 what Emacs had in 1995
About time. Only a decade behind. {grin}
Cool, vim is getting in 2005 what Emacs had in 1995
Vim has had Perl built in (as a compile-time option) for a couple years now, Randall. Does emacs do that? :)
Editorial correction request
You forgot to turn < into &lt;, so your <ESC>, <c-r> and <Right> are treated as tags instead of showing up on the page.
Cool, vim is getting in 2005 what Emacs had in 1995
Cool, Emacs got in 1995 what TECO had in 1963. Zing!
like to add a trackback
hello, by the way, what is the trackback url of articles published by oreillynet.com?
brs
Editorial correction request
Thanks aristotle - fixed.
like to add a trackback
I believe it's this page it's on :
Cool, vim is getting in 2005 what Emacs had in 1995
Hm? I thought vim was written only in C, and depended only on C libraries.
If the integrated Perl Debugger and Compiler in Emacs are not good enough for you, there is Perlmacs.
Extoll, don't troll!
Vim is even sweeter with
Since you mentioned Emacs, a shameless plug for the equivalent in Emacs to Cream:
Avoid the 2005-1beta, until you know how to use it, or unless you are upgrading.
Cool, vim is getting in 2005 what Emacs had in 1995
Well, you think wrong then. Or maybe not entirely, as I did say compile-time option. But Vim has been offering the choice of an embedded Perl, Python and Ruby when you build it for quite some time now. (Yes, even all three at once; that is, if you are fond of a binary much heavier than Emacs ever was.)
I’ve actually written Gtk2-Perl scripts that run from within gvim. It’s pretty neat.
wooo, another editor discussion
Personally, I gotta go with TextMate. Of course, it is OS X only, but given that I'm on a Mac, it's hardly a barrier to adoption for me. It's easily the most user-friendly of the beefy text editor breed, and unsurprisingly, probably also the prettiest. It doesn't have all the bells and whistles of vim or emacs, but to be honest, I rarely use most of those gizmos anyhow.
And of course, it plays quite nicely with Ruby.
Cool, vim is getting in 2005 what Emacs had in 1995
Emacs has had that since around 2000, according to
That's a bit more than "a couple of years".
|
http://www.oreillynet.com/onlamp/blog/2005/07/vim_its_slim_and_trim_heres_wh.html
|
Yesterday, Volta was made publicly available for the first time. It is an experimental project in the early stages of development. The team decided to release an early technology preview so that developers everywhere can help guide the project through experience and feedback. We want your feedback.
The first release provides the basic feature set that will be improved upon with time. It has some obvious shortcomings that we are aware of and are actively addressing. But really, at this stage, the preview is more concerned with sparking your imagination about what is possible than ironing out all of the details.
Perhaps you disagree. Maybe the most important feature to you is the completeness of a final product. If that is the case, then say so and we will seriously consider making it a higher priority for the upcoming early experimental releases.
At some point, Volta may become, feed into, or inform a product, but that is a little way off yet. So let's enjoy the unique opportunity of working together to make something great.
In the coming months, I will alternate between three types of posts:
1. Volta focused posts: explaining the motivation, features, and technical details
2. C#: this includes both 3.0 and eventually 4.0 features
3. Random thoughts: like it says; two that will be discussed soon are programmer tests and continuations
I hope you enjoy the posts and I look forward to engaging with you in discussion.
Wes,
When it comes to possible C# 4.0 features, along with the co/contravariance that Eric Lippert has been hinting at in his recent blogs, it would be very helpful if partial generic specialization could be added by way of the existing constraint mechanism.
The main pain point scenario that I see today is that certain classes might want to do one thing for a reference type and another for a struct. For example the return type might be T for a class or T? for a struct. There's no easy way to do this w/o creating a separate class for each.
What would be better is if I could specify two classes with the same name but different where constraints. The overload resolution would pick out the most restrictive type. This could also work for method overloading, where there are two generic methods with the same name/parameters but different where constraints. The most specific would win.
This kind of feature would eliminate the need for different classes to handle classes and structs in a generic way. Of course, there'd be other uses too, but this is one major one.
Any thoughts?
Thanks!
Are you open/interested in feedback about what we would like to see in c# 4.0 as well? Or should we address those concerns to someone else?
It's great to see previews and be able to provide feedback. However, try not to follow the Acropolis path, where fun was sometimes favored instead of useful features. If you want people to believe in your project, the main focus should be kept on key features.
Well, I tried to run this, but it requires us to install Visual Studio 2008.
However, Volta/Linq itself is a rather interesting idea.
I will try this after my exams a week later. It takes some time to download all of this software first.
onovotny:
Yes, I understand where you are coming from. That is very interesting and duly noted.
rob:
As a member of the C# language design team, I am very interested to hear any input that you may have.
fabrice:
Good point. I love having fun, but I love solving problems even more. Do you have a particular useful feature in mind?
wu:
Awesome. Definitely check it out when you can. You can use either VS 2008 or VS 2008 beta 2. If you don't have either then try getting the n-day trial (for some n that I can't recall).
Volta really has done what was impossible before! It compiles IL to JavaScript or automatically delegates execution to the server.
Looking forward to the next release. I'm sure that you guys will get the VoltaPage1.Designer.cs done... and much leaner JS code.
Please please pretty-please tell us that we're going to get macros with C# 4.0.
Cool, Wes.
My initial impression of Volta is that it's a project that tries to do the impossible. :-) That's a cool project to tackle in any case!
Really looking forward to C# 4 posts and Volta posts. Those are both intriguing and will keep me subscribed.
Wes, as a suggestion for C# 4, I really just want things that help us write code with fewer bugs. Whether that's design by contract, or the full Spec# stuff, or STM, whatever: that's what I really want; better, more reliable software.
I am happy you're back, and waiting for your posts Wes.
Ziv:
I can't make any promises there but there has been a lot of discussion about metaprogramming.
Judah:
We regularly evaluate the stuff Spec# and other MSR projects do with contracts to see if they fit into the C# plans. Btw, both my wife and I love your blog.
Looking forward to hearing about new C# features! :) Type classes, maybe? Implementing interfaces using extension methods (changing the typing of the class) maybe? Something we could really use over here, although I do understand it has some scary consequences in reflection.
Type classes are one of my favorite things from Haskell, but I don't think we will see them in C# anytime soon. Extension classes on the other hand may make an appearance in some .NET language but no promises. It is just that extension classes are a great way to model relationships between objects as well as keep role based state around.
As a side note, what do you think about the "like" and "wrap" operators proposed in EcmaScript 4? If something similar were implemented in C# then a programmer could have a structurally compatible instance implement an interface. This would actually remove much of the need for duck typing and many cases where dynamic invocation is needed.
One of the reasons I like Haskell so much is its type system. You can probably imagine why I'm a bit disappointed to hear type classes won't make it anytime soon.
I haven't heard of extension classes before. Are they classes that have extension methods that can implement interfaces? Isn't adding state to such a concept faking (or implementing) multiple inheritance?
What about mixins?
public class SomeClass<T> : T
I can see problems implementing this, because for each different T a separate version needs to be compiled so that the BaseType property of the Type of SomeClass<T> points to T (and all the implications for the runtime). It would be a nice feature however.
I had a quick look at some resources about ES4 containing some information about like and wrap. Is it basically something like an anonymous interface? Because if it is anonymous, it can't be specified in the class signature, so it should be based on public or extension method signatures, instead of type constraints. I would like to have static checking on this (O(1)?), as to eliminate the reason for runtime checking (O(#methods)?).
This type of constraint for an object is of course already implemented with the foreach construct needing an object having a statically resolvable GetEnumerator() method. Could I take some type, implement a GetEnumerator extension method, and use it in a foreach loop?
Well that's about it, for now... :)
I will be missing the flexibility of type classes as well.
No, it isn't multiple inheritance. It is something more like role-based inheritance since the class would only implement that interface when the extension class is in scope.
You can already do something related to this. Make a static class with a bunch of extension methods for type T. Now add a static member of type Dictionary<T,ExtendedStateForT>. Bingo! The class now appears to have extra state. We can even do some things to implement an interface with extension methods.
This is all nice but it remains just a bunch of fine hackery. It would be a lot better if languages or the runtime support such a thing.
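The "fine hackery" described above can be sketched in C# 3.0 along these lines (Widget, WidgetExtensions and the color store are illustrative names, not from the discussion):

```csharp
using System.Collections.Generic;

public class Widget
{
    public string Name;
}

// Extension methods plus a static dictionary keyed by instance make it
// look as if Widget had gained an extra Color property.
public static class WidgetExtensions
{
    private static readonly Dictionary<Widget, string> color =
        new Dictionary<Widget, string>();

    public static void SetColor(this Widget w, string value)
    {
        color[w] = value;
    }

    public static string GetColor(this Widget w)
    {
        string c;
        return color.TryGetValue(w, out c) ? c : null;
    }
}
```

Note that the dictionary keeps every Widget it has seen alive, so a real implementation would also need some way to evict entries.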
I can't tell you how many times I've wanted to write C<T> : T. At this point, I wouldn't plan on that either. But, you can use monads to get some of what you want. In fact, my next series of posts will eventually go into the latter approach.
Yes, you are right that a pattern based constraint is already used in foreach loops. The checks don't have to be O(#methods) because a particular type doesn't have to be checked multiple times. Most likely it would be O(#methods/number of times that type&interface are checked).
Off-topic:
I'm honored that you and your wife read my blog. Thanks man!
Take care.
Using Dictionary<T,ExtendedStateForT> sounds like a nice idea. Of course you would have to implement an IEqualityComparer interface, letting the equality checker check on reference equality. But how would you implement the method providing a hash code for a T? I would like to spread the elements of T over the underlying array in the Dictionary`2 class, and I would like to have 'reference hash codes': hash codes that won't change even though the object does, effectively making the objects immutable enough for usage in a hash-code-based data structure. So the question is: how would I accomplish this?
I think I'll rephrase my question. The most obvious answer is of course:
typeof(object).GetMethod("GetHashCode", BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly).Invoke(obj,null);
But is there a way without using reflection?
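There is: System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(object) returns the runtime's identity-based hash code and ignores any GetHashCode override, with no reflection involved. A comparer built on it could look like this (a sketch; the class name is illustrative):

```csharp
using System.Collections.Generic;
using System.Runtime.CompilerServices;

// Treats keys as identities: reference equality plus the runtime's
// identity hash, which never changes while the object lives.
public sealed class ReferenceComparer<T> : IEqualityComparer<T> where T : class
{
    public bool Equals(T x, T y)
    {
        return ReferenceEquals(x, y);
    }

    public int GetHashCode(T obj)
    {
        // Bypasses any GetHashCode override -- no reflection needed.
        return RuntimeHelpers.GetHashCode(obj);
    }
}
```

Passing new ReferenceComparer<T>() to the Dictionary<T, ExtendedStateForT> constructor gives the 'reference hash codes' asked about above.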
Can you post any updates to type inference that is being considered for 4.0? In particular as it relates to infering generic parameters. Hopefully something that will make creating lists of unnamable types (i.e. Anonymous Types) a bit less awkward.
Good idea!
Source: http://blogs.msdn.com/wesdyer/archive/2007/12/06/volta-and-you.aspx
Why are you calling Main in your Main method on line 16?
You have way too much code and adding a class is too complicated for what you are doing. If you're using Visual Studio, start over with a clean project, compile it before doing anything else and then think about the problem.
Think character-array and characters.
A class is needed - this is an academic piece of work and the instructions say to use a class. Always follow the instructions if you want good marks!
Also where is the rest of your code and do your instructions give a definition for special characters?
Anyone here who can show me how to do the upper case and lower case conversion? Any tips? Thanks in advance to those who will help; I would really appreciate it.
If you want a quick and dirty solution, break your input into a character array and loop through it. For each character check if it isDigit(), if yes no change is required. Then check for the special characters you need to remove. Lastly check isUpper for the character. If false use toUpper, if true use toLower to convert them.
In some simple but ugly pseudo code:
if (isDigit) {
    add to output string
} else if (isSpecial) {
    do not add to output string
} else if (isUpper) {
    character.toLower()
    add to output string
} else {
    character.toUpper()
    add to output string
}
Inside your ChangeCase method, I would convert the string to a CharArray then iterate over each character and evaluate (with the isDigit(), isLower(), etc).
If you can use a StringBuilder, please do. Use the Append method to piece together the characters you can keep, and return the StringBuilder as a string.
Please let me know if this is not clear.
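A sketch of that approach (ChangeCase is the method name from the assignment; the rest is illustrative, and whether spaces count as special characters depends on the instructions):

```csharp
using System.Text;

static string ChangeCase(string input)
{
    StringBuilder sb = new StringBuilder(input.Length);
    foreach (char c in input)       // a string can be iterated like a char array
    {
        if (char.IsDigit(c))
            sb.Append(c);           // digits pass through unchanged
        else if (char.IsUpper(c))
            sb.Append(char.ToLower(c));
        else if (char.IsLower(c))
            sb.Append(char.ToUpper(c));
        // anything else counts as a special character and is dropped
    }
    return sb.ToString();
}

// ChangeCase("Hello, World 123") returns "hELLOwORLD123"
```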
Inside your ChangeCase method, I would convert the string to a CharArray
There is no need to do this. You can treat a String as if it was already a character array.
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            String test = "this is a test string";
            foreach (char c in test)
            {
                Console.Write("{0}-", c);
            }
            Console.WriteLine();
            for (int i = 0; i < test.Length; i++)
            {
                Console.Write("{0}-", test[i]);
            }
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}
here is a piece of code you can use in your method:
string s = inputstring;
string output = "";
char a;
foreach (char c in s)
{
    if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9'))
    {
        if (Char.IsDigit(c))
        {
            output += c;
        }
        else if (Char.IsUpper(c))
        {
            a = Char.ToLower(c);
            output += a;
        }
        else if (Char.IsLower(c))
        {
            a = Char.ToUpper(c);
            output += a;
        }
    }
}
return output;
Source: http://www.daniweb.com/software-development/csharp/threads/377061/console-app-that-converts-uppercase-to-lower-and-vice-versa
Contents
In C++, there are a few ways how values that we would consider different compare equal. A short overview.
Here, with "compare equal" I mean that the expression a == b for two different values a and b would be true. And with "different" I mean that inspecting the value, e.g. with a debugger or by printing it on the console, would show a difference.
User-defined types
To be able to compare instances of classes and structs, we have to define the comparison operator ourselves. This, in turn, makes the topic of different values comparing equal rather boring. After all, we can just define the comparison operator to always return true for one of our classes.
Other user-defined types are enums. We can not directly compare scoped enums of different types (aka. enum classes). If we compare enums of the same type or different classic C enums, we get the result of comparing the underlying integral value. There is nothing exciting going on – unless we forget that consecutive enumerators are given increasing values by the compiler if we do not define them differently:
enum class E {
    FIRST,
    SECOND = -1,
    THIRD,
    FOURTH,
    //...
};
static_assert(E::FIRST == E::THIRD);
Here, FIRST gets automatically assigned the value 0, and, after we explicitly set SECOND to -1, THIRD is 0 again, FOURTH is 1 and so on. However, we just have two different names for the same value here, not different values. Inspecting two objects of type E with the values FIRST and THIRD would give us the exact same result, making them indistinguishable.
Built-in types
At first sight, we can say that comparing two objects of the same built-in type will be boring. They’d have to have the same value to compare equal, and only different values would not compare equal. Except that’s not true!
Different zeroes compare equal
When we deal with floating point types, we have exceptions to these rules. The C++ standard does not specify how floating point types are represented internally, but many platforms use IEEE 754 floating point representation.
In IEEE 754, there are two distinguishable values for zero: positive and negative zero. The bitwise representation is different, and we will see different values when debugging or printing them. However, the two compare equal. On the other hand, floating point types contain the value NaN (not a number). And when we compare a variable with such a value with itself, they don't compare equal.
static_assert(-0.0 == 0.0);

int main() {
    //prints "0 -0"
    std::cout << 0.0 << ' ' << -0.0 << '\n';
}

constexpr double nan = std::numeric_limits<double>::quiet_NaN();
static_assert(nan != nan);
Different integral values that compare equal
You'll hopefully agree with me that a value of type unsigned int cannot be negative. If we have e.g. a variable u of type unsigned int and the comparison u >= 0, this will always be true. Compilers may even warn about it, and optimizers may use it to optimize our code. Nevertheless, there may be values for u such that u == -1 returns true. The reason is that we're comparing an unsigned int with an int here, and the compiler has to convert one to the other type. In this case, two's complement is used to convert the int to unsigned int, which will give the largest possible unsigned int:
static_assert(std::numeric_limits<unsigned int>::max() == -1);
Usually, this makes a lot of sense at the bit representation level: if the int is already represented as two's complement, with a leading sign bit, then these two values have the exact same bit representation. The conversion to unsigned int has to behave like two's complement according to the standard. However, the bit representation for the int is implementation-defined and might be something different entirely.
Different pointer values that compare equal
Have a look at this piece of code:
struct A { unsigned int i = 1; };
struct B { unsigned int j = 2; };
struct C : A, B {};

constexpr static C c;
constexpr B const* pb = &c;
constexpr C const* pc = &c;

static_assert(pb == pc);
static_assert((void*)pb != (void*)pc);
The last two lines are interesting: when we directly compare pb and pc, they are equal. The constexpr and const keywords do not play any role in that; they are only needed to make the comparisons a constant expression for the static_assert. When we cast them to void* first, i.e. compare the exact memory locations they point to, they are not. The latter can also be shown by simply printing the pointers:

#include <iostream>
int main() {
    std::cout << pc << '\n' << pb << '\n';
}
The output will be something like this:
0x400d38
0x400d3c
So, what is going on here? The clue is that, again, we have two different types that can not be compared directly. Therefore, the compiler has to convert one into the other. Since C inherits from B, a C* is convertible to a B* (and C const* to B const*). We already used that fact when we initialized pb, so it is not a big surprise that they compare equal.
But why do they have different values? For this, we have to look at the memory layout of c. Since it inherits first from A, and then from B, the first bytes are needed to store the A subobject and its member i. The B subobject with its j member comes after that and therefore can not have the same actual address as c.
This is different if either A or B does not have any nonstatic data members. The compiler may optimize away empty base classes, and then pb, pc and a pointer to the A subobject of c would contain the same address.
2 Comments
Nitpick, but I don't quite agree that 'u > 0' is always true for an unsigned variable 'u'. If you change it to 'u >= 0', then I agree that it must always be true.
Not a nitpick, but a nicely spotted error, thanks! Fixed 🙂
Source: https://arne-mertz.de/2018/09/when-different-values-compare-equal/
|
3 months ago.
Thread question
Hello,
I am using a B-L475E-IO1A1 and I am trying to use Threads. I checked the Mbed platform site and it says that my board uses OS 5 which, as far as I understand, should have the RTOS libraries included (correct me if I am wrong).
However, when I try something like this (as per the OS5 documentation):
AnalogIn ain(A1);

void myFunction(){
    printf("%in", ain);
}

int main{
    Thread thread;
    thread.start(myFunction);
}
it gives me an error that Thread is not identified.
Then I downloaded the mbed-rtos library and included it in my project. It then allowed me to compile and burn my program onto my board. However, nothing gets printed either. Anyone have any ideas? I am confused about whether I need to actually include the rtos library or whether it's already included. If it were, why did I get that Thread error to start with? Also, the compiler keeps crashing and I am not sure if it's my connection, heavy code or just the compiler itself.
Any help is appreciated.
1 Answer
3 months ago.
You do not need to import the RTOS lib when you use the Mbed OS 5 lib.
Your code
// missing #include "mbed.h"

/*Thread thread;*/        // probably the correct place

AnalogIn ain(A1);

void myFunction(){
    printf("%in", ain);   // probably you need to declare Serial and something more in parameters
}

int main{                 // missing ()
    Thread thread;        // probably the wrong place
    thread.start(myFunction);
}
so...
Try this
#include "mbed.h"

DigitalOut led1(LED1);
AnalogIn ain(A1);
Serial pc(USBTX, USBRX);
Thread thread;

void myFunction(){
    //while(1){
    pc.printf("Input: %f", ain.read());
    //wait(0.5);
    //}
}

int main(){
    pc.printf("Running…\n");
    thread.start(myFunction);
    while(true){
        led1 = !led1;
        wait(0.5);
    }
    //while (osWaitForever);
}
gl hf
Hello. To enable the RTOS with just the mbed.h include, my solution is to open the RTOS blink example and then modify it. (posted by Kamil M, 17 Nov 2018)
Source: https://os.mbed.com/questions/83190/Thread-question/
|
JPA Implementation Patterns: Retrieving Entities
Last time I talked about how to save an entity. And once we've saved an entity we'd also like to retrieve it. Compared to saving entities, retrieving entities is actually rather simple. So simple I doubted whether there would be much point in writing this blog. However, we did use a few nice patterns when writing code for this. And I'm interested to hear what patterns you use to retrieve entities.
Basically, there are two ways to retrieve an entity with JPA:
- EntityManager.find will find an entity by its id or return null when that entity does not exist.
- If you pass a query string specified in Java Persistence Query Language to EntityManager.createQuery, it will return a Query object that can then be executed to return a list of entities or a single entity.
A Query object can also be created by referring to a named query (using EntityManager.createNamedQuery), or by passing in an SQL query (using one of the three flavours of EntityManager.createNativeQuery). And while the name implies otherwise, a Query can also be used to execute an update or delete statement.
While a named query may seem like a nice way to keep the query with the entities it queries, I've found that not to work out very well. Most queries need parameters to be set with one of the variants of Query.setParameter. Keeping the query and the code that sets these parameters together makes them both easier to understand. That is why I keep them together in the DAO and shy away from using named queries.
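To illustrate the separation that makes named queries awkward, a hedged sketch (the query name and entity mapping here are invented for illustration):

```java
// The JPQL string lives on the entity as metadata...
@Entity
@NamedQuery(name = "Order.submittedSince",
            query = "SELECT o FROM Order o WHERE o.date >= :date_since")
public class Order { /* ... */ }

// ...while the parameter binding lives in the DAO, in another file:
Query q = entityManager.createNamedQuery("Order.submittedSince");
q.setParameter("date_since", date);
```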
A convention I've found to be useful is to differentiate between finding an entity and getting an entity. In the first case null is returned when an entity cannot be found, while in the latter case an exception is thrown. Using the latter method when your code expects an entity to be present prevents NullPointerExceptions from popping up later.
Finding and getting a single entity by id
An implementation of this pattern for the JpaDao base class we discussed a few blogs ago can look like this (I've included the find method for contrast):
public E findById(K id) {
    return entityManager.find(entityClass, id);
}

public E getById(K id) throws EntityNotFoundException {
    E entity = entityManager.find(entityClass, id);
    if (entity == null) {
        throw new EntityNotFoundException(
            "Entity " + entityClass.getName() + " with id " + id + " not found");
    }
    return entity;
}
Of course you'd also need to add this new method to the DAO interface:
E getById(K id);
Finding and getting a single entity with a query
A similar distinction can be made when we use a query to look for a single entity. The findOrderSubmittedAt method below will return null when the entity cannot be found by the query. The getOrderSubmittedAt method throws a NoResultException. Both methods will throw a NonUniqueResultException if more than one result is returned. To keep the getOrderSubmittedAt method consistent with the findById method we could map the NoResultException to an EntityNotFoundException. But since these are both unchecked exceptions, there is no real need.
Since these methods apply only to the Order object, they are a part of the JpaOrderDao:
public Order findOrderSubmittedAt(Date date) throws NonUniqueResultException {
    Query q = entityManager.createQuery(
        "SELECT e FROM " + entityClass.getName() + " e WHERE date = :date_at");
    q.setParameter("date_at", date);
    try {
        return (Order) q.getSingleResult();
    } catch (NoResultException exc) {
        return null;
    }
}

public Order getOrderSubmittedAt(Date date) throws NoResultException, NonUniqueResultException {
    Query q = entityManager.createQuery(
        "SELECT e FROM " + entityClass.getName() + " e WHERE date = :date_at");
    q.setParameter("date_at", date);
    return (Order) q.getSingleResult();
}
Adding the correct methods to the OrderDao interface is left as an exercise for the reader.
Finding multiple entities with a query
Of course we also want to be able to find more than one entity. In that case I've found there to be no useful distinction between getting and finding. The findOrdersSubmittedSince method just returns a list of entities found. That list can contain zero, one or more entities. See the following code:
public List<Order> findOrdersSubmittedSince(Date date) {
    Query q = entityManager.createQuery(
        "SELECT e FROM " + entityClass.getName() + " e WHERE date >= :date_since");
    q.setParameter("date_since", date);
    return (List<Order>) q.getResultList();
}
Observant readers will note that this method was already present in the first version of the JpaOrderDao.
So while retrieving entities is pretty simple, there are a few patterns you can stick to when implementing finders and getters. Of course I'd be interested to know how you handle this in your code.
P.S. JPA 1.0 does not support it yet, but JPA 2.0 will include a Criteria API. The Criteria API will allow you to dynamically build JPA queries. Criteria queries are more flexible than string queries, so you can build them depending on input in a search form. And because you define them using domain objects, they are easier to maintain as references to domain objects get refactored automatically. Unfortunately the Criteria API requires you to refer to your entity's properties by name, so your IDE will not help you when you rename those.
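As a sketch of what that Criteria API code could look like, here is findOrdersSubmittedSince rebuilt with the JPA 2.0 API (my sketch, not from the original article):

```java
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Order> cq = cb.createQuery(Order.class);
Root<Order> order = cq.from(Order.class);

// The property is still referred to by name ("date"), which is the
// refactoring weakness mentioned above.
cq.select(order)
  .where(cb.greaterThanOrEqualTo(order.<Date>get("date"), date));

List<Order> orders = entityManager.createQuery(cq).getResultList();
```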
Source: https://dzone.com/articles/jpa-implementation-patterns-1
John,

My eVC 3 has these errors as well, but if you let it run through you will find they are not critical enough to lead to a broken build. You will still get a sword.dll in the end. As far as commenting the line out, I suggest you do not do that. The file is part of the sword library and thus the file may serve a purpose on a different platform. If we find it to be a problem later we can instead do something like:

#ifndef __EVC3__
#include <mem.h>
#endif

But for now just leave it be.

In Christ,
David Trotz

SonWon wrote:
> When I Rebuild All using EVC3 I get many errors like this:
>
> Could not find file mem.h
>
> If I remember correctly this is needed for a DOS build not windows. So
> I rem the line out in swbuf.h (A Sword function) and now all of the
> mem.h errors are gone. Do we need to just rem this out or is there a
> better way to fix this?
>
> Someone not getting these errors check your swbuf.h to see if it is rem
> out. I am thinking that if you are using Visual Studio (VS) then you do
> not need to rem this line out as mem.h comes with VS, I think. I have a
> copy of VS 2005 now and I can test later. I don't want to reload all of
> the modules again at this time, that was painfully slow.
Source: http://www.crosswire.org/pipermail/sword-devel/2008-July/028755.html
Handbook: C-Bus Analysis
Revision Number: V1.0

© Copyright Clipsal Australia Pty Ltd (CAPL) 2006. All rights reserved. This material is copyright under Australian and international laws. Except as permitted under the relevant law, no part of this work may be reproduced by any process without prior written permission of and acknowledgement to Clipsal Australia Pty Ltd. Clipsal and C-Bus are registered trademarks of Clipsal Australia Pty Ltd.

The information in this manual is provided in good faith. Whilst Clipsal Australia Pty Ltd has endeavoured to ensure the relevance and accuracy of the information, it assumes no responsibility for any loss incurred as a result of its use. Clipsal Australia Pty Ltd does not warrant that the information is fit for any particular purpose, nor does it endorse its use in applications which are critical to the health or life of any human being. Clipsal Australia Pty Ltd reserves the right to update the information at any time without notice.

V1.0 April 2006

Contents

Scope
Learning Outcomes
1.0 Analysis Tools
    1.1 C-Bus Network Analyser
    1.2 Multimeter
    1.3 Cathode Ray Oscilloscope
    1.4 Diagnostic Utility
    1.5 HyperTerminal
2.0 3rd Party Devices
    2.1 Typically Used 3rd Party Devices
    2.2 Identifying 3rd Party Device Interference
3.0 C-Bus Clock
    3.1 Old C-Bus Clock
    3.2 New C-Bus Clock
    3.3 No C-Bus Clock
    3.4 MMI Status Check
4.0 Network Burden
    4.1 Network Burden Construction
    4.2 Hardware Burden
    4.3 Software Burden
5.0 Units Per C-Bus Network
    5.1 Network Impedance
    5.2 Current Consumption
    5.3 C-Bus Calculator
6.0 C-Bus Voltage
    6.1 Short To Earth
    6.2 Short Circuit
    6.3 Open Circuit
    6.4 Insufficient Power Supplies
7.0 C-Bus Cable
    7.1 Cable Pairs
    7.2 Cable Length
    7.3 Cable Current Rating
8.0 Mains Voltage Supply Conditions
    8.1 Cable Pairs
    8.2 Cable Length
    8.3 Cable Current Rating
9.0 Analysing A C-Bus Network
    9.1 Third Party Interface Questions
    9.2 C-Bus Clock Questions
    9.3 Network Burden Questions
    9.4 C-Bus Unit Questions
    9.5 C-Bus Cable Questions
    9.6 C-Bus Power Supply Questions
    9.7 C-Bus Cable Run Questions

Scope

This handbook has been designed to provide a C-Bus installer or programmer with basic diagnostic skills, needed to analyse a C-Bus installation. A fundamental technical background is required. It is preferred that the installer or programmer has attended a C-Bus Basic Training Course, before using this handbook. To get the most out of this manual, be sure to read all chapters carefully.

Learning Outcomes

By the end of this module, you should have an understanding of:
• the various tools used in the analysis of a C-Bus network
• the effect of 3rd party devices on a C-Bus network
• the construction of the C-Bus clock waveform
• the effect of a network burden on a C-Bus network
• the effect of C-Bus devices on a C-Bus network
• the effect of various voltage conditions on a C-Bus network
• the effect of C-Bus cable length on a C-Bus network
• the effect of mains voltages on a C-Bus network.

1.0 Analysis Tools

There are a number of various tools and software packages that may be used to analyse a C-Bus Network. These tools include:
• C-Bus Network Analyser
• Multimeter
• Cathode Ray Oscilloscope
• C-Bus Diagnostic Utility
• HyperTerminal.

These tools may be used to assess the correct operation of a C-Bus Network, so that the customer is left with a flawless installation.

1.1 C-Bus Network Analyser

The C-Bus Network Analyser (5100NA) is a tool which is used to analyse the conditions of an existing network. To use it, simply connect it to C-Bus via the terminals provided. Wait for five seconds and all of the LEDs will come on. This indicates that the Network Analyser is functioning correctly.

Figure 1: The C-Bus Network Analyser.

Each LED on the Network Analyser indicates a certain condition.
These conditions are listed below.

LED                 Status Of LED   Check / Action
Power Available     Off / Flash     Check C-Bus power is available. If the LED flashes, add a power supply.
Clock Not Present   On              Check the PC Interface and Bridge connections.
Excess Voltage      On              Remove C-Bus Power Supply Units.
Remove Burden       On              Remove Network Burden.
Add Burden          On              Add Network Burden.
Excess Cable        On              Reduce Cable Length or split the Network.

Table 1: Network Analyser Indicators

The push button on the Network Analyser adds a Network Burden to the C-Bus network. This Network Burden is found inside the Network Analyser, and will be removed as soon as the push button is released. This function is used to test whether the network impedance is within its tolerance. If the Add and Remove Network Burden LEDs are flashing alternately, then this indicates that the network is within a stable tolerance.

When the C-Bus Network Analyser is connected to any C-Bus Network, it temporarily disturbs all C-Bus communications on the bus and causes temporary instability.

1.2 Multimeter

The Multimeter is one of the most versatile and easily accessible measurement instruments. It has a number of functions which can test a C-Bus network for unexpected behaviours using the:
• Ohm Meter
• DC Voltage Meter
• AC Voltage Meter
• Audible Continuity Test.

When using a Multimeter, remember to test the C-Bus Network at various points by using successive approximation. This will help identify any unexpected conditions along the C-Bus network.

1.3 Cathode Ray Oscilloscope

A Cathode Ray Oscilloscope (CRO) is another vital tool used in C-Bus Network Analysis. It is more complicated to use than a multimeter, but it allows the user to perform advanced readings and measurements that would be unachievable using other measurement instruments.
A CRO will be able to view the:
• C-Bus clock
• AC voltage waveform
• DC voltage waveform
• waveform frequency and period
• MMI Communications
• effect of a network burden
• any other unexpected behaviour of the C-Bus clock.

Many digital CROs have an 'auto scale' feature, which takes the C-Bus waveform and optimises it for the most accurate readings. Unfortunately, analogue CROs do not have this function; to take measurements, the user must know exactly how to use the instrument.

With a mains rated probe or a current probe, a CRO can also be used to check mains voltage and current on the mains part of a C-Bus Network. Care must be taken when taking mains voltage measurements with a CRO, due to live exposed mains cables. Only a fully qualified electrician should take these measurements.

1.4 Diagnostic Utility

The C-Bus Diagnostics Software allows the user to set the mode of a C-Bus Interface (serial or Ethernet), to send or receive C-Bus commands. This allows the programmer or installer to observe the C-Bus network traffic. The Diagnostic Utility generates a list which shows the transmitted data and received data. Transmitted data is the data sent by the user to C-Bus via the C-Bus interface. Received data is the data which is generated by a unit on the C-Bus network.

The C-Bus Diagnostic Utility is available as a free download from the downloads page on the Clipsal Integrated Systems website.

Figure 2: The C-Bus Diagnostic Utility software.

1.4.1 Diagnostic Utility Setup

To set up the software for use, follow the steps below:
1) Click on the 'Options' menu
2) Select 'Program Options'
3) Select the appropriate C-Bus interface parameter and click the 'OK' button
4) Click on the 'C-Bus' menu
5) Select 'Connect C-Bus.'

Once the software has successfully connected to a PC Interface, a message similar to Figure 3 will appear.

Figure 3: The Information window indicates successful connection to C-Bus.
1.4.2 Using the C-Bus Diagnostic Utility

The C-Bus Diagnostic Utility can be used to:

• set the C-Bus interface into various modes
• perform Installation, Application and Level MMIs
• identify any C-Bus unit on the network
• get the PC Interface data
• monitor C-Bus commands via the Traffic Analyser
• control C-Bus with the Command Generator.

For further details on how to use the C-Bus Diagnostic Utility, consult the help files in the software.

1.5 HyperTerminal

When using HyperTerminal, please consult a CIS representative or a Technical Support Officer for help. It is strongly recommended that HyperTerminal is only used by experienced C-Bus installers or programmers. It should only be used as a last resort in diagnosing cosmetic faults on a C-Bus network.

HyperTerminal can be a vital tool when analysing communications along a C-Bus network. It can monitor the C-Bus protocol and read commands sent down the network. It also allows you to type in commands to send through the bus.

1.5.1 HyperTerminal Setup

HyperTerminal can connect to a:

• PC Interface
• C-Bus Network Interface.

To connect to a PC Interface, select the relevant COM port in the 'Connect To' window, and configure the COM port properties to:

• Bits per second: 9600
• Data bits: 8
• Parity: None
• Stop bits: 1
• Flow control: None

To connect to a C-Bus Network Interface, select TCP/IP (Winsock) in the 'Connect To' window. The 'Host Address' is the IP address of the C-Bus Network Interface, and the 'Port Number' is the port number of the C-Bus Network Interface.

In the Properties settings, it is best to enable "Send line ends with line feeds" and "Echo typed characters locally" in the ASCII Setup page. This allows the user to see everything that is typed in, and everything that is sent to the computer.

1.5.2 Smart Mode

Entering random strings should not be done unless familiar with the C-Bus protocol.
Sending bogus strings onto C-Bus may damage the firmware of various units, voiding all warranties.

There are several modes that a C-Bus interface can be put into by using HyperTerminal. Generally, for this type of application, Smart Mode is used. Smart Mode allows HyperTerminal to display and send C-Bus protocol. The C-Bus interface will only go into Smart Mode if the C-Bus interface's firmware is V3.0 or later. Firmware version V3.0 was implemented in October 1999. To put the C-Bus interface into Smart Mode, type in the pipe characters (eg. |||) followed by a carriage return (Enter).

1.5.3 Sending C-Bus Commands

When sending C-Bus commands from HyperTerminal, ensure that there are no errors. If an incorrect character is typed in, simply press Enter (carriage return) and start again. This example assumes that C-Bus Group Address 00 on the Lighting Application is being used. Before you begin to analyse the network using HyperTerminal, put the PCI into Smart Mode.

To turn a group address on, type in "\05380079XX", where XX is the C-Bus group address in hexadecimal addressing. This string is then followed by a carriage return (Enter key). On completing this you should see that group address physically turn on. To turn off the same group address, type in "\05380001XX". Once again, after completing this, the designated load should turn off as shown in Figure 4.

Figure 4: Sending C-Bus commands.

1.5.4 Receiving C-Bus Commands

To receive C-Bus commands, ensure that the C-Bus interface is in Smart Mode. Once in Smart Mode, if any C-Bus unit on the network generates a C-Bus command, a C-Bus string will appear in HyperTerminal as shown in Figure 5. By receiving commands, a user can determine which unit is generating the C-Bus command. In the string 050138007900XX, where XX is a checksum, 01 is the unit address from which the command was generated.

Figure 5: Receiving C-Bus commands.
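The command and monitor strings above follow a fixed layout, so building and sanity-checking them lends itself to a small script. The sketch below is illustrative only: the "\05380079XX" / "\05380001XX" formats come straight from Sections 1.5.3–1.5.4, but the checksum rule (all message bytes, including the checksum, summing to 0 modulo 256) is an assumption about the C-Bus serial protocol, not something stated in this handbook.

```python
# Hypothetical helpers around the Smart Mode strings shown above.
ON_COMMAND = 0x79   # lighting "on" command byte, per Section 1.5.3
OFF_COMMAND = 0x01  # lighting "off" command byte, per Section 1.5.3

def group_command(group_address, turn_on=True):
    """Build a Smart Mode lighting command string for one group address."""
    if not 0 <= group_address <= 0xFF:
        raise ValueError("group address must fit in one byte")
    op = ON_COMMAND if turn_on else OFF_COMMAND
    return "\\053800%02X%02X" % (op, group_address)

def checksum_ok(hex_string):
    """Assumed rule: all bytes of a monitored string, including the
    trailing checksum byte, sum to 0 modulo 256."""
    data = bytes.fromhex(hex_string)
    return sum(data) % 256 == 0

def source_unit(hex_string):
    """Per Section 1.5.4, the second byte of a monitored string is the
    unit address that generated the command."""
    return bytes.fromhex(hex_string)[1]
```

For example, `group_command(0x00)` reproduces the "turn group 00 on" string from the text, and `source_unit` picks the 01 out of a monitored string such as 050138007900XX.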
2.0 3rd Party Devices

Before analysing a C-Bus network for correct operation, check for any third party devices connected to the C-Bus network. Awareness of any 3rd party devices is important, because the devices may interfere with normal C-Bus operation.

2.1 Typically Used 3rd Party Devices

Third party products include devices such as:

• AMX
• Crestron
• Concept Panel
• and many others.

2.2 Identifying 3rd Party Device Interference

If the C-Bus network behaves unexpectedly on a network that has 3rd party interfaces attached to it, then remove the devices to test the integrity of the C-Bus network. If the problem is no longer there, then the integration with the third party device is causing the unexpected behaviour.

3.0 C-Bus Clocks

A Cathode Ray Oscilloscope (CRO) is used to view the C-Bus clock on a C-Bus network. The C-Bus clock is a 5 V AC pulse that is superimposed onto a 36 V DC C-Bus voltage. It is recommended that more than one C-Bus clock be enabled on each network to provide a fully redundant C-Bus network.

The C-Bus clock can be enabled on:

• any C-Bus unit with a PCI Simulator, eg. C-Touch, PCI, Telephone Interface etc.
• any C-Bus output device.

When analysing the C-Bus clock with a CRO, there are a number of things to look for:

• a single C-Bus clock pulse with a period of 2 ms (500 Hz)
• that the C-Bus clock's amplitude measures 5 V AC
• a clean waveform (no noise or hum on it)
• that the discharge time of the positive cycle does not decay over a long time
• that the rising edge of the negative cycle is almost perfectly vertical
• that there is no ringing on the waveform
• that there isn't much overshoot on the positive part of the waveform.

3.1 Old C-Bus Clock

Figure 6 is an example of the typical C-Bus clock.
It is a 5 V p-p square waveform. This waveform will vary depending on the cable capacitance on the network. A large cable capacitance will increase the discharge time on the positive cycle of the waveform. To reduce the discharge time of the positive cycle of the clock, simply add a Network Burden.

Figure 6: Old C-Bus clock waveform.

3.2 New C-Bus Clock

Figure 7 is an example of the new C-Bus clock, which is found on most new units. Note that the waveform has more of a rounded appearance, rather than a square wave. The new circuitry of the C-Bus clock enables the C-Bus unit to have better control over the rise and fall of the clock. It also handles the cable capacitance of larger networks better. This results in less overshoot, ringing or waveform distortion.

Figure 7: New C-Bus clock waveform.

3.3 No C-Bus Clock

If there is no active C-Bus clock enabled on the C-Bus network, then there will be no communications. There is an algorithm in the C-Bus units that checks to see if a C-Bus clock has been enabled. If there is more than one clock enabled on the C-Bus network, then the C-Bus units will see this and disable as many clocks as needed. This process is automatic and usually takes between 1 second and around 2 minutes (depending on the network characteristics). It is recommended that no more than 3 C-Bus clocks be enabled. This will avoid communication delays after the C-Bus network is powered up.

3.4 MMI Status Check

Whilst examining the C-Bus network with a CRO, notice that the waveform has communications data sent along the bus approximately every 3 seconds, as shown in Figure 8. This 3 second data transmission is the status report interval. The interval is determined by the SR Interval (Status Report Interval) setting in the graphical user interface of most C-Bus input units.

Figure 8: MMI status check waveform.

This is how C-Bus communicates and is a normal operation.
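Two of the waveform checks listed in Section 3.0 — the 2 ms (500 Hz) clock period and the 5 V AC amplitude — are simple numeric comparisons once the CRO readings are written down. A minimal sketch, where the ±10% tolerance is an illustrative assumption (only the nominal figures come from the handbook):

```python
# Nominal C-Bus clock figures from Section 3.0.
NOMINAL_PERIOD_S = 2e-3      # 2 ms period, i.e. 500 Hz
NOMINAL_AMPLITUDE_V = 5.0    # 5 V AC clock pulse

def frequency_hz(period_s):
    """Convert a measured period to a frequency (2 ms -> 500 Hz)."""
    return 1.0 / period_s

def clock_ok(measured_period_s, measured_amplitude_v, tolerance=0.10):
    """True if both period and amplitude are within +/- tolerance of
    nominal. The 10% default tolerance is an assumption, not a spec."""
    period_ok = abs(measured_period_s - NOMINAL_PERIOD_S) \
        <= tolerance * NOMINAL_PERIOD_S
    amp_ok = abs(measured_amplitude_v - NOMINAL_AMPLITUDE_V) \
        <= tolerance * NOMINAL_AMPLITUDE_V
    return period_ok and amp_ok
```

The remaining checks (ringing, overshoot, discharge decay) are waveform-shape judgements and still need the CRO screen.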
Every time the network sends data onto the bus, the MMI (Multipoint to Multipoint Instruction) will check that all group addresses are in sync. A similar waveform will be displayed on the CRO if:

• a command is being sent on the bus
• a scan of the network is performed by Toolkit or C-Gate (the data transmission will be much more intense).

4.0 Network Burden

A Network Burden applies a standard impedance to a C-Bus network. It consists of a 1 kΩ 0.6 W resistor in series with a 10 µF 50 V capacitor. The Network Burden may be hardware or software, and should only be used to limit the network impedance to between 400 Ω and 1.5 kΩ.

4.1 Construction

A Network Burden consists of a 1 kΩ 0.6 W resistor in series with a 10 µF 50 V capacitor, as shown in Figure 9.

Figure 9: Construction of a network burden (a 1 kΩ 0.6 W resistor in series with a 10 µF to 22 µF 50 V capacitor, connected across the C-Bus + and − terminals).

The Network Burden may be hardware or software, and should only be used to limit the network impedance to between 400 Ω and 1.5 kΩ.

4.2 Hardware Burden

The Hardware Burden comes in an RJ45 package (5500BUR). The hardware burden is designed to be connected to any DIN rail units. It can be manually added or removed. Ensure that when connecting a hardware burden to a C-Bus network a click is heard, indicating that the connection between the Network Burden and the RJ45 socket is secure.

4.3 Software Burden

The Software Burden can be enabled and disabled from the Toolkit software or Learn Mode. When using a software burden, make sure that it is enabled at Unit Address 01. Check all units for an enabled software burden, to make sure only one burden is enabled in the GUI. There may be multiple software burdens enabled on various units due to poor programming. If you have a burden enabled at Unit Address 01 and you change its unit address, the burden remains active. In the Global tab of the GUI, the burden setting will be enabled, but greyed out.
To disable it, you will need to change the unit address back to 01, or disable it through Special Function Mode (see the Learn Mode Application Notes).

The number of burdens on a C-Bus network is dependent on the construction of the network. Typically, all networks will need one Network Burden to function correctly. In some instances, no burden may be needed. This is common when the network has approximately 60 C-Bus units.

5.0 C-Bus Units Per Network

As a rule of thumb, it is generally accepted that a single C-Bus network should not exceed a unit count of 100 units. However, there are exceptions to this. Two factors affect how many units may be connected to a C-Bus network:

• network impedance
• current consumption.

5.1 Network Impedance

The C-Bus network impedance is an important parameter that affects the way a C-Bus network operates. As mentioned in Chapter 4, the ideal network impedance is between 400 Ω and 1.5 kΩ. Usually Clipsal Integrated Systems will specify 100 units per network, because 100 C-Bus units will bring the network impedance to its limit. Unfortunately, 100 units may not be a realistic unit count, due to the current consumption of the bus.

5.2 Current Consumption

The maximum current consumption of the bus affects the operation of C-Bus. A maximum of 2 A is allowed per C-Bus network. This is the equivalent of 10 DIN rail units with onboard power supplies. To calculate the maximum number of C-Bus units allowed on a C-Bus network, add the current consumption of all the input, system support and output units without power supplies (the current consumption can be found on the label of the C-Bus unit). The total current of all of the calculated devices must not exceed 2 A.

5.3 C-Bus Calculator

The C-Bus Calculator is found in Toolkit. It is a useful tool that calculates the characteristics of the C-Bus network that is being programmed.
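The current-budget rule from Section 5.2 — sum the current draw of every unit on the bus and keep the total within the 2 A network limit and within what the power supplies provide — is one of the calculations a tool like this performs. A minimal sketch; the milliamp figures used in the test are illustrative, not taken from any unit label:

```python
# Per Section 5.2: at most 2 A may flow on one C-Bus network.
NETWORK_CURRENT_LIMIT_MA = 2000

def current_budget(unit_draws_ma, supplies_ma):
    """Sum unit current draws (from the unit labels) against the
    available supply current and the 2 A network limit.
    Returns (total_draw_ma, total_supply_ma, within_limits)."""
    total_draw = sum(unit_draws_ma)
    total_supply = sum(supplies_ma)
    within = (total_draw <= total_supply
              and total_supply <= NETWORK_CURRENT_LIMIT_MA)
    return total_draw, total_supply, within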
In Toolkit, each C-Bus network will have a unique calculation. To view the calculations:

1) add all C-Bus units to the database in Toolkit (including stand-alone power supplies)
2) place the catalogue number of each unit into the corresponding graphical user interface in Toolkit
3) click on the desired network in the project manager to view the calculations.

The main pane will then list the parameters that the C-Bus Calculator has calculated, as shown in Figure 10.

Figure 10: The Toolkit C-Bus Calculator.

6.0 C-Bus Voltage

At any point in the network, the voltage across a C-Bus unit must be in the range of 15 V DC to 36 V DC inclusive. However, a C-Bus voltage as low as 15 V DC may cause unstable communications. As a rule of thumb, it is strongly recommended that the average C-Bus voltage be no lower than 20 V DC. When designing a single network topology, evenly distribute C-Bus power supplies along the network. This will ensure minimal voltage drops over the C-Bus cable.

A low C-Bus voltage may be due to:

• not evenly distributing C-Bus power supplies
• not using the correct C-Bus cable
• poor termination of C-Bus cable, causing multiple high resistance joints
• not using the correct pairs of the Cat-5 cable
• not enough power supplies on the network
• too many C-Bus units on the network
• a faulty C-Bus power supply.

6.1 Short Circuit To Earth

When approaching a C-Bus network for the very first time, it is important to check that the C-Bus cable has not been shorted to earth. Using a multimeter on the DC voltage setting, place the negative probe onto true earth and the positive probe onto the negative C-Bus rail. The multimeter should read roughly –15 V DC. Following this, place the negative probe onto true earth and the positive probe onto the positive C-Bus rail. The multimeter should read roughly +15 V DC, as C-Bus utilises a floating bus voltage.
If the C-Bus is shorted to earth in either of these two tests, the multimeter will read roughly either +30 V DC or –30 V DC. When the multimeter probes are placed directly across the terminals of C-Bus, the multimeter should read approximately 30 V DC.

6.2 Short Circuit

A short circuit on a C-Bus network is a result of an incorrect installation. A short circuit will cause the C-Bus voltage to drop to 0 V at the point of the short. However, there may be a rather small voltage on the bus at other points.

The symptoms of a short on a C-Bus network are:

• all LEDs on input units are off
• all C-Bus indicators on output units are off
• the C-Bus voltage is 0 V.

The most common short circuit conditions are a result of:

• incorrect C-Bus pairs used on installation
• loose C-Bus terminations shorting to another potential
• moisture partially shorting two terminals
• foreign objects shorting C-Bus terminals.

One of the first tests that should be conducted to identify a short circuit condition is to take a voltage measurement using a multimeter. A short circuit condition should indicate that there is 0 V on the C-Bus network.

6.3 Open Circuit

An open circuit condition is yet another symptom of an incorrect installation. An open circuit may cause units on a particular side of the C-Bus network not to operate. However, if there are power supplies and C-Bus clocks enabled on both sides of the open circuit, both sides of the break will operate as two separate C-Bus networks. An open circuit may be difficult to diagnose because some parts of the C-Bus network may still be operational. An open circuit is most commonly found by using successive approximation. This means that the C-Bus network will be broken into sections for testing.
Successive approximation may be executed by following the steps below:

1) break the network at the halfway point
2) identify which half of the network is not operating as expected
3) take the half of the C-Bus network that is not operating correctly and halve it again
4) identify which part of the network is not operating as expected
5) continue this process until the open circuit has been found.

6.4 Insufficient Power Supplies

A C-Bus network that has insufficient C-Bus power supplies is a result of incorrect network design. If the C-Bus units are consuming more current than the C-Bus power supplies can provide, then the C-Bus voltage will be low. This can be rectified by adding more power supplies to the C-Bus network (ensuring the current on the C-Bus network does not exceed 2 A).

C-Bus units are designed to operate from the C-Bus power supply with a nominal loaded output voltage of 36 V DC. As more C-Bus units are added to the network, the output voltage of the C-Bus power supply starts to decrease. Voltage will also drop over large lengths of cable. This voltage drop is proportional to the cable length and the current consumption.

Figure 11: Maximum number of C-Bus units per power supply (output voltage falls from 36 V DC towards 32 V DC as the number of C-Bus units increases).

7.0 C-Bus Cable

All C-Bus networks use an Unshielded Twisted Pair (UTP) Cat-5 cable as the communications medium. The Clipsal catalogue number for this product is 5005C305B. The C-Bus cable is coloured pink so it is easily identifiable in comparison to any other Cat-5 cable, eg. Ethernet. The cable has a mains rated insulated outer sheath. This allows the cable to be safely wired into a distribution board where mains are present.
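Section 6.4 notes that the voltage drop over the cable is proportional to cable length and current, and Section 7.2 quotes a cable resistance of 90 Ω per 1000 m. Treating the run as a single lumped resistance gives a rough upper-bound estimate; real networks have distributed loads and a return path, so this sketch is illustrative only, not a design tool:

```python
# Lumped-element voltage-drop sketch. The 90 ohms per 1000 m figure is
# the cable resistance quoted in Section 7.2; everything else is a
# simplifying assumption.
OHMS_PER_1000M = 90.0

def cable_drop_v(length_m, current_a):
    """Approximate voltage drop over one cable run carrying current_a."""
    return current_a * OHMS_PER_1000M * (length_m / 1000.0)

def voltage_at_end(supply_v, length_m, current_a):
    """Approximate bus voltage at the far end of the run."""
    return supply_v - cable_drop_v(length_m, current_a)
```

For example, a 500 m run carrying 0.2 A from a 36 V supply loses about 9 V under this model, which is why evenly spacing power supplies along long runs matters.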
7.1 Cable Pairs

The following conductors of the Cat-5 cable must be used to make the C-Bus connections:

• Orange & Blue: (+) positive C-Bus rail
• Orange/White & Blue/White: (–) negative C-Bus rail

Using the correct pairs increases immunity to electromagnetic interference. This means that the C-Bus cable is less susceptible to picking up noise.

7.2 Cable Length

On a C-Bus network, no more than 1000 metres of C-Bus Cat-5 cable must be used. This is determined by the propagation delay of the C-Bus communications, and the total cable capacitance of 100 nF. Large cable lengths can introduce effects on the network such as:

• a drop in voltage
• an increase in cable capacitance.

The voltage drop will occur because the Cat-5 cable has a resistance of 90 Ω per 1000 metres. To minimise the amount of voltage drop along large lengths of cable, evenly space out the C-Bus power supplies along the C-Bus network.

A high cable capacitance is a result of large lengths of Cat-5 cable. This will cause the C-Bus clock to distort. This is easily rectified by adding or removing a Network Burden (this may vary with different C-Bus networks).

Figure 12: The effect of cable length on a C-Bus clock.

7.3 Cable Current Rating

The maximum amount of current allowed to flow through the Cat-5 cable is 2 A. This is a limitation of the cable. If 2 A is exceeded, you run the risk of damaging the C-Bus cable. High resistance joints reduce this capacity further: if high resistance joints are at both ends of a length of C-Bus cable, the cable may only be able to carry 1 A instead of 2 A.

8.0 Mains Voltage Supply Conditions

The mains supply to a C-Bus device is just as important as the C-Bus supply. Unfortunately, mains power cannot be controlled by an end user, so it is important to know how mains voltage may affect a network.
8.1 Isolated Generators

All C-Bus units that are mains powered are designed to operate from a sinusoidal voltage waveform. If an inverter supply that produces a square wave voltage is used, then C-Bus units may not behave as expected and may be damaged. An Uninterruptible Power Supply (UPS) may be used if the output voltage and frequency are within acceptable limits for C-Bus units that require mains power. These limits are:

• the UPS must operate between 190 V and 265 V
• the UPS must maintain a frequency of 50 Hz or 60 Hz, ±3 Hz
• the frequency may only vary by 3 Hz over 1 minute.

C-Bus dimmers are the only C-Bus units that are affected by the shift of voltage and frequency.

8.2 Brownouts

A brownout is a condition where a lower than normal mains voltage is being supplied by the local energy utility. A power line voltage reduction of 8 to 12% is usually considered a brownout. Unfortunately, Clipsal Integrated Systems has no control over a brownout, as it is usually caused by the local energy supplier. During a brownout, electronic equipment may experience errors due to an erratic power supply. Other electronic equipment may function poorly or not function at all.

Most C-Bus units (with power supplies) use switch mode power supplies, which are more immune to mains voltage changes. If the mains input voltage decreases, the output will remain at the same voltage. C-Bus Professional Series Dimmers do not use switch mode power supplies. If the mains voltage on a C-Bus Professional Series Dimmer drops, the output of that power supply will drop by the same percentage.

8.3 Overvoltages And Transients

All C-Bus units consist of electronic components that may be damaged by overvoltages and transients. C-Bus does provide some level of protection to units, but not enough to protect them from large inrush currents or lightning strikes.
It is recommended that overvoltage protection is used in the distribution board, to protect the C-Bus units from overvoltages and transients. If the C-Bus cable is routed between buildings or used in an outdoor installation, then protection must be used on the Cat-5 cable as well.

9.0 Analysing A C-Bus Network

When analysing a C-Bus network, there are a series of questions that an installer or programmer should ask themselves. These questions relate to:

• 3rd Party Interfaces
• C-Bus Clocks
• Network Burdens
• C-Bus Units
• C-Bus Cable
• C-Bus Power Supplies
• C-Bus Cable Runs.

This chapter identifies some of the questions each C-Bus installer or programmer should be asking themselves when analysing a C-Bus network.

9.1 Third Party Interface Questions

Before testing the C-Bus network, ensure all third party devices are disconnected from the C-Bus network. This will remove the possibility of the third party devices causing problems. Questions that will assist in analysing the C-Bus network are:

1) Are there any third party devices connected to the C-Bus network?
2) If so, what devices are connected?
3) Does the C-Bus network operate correctly once the third party device has been removed?
4) Is the function of the third party interface clearly understood?
5) What is the function of the third party interface?

9.2 C-Bus Clock Questions

If no C-Bus clock is enabled on a C-Bus network, then there will be no communications. Questions that will assist in analysing the C-Bus network are:

1) Is the C-Bus LED on the DIN rail units on?
2) How many clocks are enabled on the C-Bus network?
3) How many active clocks are there?
4) Are the units with the C-Bus clock enabled physically located near each other?
5) Are there any clocks enabled on Network Bridges (5500NB) or C-Bus Network Interfaces (5500CN)?

9.3 Network Burden Questions

Check all C-Bus devices for hardware Network Burdens.
Also check for software Network Burdens by using the latest version of the Toolkit software. Questions that will assist in analysing the C-Bus network are:

1) Does the C-Bus Calculator in Toolkit indicate that a Network Burden is needed?
2) How many hardware burdens are connected to the C-Bus network?
3) Have you checked all the C-Bus distribution boards for hardware Network Burdens?
4) Is the hardware burden used supplied by Clipsal Integrated Systems?
5) What type of hardware Network Burden is being used (RJ45 or flying lead)?
6) How many software Network Burdens are enabled?
7) What is the unit type and unit address of the unit with the software enabled Network Burden?
8) Have you checked the Global tab of each C-Bus unit in Toolkit, to see if there is a software Network Burden enabled?
9) Have you looked on the Status tab in Toolkit (and clicked the Update Status button) for each C-Bus unit, to see if there is a software Network Burden enabled?

9.4 C-Bus Unit Questions

The following may be used to analyse how the C-Bus units affect the C-Bus system requirements. Questions that will assist in analysing the C-Bus network are:

1) Including C-Bus power supplies, how many C-Bus units are connected to the network?
2) Add up the current consumption of each C-Bus unit on the C-Bus network. What is the total current consumption for all these units?
3) Are there enough C-Bus power supplies to provide current for all the units on the network?

9.5 C-Bus Cable Questions

The following may be used to analyse any possible wiring problems. Questions that will assist in analysing the C-Bus network are:

1) Has the recommended Cat-5 cable been used?
2) When running the C-Bus cable in parallel with 240 V mains cable, is there a minimum segregation of 150 mm at all times?
3) When the C-Bus cable needs to cross 240 V mains cable, does it cross at a 90° angle with 60 mm segregation?
4) What pairs of the C-Bus cable have been used for the positive and negative rails of C-Bus?
5) Have any junctions been made on the C-Bus cable (using junction boxes etc.)?
6) What is the exact length of the C-Bus cable, including patch leads?
7) Has the C-Bus cable been checked for broken conductors?
8) Has the C-Bus cable been checked for high resistance joins?
9) Has the C-Bus cable been checked for open circuits?
10) Has the C-Bus cable been checked for short circuits?
11) Has the C-Bus cable been checked for closed loops?
12) Does any of the C-Bus cable run underground, either in conduit or buried directly? If yes, does the installation of this cable meet national wiring standards?

9.6 C-Bus Power Supply Questions

The following will help to identify any possible problems related to power supplies. Questions that will assist in analysing the C-Bus network are:

1) What is the mains voltage measured on the C-Bus power supplies?
2) How many stand-alone C-Bus power supplies are there on the network?
3) How many on-board C-Bus power supplies are there on the network?
4) Add up the current that each C-Bus power supply provides. What is the total current supplied by all C-Bus power supplies?
5) Are the C-Bus power supplies concentrated in one general area?
6) Are the C-Bus power supplies evenly distributed throughout the installation?
7) Are there any old C-Bus power supplies (5100PS)?

9.7 C-Bus Cable Run Questions

The following will allow the analysis of problems with cable runs. Questions that will assist in analysing the C-Bus network are:

1) Is the network a standard daisy chain configuration?
2) If using a star configuration, how many branches are there on the network?
3) Are any of the voltages on the branches below 20 V DC?
4) What are the lowest voltages (measured on C-Bus units) on each C-Bus cable run?
5) Are any of the voltages on the branches above 36 V DC?
6) What are the highest voltages (measured on C-Bus units) on each C-Bus cable run?
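Several of the numeric limits scattered through this handbook can be pulled together into one quick screening check: bus voltage 15–36 V DC with an average of at least 20 V recommended (Section 6.0), network current no more than 2 A (Section 5.2), at most 1000 m of cable (Section 7.2), and a network impedance of 400 Ω to 1.5 kΩ (Section 4.0). A minimal sketch of such an aggregated check, as an illustration only:

```python
# Screening check using the handbook's published limits.
def network_health(voltage_v, current_a, cable_m, impedance_ohm):
    """Return a list of warnings; an empty list means every measured
    figure is within the handbook's stated ranges."""
    warnings = []
    if not 15.0 <= voltage_v <= 36.0:
        warnings.append("bus voltage outside 15-36 V DC")
    elif voltage_v < 20.0:
        warnings.append("bus voltage below the recommended 20 V DC average")
    if current_a > 2.0:
        warnings.append("network current exceeds 2 A")
    if cable_m > 1000:
        warnings.append("more than 1000 m of C-Bus cable")
    if not 400 <= impedance_ohm <= 1500:
        warnings.append("network impedance outside 400 ohm - 1.5 kohm")
    return warnings
```

A clean result from this check does not prove the network is healthy — clock waveform quality, burden placement and third party devices still need the checks described above — but any warning it raises points at a specific section of this handbook.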
12 January 2011 17:46 [Source: ICIS news]
TORONTO (ICIS)--Shell is in talks with potential buyers to sell the base oil operations and business at its refinery in Hamburg-Harburg, Germany.
Shell said that the remaining refinery would be converted into a terminal for oil products, with completion expected in 2012.
The move came as attempts to sell the
In
A Shell lubricants plant, Schmierölwerk Grasbrook, in
The Shell refinery in Hamburg-Harburg has a processing capacity of 5.5m tonnes/year. It produces fuels, base oils and waxes, according to information on the company’s website.
Last year, Shell sold its refinery at Heide, near
Shell is also converting a refinery
9.3. Transformer¶
Until now, we have covered the three major neural network architectures: the convolutional neural network (CNN), the recurrent neural network (RNN), and the attention mechanism. Before we dive into the transformer architecture, let us quickly review the pros and cons of the first two:
The convolutional neural network (CNN): easy to parallelize within a layer, but cannot capture variable-length sequential dependencies very well.
The recurrent neural network (RNN): able to capture long-range, variable-length sequential information, but suffers from the inability to parallelize within a sequence.
To combine the advantages of both CNNs and RNNs, [Vaswani et al., 2017] designed a novel architecture using the attention mechanism that we just introduced in Section 9.1. This architecture, called the transformer, achieves parallelization by capturing recurrence with attention while encoding each item’s position in the sequence. As a result, the transformer leads to a comparable model with significantly shorter training time.
Similar to the seq2seq model in Section 8.14, the transformer is also based on the encoder-decoder architecture. However, the transformer differs from the former by replacing the recurrent layers in seq2seq with multi-head attention layers, incorporating position-wise information through position encoding, and applying layer normalization.
Recall that in Section 9.2 we introduced the sequence to sequence (seq2seq) with attention model. To better visualize and compare the transformer architecture with it, we draw them side by side in Fig. 9.3.1. These two models are broadly similar to each other.
On the flip side, the transformer differs from the seq2seq with attention model in three major places:

Transformer block: a recurrent layer in seq2seq is replaced by a transformer block, which contains a multi-head attention layer and a position-wise feed-forward network.

Add and norm: the inputs and outputs of both the multi-head attention layer and the position-wise feed-forward network are processed by an "add and norm" step, combining a residual connection with layer normalization.

Position encoding: since the self-attention layer does not distinguish the item order within a sequence, a positional encoding layer is used to add sequential information into each item.
In the rest of this section, we will equip you with each new component introduced by the transformer, and get you up and running to construct a machine translation model.
import math
import d2l
from mxnet import np, npx, autograd
from mxnet.gluon import nn

npx.set_np()
9.3.1. Multi-Head Attention¶
Before discussing the multi-head attention layer, let us quickly describe the self-attention architecture. The self-attention model is a normal attention model whose queries, keys, and values are each copied from the items of the sequential input. As we illustrate in Fig. 9.3.2, self-attention outputs a sequential output of the same length as its input. Compared to a recurrent layer, the output items of a self-attention layer can be computed in parallel; therefore, it is easy to obtain a highly efficient implementation.
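To make the "same input plays all three roles" idea concrete, here is a toy pure-Python sketch of dot-product self-attention (not the book's MXNet code, and with no learned projections — scaled dot-product scoring is assumed). It shows that the output is again a sequence of the same length and dimensionality as the input:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Dot-product self-attention where queries = keys = values = X.
    X is a list of n vectors; the result is again a list of n vectors."""
    d = len(X[0])
    out = []
    for q in X:
        # Scaled dot product of this query against every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]
        weights = softmax(scores)
        # Each output item is a convex combination of all value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])
    return out
```

Note that each output item depends on the whole input sequence but is computed independently of the other output items, which is exactly why the loop over queries can be parallelized.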
Based on the above, the multi-head attention layer consists of \(h\) parallel self-attention layers, each one is called a head. For each head, before feeding into the attention layer, we project the queries, keys, and values with three dense layers with hidden sizes \(p_q\), \(p_k\), and \(p_v\), respectively. The outputs of these \(h\) attention heads are concatenated and then processed by a final dense layer.
To be more specific, assume that the dimensions of a query, a key, and a value are \(d_q\), \(d_k\), and \(d_v\), respectively. Then, for each head \(i=1,\ldots,h\), we can train the learnable parameters \(\mathbf W_q^{(i)}\in\mathbb R^{p_q\times d_q}\), \(\mathbf W_k^{(i)}\in\mathbb R^{p_k\times d_k}\), and \(\mathbf W_v^{(i)}\in\mathbb R^{p_v\times d_v}\). Therefore, the output for each head is

\[\mathbf o^{(i)} = \text{attention}(\mathbf W_q^{(i)}\mathbf q, \mathbf W_k^{(i)}\mathbf k, \mathbf W_v^{(i)}\mathbf v),\]
where the \(\text{attention}\) in the formula can be any attention layer, such as the DotProductAttention and MLPAttention we introduced in Section 9.1.
After that, the outputs of length \(p_v\) from each of the \(h\) attention heads are concatenated into an output of length \(h p_v\), which is then passed to the final dense layer with \(d_o\) hidden units. The weights of this dense layer can be denoted by \(\mathbf W_o\in\mathbb R^{d_o\times h p_v}\). As a result, the multi-head attention output will be

\[\mathbf o = \mathbf W_o \begin{bmatrix}\mathbf o^{(1)}\\\vdots\\\mathbf o^{(h)}\end{bmatrix}.\]
Let us implement the multi-head attention in MXNet. Assume that the multi-head attention uses num_heads \(=h\) heads, and that the hidden size hidden_size \(=p_q=p_k=p_v\) is the same for the query, key, and value dense layers. In addition, since the multi-head attention keeps the same dimensionality between its input and its output, the output feature size \(d_o\) equals hidden_size as well.
class MultiHeadAttention(nn.Block):
    def __init__(self, hidden_size, num_heads, dropout, **kwargs):
        super(MultiHeadAttention, self).__init__(**kwargs)
        self.num_heads = num_heads
        self.attention = d2l.DotProductAttention(dropout)
        self.W_q = nn.Dense(hidden_size, use_bias=False, flatten=False)
        self.W_k = nn.Dense(hidden_size, use_bias=False, flatten=False)
        self.W_v = nn.Dense(hidden_size, use_bias=False, flatten=False)
        self.W_o = nn.Dense(hidden_size, use_bias=False, flatten=False)

    def forward(self, query, key, value, valid_length):
        # query, key, and value shape: (batch_size, seq_len, dim),
        # where seq_len is the length of the input sequence.
        # valid_length shape is either (batch_size, ) or (batch_size, seq_len).

        # Project and transpose query, key, and value from
        # (batch_size, seq_len, hidden_size * num_heads) to
        # (batch_size * num_heads, seq_len, hidden_size)
        query = transpose_qkv(self.W_q(query), self.num_heads)
        key = transpose_qkv(self.W_k(key), self.num_heads)
        value = transpose_qkv(self.W_v(value), self.num_heads)
        if valid_length is not None:
            # Copy valid_length by num_heads times
            if valid_length.ndim == 1:
                valid_length = np.tile(valid_length, self.num_heads)
            else:
                valid_length = np.tile(valid_length, (self.num_heads, 1))
        output = self.attention(query, key, value, valid_length)
        # Transpose from (batch_size * num_heads, seq_len, hidden_size) back
        # to (batch_size, seq_len, hidden_size * num_heads)
        output_concat = transpose_output(output, self.num_heads)
        return self.W_o(output_concat)
Here are the definitions of the transpose functions transpose_qkv and transpose_output, which are the inverses of each other.
def transpose_qkv(X, num_heads):
    # Original X shape: (batch_size, seq_len, hidden_size * num_heads);
    # -1 means inferring its value. After the first reshape, X shape:
    # (batch_size, seq_len, num_heads, hidden_size)
    X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
    # After transpose, X shape: (batch_size, num_heads, seq_len, hidden_size)
    X = X.transpose(0, 2, 1, 3)
    # Merge the first two dimensions.
    # Output shape: (batch_size * num_heads, seq_len, hidden_size)
    output = X.reshape(-1, X.shape[2], X.shape[3])
    return output

def transpose_output(X, num_heads):
    # A reversed version of transpose_qkv
    X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
    X = X.transpose(0, 2, 1, 3)
    return X.reshape(X.shape[0], X.shape[1], -1)
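The inverse relationship between the two functions can be checked with plain NumPy (used here instead of MXNet's np module so the check is self-contained):

```python
import numpy as np

def transpose_qkv(X, num_heads):
    # (batch, seq_len, num_heads * hidden) -> (batch * num_heads, seq_len, hidden)
    X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
    X = X.transpose(0, 2, 1, 3)
    return X.reshape(-1, X.shape[2], X.shape[3])

def transpose_output(X, num_heads):
    # Inverse of transpose_qkv
    X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
    X = X.transpose(0, 2, 1, 3)
    return X.reshape(X.shape[0], X.shape[1], -1)

# batch_size=2, seq_len=4, num_heads=3, hidden=2
X = np.arange(2 * 4 * 6, dtype=np.float64).reshape(2, 4, 6)
Y = transpose_qkv(X, 3)
assert Y.shape == (6, 4, 2)                       # (batch * num_heads, seq_len, hidden)
assert np.array_equal(transpose_output(Y, 3), X)  # round trip recovers X exactly
```

The round trip succeeding for arbitrary values confirms that concatenating the heads back with transpose_output does not scramble any entries.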
Let us validate the MultiHeadAttention model in a toy example. Create a multi-head attention with hidden size \(d_o = 100\); the output will share the same batch size and sequence length as the input, but the last dimension will be equal to hidden_size \(= 100\).
cell = MultiHeadAttention(100, 10, 0.5)
cell.initialize()
X = np.ones((2, 4, 5))
valid_length = np.array([2, 3])
cell(X, X, X, valid_length).shape
(2, 4, 100)
9.3.2. Position-wise Feed-Forward Networks¶
Another key component in the transformer block is the position-wise feed-forward network (FFN). It consists of two dense layers that are applied to the last dimension of the input. Since the same two dense layers are used for every position in the sequence, we refer to it as position-wise. Indeed, it is equivalent to applying two \(Conv(1,1)\), i.e., \(1 \times 1\), convolution layers.
Below, PositionWiseFFN shows how to implement a position-wise FFN with two dense layers of hidden sizes ffn_hidden_size and hidden_size_out, respectively.
class PositionWiseFFN(nn.Block):
    def __init__(self, ffn_hidden_size, hidden_size_out, **kwargs):
        super(PositionWiseFFN, self).__init__(**kwargs)
        self.ffn_1 = nn.Dense(ffn_hidden_size, flatten=False,
                              activation='relu')
        self.ffn_2 = nn.Dense(hidden_size_out, flatten=False)

    def forward(self, X):
        return self.ffn_2(self.ffn_1(X))
Similar to the multi-head attention, the position-wise feed-forward network only changes the last dimension of the input, i.e., the feature dimension. In addition, if two items in the input sequence are identical, the corresponding outputs will be identical as well. Let us try a toy model!
ffn = PositionWiseFFN(4, 8)
ffn.initialize()
ffn(np.ones((2, 3, 4)))[0]
array([[]])
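The position-wise property claimed above is easy to verify with a plain NumPy sketch (this is an illustration with random weights, not the MXNet implementation): because the same dense weights are applied independently at every position, identical positions produce identical outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two dense layers: feature size 4 -> hidden 8 -> output 3
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def position_wise_ffn(X):
    # The same weights are applied to every position (last dimension only)
    return np.maximum(X @ W1, 0) @ W2  # ReLU between the two layers

X = rng.normal(size=(2, 5, 4))
X[0, 1] = X[0, 3]                      # make two positions identical
Y = position_wise_ffn(X)
assert Y.shape == (2, 5, 3)            # only the feature dimension changed
assert np.allclose(Y[0, 1], Y[0, 3])   # identical inputs -> identical outputs
```

This is exactly why the layer alone cannot distinguish positions, which motivates the positional encoding introduced later in this section.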
9.3.3. Add and Norm¶
Besides the above two components in the transformer block, the "add and norm" layers within the block also play a core role in connecting the inputs and outputs of the other layers smoothly. To be clear, we add a layer containing a residual structure and a layer normalization after both the multi-head attention layer and the position-wise FFN. Layer normalization is similar to the batch normalization discussed in Section 7.5. One difference is that the mean and variance for layer normalization are calculated along the last dimension, e.g., X.mean(axis=-1), instead of the first (batch) dimension, e.g., X.mean(axis=0). Layer normalization prevents the range of values in the layers from changing too much, which allows faster training and better generalization.
MXNet has both LayerNorm and BatchNorm implemented in the nn module. Let us call both of them and see the difference in the example below.
layer = nn.LayerNorm()
layer.initialize()
batch = nn.BatchNorm()
batch.initialize()
X = np.array([[1, 2], [2, 3]])
# Compute mean and variance from X in the training mode
with autograd.record():
    print('layer norm:', layer(X), '\nbatch norm:', batch(X))
layer norm: [[-0.99998  0.99998]
 [-0.99998  0.99998]]
batch norm: [[-0.99998 -0.99998]
 [ 0.99998  0.99998]]
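The printed values can be reproduced by hand with plain NumPy: layer norm normalizes each row (the last axis), while batch norm normalizes each column (the batch axis). The small epsilon in the denominator is why the outputs are \(\pm 0.99998\) rather than exactly \(\pm 1\).

```python
import numpy as np

X = np.array([[1., 2.], [2., 3.]])
eps = 1e-5  # epsilon added to the variance for numerical stability

def normalize(X, axis):
    mean = X.mean(axis=axis, keepdims=True)
    var = X.var(axis=axis, keepdims=True)
    return (X - mean) / np.sqrt(var + eps)

layer_norm = normalize(X, axis=-1)  # per row: the last (feature) axis
batch_norm = normalize(X, axis=0)   # per column: the batch axis

assert np.allclose(layer_norm, [[-0.99998, 0.99998], [-0.99998, 0.99998]], atol=1e-4)
assert np.allclose(batch_norm, [[-0.99998, -0.99998], [0.99998, 0.99998]], atol=1e-4)
```

Note how the two results place the signs differently: layer norm treats [1, 2] and [2, 3] as two independent feature vectors, while batch norm treats [1, 2] and [2, 3] as two independent feature columns.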
Now let us implement the connection block AddNorm. AddNorm accepts two inputs \(X\) and \(Y\). We can think of \(X\) as the original input of the residual network, and of \(Y\) as the output of either the multi-head attention layer or the position-wise FFN. In addition, we apply dropout on \(Y\) for the purpose of regularization.

class AddNorm(nn.Block):
    def __init__(self, dropout, **kwargs):
        super(AddNorm, self).__init__(**kwargs)
        self.dropout = nn.Dropout(dropout)
        self.norm = nn.LayerNorm()

    def forward(self, X, Y):
        return self.norm(self.dropout(Y) + X)

Due to the residual connection, \(X\) and \(Y\) should have the same shape.

add_norm = AddNorm(0.5)
add_norm.initialize()
add_norm(np.ones((2, 3, 4)), np.ones((2, 3, 4))).shape
(2, 3, 4)
9.3.4. Positional Encoding¶
Unlike the recurrent layer, both the multi-head attention layer and the position-wise feed-forward network compute the output of each item in the sequence independently. This property enables us to parallelize the computation, but it fails to model the sequential information of a given sequence. To better capture sequential information, the transformer model uses positional encoding to maintain the positional information of the input sequence.
So what is the positional encoding? Assume that \(X\in\mathbb R^{l\times d}\) is the embedding of an example, where \(l\) is the sequence length and \(d\) is the embedding size. This positional encoding layer encodes X’s position \(P\in\mathbb R^{l\times d}\) and outputs \(P+X\).
The position \(P\) is a 2d matrix, where \(i\) refers to the order in the sentence, and \(j\) refers to the position along the embedding vector dimension. In this way, each value in the origin sequence is then maintained using the equations below:
\(P_{i, 2j} = \sin\left(i / 10000^{2j/d}\right), \qquad P_{i, 2j+1} = \cos\left(i / 10000^{2j/d}\right),\)

for \(i=0,\ldots,l-1\) and \(j=0,\ldots,\lfloor(d-1)/2\rfloor\).
An intuitive visualization and implementation of the positional encoding are shown below.
class PositionalEncoding(nn.Block):
    def __init__(self, embedding_size, dropout, max_len=1000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(dropout)
        # Create a long enough P
        self.P = np.zeros((1, max_len, embedding_size))
        X = np.arange(0, max_len).reshape(-1, 1) / np.power(
            10000, np.arange(0, embedding_size, 2) / embedding_size)
        self.P[:, :, 0::2] = np.sin(X)
        self.P[:, :, 1::2] = np.cos(X)

    def forward(self, X):
        X = X + self.P[:, :X.shape[1], :].as_in_context(X.context)
        return self.dropout(X)
Now we test the PositionalEncoding with a toy model for 4 dimensions. As we can see, the 4th dimension has the same frequency as the 5th but with a different offset (one is a sine, the other a cosine). The 6th and 7th dimensions have a lower frequency.
pe = PositionalEncoding(20, 0)
pe.initialize()
Y = pe(np.zeros((1, 100, 20)))
d2l.plot(np.arange(100), Y[0, :, 4:8].T, figsize=(6, 2.5),
         legend=["dim %d" % p for p in [4, 5, 6, 7]])
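The frequency and offset claims can also be checked numerically with plain NumPy, mirroring the encoding above: columns 4 and 5 are the sine and cosine of the same angle, while columns 6 and 7 use a strictly lower frequency.

```python
import numpy as np

max_len, d = 100, 20
i = np.arange(max_len).reshape(-1, 1)
# One angle column per frequency: i / 10000^(2j/d) for j = 0, 1, ...
angles = i / np.power(10000, np.arange(0, d, 2) / d)
P = np.zeros((max_len, d))
P[:, 0::2] = np.sin(angles)
P[:, 1::2] = np.cos(angles)

# Columns 4 and 5 share the frequency 1 / 10000^(4/d); they differ only in phase
freq = 1 / 10000 ** (4 / d)
assert np.allclose(P[:, 4], np.sin(i[:, 0] * freq))
assert np.allclose(P[:, 5], np.cos(i[:, 0] * freq))
# Columns 6 and 7 oscillate at the lower frequency 1 / 10000^(6/d)
assert 1 / 10000 ** (6 / d) < freq
```

This geometric spacing of frequencies is what lets each position receive a distinct, smoothly varying code along the embedding dimension.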
9.3.5. Encoder¶
Armed with all the essential components of the transformer, let us first build an encoder transformer block. This encoder contains a multi-head attention layer, a position-wise feed-forward network, and two "add and norm" connection blocks. As you can observe in the code, the output dimensions of both the attention model and the position-wise FFN model in the EncoderBlock are equal to embedding_size. This is due to the nature of the residual block, as we need to add these outputs back to the original value during "add and norm".
class EncoderBlock(nn.Block):
    def __init__(self, embedding_size, ffn_hidden_size, num_heads,
                 dropout, **kwargs):
        super(EncoderBlock, self).__init__(**kwargs)
        self.attention = MultiHeadAttention(embedding_size, num_heads,
                                            dropout)
        self.addnorm_1 = AddNorm(dropout)
        self.ffn = PositionWiseFFN(ffn_hidden_size, embedding_size)
        self.addnorm_2 = AddNorm(dropout)

    def forward(self, X, valid_length):
        Y = self.addnorm_1(X, self.attention(X, X, X, valid_length))
        return self.addnorm_2(Y, self.ffn(Y))
Due to the residual connections, this block will not change the input shape. This means that the embedding_size argument should be equal to the size of the input's last dimension. In our toy example below, embedding_size \(= 24\), ffn_hidden_size \(= 48\), num_heads \(= 8\), and dropout \(= 0.5\).
X = np.ones((2, 100, 24))
encoder_blk = EncoderBlock(24, 48, 8, 0.5)
encoder_blk.initialize()
encoder_blk(X, valid_length).shape
(2, 100, 24)
Now it comes to the implementation of the whole transformer encoder, as shown below. In the transformer encoder, \(n\) blocks of EncoderBlock are stacked one after another. Because of the residual connection, the embedding output and the positional encoding must have the same dimension; note also that we multiply the embedding output by \(\sqrt{d}\) (with \(d =\) embedding_size) to rescale it before summing it with the positional encoding.

class TransformerEncoder(d2l.Encoder):
    def __init__(self, vocab_size, embedding_size, ffn_hidden_size,
                 num_heads, num_layers, dropout, **kwargs):
        super(TransformerEncoder, self).__init__(**kwargs)
        self.embedding_size = embedding_size
        self.embed = nn.Embedding(vocab_size, embedding_size)
        self.pos_encoding = PositionalEncoding(embedding_size, dropout)
        self.blks = nn.Sequential()
        for i in range(num_layers):
            self.blks.add(
                EncoderBlock(embedding_size, ffn_hidden_size, num_heads,
                             dropout))

    def forward(self, X, valid_length, *args):
        X = self.pos_encoding(self.embed(X) * math.sqrt(self.embedding_size))
        for blk in self.blks:
            X = blk(X, valid_length)
        return X
Let us create an encoder with two stacked encoder transformer blocks whose hyper-parameters are the same as before. Similar to the previous toy example's parameters, we add two more: vocab_size \(= 200\) and num_layers \(= 2\).
encoder = TransformerEncoder(200, 24, 48, 8, 2, 0.5)
encoder.initialize()
encoder(np.ones((2, 100)), valid_length).shape
(2, 100, 24)
9.3.6. Decoder¶
The decoder transformer block looks similar to the encoder transformer block. However, besides the two sub-layers (the multi-head attention layer and the position-wise FFN), the decoder transformer block contains a third sub-layer, which applies multi-head attention to the output of the encoder stack. Similar to the encoder transformer block, the decoder transformer block employs "add and norm", i.e., the residual connections and layer normalization, to connect each of the sub-layers.
To be specific, at time step \(t\), assume that \(\mathbf x_t\) is the current input, i.e., the query. As illustrated in Fig. 9.3.5, the keys and values of the self-attention layer consist of the current query together with all past queries \(\mathbf x_1, \ldots, \mathbf x_{t-1}\).
During training, the output for the \(t\)-th query could observe all the previous key-value pairs, which results in behavior inconsistent with prediction. We can eliminate the unnecessary information by specifying the valid length to be \(t\) for the \(t^\textrm{th}\) query.
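The valid lengths used during training form a simple pattern, which can be sketched in plain NumPy before looking at the full block implementation: query \(t\) may attend to the first \(t\) positions only.

```python
import numpy as np

batch_size, seq_len = 2, 4
# valid_length[b, t] = t + 1: the t-th query sees keys 1..t+1 (itself and the past)
valid_length = np.tile(np.arange(1, seq_len + 1), (batch_size, 1))
assert valid_length.tolist() == [[1, 2, 3, 4], [1, 2, 3, 4]]

# Equivalent boolean mask: entry (t, k) is attendable iff k < valid_length[t]
mask = np.arange(seq_len) < valid_length[0][:, None]
assert mask.tolist() == [[True, False, False, False],
                         [True, True, False, False],
                         [True, True, True, False],
                         [True, True, True, True]]
```

The lower-triangular mask is exactly what the attention layer applies internally when given these valid lengths, so training matches the step-by-step behavior at prediction time.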
class DecoderBlock(nn.Block):
    # i means it is the i-th block in the decoder
    def __init__(self, embedding_size, ffn_hidden_size, num_heads,
                 dropout, i, **kwargs):
        super(DecoderBlock, self).__init__(**kwargs)
        self.i = i
        self.attention_1 = MultiHeadAttention(embedding_size, num_heads,
                                              dropout)
        self.addnorm_1 = AddNorm(dropout)
        self.attention_2 = MultiHeadAttention(embedding_size, num_heads,
                                              dropout)
        self.addnorm_2 = AddNorm(dropout)
        self.ffn = PositionWiseFFN(ffn_hidden_size, embedding_size)
        self.addnorm_3 = AddNorm(dropout)

    def forward(self, X, state):
        enc_outputs, enc_valid_lengh = state[0], state[1]
        # state[2][i] contains the past queries for this block
        if state[2][self.i] is None:
            key_values = X
        else:
            key_values = np.concatenate((state[2][self.i], X), axis=1)
        state[2][self.i] = key_values
        if autograd.is_training():
            batch_size, seq_len, _ = X.shape
            # Shape: (batch_size, seq_len); the values in the j-th column
            # are j + 1
            valid_length = np.tile(np.arange(1, seq_len + 1, ctx=X.context),
                                   (batch_size, 1))
        else:
            valid_length = None
        X2 = self.attention_1(X, key_values, key_values, valid_length)
        Y = self.addnorm_1(X, X2)
        Y2 = self.attention_2(Y, enc_outputs, enc_outputs, enc_valid_lengh)
        Z = self.addnorm_2(Y, Y2)
        return self.addnorm_3(Z, self.ffn(Z)), state
Similar to the encoder transformer block,
embedding_size should be
equal to the last dimension size of \(X\).
decoder_blk = DecoderBlock(24, 48, 8, 0.5, 0)
decoder_blk.initialize()
X = np.ones((2, 100, 24))
state = [encoder_blk(X, valid_length), valid_length, [None]]
decoder_blk(X, state)[0].shape
(2, 100, 24)
The construction of the whole transformer decoder is identical to that of the encoder, except for the additional final dense layer used to obtain the output confidence scores. Let us implement the transformer decoder TransformerDecoder. Besides the regular hyper-parameters such as vocab_size and embedding_size, the transformer decoder also needs the transformer encoder's outputs enc_outputs and env_valid_lengh.
class TransformerDecoder(d2l.Decoder):
    def __init__(self, vocab_size, embedding_size, ffn_hidden_size,
                 num_heads, num_layers, dropout, **kwargs):
        super(TransformerDecoder, self).__init__(**kwargs)
        self.embedding_size = embedding_size
        self.num_layers = num_layers
        self.embed = nn.Embedding(vocab_size, embedding_size)
        self.pos_encoding = PositionalEncoding(embedding_size, dropout)
        self.blks = nn.Sequential()
        for i in range(num_layers):
            self.blks.add(
                DecoderBlock(embedding_size, ffn_hidden_size, num_heads,
                             dropout, i))
        self.dense = nn.Dense(vocab_size, flatten=False)

    def init_state(self, enc_outputs, env_valid_lengh, *args):
        return [enc_outputs, env_valid_lengh, [None] * self.num_layers]

    def forward(self, X, state):
        X = self.pos_encoding(self.embed(X) * math.sqrt(self.embedding_size))
        for blk in self.blks:
            X, state = blk(X, state)
        return self.dense(X), state
9.3.7. Training¶
Finally, we are fully prepared to build an encoder-decoder model with the transformer architecture. Similar to the seq2seq-with-attention model built in the previous section, we use the following hyper-parameters: two transformer blocks, with both the embedding size and the block output size equal to \(32\). The additional hyper-parameters are \(4\) heads, with a hidden size \(2\) times larger than the output size.
num_hiddens, num_heads = 64, 4
embedding_size, num_layers, dropout = 32, 2, 0.0
batch_size, num_steps = 64, 10
lr, num_epochs, ctx = 0.005, 100, d2l.try_gpu()

src_vocab, tgt_vocab, train_iter = d2l.load_data_nmt(batch_size, num_steps)
encoder = TransformerEncoder(
    len(src_vocab), embedding_size, num_hiddens, num_heads, num_layers,
    dropout)
decoder = TransformerDecoder(
    len(src_vocab), embedding_size, num_hiddens, num_heads, num_layers,
    dropout)
model = d2l.EncoderDecoder(encoder, decoder)
d2l.train_s2s_ch8(model, train_iter, lr, num_epochs, ctx)
loss 0.033, 3364 tokens/sec on gpu(0)
As we can see from the training time and accuracy, compared to the seq2seq model with attention model, the transformer runs faster per epoch, and converges faster at the beginning.
Last but not least, let us translate some sentences. Unsurprisingly, this model outperforms the previous one we trained in the sections above.

9.3.8. Summary¶

The position-wise feed-forward network in the transformer block consists of two dense layers that apply to the last dimension.
Layer normalization differs from batch normalization by normalizing along the last dimension (the feature dimension) instead of the first (batch size) dimension.
Positional encoding is the only place that adds positional information to the transformer model.
9.3.9. Exercises¶
Try a larger number of epochs and compare the loss between the seq2seq model and the transformer model in the earlier and later stages.
Can you think of other functions for positional encoding?
Compare layer normalization and batch normalization. What are the suitable scenarios to apply each of them?
Service management with Nix.
Posted
I have often wondered how I would manage a cluster of machines. I have taken a lot of inspiration from Google's solution of treating them as a giant pool of resources, no machines are reserved for any particular task. This adds a level of abstraction so that you can focus on what your service needs in order to run instead of worrying about where it should run. I also believe that this will provide better utilization as an automated scheduler can pack services onto machines very effectively where as a human would be inclined to use a simple algorithm and just get more computers as they appear to get "full".
Another goal of my infrastructure is to avoid virtualization. This just adds complexity and overhead to the problem. For some models this makes sense, but for a single organization users and groups should provide enough access control to create a secure system.
My early visions were pretty simple. They were basically using a standard package manager such as apt or pacman to download required packages from a custom repository. This means that dependencies could be pulled down automatically and shared dependencies were only downloaded and installed once. While this basic idea is good, it has a major problem when it comes to versions. If you wanted to update a package that was used by multiple services, you would run the risk that one of them could be incompatible with the newer version. One workaround would be to give each version of the package a new name, which would work but starts to get messy.
Some people have used docker to solve this problem. While docker does solve the dependency problem it also adds a bunch of overheads that I would rather avoid. Running multiple docker containers often means having many copies of common libraries, both on the disk and in cache. Also the isolation makes introspection of services difficult.
This is where Nix comes in. I have used Nix to implement my ideal infrastructure. It is composed of two main layers to solve two different problems.
Infrastructure layer.
The first layer is the infrastructure layer. This is a core collection of programs and services that is identical across all machines. It runs the cluster management software and global services such as log collection and forwarding, and provides tools for system administrators and other users who might log into a machine to debug a problem. In my current setup this consists of the following things:
- Administration tools including shell and editor configuration.
- Logins for users.
- A webserver service that can be required by the application layer.
- journald configuration and a log forwarder.
- A cluster management solution (currently fleet).
- Secrets required by the services.
These provide a base on which to run the services. I manage this layer via nixops which provides an easy way to manage these static servers, however this layer could effectively be any distribution as the application layer only depends on Nix and packages in the Nix store.
Application Layer
The application layer is where the business happens. These services are not static but instead scheduled by a cluster manager across the available machines. Currently I am using fleet as it has a small resource footprint and I only need very basic scheduling. It works well for my simple situation however if I was going to expand my cluster I would switch to using Mesos as it provides much more advanced features.
Fleet and Mesos are excellent building blocks but they still need a bit of magic to create a full management solution. However, Nix provides all that is needed for this glue, allowing me to run services that are largely independent of the underlying system. The basic premise is that wherever the service gets scheduled, it downloads all of its dependencies and starts to run. Using Nix we can ensure that there will be no conflicts between "packages", as they are all stored under unique names in the Nix store, and dependencies are "de-duplicated": dependencies already present on the system are not re-downloaded, and the existing version is used instead.
I will show you a quick example of how I got this to work using Nix. The service I will be showing here is etcd-cloudflare-dns, a ruby script I wrote that continuously updates my CloudFlare DNS settings to match which services are running in etcd. The details of the script itself aren't important, but it does require a couple of dependencies to run (ruby) and I always want exactly one instance running.
The first step is building the package that will be used to distribute the service. This is done using the default.nix expression in the project. This contains instructions for packaging the script and, through the import <nixpkgs>, also instructions for building all of its dependencies. This package can be built by running nix-build in the top level of the repo, which will build a package and create a symlink called result in the current directory which points at the result in the Nix store.
I wrote a script called b2-nix-cache which will build the expression and upload the new Nix archive files to a Backblaze b2 bucket which I use as a Nix binary cache. To use it I simply run
~/p/b2-nix-cache/upload.sh my-nix-cache-bucket /tmp/nix-cache-key in the directory of the project I wish to upload.
Now that I have all of the dependencies uploaded I schedule the service to start using Fleet. I generate the service file specifying the exact path of the binary in the Nix store and use a service that already exists on all of my machines to "realize" all of the required paths and pin them so that they don't get garbage collected while the service is running. This is what the generation script looks like.
#! /bin/bash

set -e

# Get the store path from the result of the build.
pkg="$(readlink -f result)"

# Generate the service description.
cat >etcd-cloudflare-dns.service <<END
[Unit]
Description=Keep CloudFlare DNS up to date with values in etcd.
After=nix-expr@${pkg##*/}.service
Requires=nix-expr@${pkg##*/}.service

[Service]
Environment=DOMAIN=kevincox.ca
Environment=GEM_HOME=$pkg/gems
EnvironmentFile=/etc/kevincox-environment
EnvironmentFile=/run/keys/cloudflare
User=etcd-cloudflare-dns
ExecStart=$pkg/bin/etcd-cloudflare-dns
Restart=always
END

# Remove the old service and start the new one.
fleetctl "$@" destroy etcd-cloudflare-dns.service || true
fleetctl "$@" start etcd-cloudflare-dns.service
This script is incredibly simple: it just inserts the package path into the right places and uploads the service to fleet. The interesting part is likely nix-expr@${pkg##*/}.service, which is a parameterized service that exists on all of my machines (it is put there by the infrastructure layer). Its job is to download the required dependencies and keep them around as long as the service is running. It consists of the simple systemd service below.
[Unit]
Description=Install and keep installed the specified path.
StopWhenUnneeded=true

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=${pkgs.nix}/bin/nix-store -r '/nix/store/%i' \
    --add-root '/nix/var/nix/gcroots/tmp/%n'
ExecStop=${pkgs.coreutils}/bin/rm -v '/nix/var/nix/gcroots/tmp/%n'
This service consists of just a start command and a stop command. The start command "realizes" a store path, which is Nix terms for "make it exist". Since I have both and my Backblaze bucket configured as binary caches the required derivations will be downloaded from both of them (preferring). Once that is done the
nix-expr@.service will be considered "running" and
etcd-cloudflare-dns will be allowed to start.
Once
etcd-cloudflare-dns is done running, either because it has been stopped or it has failed the associated
nix-expr@.service will be unneeded (assuming no other service is using the same derivation) and since
StopWhenUnneeded is
true it will be stopped, removing the gc root and allowing the associated store paths to be freed the next time the garbage collector runs.
It is also important to note that because /nix/var/nix/gcroots/tmp/ is cleared on each boot, power failures or other unexpected stops won't slowly cause your drive to fill. While it is possible that systemd failures could leak gc roots, I am assuming that they are rare enough that they won't cause much of an issue between reboots.
Downsides
The major downside of this approach is that I currently don't have a good secret management solution. Currently every secret I want to use has to be deployed to all machines and is accessed using a well-known path. While the secret is protected by filesystem permissions and not written to disk, it would be nice to be able to deploy new secrets without modifying the infrastructure layer configuration, and to avoid having every secret on every machine. I don't want to put the secrets into the Nix store because that is world readable, and fleet service descriptions don't seem like the right solution either. For now I am satisfied with my solution; however, I have thought of a couple more methods, including running a secret management service, or storing a single encryption key on each node and putting encrypted secrets in the Nix store (downloadable from my binary cache), but none of those seems great to me.
Another downside to this approach is the difficulty of performing security updates. Since the services themselves specify the exact versions of their dependencies, there is no way to globally update a package. This means that in a situation where an update is required, you would have to get the maintainer of every affected service to release a new version, instead of having the option to just update the package on every system and have it take effect. However, this is a well-known tradeoff of having exact dependencies and is not unique to this solution. It would be nice to have a script that scans the Nix stores of all machines and reports any vulnerable packages it finds. Tracing these packages to their users would be very easy, as Nix maintains a full store path dependency graph.
Conclusion
This solution allows separating the infrastructure layer from the applications so that changes in each are largely independent (except for required coupling). It abstracts away the actual hardware of the cluster and instead focuses on what your services actually need in order to run. This trivializes deployment and dependency management and, when combined with NixOS, provides a very robust and pure architecture.
Sending Gasless Transactions
Anyone who sends an Ethereum transaction needs to have Ether to pay for its gas fees. This forces new users to purchase Ether (which can be a daunting task) before they can start using a dapp. This is a major hurdle in user onboarding.
In this guide, we will explore the concept of gasless (also called meta) transactions, where the user does not need to pay for their gas fees. We will also introduce the Gas Station Network, a decentralized solution to this problem, as well as the OpenZeppelin libraries that allow you to leverage it in your dapps:
What is a Meta-transaction?
All Ethereum transactions use gas, and the sender of each transaction must have enough Ether to pay for the gas spent. Even though these gas costs are low for basic transactions (a couple of cents), getting Ether is no easy task: dApp users often need to go through Know Your Customer and Anti Money-Laundering processes (KYC & AML), which not only takes time but often involves sending a selfie holding their passport over the Internet (!). On top of that, they also need to provide financial information to be able to purchase Ether through an exchange. Only the most hardcore users will put up with this hassle, and dApp adoption greatly suffers when Ether is required. We can do better.
Enter meta-transactions. This is a fancy name for a simple idea: a third-party (called a relayer) can send another user’s transactions and pay themselves for the gas cost. In this scheme, users sign messages (not transactions) containing information about a transaction they would like to execute. Relayers are then responsible for signing valid Ethereum transactions with this information and sending them to the network, paying for the gas cost. A base contract preserves the identity of the user that originally requested the transaction. In this way, users can interact directly with smart contracts without needing to have a wallet or own Ether.
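The flow can be sketched in plain Python (an illustration only: HMAC stands in for Ethereum's ECDSA signatures, and all names here are made up for the sketch, not part of any GSN API). The user signs a message describing the call, the relayer pays to submit it, and the receiving side recovers the original sender and burns the nonce so a signed message cannot be replayed.

```python
import hmac
import hashlib

def sign(user_key, to, data, nonce):
    # Stand-in for an ECDSA signature over the meta-transaction fields
    msg = f"{to}|{data}|{nonce}".encode()
    return hmac.new(user_key, msg, hashlib.sha256).hexdigest()

class Recipient:
    def __init__(self, user_keys):
        self.user_keys = user_keys   # stand-in for on-chain signature recovery
        self.used_nonces = set()
        self.calls = []

    def execute_relayed(self, user, to, data, nonce, sig):
        expected = sign(self.user_keys[user], to, data, nonce)
        if not hmac.compare_digest(sig, expected):
            raise ValueError("bad signature")
        if (user, nonce) in self.used_nonces:
            raise ValueError("replay")      # each signed message is single-use
        self.used_nonces.add((user, nonce))
        self.calls.append((user, data))     # user identity preserved; relayer paid gas

alice_key = b"alice-secret"
recipient = Recipient({"alice": alice_key})
sig = sign(alice_key, "counter", "increase()", nonce=1)  # user signs; no Ether needed
recipient.execute_relayed("alice", "counter", "increase()", 1, sig)  # relayer submits
assert recipient.calls == [("alice", "increase()")]
try:
    recipient.execute_relayed("alice", "counter", "increase()", 1, sig)
    assert False, "replay should have been rejected"
except ValueError:
    pass  # replaying the same signed message is rejected
```

The key point the sketch captures is the separation of roles: the signature proves who requested the call, while whoever submits the transaction (and pays its gas) is irrelevant to the recipient's logic.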
This means that, in order to support meta transactions in your application, you need to keep a relayer process running - or leverage a decentralized relayer network.
The Gas Station Network
The Gas Station Network (GSN) is a decentralized network of relayers. It allows you to build dapps where you pay for your users transactions, so they do not need to hold Ether to pay for gas, easing their onboarding process.
However, relayers in the GSN are not running a charity: they’re running a business. The reason why they’ll gladly pay for your users' gas costs is because they will in turn charge your contract, the recipient. That way relayers get their money back, plus a bit extra as a fee for their services.
This may sound strange at first, but paying for user onboarding is a very common business practice. Lots of money is spent on advertising, free trials, new user discounts, etc., all with the goal of user acquisition. Compared to those, the cost of a couple of Ethereum transactions is actually very small.
Additionally, you can leverage the GSN in scenarios where your users pay you off-chain in advance (e.g. via credit card), with each GSN-call deducting from their balance on your system. The possibilities are endless!
Furthermore, the GSN is set up in such a way where it’s in the relayers' best interest to serve your requests, and there are measures in place to penalize them if they misbehave. All of this happens automatically, so you can safely start using their services worry-free.
Building a GSN-powered DApp
Time to build a dapp leveraging the GSN and push it to a testnet. In this section, we will use:
The
create-react-apppackage to bootstrap a React application, along with OpenZeppelin Network JS to easily set up a web3 object with GSN support
OpenZeppelin GSN Helpers to emulate the GSN in your local ganache instance
The
@openzeppelin/contracts-ethereum-package smart contract library to get GSN support into our contracts
The OpenZeppelin CLI to manage and deploy our contracts
We will create a simple contract that just counts transactions sent to it, but will tie it into the GSN so that users will not have to pay for the gas when sending these transactions. Let’s get started!
Setting up the Environment
We will begin by creating a new npm project and installing all dependencies, including Ganache (which we’ll use to run a local network):
$ mkdir gsn-dapp && cd gsn-dapp
$ npm init -y
$ npm install @openzeppelin/network
$ npm install --save-dev @openzeppelin/gsn-helpers @openzeppelin/contracts-ethereum-package @openzeppelin/upgrades @openzeppelin/cli ganache-cli
Use the CLI to set up a new project and follow the prompts so we can write our first contract.
$ npx oz init
Creating our Contract
We will write our vanilla
Counter contract in the newly created
contracts folder.
// contracts/Counter.sol
pragma solidity ^0.5.0;

contract Counter {
    uint256 public value;

    function increase() public {
        value += 1;
    }
}
This is simple enough. Now, let's modify it to add GSN support. This requires extending from the GSNRecipient contract and implementing the acceptRelayedCall method. This method must return whether we accept or reject paying for a user transaction. For the sake of simplicity, we will pay for all transactions sent to this contract.
// contracts/Counter.sol
pragma solidity ^0.5.0;

import "@openzeppelin/contracts-ethereum-package/contracts/GSN/GSNRecipient.sol";

contract Counter is GSNRecipient {
    uint256 public value;

    function increase() public {
        value += 1;
    }

    // Accept (and pay for) all relayed calls sent to this contract
    function acceptRelayedCall(
        address relay,
        address from,
        bytes calldata encodedFunction,
        uint256 transactionFee,
        uint256 gasPrice,
        uint256 gasLimit,
        uint256 nonce,
        bytes calldata approvalData,
        uint256 maxPossibleCharge
    ) external view returns (uint256, bytes memory) {
        return _approveRelayedCall();
    }

    // We won't do any pre or post processing, so leave _preRelayedCall and _postRelayedCall empty
    function _preRelayedCall(bytes memory context) internal returns (bytes32) {
    }

    function _postRelayedCall(bytes memory context, bool, uint256 actualCharge, bytes32) internal {
    }
}
Start ganache on a separate terminal by running npx ganache-cli. Then, create an instance of our new contract using the OpenZeppelin CLI with npx oz create and follow the prompts, including choosing to call a function to initialize the instance.
Be sure to take note of the address of your instance, which is returned at the end of this process!
$ openzeppelin create
✓ Compiled contracts with solc 0.5.9 (commit.e560f70d)
? Pick a contract to instantiate Counter
? Pick a network development
All contracts are up to date
? Call a function to initialize the instance after creating it? Yes
? Select which function * initialize()
✓ Instance created at 0xCfEB869F69431e42cdB54A4F4f105C19C080A601
Great! Now, if we deployed this contract to mainnet or the rinkeby testnet, we would be almost ready to start sending gasless transactions to it, since the GSN is already set up on both of those networks. However, since we are on a local ganache, we’ll need to set it up ourselves.
Deploying a Local GSN for Development
The GSN is composed of a central RelayHub contract that coordinates all relayed transactions, as well as multiple decentralized relayers. The relayers are processes that receive requests to relay a transaction via an HTTP interface and send them to the network via the RelayHub.
With ganache running, you can start a relayer in a new terminal using the following command from the OpenZeppelin GSN Helpers:
$ npx oz-gsn run-relayer
Deploying singleton RelayHub instance
RelayHub deployed at 0xd216153c06e857cd7f72665e0af1d7d82172f494
Starting relayer -Url ...
RelayHttpServer starting. version: 0.4.0
...
Relay funded. Balance: 4999305160000000000
The last step will be to fund our Counter contract. GSN relayers require recipient contracts to have funds, since they will then charge the cost of the relayed transaction (plus a fee!) to it. We will again use the oz-gsn set of commands to do this:
$ npx oz-gsn fund-recipient --recipient 0xCfEB869F69431e42cdB54A4F4f105C19C080A601
Cool! Now that we have our GSN-powered contract and a local GSN to try it out, let’s build a small (d)app.
Creating the Dapp
We will create our (d)app using the create-react-app package, which bootstraps a simple client-side application using React.
$ npx create-react-app client
First, create a symlink so we can access our compiled contract .json files. From inside the client/src directory, run:
$ ln -ns ../../build
This will allow our front end to reach our contract artifacts.
Then, replace client/src/App.js with the following code. This will use OpenZeppelin Network JS to create a new provider connected to the local network. It will use a key generated on the spot to sign all transactions on behalf of the user and will use the GSN to relay them to the network. This allows your users to start interacting with your (d)app right away, even if they do not have MetaMask installed, an Ethereum account, or any Ether at all.
// client/src/App.js
import React, { useState, useEffect, useCallback } from "react";
import { useWeb3Network } from "@openzeppelin/network/react";

const
          Increase Counter by 1
        </button>
      </React.Fragment>
    )}
  </div>
  );
}

export default App;
Great! We can now fire up our application by running npm start from within the client folder. Remember to keep both your ganache and relayer up and running. You should be able to send transactions to your Counter contract without having to use MetaMask or have any ETH at all!
Moving to a Testnet
It is not very impressive to send a local transaction in your ganache network, where you already have a bunch of fully-funded accounts. To witness the GSN at its full potential, let’s move our application to the Rinkeby testnet. If you later want to go onto mainnet, the instructions are the same.
You will need to create a new entry in the networks.js file, with a Rinkeby account that has been funded. For detailed instructions on how to do this, check out Deploying to Public Test Networks.
We can now deploy our Counter contract to Rinkeby:
$ openzeppelin create
✓ Compiled contracts with solc 0.5.9 (commit.e560f70d)
? Pick a contract to instantiate: Counter
? Pick a network: rinkeby
✓ Added contract Counter
✓ Contract Counter deployed
? Call a function to initialize the instance after creating it?: Yes
? Select which function * initialize()
✓ Setting everything up to create contract instances
✓ Instance created at 0xCfEB869F69431e42cdB54A4F4f105C19C080A601
The next step will be to instruct our (d)app to connect to a Rinkeby node instead of the local network. Change the PROVIDER_URL in your App.js to, for example, an Infura Rinkeby endpoint.
We will now be using a real GSN provider rather than our developer environment, so you may want to also provide a configuration object, which will give you more control over things such as the gas price you are willing to pay. For production (d)apps, you will want to configure this to your requirements.
import { useWeb3Network, useEphemeralKey } from "@openzeppelin/network/react";

// inside App.js#App()
const context = useWeb3Network('' + INFURA_API_TOKEN, {
  gsn: { signKey: useEphemeralKey() }
});
We are almost there! If you try to use your (d)app now, you will notice that you are not able to send any transactions. This is because your Counter contract has not been funded on this network yet. Instead of using the oz-gsn fund-recipient command we used earlier, we will now use the online gsn-tool by pasting in the address of your instance. To do this, the web interface requires that you use MetaMask on the Rinkeby network, which will allow you to deposit funds into your contract.
That’s it! We can now start sending transactions to our Counter contract on the Rinkeby network from our browser without even having MetaMask installed.
The GSN Starter Kit
Starter Kits are pre-configured project templates to bootstrap dapp development. One of them, the GSN Starter Kit, is a ready-to-use dapp connected to the GSN, with a similar setup as the one we built from scratch in the previous section.
If you are building a new dapp and want to include meta-transaction support, you can run oz unpack gsn to jumpstart your development and start with a GSN-enabled box!
Next Steps
To learn more about the GSN, head over to the following resources:
To learn how to use OpenZeppelin Contracts to build a GSN-capable contract, head to the GSN basics guide.
If you want to learn how to use OpenZeppelin Contracts' pre-made accept and charge strategies, go to the GSN Strategies guide.
If instead you wish to know more about how to use GSN from your application, head to the OpenZeppelin GSN Provider guides.
For information on how to test GSN-enabled contracts, go to the OpenZeppelin GSN Helpers documentation.
https://docs.openzeppelin.com/learn/sending-gasless-transactions
I've created a dashboard with some panels, and I am getting different event counts than when I run the reports individually. The event counts from dashboards are less than the event counts run through a report. I've read some posts mentioning that we can change some settings in the savedsearches.conf file to run the dashboard in verbose mode. I have Splunk User role access, and I don't have admin access to perform these changes. Please suggest if there is a way to get this resolved through Simple/Advanced XML dashboard configuration.
Fast mode events count: 10222
Why would you want to run this in verbose mode? The only difference between modes is field discovery.
Are you using post processing in your dashboard? Are your fields not getting passed?
The difference is not only field discovery; the count of events also differs. If I run the dashboard panel as a report in fast mode, the event count is the same as when I run the report from the dashboard. There is a mismatch of results between running the search from the dashboard and running it from a report or a general verbose mode search.
This is not true. The only difference between verbose and fast mode is field discovery. It's not going to magically change the count because of the mode.
You should first check whether you're looking over identical time ranges. If you're using relative time then you will definitely have different result counts. You should also post your search.
Hi @skoelpin,
I ran the searches in verbose and fast mode. I took screenshots to show the differences in event counts when I run the search in fast mode versus verbose mode. But I am unable to post the screenshots here. You can try to see the difference by running a simple query with a transforming command in it.
Once again, are you using non-relative time when doing this?
This is my query:
index="abc" attrs.io.kubernetes.pod.namespace="xyz" earliest=-60d@d latest=now
| rex "ERRORCODE=-(?<EC>\d{4})"
| stats count by EC
You're using relative time since you do not have latest specified. It is expected to get a different result count each time you run the search. If you used non-relative time then you should get identical counts each time you run the search.
Try running this search to test it
index="abc" attrs.io.kubernetes.pod.namespace="xyz" earliest=-60d@d latest=-59d@d | rex "ERRORCODE=-(?<Database_Error_Code>\d{4})" | stats count by Database_Error_Code
I tried the method you mentioned above. Please find screenshot below. The count is 10,222 in fast mode and the count is 10,600 with verbose mode.
Post the image with identical SPL in verbose mode.
Please find the second image in answer section.
Image looks broken
Could you please try opening it in Chrome?
I'm using Chrome.. First image shows and second image is broken. Please get this straightened out before replying back
Please see below for the verbose image I've uploaded. Please let me know if you can see it without any issue.
https://community.splunk.com/t5/Dashboards-Visualizations/run-a-dashboard-search-in-verbose-mode-through-Simple-XML/td-p/412586
I am trying to write a program that reads words from files and then puts them into arrays and then does a count on certain words. I have moved stuff around and really I am not sure that I am even on the right track. Could someone please look at this for me and tell me what I have wrong. It will not run at all. Also I am having trouble with the counting of the certain word. Any help would be greatly appreciated. This is what I have so far. I think I may have things in the wrong places. I am going to put the instruction for the part that I think I have done.
#include "stdafx.h"
#include <iostream>
#include <string>
#include <fstream>

using namespace globalType;

void printResults();
print word_count(std::string);

int find(std::string[], int word_count, std::string a_word)
{
    for(int i=0; i < word_count; ++i)
        if(words[i] == a_word)
            return i;
    return -1; // did not find word
}

int main()
{
    const int MAX_WORDS = 100;
    std::string words[MAX_WORDS];
    int words[100][length of longest word +1]
    std::ifstream("SelectWords.txt");
    while(file >> words[word_count])
        ++word_count; // word_count contains number of words read
    int words[100][30];
    int word_count[100]={0}; // initialize to zeros
    int pos = find( words, word_count, word_read_from_file);
    if(pos != -1)
        ++word_count; // if found, increment the count <<<<<<<< I am having trouble here
    system("pause");
    return 0;
}
INSTRUCTIONS:
Read the words from a file named “SelectWords.txt” into a 100-element array of strings.
Declare and initialize to zeroes a parallel array of whole numbers for word counts
Declare and initialize to zeroes a parallel array of doubles for word frequencies.
Declare two strings to hold the shortest and longest words in the data file. Initialize the shortest word to “thisisarunonsentencewithlotsofletters” and the longest word to “a”.
Declare a whole number variable to hold the total word count, and initialize it to zero.
https://www.daniweb.com/programming/software-development/threads/442577/reading-words-from-a-file-and-doing-a-count-on-a-certain-word
Theme based on the Bootstrap 5 CSS framework. More...
#include <Wt/WBootstrap5Theme.h>
Theme based on the Bootstrap 5 CSS framework.
This theme implements support for building a Wt web application styled with the Bootstrap 5 CSS framework. The Bootstrap CSS is shipped together with the Wt distribution, but you can replace the CSS with custom-built CSS by reimplementing styleSheets().
Although this theme styles individual widgets correctly, for your web application's layout you are recommended to use WTemplate in conjunction with Bootstrap's CSS classes. For this we refer to Bootstrap's documentation at..
Sets the data target for a widget.
The widget is a bootstrap element that requires a data-bs-target attribute to function (with Bootstrap JS). The target is the element that is targeted by the widget.

Reimplemented from Wt::WTheme.
https://webtoolkit.eu/wt/doc/reference/html/classWt_1_1WBootstrap5Theme.html
setup sub-command without any argument error
Bug Description
@@
     def setup(self, args):
+        if not args:
+            raise zc.buildout.UserError(
+                "setup command expects one argument.\n"
+                )
         setup = args.pop(0)
         if os.path.isdir(setup):
             setup = os.path.join(setup, 'setup.py')
Fixed in r80638
The patch (and subsequent checkin) lacked a test or a notation in CHANGES.txt.
Also, don't just tell people what they did wrong. Tell them how to make it right:
The setup command expects at least one argument, the name of the directory that contains a setup.py. Further arguments will be passed along to that setup.py
(Note the missing article "The" in Baiju's original message).
Philipp, the message has changed by Jim like this, I think it's fine:
The setup command requires the path to a setup script or
directory containing a setup script, and it's arguments.
On Oct 5, 2007, at 5:54 AM, Baiju Muthukadan wrote:
> Public bug reported:
> Index: buildout/buildout.py
> ===================================================================
> --- buildout/buildout.py (revision 80625)
> +++ buildout/buildout.py (working copy)
>
> @@
>          ep.load()(self)
>
>      def setup(self, args):
> +        if not args:
> +            raise zc.buildout.UserError(
> +                "setup command expects one argument.\n"
"one or more arguments".
Please feel free to go ahead and apply this. :)
Thanks.
Jim
--
Jim Fulton
Zope Corporation
https://bugs.launchpad.net/zc.buildout/+bug/149352
What icons do you miss in this set?
Please let us know and we’ll design them for you and release in the final version of the package!
Related posts
Please take a look at other high-quality freebies as well:
RaulMay 20th, 2009 12:41 am
Nice icons!!!
WoboMay 20th, 2009 12:41 am
awesome, gorgeous, excellent, brilliant!
DaveMay 20th, 2009 12:43 am
Very Nice Icons, Thank You
DKumar M.May 20th, 2009 12:45 am
Wow! Nice icons!!
Will use it on my site
JohnMay 20th, 2009 12:46 am
Nice set. Great effort put into it.
But the “ultimate” Icon Pack is still Silk.
SusieMay 20th, 2009 12:56 am
WOW! Amazing icons. Thanks – you’re the best!
SarahMay 20th, 2009 12:57 am
Great icon set! I love them!
Does anyone know how you can create such a preview image without having to do it all by hand?
AvishJanuary 11th, 2010 5:08 am
Hi, your name is really beautiful, :)
ChristianMay 20th, 2009 1:00 am
Wow this is really nice.
i will serious use these on my site :D
They are quit complete i think!
DOMay 20th, 2009 1:00 am
i dont think that these icons are good. just icons, and it FREE (wow, big bubble behind it:) ) still, the author seems to be not very professional to know that PayPal logo cannot be applied that way
MonieMay 20th, 2009 1:11 am
Great collection..
CharliendMay 20th, 2009 1:13 am
Really good set!!
AdamMay 20th, 2009 1:16 am
Thank you, loved the originals and can’t believe your giving the psd’s away too! thanks smashing mag and Oliver!
TomMay 20th, 2009 1:17 am
Great set! Didn’t even have to hesitate to download them.
I do miss a few social icons e.g. Facebook, Vimeo, YouTube, etc.. suggestion?
Thanks! @TomInc
Dalibor VasiljevicMay 20th, 2009 1:17 am
Nice one. It deserves to be included on my blog :)
trendezMay 20th, 2009 1:18 am
its ok but quality is not up to the mark… anyways good attempt.
IvanMay 20th, 2009 1:27 am
Great set, thanks a lot, many interesting items here !
AbnerMay 20th, 2009 1:39 am
Cool! I was looking for that exact podcast icon, with the purple ripples behind the dude. Great!
V1May 20th, 2009 1:51 am
I agree with trendez, its ok. but thats all i can say about it. Its not really smashing. but than again, its free. what else would you expect..
StrykerMay 20th, 2009 2:12 am
Cool, thanks a lot for these amazing icons
WilliamMay 20th, 2009 2:33 am
wooo 21 person to comment im the best. Great post.
JcMay 20th, 2009 2:53 am
not impressive but good sety
adewemimoMay 20th, 2009 3:01 am
Nice icons. Will be nice to have these icons in my library collection.
ItzikMay 20th, 2009 3:08 am
Amazing icons! Thanks alot
danielMay 20th, 2009 3:08 am
owesome, thanks for the sharing and hard working :)
Ian DroogmansMay 20th, 2009 3:27 am
Very nice work man! Just Love the set <3
TobiMay 20th, 2009 3:29 am
Thanks… great set!
Would like to see a shopping cart icon.
RetheeshMay 20th, 2009 3:31 am
superb icons….!!! thank you
Hastimal ShahMay 20th, 2009 3:34 am
Thanks it was great…
Looking cool icons.
Gabe HarrisMay 20th, 2009 3:39 am
WOW, those are really sexy. Can icons be sexy? Absolutely. This is exactly the set I needed, such a wide range of useful images.
Tremendous work, many, many thanks!
ANdyMay 20th, 2009 3:42 am
Great set – would be brilliant to have in Fireworks format for non-photoshop users. Is there a way to export psd to Fireworks (PNG) in editable layers?
AkeruMay 20th, 2009 4:04 am
Nice icons but they are still far away from the real ultimate icons pack :
SiGaMay 20th, 2009 4:13 am
Many thanks, Oliver, for your incredible work – it´s real generous to share this with us!
Grüße aus Österreich und weiterhin viel Glück!
JulienMay 20th, 2009 4:25 am
Many thanks for this amazing artwork… !
Brian VanaskiMay 20th, 2009 4:45 am
Thank you very much, this is nice set of icons. These will be very helpful in my future designs.
BenjaminMay 20th, 2009 4:53 am
Thank you for your hard work. This is a great resource and it is much appreciated.
LAIMay 20th, 2009 4:54 am
Thank you!
rakazishiMay 20th, 2009 4:59 am
wow! It’s great, thank you!
Jessica MMay 20th, 2009 5:29 am
Wow all free!!!!! Sweet thank you!
ChrisMay 20th, 2009 5:46 am
got to love free icons. especially high quality. thanks a ton!
Yan CharbonneauMay 20th, 2009 5:53 am
Wow! Nice! Thank you, great work!
MajerMay 20th, 2009 5:53 am
Really nice work.
Thanks for releasing them free.
JesseMay 20th, 2009 6:10 am
I hope Oliver Twardowski takes a moment today to reflect on what a bad ass he is. For he is a jolly good fellow!
tx8May 20th, 2009 6:23 am
I’d say ‘audio / speakers / headphones’ are missing ;)
OzMay 20th, 2009 6:58 am
Good work man, i’d buy you a beer!
Jasper KennisMay 20th, 2009 7:15 am
Generous:)
MaxMay 20th, 2009 7:34 am
thanks for this free set
AshliMay 20th, 2009 7:57 am
Wow!!! This is amazing– you are the best! I love free stuff! :)
Thanks, Oliver!
Richard DaviesMay 20th, 2009 8:20 am
Missing house/home, building/office, and gears icons.
Benedikt RoßgardtMay 20th, 2009 8:39 am
Cool! Thank you!
BenniMay 20th, 2009 8:45 am
450+ Icons? That´s awesome. Thanks.
shane plaseboMay 20th, 2009 9:18 am
Thanks a lot, these icons are far better than famfamfam, Are they free to use for commercial websites?
(SM) Yes, they are free to use.
JennyMay 20th, 2009 9:43 am
OMG, this set is amazing, thanks for doing – and sharing – this !!!
Tyler DiazMay 20th, 2009 11:25 am
Amazing, the first lot were great, but these are awesome!
Thanks for your hard work.
Morten RyumMay 20th, 2009 11:52 am
A lot of beautiful icons here, some are a little ehhh though.
Thank you so much for keeping me feeded with brilliant freebies! Keep it up! High quality for a price of free, I like that.
SathishMay 20th, 2009 12:18 pm
Wow!! free for commercial!!
thanks for the high quality icons, this is definitely going to come in handy many times. thanks a ton
Alex TayraMay 20th, 2009 12:23 pm
some of them are really ugly..
but 450.. ok..
Adolfo TavizónMay 20th, 2009 12:56 pm
Just add a “video” icon and it will be perfect, maybe a videocamera or something like that, but boys, this is awesome!!!!
MattMay 20th, 2009 12:58 pm
Great! Thanks for the icons. I would like some icons that convey security for an ecommerce site. Any chance you can make some for the final release?
AeroBLueMay 20th, 2009 1:43 pm
Wonderful!
Thanks a lot for these usefull icons!!!
Fr4gsterMay 20th, 2009 1:51 pm
Wow!
Really useful.
Thanks a lot for this :D
unformatikMay 20th, 2009 3:07 pm
greatttttttttt I’m very geatful
thnkX
KrangMay 20th, 2009 3:15 pm
Thank you for sharing this! Oliver Twardowski!
really beautiful icon pack.!
SteveMay 20th, 2009 4:18 pm
Nice work – thanks for sharing these icons.
dv8boyMay 20th, 2009 9:25 pm
Thanks….. Nice collection
gr8pixelMay 21st, 2009 12:13 am
many thanks!
izioSEOMay 21st, 2009 1:50 am
Big thanks … its very nice …
VijaytaMay 21st, 2009 1:54 am
awesome icons
Thanks for sharing
Farid HadiMay 21st, 2009 3:16 am
Smashing Magazine does it again! Thank you Oliver, these are great!
Karol AdamczykMay 21st, 2009 10:06 am
Amazing. Thanks!
NigasaMay 21st, 2009 2:58 pm
thank you! from Korea
Jeff GranMay 21st, 2009 11:09 pm
Incredible – thanks for releasing these. They will come in handy. Nice work!
JanMay 21st, 2009 11:40 pm
Thanks Oliver and SM for these cool icons! :)
TomaseMay 22nd, 2009 1:16 am
Thank you for sharing! Very nice to use in our webdesign
MarcMMay 22nd, 2009 9:46 am
Awesome! I’m not sure how I got by before i found smashing…
Thanks Smashing! and Thanks Oliver!!
tewoosMay 22nd, 2009 1:05 pm
really great job, I appreciate your good work!
maybe thumb up, thumb down and a heart icon can be included.
Danny HalarewichMay 22nd, 2009 2:18 pm
Thanks Oliver! These are great. The PSD source is also really nice to have.
TraiaNMay 22nd, 2009 2:49 pm
Here’s a nice collection of shopping cart icons. Have a look!
ElvisMay 22nd, 2009 11:06 pm
Beautiful! Thanks a ton :)
e-animaMay 23rd, 2009 2:39 am
thank you very much for this.
Kevin GMay 23rd, 2009 8:17 pm
Love these! A phone would be a nice addition to the set…
mighty joeMay 23rd, 2009 11:16 pm
Excellent release Oliver ! It will be a great help for my new project
adelMay 24th, 2009 12:02 am
very nice collection
ChrisMay 24th, 2009 5:07 am
Absolutely great! Many thanks!!!!
DaveyMay 24th, 2009 7:21 am
I like the comments icon. Using it (slightly recolored) on my website. ^^
abujamilMay 24th, 2009 8:24 pm
Great! thanks for the hard work.
Some says that silk still the best and the ultimate free package.
I agree but silk is a functions icons, excellent for any dynamic web app.
BUT these icons are great for other tasks
TavisMay 26th, 2009 9:32 am
Wow. Thanks for the icons, I’ve already started using a few. Good work.
KaxJune 1st, 2009 11:45 am
the email icon is missing! i mean a simple envelope… but thanks
SheillaJune 3rd, 2009 8:36 am
Lovely icons! Thanks so much Oliver!!!
Brad GillapJune 3rd, 2009 10:57 pm
Thank you so much!
Simon HarlinghausenJune 8th, 2009 11:29 am
These icons are amazing. Thanks.
Josh W.June 8th, 2009 2:38 pm
I agree with Kevin G. I would also like to see a phone.
Also a mail icon without anything in the top right corner (i.e. add, forward, alert) would be great.
Amazing set. Thank you so much!
phullsiteJune 9th, 2009 3:08 pm
def need a phone of sorts. mobile, home and office.
Francis BoudreauJune 12th, 2009 8:10 am
Very nice set, thank you!
burkanovJune 18th, 2009 9:49 am
Thanks a lot, very good icons!
Tanaka13 - Créations du NetJune 24th, 2009 10:39 pm
thank you for this nice setup;)
http://www.smashingmagazine.com/2009/05/20/flavour-extended-the-ultimate-icon-set-for-web-designers/comment-page-1/
That's probably JavaScript 1.9 or ES7 to you. If you are puzzled by the name it is probably because you haven't realized that ECMAScript has gone over to a yearly release schedule, which might not be a good thing at all.
Let's start with what's new in 2016 - not a lot really.
There are two small new features.
You can now perform a functional-programming-style search of an array for a particular element. That is, instead of:
function contains(el, array) {
  for (var i = 0; i < array.length; i++) {
    if (el === array[i]) return true;
  }
  return false;
}
You can now write:
function contains(el, array) {
  return array.includes(el);
}
That is, array.includes(el) is true if el is an element of the array. Of course you could have done something similar before using indexOf:
function contains(el, array) {
  return array.indexOf(el) !== -1;
}
However, includes is more expressive of what you mean.
There are also some minor differences between indexOf and includes. For example, it will match NaN and undefined.
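A quick sketch of the NaN difference:

```javascript
// includes and indexOf agree on ordinary values...
var arr = [1, 2, NaN];
console.log(arr.includes(2));         // true
console.log(arr.indexOf(2) !== -1);   // true

// ...but only includes finds NaN, because indexOf compares with ===
// and NaN === NaN is false.
console.log(arr.includes(NaN));       // true
console.log(arr.indexOf(NaN));        // -1
```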
Useful but hardly a revolution.
The second new feature is long overdue if you write any sort of numerical number crunching program.
JavaScript doesn't have a "raise to the power" operator. A very common error, committed by programmers who simply cannot believe that a language doesn't have an exponential operator, is to write x^2 for x squared. This might appear to work if you don't check the result because the ^ operator is exclusive OR.
The correct way to raise to a power is to use Math.pow(x,2) which doesn't look good and looks even worse when you use it in a formula, for example:
z= Math.pow(x,2) + Math.pow(y,2);
At long last you can now write raise to a power using the old Fortran ** operator and formulas look a lot better for it:
z= x**2 + y**2;
Of course, for small powers it is still usually better to write x*x for x squared:
z= x*x + y*y;
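A quick check that the two forms agree — and note that ES2016 also adds the compound assignment form:

```javascript
var x = 3, y = 4;
// The new operator and Math.pow produce the same result.
console.log(x ** 2 + y ** 2);                  // 25
console.log(Math.pow(x, 2) + Math.pow(y, 2));  // 25

// The compound assignment form also works.
var z = 2;
z **= 10;        // same as z = z ** 10
console.log(z);  // 1024
```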
So that's it - ES 2016 summed up.
Welcome additions but not much to show for a year's worth of deliberations.
The whole point seems to be that small incremental changes are the new normal for JavaScript, instead of having to wait years for a mega upgrade that might not even happen. This seems sensible, but there is a downside.
Now you have to worry if your code is ES 2015 or 2016 compatible. Is it really worth having a JavaScript engine support issue for just two small additions?
With big changes you can make a conscious decision to write in a clear subset of the language; small changes mean you have a whole set of things to keep in mind. Imagine in a few years time when you need to look at a "can I use" matrix for ES2020.
Of course, in an ideal world all JavaScript engines would upgrade as soon as the standard was out and you could write the latest code and expect it to run anywhere.
But the world isn't ideal.
ECMAScript® 2016 Language Specification.
https://www.i-programmer.info/news/167-javascript/9845-ecmascript-2016-approved.html
OData Scaffolding
With the release of Visual Studio 2013 RTM, we added support for scaffolding OData controllers with Entity Framework. In this blog post we will cover the following topics:
• Scaffolding an OData controller with Entity Framework on a Web API 2 project.
• Extra configuration steps required to setup OData scaffolding in a MVC project.
Scaffolding an OData controller with Entity Framework on a Web API 2 project
Create a Web project using ASP.NET Web Application template and select Web API. Create the following model classes in the Models folders of the project
public class Customer
{
public int CustomerId { get; set; }
public string CustomerName { get; set; }
public ICollection<Order> Orders { get; set; }
}
public class Order
{
public int OrderId { get; set; }
public string OrderName { get; set; }
public Customer Customer { get; set; }
}
Build the project.
Right click on the Controllers folder and select “New Scaffolded Item”. As an alternative, you can also select “Controller”
Choose “Web API 2 OData Controller with actions, using Entity Framework”
In the “Add Controller” dialog, name the controller “CustomerController”. Choose Customer class as the model in the dropdown menu and click on “New Data Context”. If you check “Use async controller actions”, it will create an OData controller with async methods. You can look at the blog topic for async controllers here.
This creates an OData Controller for Customer. To get this code running, we need to follow the instructions in readme.txt that gets generated once a controller is created.
As mentioned in readme.txt, look at the instructions in CustomerController.cs. Copy the using statements (that don’t already exist) in the instructions of CustomerController.cs file to the top of the Webapiconfig.cs file. Copy the rest of the statements in the instructions of CustomerController.cs and place them inside the Register method of WebapiConfig.cs. After you are done, your Webapiconfig.cs should look like this:
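The screenshot of the finished file did not survive here. Assuming the default names the scaffolder generates (the namespace, entity set names, and the exact route-mapping call are illustrative and should match what your readme.txt shows), WebApiConfig.cs ends up looking roughly like this:

```csharp
using System.Web.Http;
using System.Web.Http.OData.Builder;
using WebApplication1.Models;

namespace WebApplication1
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Lines copied from the instructions in CustomerController.cs:
            // build an EDM model with both entity sets and map the OData route.
            ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
            builder.EntitySet<Customer>("Customer");
            builder.EntitySet<Order>("Order");
            config.Routes.MapODataRoute("odata", "odata", builder.GetEdmModel());

            // The Web API routes that were already in the template.
            config.MapHttpAttributeRoutes();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
```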
If your model has one or more navigation properties, besides the CRUD actions for the entity, the Get method is created for each navigation property too
Build the project. Now you are all set to run your application and use the OData actions in Customer controller.
Do note that OData is case-sensitive, so when calling your actions, make sure you call them with right capitalization. For instance, the URI for the GetCustomer actions is:. Each action has its relative URI as a comment above it for guidance.
If you scaffold OrderController too in the similar way mentioned above, do note that generated instructions in OrderController.cs file will include adding Order and Customer entity sets (again) to Register method of WebapiConfig.cs file. Since these were already added once, you don’t have to add them again.
Extra configuration steps required to setup OData scaffolding in a MVC project
If you are using an MVC project, apart from adding the ODataRoute to the Webapiconfig.cs file, you should also modify the Global.asax.cs file as explained in the readme.txt file.
Hope you find this helpful. Thanks to all my team members for reviewing this blog.
Related Posts
https://devblogs.microsoft.com/aspnet/odata-scaffolding/
- Class AES, RSA, SHA, Base64 for Ext JS 4
- GridPanel switch "views"
- Problem loading custom views in my viewport
- Reloading a tree
- Problem with loading view
- MVC: How can I access controller methods/properties from view
- How to preserve filter on gridview/store with direct proxy upon reload
- Why does 'viewready' event on GridView not fire?
- Accessing / finding Controls in a View
- how to disable the hiding behavior with the legend?
- How to create, destroy, then re-create a custom window?
- Wrong collapsing with border layout (MVC)
- Sencha list with column lines.
- How can I get an item from a string array loaded in a store?
- Code not displaying complex JSON file in a Panel
- Ext.app.Application. Objects creation sequence and using Options Objects
- Problem creating custom panel by inheriting panel
- EXTJS Portal Example - Unable to start
- [EXTJS 4] ComboBox select event - how to get selected record ?
- ExtJS replacements for tree dropZone's onNodeOver etcpp.
- Multiple clicks in the button produces multiple window instances
- One panel and differents grids
- Like to have vetical scrollbars in a grid panel
- Can I use a memory proxy as a data source for infinite scrolling
- Layouts and Scrolling
- Strange behavior of Controller.control()
- Correct way to load a single store from multiple similar JSON sources?
- Local combobox with any match filter.
- Group Header template
- Button height in ie 7
- Transient Model Fields
- "Best" way to incorporate Ajax requests to fill in XTemplates in the MVC Architecture
- Looking for a Simple example with .net WCF and EXT JS
- CellEdit Plugin Bug [4.0.7]
- Load a Store with 'GET' method
- How to scroll selected record into view
- Right Align Window
- A problem with Norwegian characters from json file.
- Ext.view.View use templates
- Duplicate button on Row Grid?
- Paging Store does not work for local data
- Combobox column with dynamic data in ExtJS 4 Grid
- When should I use a feature and when a plugin?
- How to update a databse?
- How to post data?
- Ext JS 4 and Selenium
- why combo filter start after 4 letter in remote queryMode ?
- Buffered Grid Example - question
- What is the proper way to create listeners on reusable MVC views?
- grid reorder after insert a row
- MVC: Controller reference from Grid's Action Cell handler.
- Encoding colon?
- Combobox does not show selected value
- appending checkbox tree from one store and filling in values from another...
- gmappanel and form in window
- ExtJS 4 MVC - how to load only current controller and its related data ?
- Unable to configure CK Editor with ExtJS4.0
- How do I use a spinner where the display value is different from value?
- Ticker / Live Updated Chart
- Ext.Msg.alert is set behind iframe in IE
- 'locked' is null or not an object error in IE 8
- which js files are required for any basic ext 4 application ?
- Pagination with show all option
- disable button if selected row not got any value on specific column
- extjs file into an i18n issue
- namespace is undefined EXTJS4
- Namespace Problem, Inheritance
- Ext js 4: How to sripe path from the filefield inorder to retrieve filename ?
- filter by surname and firstname
- How to add a field in the form when a button is clicked
- How to add a field in the form when a button is clicked
- Project Management System using ExtJS 4 with Java+Spring+Hibernate
- Portal Example
- Issue with piechart
- reload filtered store when closing dialog
- Fit to parent plugin available for Extjs4?
- problem with require under chrome browser
- export Ext to .txt
- How best to implement similar configuration of proxy across various models and stores
- AutoScroll items inside multiselect (multiselect-demo.html)
- group in grid
- TreeStore : appendChild & id/internalId problems
- combobox queryMode local still shows loading
- MVC Architecture with Visual Studio 2010
- Accessing complex Json data in grid
- Export Grid to Excel
- Adding config to a subclass
- new on sencha please help
- Changing scope of functions called
- URGENT :Add new row when user reaches last cell of last row in grid using Ext JS 4
- Delayed Filter
- Dragging events during action
- Where do i get 4.1?
- Ext JS combo box
- Grid remains empty after store.load.
- how to render (modify) combo field display value?
- Ext.Date monthNames, monthNumbers, getMonthNumber
- Load substore in a Grid
- Is there a multi-grouping feature in Ext JS 4?
- using inline javascript code in Xtemplates
- Linked-ComboBoxes Error
- How to make tree panel in navigation as attached screenshot
- Ext.Loader extra param?
- EXT COMBO
- Ext Store bug?
- Store.group method is not working as expected
- Ext.tree.panel is not loading
- Problem with adding new fields to FormPanel in IE
- ExtJS an IE9 don't work
- How to add new menu Item to Grid Column Header
- Add a link in a panel header
- ExtJS MVC pattern + JAWR
- POST Request with extraParams
- Model Asscociations without full path?
- Add Record Button in Grid GroupHeaderTpl Alignment
- Grid & RowEditing - Records still flagged as "dirty" after update
- Grouped Grid Grouping "Fake/Non-Existing Records"
- Error installing sass:invalid gem format for C:/Ruby193....
- Cannot hide simple button
- Event Controllers Don't Exist for Certain Components?
- Portal Demo Locally
- grid.property.Grid - change the text of the columns?
- Access a Toolbar->ComboBox from a controller?
- ExtJS tooltip for form fields - NumberField specifically
- Very basic but in need
- login form
- Ext.ux.J2EEAuth extension
- Help with Feed Viewer :: noob
- Store, Models, and who is calling Base.initConfig()?
- set an id to tabpanel
- Creating linkedin like Inbox Menu in Ext JS 4
- formsubit does not work
- Confused where to put "beforeDestroy" method of "beforedestroy" event.
- Question about the data model and nested data...
- GridPanel after load of zero items does not scroll vertically in FF 8.0
- Hello world application not working in Firefox 7.0.1
- Hide column series in chart & zooming to selection
- Script error on Ext JS tree
- Exjts 4 and ASP MVC3
- Object reference of two different files
- DataView arrow navigation
- Ext.state.SessionProvider whats the alternative in Ext JS 4
- ExtJS inline tree editing
- Open an extjs Windows on click on a html link
- Changes required at Server side JSON for using Paging.
- Datepicker change month listener
- RadarAxis drawAxis method
- Can we export graphs and charts created using ExtJS to PDF or JPG or PNG format?
- Problem with a ComboBox
- Scrollbar in nested Layouts/Panels
- c is not a constructor
- Mutiple filters on a store
- check checkbox to disable/enable triggerfiled
- GridPanel VS TreeStore VS treePanel
- extjs4 - bulk insert in store cause a performance issue
- Add Ext.Element to Ext.Component
- Can't seem to bind ajax data to grid
- How to show tooltip as popup content information of current record when mouse over
- Is it possible to get NOT JSON response using Ext.data.Store?
- How to change a component's config dynamically?
- Create range reference chart
- Editable Grid not working
- Ext JS 4.0.7 - Grid Scroll Bars Not Showing
- TreeStore load from custom JSON
- Using a row editor in multiple widgets
- How to put the focus on textfield.
- Ext.data.proxy.Ajax and WCF services via JSON
- HtmlEditor & raw text paste
- Ext.extend defined in Class does not work like the old Ext.extend defined in Ext
- Why is my gridpanel empty when it loads?
- Move Focus on next Row on same column on Enter Key
- One instance of Model belong to multiple Stores
- Load XML with attributes in a store
- Popup detail date window
- Giving onclick for list items in Extjs4
- How to implement Grid & Tree Grid Scroll To?
- How to clear cookies when click on hyperlink (log out)
- Grid panel: style CELL according to its value (not whole row)
- Ext JS Direct error in IE 8 "Object doesn't support this propert or method"
- Collapsing panel breaks in hbox layout?
- portal example
- Grid is not populating if used XML file as a data strore
- Probelm with GET
- How to display combobox value in a textfield?
- Problem with Grid Action Columns
- STORE / TREE STORE : Whata a filter is really supposed to do ?
- Did anyone has any luck to make TreeGrid work with locked columns in 4.0.7?
- Custom css content htmleditor
- Call Controller Method from outside Ext namespace
- Adding an additional datastore when using single-file grid definition+data
- Costum tab.Panel Look TabItem
- Get selected data from an Ext.view.View
- How to RowExpander with Ajax Request in Extjs4
- Treepanel Listener event does not works ExtJs 4.0.7
- Ext.Msg, Ext.MessageBox : undefined ! :o
- Get selected text
- Problem using combo as a filter for grid panel data.
- Dragging Window causes window to hang ("Grey out")
- How to get the rendered HTML of a component.
- How Combobox chain event
- Tree grid with checkbox model and paginaton
- Accordion alignment gaps in IE
- How to reload Store?
- Combobox Expand firing two events
- Float panels
- Extjs4 alignment issues
- MVC: Attach event in controller
- How to apply Ext.ux.grid.FiltersFeature to Ext.tree.Panel
- Error Message: Ext.Msg is undefined
- selected tree node
- display the values of a form in a grid
- problem with getting data into triggerfiled
- Ext.MessageBox.prompt to password
- Get Cell value in selModel: 'cellmodel' and select multiple cells
- After expanding the tree , its not showing all the nodes
- datefield - show only month and year and close after select
- No ID, please!
- ajax combobox
- Cell containing other panels
- Inheritance allowed in mixins?
- Graphs not rendering in firefox 8.
- typeAhead in Combobox inside rowEditing Grid
- mp
- Rendering issue with stacked chart
- question regarding namespace is undefined
- Update grid with group titles radio buttons.
- window.maximize not working in 4.0.7
- Can a proxy or reader redefine a model's fields using a JSON response?
- Simple Ajax
- Get element/item based on it's xtype
- Is there any Opening in India
- How can i get the size of the file before upload ?
- Combobox List Width
- Place to look for Sencha programmers
- Date field change date picker style
- Some Problems around Ext.draw.Component
- Stand alone Grid Multiple Windows
- If this radiobutton do this else this problem
- does extjs4 has dropped feature of markDirty:false to hide red triangle ?
- Timefield in EditorGrid: prior value in pulldown
- What is the proper way to style tabs?
- Disable reset of certain form fields
- reload a Tree Panel
- Removing border line between locked and unlocked parts of the grid
- How to get a specific editor component from a grid?
- Cannot set CheckboxSelectionModel for lockable Grid
|
http://www.sencha.com/forum/archive/index.php/f-87-p-9.html
|
CC-MAIN-2014-41
|
refinedweb
| 1,731
| 53
|
This Week on p5p 1999/10/24
- $^O
- STOP blocks and the broken compiler
- Blank lines in POD
- PERL_HEADER environment variable
- Out of date modules in Perl distribution
- Enhanced UNIVERSAL::isa
- sort improvements
- glob case-sensitivity
- reftype function
- New perlthread man page
- Win32 and fork()
- Module Bundling and the proposed import pragma
- cron daemon runs processes with $SIG{CHLD} set to IGNORE
- Day range checking in Time::Local::timelocal
- New quotation characters
- Lexical or dynamic scope for use utf8?
- Full path of cwd in @INC
- A Strategic Decision to use the Perl Compiler
- Happy Birthday Perl 5
- Unicode Character Classes Revisited
- Sarathy says `Yikes’ again
- Various
$^O
There was a gigantic discussion of $^O and related matters. This was brought on by Tom, who wants to write a program that cross-checks the SEE ALSO sections of the man pages. The problem: every version of Linux has a man command that is slightly incompatible with every other. In particular, each system has a different idea of where the pages are and how they are organized. Tom wants his program to find out what sort of Linux it is on, `Red Hat’ or `Debian’ or whatever, but $^O (and also the uname command) only says linux, which is not enough.
Various discussion ensued. Suggestion 1: Make $^O look like linux-redhat or something. Objections: Changing $^O will break stupid programs that have $^O eq 'linux' instead of $^O =~ /linux/. Putting redhat into $^O will not actually solve Tom’s problem, at least not in general, since the semantics of redhat changes from release to release.
Suggestion 2: Add a Config.pm field for the distribution vendor. Objections: Config.pm only reflects the state of the system at the time Perl was built, not at the time your program runs. Possible solution to this: Have Config determine the OS at run time, at the moment the information is requested. Second objection: If Config can do this, why can’t Tom’s program do it the same way, but without Config? Well, OK, the nastiness could be encapsulated in a module. But Sarathy didn’t like the idea of putting this dynamic information into Config. He suggested:
Suggestion 3: A new module, OS, to provide functions for looking up this sort of thing dynamically. There were other similar suggestions. Dan Sugalski suggested adding a new magical %^O variable that would behave similarly. Nick Ing-Simmons suggested an OS_Info module. This multiplicity suggests that I was the only one following the whole tedious discussion. (And, if so, that everyone else had good sense.)
Gosh. When I took this job, I knew there would be occasional weeks where there was some gigantic but trivial discussion. But I wasn’t expecting one so soon.
If there was a conclusion to this discussion, I was not able to find it. Maybe there will be an update next week, or maybe everyone will just get tired of the whole thing and forget about it. Tom eventually punted on the problem, and his program now assumes that it is running under Red Hat.
In the midst of this, there were some sidetracks I found interesting. There was discussion of Sarathy’s hack to create fork() on forkless Microsoft OSes (more about this below). Tom Horsley had a really delightful rant about Configure, which unfortunately is too long to reproduce here:
[Configure] acts, in fact, as though it were a compressed archive chock full of config.h files for all kinds of different systems, and pressing the button merely unpacks one of the files.
The problem comes when you attempt to extract a file that was never put into the archive in the first place. …
The replies to this are worth reading too.
STOP blocks and the broken compiler
One of the changes in perl 5.005_62 was that END blocks would no longer be run under -c mode. Nick Ing-Simmons wanted to know how the compiler would work; it had formerly worked by enabling -c mode, walking the op tree, and dumping out the compiled code in an END block, which was executed after the program file was parsed and compiled. (This may be an incorrect description; I would be grateful for corrections here.) Disabling END blocks under -c mode, while correct, would break the compiler.
When he made the change, Sarathy planned a workaround, which you can find in perldelta if you are interested. But the workaround is annoying for the compiler, and Sarathy suggested that the best solution would be STOP blocks. These would be run after the compilation phase, but before the run phase; they are in contrast to INIT blocks, which are run at the start of the run phase. Normally, these two things happen at almost the same time, with STOP blocks immediately before INIT blocks. But if you think of a compiler module, which pauses after the compilation phase, writes out the compiled code and exits, the usefulness of STOP becomes clear.
Vishal Bhatia pointed out that this would solve an existing compiler bug: END blocks are presently not executed at all by compiled scripts. If the B:: modules did their work in STOP blocks instead of END blocks, they would not have to usurp the END blocks.
Blank lines in POD
Larry Virden submitted a minor doc patch: There was a line which looked empty, but which contained white space. This prevented the POD parser from recognizing a =head directive on the following line, because directives are only recognized when they begin `paragraphs’, and a line is not deemed to end a paragraph unless it is entirely empty.
It appears that this annoying behavior is finally going to be fixed. I am delighted, because I had complained about this back in 1995.
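The distinction at issue is between a truly empty line and one that merely looks empty. A tiny illustration of the two rules (my own Python sketch, not the POD parser's actual code):

```python
def is_blank(line):
    """Whitespace-aware rule: a line of spaces and tabs counts as blank."""
    return line.strip() == ""

def strictly_empty(line):
    """The old, strict rule: only a genuinely empty line ends a paragraph."""
    return line in ("", "\n")

line = " \t\n"                 # looks blank, but contains a space and a tab
print(is_blank(line))          # True
print(strictly_empty(line))    # False
```

Under the strict rule, the whitespace-bearing line does not end the paragraph, so a directive on the next line goes unrecognized.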
PERL_HEADER environment variable
Ed Peschko wanted a new PERL_HEADER environment variable, somewhat analogous to PERLLIB or PERL5OPT, which would contain code that would be prepended to the source file before it was executed. He wanted this so that he could make an environment setting to tell Perl to always load up some standard, locally defined modules before compiling the rest of any program.
Many people found persuasive reasons why this would be a bad thing to do, and many other people suggested ways that it could be accomplished. For example, you could set PERL5OPT to -MFoo -MBar.
Out of date modules in Perl distribution
Michael Schwern pointed out that there are several modules being distributed with Perl for which more recent versions exist on CPAN.
It turns out that many of these cases are for good reasons. For example, Ilya keeps the version number of Devel::Peek on CPAN higher than the version in Perl so that if you ask CPAN.pm to install Devel::Peek, it does not go and try to install the latest version of Perl for you. (Why does it do that, anyway?)
However, some modules really are out of date in the distribution. Sarathy asked that authors of modules in the Perl distribution send him a note when they update their modules.
Enhanced UNIVERSAL::isa
Mark Mielke suggested enhancing isa so that you could give it an object and several class names, and it would return true if the object belonged to any of the classes. At present, only one class is allowed. No conclusion was reached. My guess is that this is not going in, because it is easy to write such a function if you want it.
sort improvements
I don’t fully understand this yet, but it looks interesting. It appears that Peter Haworth wants to have Perl notice when a sort comparator function is prototyped with ($$), and to optimize the argument passing to such a function to get the speed of the $a-$b hack, but without actually using $a and $b. Then you could use any two-argument function as a sort comparator, but it would be as fast as if it were using the special $a-$b method. I have asked Peter to confirm this, and I will report back next week.
Note added 26 October: Peter confirms that I have it mostly right, but adds:
The gains aren’t so much for performance, as getting rid of package annoyances. If I manage to get this patch working properly, you can use a comparator function from a different package, and it can just get its arguments from @_, rather than ${caller.'::a'} and ${caller.'::b'}. Also, Ilya says this will allow XSUBs to be used as comparators, but I don’t know the history of this well enough to know why they can’t be used now.
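The package annoyance Peter mentions exists because Perl comparators read the package globals $a and $b rather than taking arguments. In a language whose comparators receive explicit arguments the issue does not arise; a Python sketch for contrast (the function and names are mine):

```python
from functools import cmp_to_key

def by_length(x, y):
    """A plain two-argument comparator: negative, zero, or positive result.
    It works no matter which module it is defined in."""
    return len(x) - len(y)

words = ["pear", "fig", "banana"]
print(sorted(words, key=cmp_to_key(by_length)))  # ['fig', 'pear', 'banana']
```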
glob case-sensitivity
Perl 5.005_62 optionally has a new built-in implementation of the glob function; it does not need to call the shell to do a glob. Paul Moore pointed out that the new internal globber is case-sensitive, even on his Win32 system with the case-insensitive filesystem; formerly, glob had been case-insensitive.
Some discussion ensued about what to do. Sarathy seemed inclined to let the new globber continue to be insensitive on case-insensitive filesystems, and vice versa; on Windows systems there is an API for finding this out. He asked Paul for a patch for this. He said that people could use the File::Glob or File::DosGlob modules if they needed a specific semantics.
Incidentally, Larry suggested that the new glob be made the default for the beta test versions of Perl, so that it would be tested adequately.
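The same sensitivity choice shows up in other globbing libraries. Python's fnmatch module, for instance, exposes both behaviors explicitly (an analogy, not Perl's internal globber):

```python
import fnmatch

# fnmatchcase always matches case-sensitively, regardless of the host OS.
print(fnmatch.fnmatchcase("README.TXT", "readme.*"))  # False
print(fnmatch.fnmatchcase("readme.txt", "readme.*"))  # True

# fnmatch.fnmatch, by contrast, normalizes case the way the host
# filesystem would, so its answer differs between platforms.
```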
reftype function
Jeff encountered this behavior while he was writing a function to determine what kind of reference (array, hash, whatever) its argument is.
(This is more difficult than it seems. You cannot use only ref, because if you have an object blessed into a class named ARRAY, ref will return ARRAY even if the object is a hash, and you run into similar problems with classes named 0 and so forth.)
Nobody addressed the (;$) issue, but there was discussion of how to build such a function. Spider Boardman revealed that he had such a function named attributes::reftype already in the standard Perl distribution. It is written in C as an XS, which is clearly the Right Way to Do It. Sarathy said he thought that attributes.pm was a good place for the function to be.
New perlthread man page
Dan Sugalski presented for comments a draft of a perlthread man page, discussing Perl’s thread interface and thread semantics.
Win32 and fork()
Sarathy has been working for some time on making fork work on forkless Win32 systems. The idea: fork will create a new thread, running a separate copy of the Perl interpreter, which will run the fictional child process. The child process will somehow have its own current working directory, environment, open file table, and so forth. exec in the `child’ thread will terminate the thread and its associated interpreter, rather than the entire process.
Dan Sugalski: I see there’s going to be something interesting to implement for VMS before 5.6 gets released. Cool. :)
Module Bundling and the proposed import pragma
This continued from last week. Michael King split up his module functionality into Import::ShortName for module aliasing, and Import::JavaPkg, to load a whole bunch of modules in a single namespace all at once, with aliasing.
At the tail end of this discussion, several people complained that although they thought that they’d followed the documented procedure for reserving namespaces in the CPAN module list, nothing ever seemed to come of it, and their names never appeared in the list. Andreas König took responsibility for this problem. He is rewriting the PAUSE software to handle the bookkeeping, because the module list owners are too overworked to do it all manually.
Andreas asked people whose requests had been forgotten to send a reminder to the module list by the end of October, and promised to get these requests listed within 24 hours.
cron daemon runs processes with $SIG{CHLD} set to IGNORE
On some systems, the cron daemon has this bug. (It is a bug in cron, because cron should know to restore the signal handling to the default case when running a job; otherwise the job will inherit this unusual signal environment and might get unexpected results.)
Tom Phoenix added a patch to the linux hints file to try to detect this, and print out a warning at Perl build time if so. Sarathy said it was bad to put this in the hints, because it does not actually affect the build process, and that it should be documented more prominently.
Mike Guy asked: ``Wouldn’t it be better for Perl just to set $SIG{CHLD} = 'DEFAULT' automatically at startup in this case? Would it do any harm to do it in all cases?” Sarathy agreed, and put in a patch to do that, and also to issue a warning if so.
Day range checking in Time::Local::timelocal
If you ask timelocal to convert a date where the day of the month is larger than 31, it aborts with a warning like
Day '32' out of range 1..31
John L. Allen complained that this was stupid for two reasons: First, it doesn’t abort when you ask for February 30, and second, it prevents you from asking for January 280 to find out the date of the 280th day of the year. He submitted a patch that eliminated the check.
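The `January 280’ idiom works because out-of-range days normalize forward into later months. The equivalent computation, sketched with Python's datetime for concreteness (not the Time::Local implementation):

```python
from datetime import date, timedelta

# "January 280" of 1999: start at January 1 and add 279 more days.
day_280 = date(1999, 1, 1) + timedelta(days=279)
print(day_280)  # 1999-10-07, the 280th day of the year
```

With the range check in place, timelocal rejects this request before the normalization can happen.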
A patch like that had been in before, but Sarathy took it out because it caused a test failure in libwww; Sarathy wants it to be conditionalized on a nocroak variable or something, for backward compatibility. In the ensuing discussion, Jonathan Scott Duff made a list of new features he’d like to see in Time::Local, features like `fast’ and `correct’.
Mike Guy said that he had worked on such a thing, but run into some annoying backward compatibility issues. For example, the current timelocal returns -1 on an error. But because -1 also indicates a valid time before 1970, timelocal cannot work for dates before 1970 and be backward-compatible with the current version at the same time. Also, the existing timelocal has a very nasty interpretation of the year: 2070, 170, and 70 all mean the year 2070, contrary to good sense and the documentation.
Sarathy said he would accept the timelocal replacement if there were a command to enable the improved behaviors that were not backward compatible with the old behavior.
New quotation characters
Kragen Sitaker asked, on comp.lang.perl.misc, whether it wouldn’t be nice for Perl to recognize additional kinds of parentheses once Unicode support is really in. For example, U+3010 and U+3011 are left and right `black lenticular brackets’. The q operator understands q{...}, q(...), q[...] and the like; why not the black lenticular brackets also?
Kragen also suggested that the Japanese `corner quote’ characters U+300C and U+300D (for example) could be used to imply the qr operator, in the same way that backquotes presently imply the qx operator.
Ilya thought it was worth forwarding to p5p: ``Once Unicode goes in, one would not be able to change matching rules. So it should be at least discussed early.” But nobody had anything to say about it.
Lexical or dynamic scope for use utf8?
It is presently lexically scoped. There was discussion some weeks ago about whether to make it dynamically scoped; then the caller of a function could set the utf8 behavior of the library functions it called. I did not understand the issues at the time, so I cannot rehash them here.
Sarathy asked for informed persons to contribute their thoughts, but there were none.
Full path of cwd in @INC
Ed Peschko asked if it would be possible to include the full path of the current directory in @INC, rather than just a dot. The usual objections: 1. There is already an easy way to put the full path in, if that is what you want: use the FindBin module. 2. It would be expensive for the large population that did not need it.
A Strategic Decision to use the Perl Compiler
Sounds like a bad move to me, but David Falk had this to say for himself:
I am CEO of Dionaea Corporation, a software company that designs performance monitoring tools for UNIX, and we have made the strategic decision to use the perlcc compiler as the hub of our code development. Overall this has been a good decision for us, but we have run into several snags with the compiler.
He then reported the bugs. They looked pretty simple, but nobody replied. Scary to think that someone’s family might starve in the streets because of problems in the Perl compiler.
Happy Birthday Perl 5
Actually the real birthday was on 17 October, 1994, but there is an error in perlhist, so the birthday wishes arrived on the 18th. (Nobody has supplied a patch yet.)
Chris Nandor submitted a birthday patch.
Unicode Character Classes Revisited
Last week there was discussion of use of Unicode properties to define regex character classes. People interested should also consider reading the Unicode Regular Expression Guidelines.
Sarathy says `Yikes’ again
``Yikes, this one is the size of China.”
Various
A large collection of bug reports, bug fixes, non-bug reports, questions, answers, and a small amount of flamage and spam.
Also, Tuomas Lukka continues to send email with an incorrect Date: header.
Until next week I remain, your humble and obedient servant,
|
https://www.perl.com/pub/1999/10/p5pdigest/THISWEEK-19991024.html/
|
CC-MAIN-2021-49
|
refinedweb
| 2,927
| 60.85
|
#include <StelSkyPolygon.hpp>
Inherits MultiLevelJsonBase.
Default constructor.
Constructor.
Constructor.
Destructor.
Draw the image on the screen.
Implements StelSkyLayer.
Return the dataset credits to use in the progress bar.
Return the server credits to use in the progress bar.
Convert the polygon information to a map following the JSON structure.
It can be saved as JSON using the StelJsonParser methods.
Reimplemented from MultiLevelJsonBase.
Load the polygon from a valid QVariantMap.
Implements MultiLevelJsonBase.
Minimum resolution at which the next level needs to be loaded in degree/pixel.
The credits of the server where this data comes from.
The credits for the data set.
Direction of the vertices of the convex hull in ICRS frame.
|
http://www.stellarium.org/doc/0.10.5/classStelSkyPolygon.html
|
CC-MAIN-2013-48
|
refinedweb
| 112
| 53.68
|
It all starts with a plan
Plan structure
A config declared in Dagger starts with a plan, specifically dagger.#Plan.
Within this plan we can:
- interact with the client filesystem
  - read files, usually the current directory as .
  - write files, usually the build output as _build
- read env variables, such as NETLIFY_TEAM in our example
- declare a few actions, e.g. deps, test & build
This is our Getting Started todoapp plan structure:
// ...
// A plan has pre-requisites that we cover below.
// For now we focus on the dagger.#Plan structure.
// ...
dagger.#Plan & {
client: {
filesystem: {
// ...
}
env: {
// ...
}
}
actions: {
deps: docker.#Build & {
// ...
}
test: bash.#Run & {
// ...
}
build: {
run: bash.#Run & {
// ...
}
contents: core.#Subdir & {
// ...
}
}
deploy: netlify.#Deploy & {
// ...
}
}
}
When the above plan gets executed via dagger do build, it produces the following output:
[✔] client.filesystem.".".read 0.0s
[✔] actions.deps 1.1s
[✔] actions.test.script 0.0s
[✔] actions.test 0.0s
[✔] actions.build.run.script 0.0s
[✔] actions.build.run 0.0s
[✔] actions.build.contents 0.0s
[✔] client.filesystem."./_build".write 0.1s
Since these actions have run before, they are cached and take less than 2 seconds to complete.
While the names used for the actions above (deps, test & build) are short and descriptive, any other names would have worked. Put differently, action naming does not affect plan execution.
Lastly, notice that even if the deploy action is defined, we did not run it.
Similar to Makefile targets, we have the option of running specific actions.
We ran the dagger do build command, which only runs the build action (and all its dependent actions).
This Dagger property enables us to keep the entire CI/CD config in a single file, while keeping the integration execution separate from the deployment one.
Separating CI & CD concerns becomes essential as our pipelines grow in complexity, and we learn about operational and security constraints specific to our systems.
Packages & imports
In order to understand the correlation between actions, definitions and packages, let us focus on the following fragment from our Getting Started todoapp config:
package todoapp
import (
"dagger.io/dagger"
"universe.dagger.io/netlify"
)
dagger.#Plan & {
// ...
actions: {
// ...
deploy: netlify.#Deploy & {
// ...
}
// ...
}
}
We start by declaring the package name, package todoapp above.
Next, we import the packages that we use in our plan.
The first import is needed for the dagger.#Plan definition to be available. The second import is for netlify.#Deploy to work.
info
Which other imports are we missing? Look at all the actions in the plan structure at the top of this page. Now check all the available packages in universe.dagger.io.
We now understand that the deploy action is the deploy definition from the netlify package, written as deploy: netlify.#Deploy
Each definition has default values that can be modified via curly brackets. This is what that looks like in practice for our deploy action:
// ...
deploy: netlify.#Deploy & {
contents: build.contents.output
site: client.env.APP_NAME
token: client.env.NETLIFY_TOKEN
team: client.env.NETLIFY_TEAM
}
// ...
We can build complex pipelines efficiently by referencing any definition, from any package in our actions. This is one of the fundamental concepts that makes Dagger a powerful language for building CI/CD pipelines.
If you want to learn more about packages in the context of CUE, the config language used by Dagger configs, check out the Packages section on the What is CUE? page.
tip
Now that we understand the basics of a Dagger plan, we are ready to learn more about how to interact with the client environment. We can read the env (including secrets), run commands, use local sockets, etc.
|
https://docs.dagger.io/1202/plan/
|
CC-MAIN-2022-27
|
refinedweb
| 588
| 57.98
|
I have a requirement to write FLAC files in Java. Earlier I was writing the audio input into a WAV file and then converting it to a FLAC file using an external converter.
I was looking into JFlac to find an API through which I can write FLAC files. I found that AudioFileFormat.Type in Java supports only the following file formats: AIFC, AIFF, SND, AU, WAVE.
I would like to have a method where I can capture the audio from the microphone and, using an API such as AudioSystem.write, write it to a FLAC file instead of a WAV file.
Please suggest a method or an API that can solve my problem.
You can use this lib. Here is a simple example using version 0.2.3 (javaFlacEncoder-0.2.3-all.tar.gz). Extract the downloaded file, then import javaFlacEncoder-0.2.3.jar into your project. For more documentation, see here:
package fr.telecomParisTech;

import java.io.File;
import javaFlacEncoder.FLAC_FileEncoder;

public class SoundConverter {
    public static void main(String[] args) {
        // Encode an existing WAV file into FLAC.
        FLAC_FileEncoder flacEncoder = new FLAC_FileEncoder();
        File inputFile = new File("hello.wav");
        File outputFile = new File("hello.flac");
        flacEncoder.encode(inputFile, outputFile);
        System.out.println("Done");
    }
}
https://codedump.io/share/Xv2UuveXuwiE/1/how-to-write-flac-files-in-java
In today’s Programming Praxis exercise we have to implement two algorithms that select random items from a list in linear time. Let’s get started, shall we?
Some imports:
import Control.Monad
import Data.List
import System.Random
I found myself doing the same thing in both functions so I factored it out. This function gives you an x in y chance of choosing a instead of b.
chance :: Int -> Int -> a -> a -> IO a
chance x y a b = fmap (\r -> if r < x then a else b) $ randomRIO (0, y - 1)
The first algorithm (selecting one item at random from a list) can be done by folding over the list, with each item having a decreasing chance of becoming the new choice.
fortune :: [a] -> IO a
fortune = foldM (\a (n, x) -> chance 1 n x a) undefined . zip [1..]
The second algorithm (selecting m items from a list of integers) is pretty much the same, except now we also have to keep track of how many items we selected. This version always goes through the entire list rather than stopping when m items have been selected, but since it still runs in O(n) and the resulting code is cleaner I went with this version.
sample :: Int -> Int -> IO [Int]
sample m n = fmap snd $ foldM (\(m', a) x -> chance m' x (m' - 1, x:a) (m', a)) (m, []) [n, n-1..1]
With random algorithms it’s always a good idea to check the distribution of the results, as was proven again today because it revealed a bug in my code.
main :: IO ()
main = do
    let dist n f = mapM_ (\x -> print (length x, head x)) . group . sort . concat =<< replicateM n f
    dist 10000 . fmap return $ fortune ["rock", "paper", "scissors"]
    dist 10000 $ sample 6 43
The frequency distribution is pretty much equal and they sum up to the correct amount, so everything seems to be working correctly.
Tags: bonsai, code, Haskell, kata, praxis, programming, random, selection
December 10, 2010 at 2:41 pm
Please tell us about the bug.
December 10, 2010 at 3:30 pm
As usual, the code went through several iterations, so I can’t recall the exact code that produced the bug, but the basic problem was that by using ints instead of floats for the random number generation I introduced an off-by-one error because the bounds of randomRIO are inclusive, which means that asking for, say, 6 samples often produced only 4 or 5 because the final odds were 1 in 2 instead of 1 in 1.
April 5, 2011 at 11:35 am
[…] we wrote some code in a previous exercise to select a random item from a list we can just use that here […]
https://bonsaicode.wordpress.com/2010/12/10/programming-praxis-two-random-selections/
Hi,
I would like to forward error messages from the login module (JAAS) to
my login-error.jsp page. To do this I created my own JAASRealm class that
overrides the findSecurityConstraints method. In this method I placed a
this.request = request, so that I can use request.getSession().getId() and
request.getSession().setAttribute(<refname>, message) to pass the
message to the login-error.jsp page. In the login-error.jsp
page I placed <%= session.getAttribute(<refname>) %>.
Here are the first lines from my JAASRealm (I called it JAARSRealm):
public class JAARSRealm extends JAASRealm {
private Request request = null;
private static final String ERRMSG = null;
public SecurityConstraint [] findSecurityConstraints(Request request,
Context context) {
this.request = request;
return super.findSecurityConstraints(request, context);
}
public void setErrorMsg(String message) {
if(message.length() != 0){
HttpSession session = request.getSession(true);
session.setAttribute(ERRMSG, message);
}
}
...
So, now the problem:
My JAAS class is losing the session after the validation of the current
user. Initially the session is there: if I log the request before the
validation, the session appears. But when I try to get the current
session after the validation in my setErrorMsg method, sometimes I get
the session and sometimes not.
To make things more confusing: in my development environment it works fine,
but not with all browsers. In the deployment environment it doesn't
work at all; I must make two or more attempts to get an error message.
So, is this a Tomcat bug (a timing problem)? Or is there some
configuration I must do (server.xml, web.xml) to make this work?
-Thanks
- Franck
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
http://mail-archives.apache.org/mod_mbox/tomcat-users/200602.mbox/%3C1140522797.7203.29.camel@borel.ub.uni-freiburg.de%3E
Is the DRY Principle Bad Advice?
The DRY principle is probably the first software design concept you learn when you start coding. It sounds very serious and convincing, after all, it has an acronym! Furthermore, the idea of not repeating ourselves deeply resonates with the reason many of us enjoy programming computers: to liberate us from mind-numbing repetitious work. It is a concept that is very easy to grasp and explain (I still have to google Liskov substitution whenever I discuss SOLID design) and applying it usually gives that wonderful buzz your brain gets when it matches a pattern. What’s not to like?
Well, preventing repetition in code is often a good idea. But sometimes, I find, it is counterproductive in ways that I’d like to discuss in this post.
DRY code can result in tight-coupling
Sharing a single piece of code between two callers is often a great idea. If you have two services that need to send transactional email (fetch some details about the user, render a template, and send the email out), it might look something like this:
class OrderService:
# ...
def send_order_receipt(self, user_id, order_id):
user = UserService.get(user_id)
subject = f"Order {order_id} received"
body = f"Your order {order_id} has been received and will be processed shortly"
content = render('user_email.html', user=user, body=body)
class PaymentService:
# ...
def send_invoice(self, user_id, order_id):
user = UserService.get(user_id)
subject = f"Payment for {order_id} received"
body = f"Payment for order {order_id} has been received, thank you!"
content = render('user_email.html', user=user, body=body)
Look at all of that repeated code! It’s very tempting to DRY it up with:
def send_transaction_email(user_id, order_id, subject, body):
user = UserService.get(user_id)
content = render('user_email.html', user=user, body=body)
Nice! We extracted the common code between the services to a helper function and now our services look like this:
class OrderService:
# ...
def send_order_receipt(self, user_id, order_id):
subject = f"Order {order_id} received"
body = f"Your order {order_id} has been received and will be processed shortly"
send_transaction_email(user_id, order_id, subject, body)
class PaymentService:
# ...
def send_invoice(self, user_id, order_id):
subject = f"Payment for {order_id} received"
body = f"Payment for order {order_id} has been received, thank you!"
send_transaction_email(user_id, order_id, subject, body)
Much cleaner, don’t you think?
One of the promises of DRY is that it will allow us to evolve our software better; business requirements and engineering constraints change all the time, and if we need to change the way this piece of code behaves, we only change it once and it will be reflected everywhere.
In the example above, we can pretty easily change the way we fetch user information, and even the email provider we use can be changed with ease.
But applied blindly, DRY code can do the exact opposite of facilitating change. Consider our example: what if, because of a business decision, the
PaymentService's invoice mail needs to use a different template? How would we facilitate that? Or what if the
OrderService is now required to retrieve a list of purchased items and feed it into the email template? Our extraction of shared logic into the
send_transaction_email helper caused the
OrderService and the
PaymentService to become tightly coupled: you can't change one without the other.
As my good friend Ohad Basan once taught me:
“When you encounter a class named Helper, the last thing it will do is help you.”
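One way to loosen this kind of coupling (a sketch of my own, not from the original post, using hypothetical stand-ins for the post's UserService and render helpers) is to let each caller inject the parts that are likely to diverge, such as the template and extra context:

```python
# Hypothetical stand-ins for the post's UserService and render helpers.
class UserService:
    @staticmethod
    def get(user_id):
        return {"id": user_id, "name": f"user{user_id}"}

def render(template, **context):
    return f"[{template}] to={context['user']['name']}: {context['body']}"

def send_transaction_email(user_id, order_id, subject, body,
                           template="user_email.html", extra_context=None):
    # Shared mechanics stay in one place; anything likely to diverge
    # (template, extra template variables) is a parameter.
    user = UserService.get(user_id)
    return render(template, user=user, body=body, **(extra_context or {}))

# PaymentService can now switch templates without touching OrderService:
content = send_transaction_email(1, 42, "Invoice", "Payment received",
                                 template="invoice_email.html")
```

This keeps the genuinely shared mechanics in one place while leaving each service free to evolve its own email on its own schedule.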
DRY code can be harder to read
Let’s take another example. Assume we are writing unit tests for a web-server we are working on, we have two tests so far:
func TestWebserver_bad_path_500(t *testing.T) {
srv := createTestWebserver()
defer srv.Close()
resp, err := http.Get(srv.URL + "/bad/path")
if err != nil {
t.Fatal("failed calling test server")
}
if resp.StatusCode != 500 {
t.Fatalf("expected response code to be 500")
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatal("failed reading body bytes")
}
if string(body) != "500 internal server error: failed handling /bad/path" {
t.Fatalf("body does not match expected")
}
}
func TestWebserver_unknown_path_404(t *testing.T) {
srv := createTestWebserver()
defer srv.Close()
resp, err := http.Get(srv.URL + "/unknown/path")
if err != nil {
t.Fatal("failed calling test server")
}
if resp.StatusCode != 404 {
t.Fatalf("expected response code to be 404")
}
if resp.Header.Get("X-Sensitive-Header") != "" {
t.Fatalf("expecting sensitive header not to be sent")
}
}
Plenty of duplication to refactor! Both tests do pretty much the same: they spin up a test server, make a GET call against it and then run simple assertions on the
http.Response.
We can express this with a helper function that does exactly this:
func runWebserverTest(t *testing.T, request Requester, validators []Validator) {
srv := createTestWebserver()
defer srv.Close()
response := request(t, srv)
for _, validator := range validators {
validator.Validate(t, response)
}
}
I’m redacting here the exact definitions of a
Requester and a
Validator to save space but you can see the full implementation in this gist.
Now our tests can be refactored to be nice and DRY:
func Test_DRY_bad_path_500(t *testing.T) {
runWebserverTest(t,
getRequester("/bad/path"),
[]Validator{
getStatusCodeValidator(500),
getBodyValidator("500 internal server error: failed handling /bad/path"),
})
}
func Test_DRY_unknown_path_404(t *testing.T) {
runWebserverTest(t,
getRequester("/unknown/path"),
[]Validator{
getStatusCodeValidator(404),
getHeaderValidator("X-Sensitive-Header", ""),
})
}
A few interesting things to note about this change:
- It will be faster to write new, similar tests. If we have 15 different endpoints that behave similarly and require similar assertions, we can express them in a very concise and efficient way.
- Our code became significantly harder to read and extend. If our test fails due to some change in the future, the poor person debugging the issue will have to do a lot of clicking around until they have a good grasp of what’s going on: we replaced trivial, flat, straight-forward code, with clever abstractions and indirections.
But when should we DRY our code?
Pulling common code into libraries that can be shared between applications is a proven and effective practice that’s well established in our industry, surely we don’t mean to say we should stop doing so!
To help us decide when we should DRY code I would like to present an idea from a terrific book that was recently released in an updated 2nd edition: The Pragmatic Programmer by Andy Hunt and Dave Thomas:
“Good design is easier to change than bad design.”
Thomas, David. The Pragmatic Programmer, 2nd edition, Topic 8: The Essence of Good Design
In this gem of a chapter, Hunt and Thomas explore the idea that there is a meta-principle for evaluating design decisions that often collide — how easy is it going to be to evolve our codebase if we pick this specific path? In our discussions above we showed two ways in which DRYing code can make it harder to change, either by tight-coupling or by hampering readability — going against the meta-principle of ETC.
Being cognizant of these possible implications of DRY code can help decide when we should not DRY our code; to learn when we should do so, let’s return to the original scripture and re-examine this principle.
The DRY principle was originally introduced to the world by the same Hunt and Thomas in the 2000 edition of the book, and so they write:
“Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. The alternative is to have the same thing expressed in two or more places.
If you change one, you have to remember to change the others, [..]. It isn’t a question of whether you’ll remember: it’s a question of when you’ll forget.
Thomas, David. The Pragmatic Programmer, 2nd edition . Topic 9 — The Evils of Duplication
Notice that the DRY principle originally does not deal at all with the repetition or duplication of code, instead, it discusses the danger of not having a single source-of-truth representation for a piece of knowledge in the system.
When we refactored the
send_transaction_email method to replace duplicated code between the
OrderService and the
PaymentService, we confused duplicated code with duplicated knowledge. If two procedures are identical at a certain point in time, there is no guarantee that they will be required to stay identical in the future. We must be able to differentiate between procedures that are coincidentally shared and those that are essentially shared.
Coming full circle, I must admit, the DRY principle is a pretty important piece of advice after all; we should just remember that despite it being thrown around all the time:
Conclusion
- Removing duplication feels good, but is often wrong.
- The DRY principle is not about code duplication.
- The meta-principle of good design is ETC.
Originally published on placeholder on May 18, 2020.
https://rotemtam.medium.com/the-dry-principle-is-bad-advice-78c51afd5cf0
import "google.golang.org/grpc/xds/internal/client/bootstrap"
Package bootstrap provides the functionality to initialize certain aspects of an xDS client by reading a bootstrap file.
type Config struct {
    // BalancerName is the name of the xDS server to connect to.
    //
    // The bootstrap file contains a list of servers (with name+creds), but we
    // pick the first one.
    BalancerName string
    // Creds contains the credentials to be used while talking to the xDS
    // server, as a grpc.DialOption.
    Creds grpc.DialOption
    // NodeProto contains the node proto to be used in xDS requests.
    NodeProto *corepb.Node
}
Config provides the xDS client with several key bits of information that it requires in its interaction with an xDS server. The Config is initialized from the bootstrap file.
NewConfig returns a new instance of Config initialized by reading the bootstrap file found at ${GRPC_XDS_BOOTSTRAP}.
The format of the bootstrap file will be as follows:

{
  "xds_server": {
    "server_uri": <string containing URI of xds server>,
    "channel_creds": [
      {
        "type": <string containing channel cred type>,
        "config": <JSON object containing config for the type>
      }
    ]
  },
  "node": <JSON form of corepb.Node proto>
}
Currently, we support exactly one type of credential, which is "google_default", where we use the host's default certs for transport credentials and a Google oauth token for call credentials.
This function tries to process as much of the bootstrap file as possible (even in the presence of errors) and may return a Config object with certain fields left unspecified, in which case the caller should use some sane defaults.
Package bootstrap imports 11 packages and is imported by 3 packages. Updated 2020-07-01.
https://godoc.org/google.golang.org/grpc/xds/internal/client/bootstrap
As a system administrator I have realized that we can move through different specializations even if they are not our primary role. That is interesting, because one can never say one is bored!
I have seen many tutorials about exploit analysis, more about Linux and fewer about Windows, but all of them very good. I have studied this subject for a long time, and only now will I share some words that may well have been said before, but I hope this post helps somebody understand through another example. I also share how I did things (which compiler and debugger I used); there are many ways to do the same thing, and that is not taught in books.
- Get a Windows 7 Professional.
- Get an ANSI C compiler: Dev-Cpp, though Visual Studio 2017 also works.
- A debugger (OllyDbg or Immunity).
Create a vulnerable program in C:
This is the source, you can copy and paste it:
#include <stdio.h>
#include <string.h>

void doit(char *buffer)
{
    int i = 0;
    for (i = 0; i < 30; i++) {
        buffer[i] = 'A';
    }
    printf("Done doit %s !\n", buffer);
}

void main()
{
    char buffer[10];
    doit(buffer);
    printf("Done main!\n");
}
Now if you compile and test it, the program will crash:
Let's debug the program and see how this bug can be exploited:
Open the test2.exe file with the debugger of your choice; I will show the examples with Immunity Debugger. Then step forward with F8, and the .exe will do some initial setup:
The debugged program is displayed as assembly. When a function is called in assembly, this is done with the CALL instruction. When a CALL instruction is executed, the address of the next line of code (the one after the function call) is stored on the stack. This is done so the program can resume from the point where it left off when the function call finishes. This step is performed automatically by the CPU.
When a function is called, the Stack is used to store:
- Internal buffers and variables
- Saved EBP
- The return address
Our stack looks like this in this moment:
Then this function doit() is called:
Again, the “call” instruction automatically stores the Return Address and the Stack looks like this:
As told before, the function saves the return address onto the stack, then it saves the EBP (base pointer) and reserves space for the variables. Note that the stack is a LIFO (Last In, First Out) structure.
In this function, 30 'A' characters (0x41 in hex) are stored into a variable of size 10, which causes the out-of-bounds overwrite. Look at the previous picture: where the return address was located (SP:28FECC) it now says 41414141 (these are the 'A's), and this will cause the program to try to jump to that address and fail.
But not so fast: when the function ends, the LEAVE instruction is executed and control returns to the place the call came from (stack pointer 28FE9C). This is done by the RET instruction, which takes the address at SS:SP (Stack Segment:Stack Pointer) and continues there. In this case it is 0040155A.
Finally, once at instruction pointer 0040155A, the instructions are a LEAVE and finally a RETN. The RETN at 00401566 expects a return address at 0028FEC8, but there we illegally wrote a lot of 'A's (0x41 hex), which will crash the program.
Like before, the RET instruction asks "where should I go now?" It knows that the address should be at SS:SP, but our SS:SP is contaminated with noise... and so a stack-based buffer overflow occurs.
https://twilightbbs.wordpress.com/2017/08/10/basic-windows-7-exploitation-analysis/
Concept
Similar to IKImageBrowserView, but supports arbitrary drawing via icon subclasses, and is compatible with 10.4. A single-column view that scales icons to fit containers is also available. The view scales to several thousand icons with no problem, and takes advantage of multiple processors/cores when possible. Memory usage for the framework should remain low.
Here's a screenshot of one of the SamplePrograms:
Supported File and URL Types
- PDF/PostScript
- Skim PDFD
- Anything NSAttributedString can read
- http/ftp URLs and local HTML files using WebView
- QuickTime movies
- Anything ImageIO can read
- Quick Look thumbnails (on 10.5)
- Icon Services as a last resort
Availability
Compilation currently requires 10.5, but could be accomplished on 10.4 with a modicum of effort. Copy enums and typedefs from Leopard headers as needed, or possibly from the latest QuickTime headers on 10.4.
Why Use This?
If you're only supporting Leopard, IKImageBrowserView is probably faster, and will likely improve in the future. FileView is designed to be more flexible (icons scale as large or small as you need), and of course the source is available for modification. It was originally intended for use in BibDesk, so some of the functionality is problem-specific.
Code for dumpster-divers
- FVOperationQueue and FVOperation, similar to NSOperation
- priority queue with fast enumeration
- thread-safe disk cache for arbitrary data
- CGImage scaling with vImage
- Finder icon label control
- Malloc zone for reusing large blocks of memory
- Thread pooling
- demonstrates two-way view binding implementation
- demonstrates IBPlugin implementation
- demonstrates CFRunLoop sources
Any or all of these may be improved upon significantly, of course! Bug fixes and performance improvements are welcome, and feel free to e-mail with questions or comments.
API Documentation
There are only three public classes in the framework, but most of them are commented. See FrameworkDocumentation for a link to the Xcode docset.
Known Problems
- There are certainly bugs in the code, but I'm not aware of anything critical at this time.
- Apple's ATS code has a memory corruption bug or bugs that can cause a deadlock after it stomps on memory. FVCoreTextIcon seems to avoid this to some extent since it doesn't get the same font change notifications as the AppKit string drawing mechanism, but it's only available on 10.5. PDF files with embedded fonts are a likely culprit.
- Garbage collection is not supported, nor will it be supported unless someone volunteers to do it. With the present mix of Cocoa, CoreFoundation, and CoreGraphics using Obj-C, C, and C++, writing a dual-mode framework is outside the scope of a hobby project.
Users
Currently the only app using the framework is BibDesk as far as I know, which is using an older version with a bunch of local changes.
Support
Feel free to email with questions: amaxwell at mac dot com.
http://code.google.com/p/fileview/
Parameter Substitution in Pig
Motivation
This document describes a proposal for implementing parameter substitution in pig. This proposal is motivated by multiple requests from users who would like to create a template pig script and then use it with different parameters on a regular basis. For instance, if you have daily processing that is identical every day except the date it needs to process, it would be very convenient to put a placeholder for the date and provide the actual value at run time.
Requirements
- Ability to have parameters within a pig script and provide values for these parameters at run time.
- Ability to provide parameter values on the command line
- Ability to provide parameter values in a file
- Ability to generate parameter values at run time by running a binary or a script.
- Ability to provide default values for parameters
- Ability to retain the script with all parameters resolved. This is mostly for debugging purposes.
Interface
Using Parameters
Parameters in a pig script are in the form of $<identifier>.
A = load '/data/mydata/$date';
B = filter A by $0 > '5';
.....
In this example, the value of the date is expected to be passed on each invocation of the script and is substituted before running the pig script. An error is generated if the value for any parameter is not found.
A parameter name has the structure of a standard language identifier: it must start with a letter or underscore, followed by any number of letters, digits, and underscores. The names are case insensitive. The names can be escaped with \ in which case substitution does not take place.
In the initial version of the software, parameters are only allowed when a pig script is specified. They are disabled with the -e switch or in interactive mode.
Specifying Parameters
Parameter value can be supplied in four different ways.
Command Line
Parameters can be passed via pig command line using -param <param>=<val> construct. Multiple parameters can be specified. If the same parameter is specified multiple times, the last value will be used and a warning will be generated.
pig -param date=20080201
Parameter File
Parameters can also be specified in a file that can be passed to pig using -param_file <file> construct. Multiple files can be specified. If the same parameter is present multiple times in the file, the last value will be used and a warning will be generated. If a parameter present in multiple files, the value from the last file will be used and a warning will be generated.
A parameter file will contain one line per parameter. Empty lines are allowed. Perl style (#) comment lines are also allowed. Comments must take a full line and # must be the first character on the line. Each parameter line will be of the form: <param_name>=<param_value>. White spaces around = are allowed but are optional.
# my parameters date = 20080201 cmd = `generate_name`
Files and command line parameters can be combined with command line parameters taking precedence.
Declare Statement
declare command can be used from within pig script. The use case for this is to describe one parameter in terms of other(s).
%declare CMD `$mycmd $date`
A = load '/data/mydata/$CMD';
B = filter A by $0 > '5';
.....
The format is %declare <param> <value>
declare command starts with % to indicate that this is a preprocessor command that is processed prior to executing pig script. It takes the highest precedence. The scope of parameter value defined via declare is all the lines following declare command until the next declare command that defines the same parameter is encountered.
Default Statement
default command can be used to provide a default value for a parameter. This value is used if the parameter has no value defined by any other means (default has the lowest priority).
default has format and scoping rules identical to declare.
%default DATE '20080101'
Processing Order
- Configuration files are scanned in the order they are specified on the command line. Within each file, the parameters are processed in the order they are specified.
- Command line parameters are scanned in the order they are specified on the command line.
- declare/default statements are processed in the order they appear in the pig script.
Value Format
Value formats are identical regardless of how the parameter is specified and can be of two types. The first is a sequence of characters enclosed in single or double quotes. In this case the unquoted version of the value is used during substitution. Quotes within the value can be escaped. Single-word values that don't use special characters such as % or = don't have to be quoted.
%declare DESC 'Joe\'s URL'
A = load 'data' as (name, desc, url);
B = FILTER A by desc eq '$DESC';
Note that the constant given to the filter needs to be enclosed in quotes because the parameter value is the unquoted version of the string.
Second is a command enclosed in backticks. In this case, the command is executed and its stdout is used as the parameter value:
%declare CMD `generate_date`
A = load '/data/mydata/$CMD';
B = filter A by $0 > '5';
.....
The values of both types can be expressed in terms of other parameters as long as the values of the dependent parameters are defined prior to this value.
%declare CMD `$mycmd $date`
A = load '/data/mydata/$CMD';
B = filter A by $0 > '5';
.....
In this example, parameters mycmd and date are substituted first when declare statement is encountered. Then the resulting command is executed and its stdout is placed into the path prior to running the load statement.
Debugging
If -debug option is specified to pig, it will produce fully substituted pig script in the current working directory named <original name>.substituted
A -dryrun option will be added to pig in which case no execution is performed and substituted script is produced. We can also use the same option to produce just the execution plan.
Logging
Pig uses Apache Commons Logging in conjunction with log4j, and we should do the same in the parameter substitution code.
The following code can be used to instantiate a logger:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
....
class ParameterSubstitutionPreprocessor {
    private final Log log = LogFactory.getLog(getClass());
    ....
}
Note that this code will work once we integrate this into Pig.
Pig uses INFO as the default log level. Any messages that you want users to see during normal operation should be logged at this level. Anything that is only useful for debugging, should be logged at DEBUG level. Warnings should be logged at WARN level.
Error Handling
All the errors should be propagated via exceptions. (The code should not use exit calls to make sure that the caller can react to the error.)
The following exceptions should be used:
ParseExceptions - for any errors due to parsing command line or config file parameters or pig script.
If the underlying code throws an exception and the exception is derived from RuntimeException - just let it propagate
If the underlying code throws an exception that is not derived from RuntimeException, catch it and throw a RuntimeException with the original exception as the cause. (We want to make sure that we don't have to declare additional exceptions in our APIs.)
Any exception that the code originates should be either RuntimeException or its derivation if appropriate.
Design
A C-style preprocessor will be written to perform parameter substitution. The preprocessor will do the following:
- Create an empty <original name>.substituted file in the current working directory.
- Create a parameter hash that maps parameter names to parameter values.
- Read parameters from files in the order they are specified on the command line.
- Resolve each parameter:
  - Search the parameter value for variables that need to be replaced and perform replacement if needed. Generate an error and abort if replacement is needed but the corresponding parameter is not found in the parameter hash.
  - If the parameter value is enclosed in backticks, run the command and capture its stdout. If the command succeeds (returns 0), store the parameter in the hash with the value equal to the stdout of the command. If the command fails (returns a non-0 value), report the error and abort processing.
  - If the value is not a command, store it in the parameter hash.
  - If this is a duplicate parameter, warn and replace the old value with the newly generated one.
- Resolve each command line parameter in the order they are specified on the command line, using the same resolution steps as for parameters passed in a file.
- For each line in the input script:
  - If it is a comment or an empty line, copy it over.
  - If it is a declare line, resolve the parameter using the same steps as for parameters passed in a file.
  - If it is a default line, look up the defined parameter in the parameter hash. If the parameter is not found, perform processing identical to the declare line; otherwise, skip the line.
  - For all other lines:
    - Search the line for variables that need to be replaced and perform replacement if needed. Generate an error and abort if replacement is needed but the corresponding parameter is not found in the parameter hash. (Reuse the code from the parameter substitution in the declare statement.)
    - Place the substituted line into the output file.
- If -dryrun is not specified, pass the output file to grunt to execute. Otherwise, print the name of the file and exit.
- If neither -debug nor -dryrun is specified, remove the output file.
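The per-line processing described above can be sketched in Python. This is an illustrative model only, not the actual Pig implementation; quote stripping, case-insensitive names, and escape handling beyond `\$` are omitted for brevity:

```python
import re
import subprocess

# Matches $name, unless escaped as \$name.
PARAM_RE = re.compile(r"(?<!\\)\$(\w+)")

def resolve(value, params):
    """Substitute known parameters in a value, then run `backtick` commands."""
    value = PARAM_RE.sub(lambda m: params[m.group(1)], value)
    if value.startswith("`") and value.endswith("`"):
        # Dependent parameters were substituted first, as the spec requires.
        value = subprocess.check_output(
            value[1:-1], shell=True, text=True).rstrip("\n")
    return value

def substitute(script_lines, params):
    """Apply %declare/%default and perform $param substitution line by line."""
    out = []
    for line in script_lines:
        stripped = line.strip()
        if stripped.startswith("%declare"):
            _, name, value = stripped.split(None, 2)
            params[name] = resolve(value, params)            # highest precedence
        elif stripped.startswith("%default"):
            _, name, value = stripped.split(None, 2)
            params.setdefault(name, resolve(value, params))  # lowest precedence
        else:
            # A KeyError here models "generate an error and abort"
            # for a parameter missing from the hash.
            out.append(PARAM_RE.sub(lambda m: params[m.group(1)], line))
    return out
```

For example, with {'date': '20080201'} passed on the command line, a `%default date` line in the script is skipped and `$date` in a load path becomes 20080201.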
Future Features
One nice feature to add later is the ability to constrain parameter names. For instance, in the statement below the intent might be to replace only $date and leave _latest in the path.
A = load 'data/$date_latest'; ...
This can be specified with perl-style syntax:
A = load 'data/${date}_latest'; ...
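A small sketch of why the braced form matters; the regexes here are illustrative, not Pig's actual parser:

```python
import re

params = {"date": "20140901"}

# $name greedily grabs the longest identifier run, so $date_latest would
# look up a parameter named "date_latest" -- not what the author intended.
assert re.findall(r"\$(\w+)", "data/$date_latest") == ["date_latest"]

# ${name} makes the variable boundary explicit: only "date" is substituted.
line = re.sub(r"\$\{(\w+)\}",
              lambda m: params[m.group(1)],
              "data/${date}_latest")
print(line)  # data/20140901_latest
```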
Source: http://wiki.apache.org/pig/ParameterSubstitution?highlight=RuntimeException
DtSearchGetKeytypes(library call)
NAME
DtSearchGetKeytypes - Access the Keytypes array for a DtSearch
database
SYNOPSIS
#include <Dt/Search.h>
int DtSearchGetKeytypes(
char *dbname,
int *ktcount,
DtSrKeytype **keytypes);
DESCRIPTION
The DtSearchGetKeytypes function returns a pointer to the keytypes
array of the specified database. The caller may modify the is_selected
member of any DtSrKeytype but should not alter any other member
values.
This function may be called anytime after DtSearchInit.
ARGUMENTS
dbname Specifies for which database the keytypes are requested. It
is any one of the database name strings from the array
returned from DtSearchInit or DtSearchReinit. If dbname is
NULL, the first database name string in the array is used.
ktcount Specifies the address of the integer where the keytypes
array size will be stored.
keytypes Specifies the address where pointer to keytypes array will
be stored.
RETURN VALUE
Returns DtSrOK and keytypes pointer and size.
Any API function can also return DtSrREINIT or the return codes for
fatal engine errors, as well as messages on the MessageList, at any
time.
SEE ALSO
dtsrcreate(1), DtSearchQuery(3), DtSrAPI(3), DtSearch(5)
Formatted: January 24, 2005
Source: http://nixdoc.net/man-pages/HP-UX/man3/DtSearchGetKeytypes.3.html
Links from class:
Recommended Java API pages
There are a number of important features in Java that can help you write smaller, faster, bug-free code. Here are some of my favorites:
- String
- Collections
- Arrays
- Math
- StringBuilder – use this instead of StringBuffer when building up strings incrementally
- Scanner
- BufferedReader – for some problems, it can be easier to use BufferedReader than Scanner
Union-Find Disjoint Sets
Section 2.3.2 in Competitive Programming gives a nice description of Union-Find Disjoint Sets. Below is Java code that could be used to solve problem C or any union-find problem in general. This data structure comes up again and again in programming contests.
public class UnionFind {
    int sets[] = new int[100];

    public void init() {
        for (int i = 0; i < sets.length; i++) {
            sets[i] = i;
        }
    }

    public void union(int s1, int s2) {
        sets[find(s2)] = find(s1); // link the roots, not the raw elements
    }

    public int find(int index) {
        if (sets[index] == index) {
            return index;
        }
        return sets[index] = find(sets[index]); // path compression
    }

    public void run() throws Exception {
        init();
        /* Do stuff */
    }

    public static void main(String[] args) throws Exception {
        new UnionFind().run();
    }
}
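For comparison, the same structure is tiny in a scripting language. This is a hedged Python sketch with invented names, showing the two essential ideas (path compression in find, and linking roots in union):

```python
# Union-find with path compression; parent[i] == i marks a root.
parent = list(range(10))

def find(x):
    if parent[x] != x:
        parent[x] = find(parent[x])  # path compression
    return parent[x]

def union(a, b):
    parent[find(b)] = find(a)  # always link the roots

union(0, 1)
union(1, 2)
union(3, 4)
print(find(2) == find(0))  # True: {0, 1, 2} merged
print(find(0) == find(4))  # False: {3, 4} is a separate set
```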
For next class
Assigned readings:
- Programming Challenges: Chapter 2 – skip over Binary Indexed (Fenwick) Trees and Segment Trees
Assigned problems:
- Automatic Answer
- List of Conquests – Hint: Do a frequency count and sort the output (Is there a data structure that does this for you?) SOLUTION
- Event Planning
- Minesweeper (Optional)
- Magic Square Palindromes (Optional)
Assigned exercise:
- Look over the solutions to this class’ problems
Source: http://cs.nyu.edu/courses/summer13/CSCI-UA.0380-001/class/2013/07/11/class-02-data-structures.html
On Wed, 2009-12-02 at 11:27 -0500, Mathieu Desnoyers wrote:
> A few questions about the semantic:
>
> Is "declare" here always only used as a declaration? (e.g. only in
> headers, never impacted by CREATE_TRACE_POINT?)

Well yes it is impacted by CREATE_TRACE_POINT, but so is DECLARE_TRACE for that matter ;-)

The difference is that DECLARE_EVENT_CLASS will at most (with CREATE_TRACE_POINT) only create the functions that can be used by other events. It does not create an event itself. That is, it's not much different than making a "static inline function" except that function will not be static nor will it be inline ;-)

> Is "define" here always mapping to a definition? (e.g. to be used in a
> C file to define the class or event handling stub)

The DEFINE_* will create something that can be hooked to the trace points in other C files.

> I feel that your DEFINE_EVENT_CLASS might actually be doing a bit more
> than just "defining", it would actually also perform the declaration.
> Same goes for "DEFINE_EVENT". So can you tell us a bit more about that
> in the context of templates?

Well, the macros used by these are totally off the wall anyway :-) So any name we come up with will not match what the rest of the kernel does regardless. But we need to give something that is close.

I'm liking more the DECLARE_EVENT_CLASS, DEFINE_EVENT, DEFINE_EVENT_CLASS naming, because I think that comes the closest to other semantics in the kernel. That is (once again):

DECLARE_EVENT_CLASS - makes only the class. It does create helper functions, but if there's no DEFINE_EVENT that uses them, then they are just wasting space.

DEFINE_EVENT - will create the trace points in the C file that has CREATE_TRACE_POINTS defined. But it requires the helper functions created by a previous DECLARE_EVENT_CLASS.

DEFINE_EVENT_CLASS - will both create an EVENT_CLASS template and an EVENT that uses the class. The name of the class is in a separate namespace from the event. Here both the class and the event have the same name, but other events can use this class by referencing the name.

DEFINE_EVENT_CLASS(x, ...);
DEFINE_EVENT(x, y, ...);

The DEFINE_EVENT_CLASS will create a class x and an event x, then the DEFINE_EVENT will create another event y that uses the same class x.

Actually, with the above, we may not need to have DECLARE_EVENT_CLASS() at all, because why declare a class if you don't have an event to use it? But then again, you may not want the name of the class to also be the name of an event.

-- Steve
Source: http://lkml.org/lkml/2009/12/2/290
Are you writing C code? -
for i, j in enumerate(arr): print(i, j)
Well, now it looks better and more Pythonic. What about converting a list into a string?
# The C way
string = ''
for i in arr:
    string += i

# The Python way
string = ''.join(arr)
Just like join, Python has a plethora of magical keywords, so don’t work for the language, make the language work for you.
Remember PEP8?
It’s like the rulebook you probably throw away because you are one of the cool kids. I’m not asking you to follow it religiously; all I’m asking is that you follow most of it, because our founding father said that “Code is read much more often than it is written,” and he was dang right.
Do you ever look at your code and wonder: what the heck does that do? Why is it there? Why do I exist? Well, PEP8 is the answer to most of these questions. While comments are a good way of explaining your code, you still need to change the code, and if you can’t remember what i, j, count, etc. stand for, you’re wasting your valuable time as well as that of the poor human that’ll have to read and change your code.
“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”
Prefer CamelCase for classes, UPPER_WITH_UNDERSCORES for constants, and lower_with_underscores for variables, methods, and module names. Avoid single name functions, even when using lambda. In the spirit of that, let’s change our previous code to -
for ind, val in enumerate(arr): print(ind, val)
Are list comprehensions your best friend?
If you don’t know about it yet or aren't convinced why you should use it, let me give you an example-
# bad code
positives = []
for val in arr:
    if val >= 0:
        positives.append(val)

# good code
positives = [val for val in arr if val >= 0]
You can use this for dictionaries and sets. It’s even perfect for playing code golf while being readable.
Still explicitly closing files?
If you are a forgetful person like I am, Python has your back. Instead of explicitly opening your file and then typing filename.close() every time, simply use with -
with open('filename.txt', 'w') as filename:
    filename.write('Hello')
# when you come out of the 'with' block, the file is closed
Iterators or generators?
Both iterators and generators are powerful tools in Python that are worth mastering. An iterator is any object that hands you its values one at a time, and a generator is the simplest way to write one. Because a generator computes each value only when you ask for it, instead of building the whole sequence in memory the way a list does, it is memory efficient, fast, compact, and simple.
To yield or not to yield?
When using generators, blindly use yield. It will freeze the state of the generator and resume again from where you left off, should you require another value. But don’t use it just for the sake of not using return. Both have their place, and it is neither fancy to use yield nor ordinary to use return.
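A tiny sketch of the difference described above: the generator produces values lazily and resumes where it left off, while the list version materializes everything up front (function names here are mine):

```python
def squares_list(n):
    # Eagerly builds the whole list in memory.
    return [i * i for i in range(n)]

def squares_gen(n):
    # Lazily yields one value at a time; state is frozen between next() calls.
    for i in range(n):
        yield i * i

gen = squares_gen(5)
print(next(gen))        # 0
print(next(gen))        # 1 -- resumed exactly where it left off
print(list(gen))        # [4, 9, 16] -- the values not yet consumed
print(squares_list(5))  # [0, 1, 4, 9, 16]
```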
Ever heard of itertools?
A faster, more memory-efficient way to perform iterator algebra. From count and cycle to groupby and product, this module has some amazing tools to make your life easier. If you want all the combinations of characters in a string or of numbers in a list, you can simply write-
from itertools import combinations
names = 'ABC'
for combination in combinations(names, 2):
    print(combination)
'''
Output -
('A', 'B')
('A', 'C')
('B', 'C')
'''
May I introduce you to Python’s collections?
The collections module provides alternatives to built-in data types like dict, tuple, etc. It has various containers like defaultdict, OrderedDict, namedtuple, Counter, deque, etc. that work really efficiently for some problems. Allow me to demonstrate -
# frequency of all characters in a string in sorted order
from collections import OrderedDict, Counter
string = 'abcbcaaba'
freq = Counter(string)
freq_sorted = OrderedDict(freq.most_common())
for key, val in freq_sorted.items():
    print(key, val)
'''
Output -
('a', 4)
('b', 3)
('c', 2)
'''
Is OOP your religion?
Don’t overuse classes. Hang on Java and C++ peeps who are out there to get me. You can keep all your fancy classes and functions that are slaves to objects (yes Java, I’m looking at you), but when using Python you can just re-use code with the help of functions and modules. You don’t have to create classes when there is absolutely zilch need for it.
Docs or Tutorials?
You have truly progressed as a programmer when you prefer docs over tutorials or StackOverflow. I absolutely do not mean that documentation is the better being and tutorials shouldn't exist, however, once you are comfortable with a language, read docs more often, especially for something so beautiful like Python where everything is so perfectly explained that you could read it like a story-book. You will find interesting things to play around with and even if you don’t use them, it will be a fun ice-breaker at an all-coders party :p
Bonus code snippets for the lucky people who stayed till the end -
# bad code
def is_positive(num):
    if num >= 0:
        return True
    else:
        return False

# good code
def is_positive(num):
    return num >= 0

# bad code
if value == None:
    # some task
# good code
if value is None:
    # some task

# unpacking a list into a variable and another list
roll, *marks = roll_and_marks
There are numerous other ways in which we can write more Pythonic code but these were my two cents on the most basic ones. Comment down below if you agree with me, disagree, or just want to wish me a good day. You can also reach out to me on Twitter. Congratulations! You are now a Pythonista.
Discussion (14)
Very useful article, I just started using Python from a very heavy OOP background
Good luck learning Python! May you embrace your inner Pythonista.
This is a vast topic.. It needs much concentrations more than explained.. Thanks.. I am shocked at some point even as Intermediate pythonic programmer.. I think you must create regular series beginning to advance pythonic coding.. It will be pleasure for us.. If possible it would be highly appreciated..
Yessss really loved this article, everytime I learn a more pythonic way of doing something I feel the power surge through my bones. Likewise, every time I have to work in another language that doesn't have listcomp's or other syntax tricks that I've taken for granted I get very, very sad lol
You said no classes, but what would you recommend for repeating arguments? Currently I am using dataclasses for that.
I never said 'no classes'. Obviously, they have their place like in your case. I said prefer functions over classes and group similar functions into modules rather than merely use a class for behavior grouping :)
Aha thanks, happy coding😉.
You too :D
Nice article.
In list comprehensions example, on "bad code", is missing 'value' in append() method.
Thank you for pointing that out! Fixed it :)
Thank you very much for this article! :D
Thank you for taking the time to read it :)
A good refresher article for me. Good one.
Thank you :)
Source: https://practicaldev-herokuapp-com.global.ssl.fastly.net/dsckiitdev/how-to-write-better-python-code-4ia4
Based on a section of easy-to-read XML source data, I'll show you how to select and locate XML nodes and navigate through them using XPathNavigator and XPathNodeIterator. I will provide a few straightforward samples of XPath expressions which you can follow without difficulty. In the last part, there is some sample code to update, insert and remove XML nodes.
To keep this article simple and clear, I'll break it down into two parts, and put XSL, XSLT to my next article.
Here is the source XML data:
<?xml version="1.0" encoding="ISO-8859-1"?>
<catalog>
  <cd country="USA">
    <title>Empire Burlesque</title>
    <artist>Bob Dylan</artist>
    <price>10.90</price>
  </cd>
  <cd country="UK">
    <title>Hide your heart</title>
    <artist>Bonnie Tyler</artist>
    <price>10.0</price>
  </cd>
  <cd country="USA">
    <title>Greatest Hits</title>
    <artist>Dolly Parton</artist>
    <price>9.90</price>
  </cd>
</catalog>
If you want to select all of the price elements, here is the code:
using System.Xml;
using System.Xml.XPath;
....
string fileName = "data.xml";
XPathDocument doc = new XPathDocument(fileName);
XPathNavigator nav = doc.CreateNavigator();

// Compile a standard XPath expression
XPathExpression expr;
expr = nav.Compile("/catalog/cd/price");
XPathNodeIterator iterator = nav.Select(expr);

// Iterate on the node set
listBox1.Items.Clear();
try
{
    while (iterator.MoveNext())
    {
        XPathNavigator nav2 = iterator.Current.Clone();
        listBox1.Items.Add("price: " + nav2.Value);
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
In the above code, we used "/catalog/cd/price" to select all the price elements. If you just want to select the cd elements with price greater than 10.0, you can use "/catalog/cd[price>10.0]". Here are some more examples of XPath expressions:
To update a cd node, first search out the node to update with SelectSingleNode, then create a new cd element. After setting the InnerXml of the new node, call the ReplaceChild method of XmlElement to update the document. The code is as follows:
XmlTextReader reader = new XmlTextReader(FILE_NAME);
XmlDocument doc = new XmlDocument();
doc.Load(reader);
reader.Close();

// Select the cd node with the matching title
XmlNode oldCd;
XmlElement root = doc.DocumentElement;
oldCd = root.SelectSingleNode("/catalog/cd[title='" + oldTitle + "']");

XmlElement newCd = doc.CreateElement("cd");
newCd.SetAttribute("country", country.Text);
newCd.InnerXml = "<title>" + this.comboBox1.Text + "</title>" +
    "<artist>" + artist.Text + "</artist>" +
    "<price>" + price.Text + "</price>";
root.ReplaceChild(newCd, oldCd);

// save the output to a file
doc.Save(FILE_NAME);
Similarly, use InsertAfter and RemoveChild to insert and remove a node; check it out in the demo. When you run the application, make sure that "data.xml" is in the same directory as the EXE file.
Anyway, XmlDocument is an in-memory or cached tree representation of an XML document. It is somewhat resource-intensive; if you have a large XML document and not enough memory, use XmlReader and XmlWriter for better performance.
Version 1.0, it's my first article on CP, I expect there are many flaws. The XML source data and the knowledge comes from the web and MSDN, I just wrote a demo app to show them. No copyright reserved.
Source: http://www.codeproject.com/KB/cpp/myXPath.aspx
calling unix application mplayer
Matt Zollinhofer
Ranch Hand
Joined: Jul 09, 2004
Posts: 33
posted
Jul 11, 2004 13:04:00
0
I thought maybe the thread group would have a little more insight about my problem. I'm trying to write little script that automates two unix (now OS X) programs mplayer and lame. In theory, mplayer will download the appropriate files I tell it to and then lame will encode them from WAV to MP3. Both of the two previous functions (mplayer and lame) work great when I run them from a command line.
Now here's the thing, I tell java to execute the mplayer command using runtime.exec(). It begins the execution, which should take a long time, as it is downloading a stream of audio, but then without missing a beat moves on and terminates my script application as if all went well. I used a BufferedReader to get the output from mplayer (really from the runtime object), and it spits me back the first 6 lines almost exactly as I get when I run mplayer directly from the command line. But I do not get the rest of it, which should be another 50ish lines. The code follows, pardon the ugliness of variable names.
import java.io.*;
import java.lang.Runtime;

public class StreamToMP3 {
    public static void main(String[] args) {
        int i;
        String line = "";
        TempReader tr;
        Thread myThread;
        try {
            System.out.println("Beginning");
            tr = new TempReader();
            myThread = new Thread(tr);
            myThread.start();
            System.out.println("start fired");
            myThread.join();
            System.out.println("End");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

import java.lang.*;
import java.io.*;

public class TempReader implements Runnable {
    BufferedReader br = null;

    public void run() {
        String line = "";
        boolean flag = false;
        System.out.println("First");
        while ((true) && (!flag)) {
            pause();
            System.out.println("in while");
            try {
                Runtime rt;
                rt = Runtime.getRuntime();
                Process p = rt.exec("/Applications/MPlayer.app/Contents/Resources/mplayer.app/Contents/MacOS/mplayer -playlist ~/Desktop/new.txt -ao pcm -aofile ~/mystream.wav -vc dummy -vo null");
                br = new BufferedReader(new InputStreamReader(p.getInputStream()));
                if (br != null) {
                    while ((line = br.readLine()) != null) {
                        System.out.println(line);
                    }
                    flag = true;
                    System.out.println("buffered reader finished in run()");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public void pause() {
        for (int i = 0; i < 100000000; i++) {
        }
    }
}
any ideas are GREATLY appreciated
Yaroslav Chinskiy
Ranch Hand
Joined: Jan 09, 2001
Posts: 147
posted
Jul 12, 2004 07:50:00
0
You should read more about runtime.exec() and Process.
In the process you have an option to wait for exit value of the process.
Under unix (or OS X) you would usually get 0 for all good and 1 for error.
So try to get the exit value and evaluate it.
you can get more info on JavaWorld:
Hope that helps.
I agree. Here's the link:
Source: http://www.coderanch.com/t/232624/threads/java/calling-unix-application-mplayer
Question:
The puzzle
A little puzzle I heard while I was in high school went something like this...
- The questioner would ask me to give him a number;
- On hearing the number, the questioner would do some sort of transformation on it repeatedly (for example, he might say ten is three) until eventually arriving at the number 4 (at which point he would finish with four is magic).
- Any number seems to be transformable into four eventually, no matter what.
The goal was to try to figure out the transformation function and then be able to reliably proctor this puzzle yourself.
The solution
The transformation function at any step was to
- Take the number in question,
- Count the number of letters in its English word representation, ignoring a hyphen or spaces or "and" (e.g., "ten" has 3 letters in it, "thirty-four" has 10 letters in it, "one hundred forty-three" has 20 letters in it).
- Return that number of letters.
For all of the numbers I have ever cared to test, this converges to 4. Since "four" also has four letters in it, there would be an infinite loop here; instead it is merely referred to as magic by convention to end the sequence.
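Before the golfed entries below, here is an ungolfed reference sketch of the transformation (not a competition entry; the helper names are invented), built directly from the rules above:

```python
# English names for 0..99; letter counts ignore spaces and hyphens.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def name(n):
    """English name for 0..99, hyphenated like 'ninety-nine'."""
    if n < 20:
        return ONES[n]
    return TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else "")

def letters(n):
    # The transformation: count letters, ignoring hyphens and spaces.
    return len(name(n).replace("-", "").replace(" ", ""))

def four_is_magic(n):
    lines = []
    while n != 4:
        lines.append("%d is %d." % (n, letters(n)))
        n = letters(n)
    lines.append("4 is magic.")
    return lines

print("\n".join(four_is_magic(12)))
```

Running it on 12 reproduces the example transcript below (12 is 6, 6 is 3, 3 is 5, 5 is 4, 4 is magic).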
The challenge
Your challenge is to create a piece of code that will read a number from the user and then print lines showing the transformation function being repeatedly applied until "four is magic" is reached.
Specifically:
- Solutions must be complete programs in and of themselves. They cannot merely be functions which take in a number-- factor in the input.
- Input must be read from standard input. (Piping from "echo" or using input redirection is fine since that also goes from stdin)
- The input should be in numeric form.
- For every application of the transformation function, a line should be printed:
a is b., where a and b are numeric forms of the numbers in the transformation.
- Full stops (periods) ARE required!
- The last line should naturally say,
4 is magic..
- The code should produce correct output for all numbers from 0 to 99.
Examples:
> 4 4 is magic. > 12 12 is 6. 6 is 3. 3 is 5. 5 is 4. 4 is magic. > 42 42 is 8. 8 is 5. 5 is 4. 4 is magic. > 0 0 is 4. 4 is magic. > 99 99 is 10. 10 is 3. 3 is 5. 5 is 4. 4 is magic.
The winner is the shortest submission by source code character count which is also correct.
BONUS
You may also try to write a version of the code which prints out the ENGLISH NAMES for the numbers with each application of the transformation function. The original input is still numeric, but the output lines should have the word form of the number.
(Double bonus for drawing shapes with your code)
(EDIT) Some clarifications:
- I do want the word to appear on both sides in all applicable cases, e.g.
Nine is four. Four is magic.
- I don't care about capitalization, though. And I don't care how you separate the word tokens, though they should be separated: ninety-nine is okay, ninety nine is okay, ninetynine is not okay.
I'm considering these a separate category for bonus competition with regard to the challenge, so if you go for this, don't worry about your code being longer than the numeric version.
Feel free to submit one solution for each version.
Solution:1
GolfScript -
101 96 93 92 91 90 94 86 bytes
90 → 94: Fixed output for multiples of 10.
94 → 86: Restructured code. Using base 100 to remove non-printable characters.
86 → 85: Shorter cast to string.
{n+~."+#,#6$DWOXB79Bd")base`1/10/~{~2${~1$+}%(;+~}%++=" is "\". "1$4$4-}do;;;"magic."
Solution:2
Perl, about 147 chars
Loosely based on Platinum Azure's solution:
chop ($_.= <>);@ u="433 5443554 366 887 798 866 555 766 "=~ /\d /gx ;#4 sub r{4 -$_ ?$_ <20 ?$u [$_ ]:( $'? $u[ $'] :0) +$u[18+$&]:magic}print" $_ is ",$_=r(),'.'while /\d /x; 444
Solution:3
Common Lisp 157 Chars
New more conforming version, now reading form standard input and ignoring spaces and hyphens:
(labels((g (x)(if(= x 4)(princ"4 is magic.")(let((n(length(remove-if(lambda(x)(find x" -"))(format nil"~r"x)))))(format t"~a is ~a.~%"x n)(g n)))))(g(read)))
In human-readable form:
(labels ((g (x)
           (if (= x 4)
               (princ "4 is magic.")
               (let ((n (length (remove-if (lambda (x) (find x " -"))
                                           (format nil "~r" x)))))
                 (format t "~a is ~a.~%" x n)
                 (g n)))))
  (g (read)))
And some test runs:
>24 24 is 10. 10 is 3. 3 is 5. 5 is 4. 4 is magic. >23152436 23152436 is 64. 64 is 9. 9 is 4. 4 is magic.
And the bonus version, at 165 chars:
(labels((g(x)(if(= x 4)(princ"four is magic.")(let*((f(format nil"~r"x))(n(length(remove-if(lambda(x)(find x" -"))f))))(format t"~a is ~r.~%"f n)(g n)))))(g(read)))
Giving
>24 twenty-four is ten. ten is three. three is five. five is four. four is magic. >234235 two hundred thirty-four thousand two hundred thirty-five is forty-eight. forty-eight is ten. ten is three. three is five. five is four. four is magic.
Solution:4
Python 2.x, 144
150 154 166 chars
This separates the number into tens and ones and sums them up. The undesirable property of the pseudo-ternary operator a and b or c, namely that c is returned whenever b is 0, is being abused here.
n=input()
x=0x4d2d0f47815890bd2
while n-4:p=n<20and x/10**n%10or 44378/4**(n/10-2)%4+x/10**(n%10)%10+4;print n,"is %d."%p;n=p
print"4 is magic."
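The `a and b or c` quirk being abused there, shown in isolation (Python 3 syntax here, though the golfed entry is Python 2):

```python
# a and b or c evaluates to c whenever b is falsy -- even though a is true.
print(True and 5 or 7)   # 5
print(True and 0 or 7)   # 7, not 0: the "then" value is silently lost
print(False and 5 or 7)  # 7
```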
The previous naive version (150 chars). Just encode all lengths as an integer.
n=input()
while n-4:p=3+int('1yrof7i9b1lsi207bozyzg2m7sclycst0zsczde5oks6zt8pedmnup5omwfx56b29',36)/10**n%10;print n,"is %d."%p;n=p
print"4 is magic."
Solution:5
C - with number words
445 431 427 421 399 386 371 359* 356 354† 348 347 characters
That's it. I don't think I can make this any shorter.
All newlines are for readability and can be removed:
Below, it is somewhat unminified, but still pretty hard to read. See below for a more readable version.
Expanded and commented:
int count; /* type int is assumed in the minified version */

void print(int index){ /* the minified version assumes a return type of int, but it's ignored */
    /* see explanation of this string after code */
    char *word =
        /* 1 - 9 */
        ",one,two,three,four,five,six,sM,eight,nine,"
        /* 10 - 19 */
        "tL,elM,twelve,NP,4P,fifP,6P,7P,8O,9P,"
        /* 20 - 90, by tens */
        "twLQ,NQ,forQ,fifQ,6Q,7Q,8y,9Q,"
        /* lookup table */
        "en,evL,thir,eL,tO,ty, is ,.\n,4RmagicS,zero,";
    while(index >= 0){
        if(*word == ',')
            index--;
        else if(index == 0) /* we found the right word */
            if(*word >= '0' && *word < 'a') /* a compression marker */
                print(*word - '0'/*convert to a number*/);
            else{
                putchar(*word); /* write the letter to the output */
                ++count;
            }
        ++word;
    }
}

int main(int argc, char **argv){ /* see note about this after code */
    scanf("%d", &argc); /* parse user input to an integer */
    while(argc != 4){
        count = 0;
        if(argc == 0)
            print(37/*index of "zero"*/);
        else{
            if(argc > 19){
                print(argc / 10/*high digit*/ + 20/*offset of "twenty"*/ - 2/*20 / 10*/);
                argc %= 10; /* get low digit */
                if(argc != 0) /* we need a hyphen before the low digit */
                    putchar('-');
            }
            print(argc/* if 0, then nothing is printed or counted */);
        }
        argc = count;
        print(34/*" is "*/);
        print(argc); /* print count as word */
        print(35/*".\n"*/);
    }
    print(36/*"four is magic.\n"*/);
}
About the encoded string near the beginning
The names of the numbers are compressed using a very simple scheme. Frequently used substrings are replaced with one-character indices into the name array. A "lookup table" of extra name entries is added to the end for substrings not used in their entirety in the first set. Lookups are recursive: entries can refer to other entries.
For instance, the compressed name for 11 is elM. The print() function outputs the characters e and l (lower-case 'L', not number '1') verbatim, but then it finds the M, so it calls itself with the index of the 29th entry (ASCII 'M' - ASCII '0') into the lookup table. This string is evL, so it outputs e and v, then calls itself again with the index of the 28th entry in the lookup table, which is en, and is output verbatim. This is useful because en is also used in eL for een (used after eight in eighteen), which is used in tO for teen (used for every other -teen name).
This scheme results in a fairly significant compression of the number names, while requiring only a small amount of code to decompress.
The commas at the beginning and end of the string account for the simplistic way that substrings are found within this string. Adding two characters here saves more characters later.
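The recursive-marker idea generalizes beyond C. Here is a hedged toy sketch in Python (the table and entries are invented and far smaller than the real one above; a digit in an entry splices in another entry, the way the C code's marker characters index the lookup table):

```python
# Toy recursive decompressor: entry "21" means entry 2 + entry 1, etc.
TABLE = ["en", "ev0", "el", "21", "tw2ve"]

def expand(i):
    out = ""
    for ch in TABLE[i]:
        if ch.isdigit():
            out += expand(int(ch))  # recurse, like print(*word - '0')
        else:
            out += ch
    return out

print(expand(3))  # eleven: "el" + ("ev" + "en")
print(expand(4))  # twelve: "tw" + "el" + "ve"
```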
About the abuse of main()
argv is ignored (and therefore not declared in the compressed version), argc's value is ignored, but the storage is reused to hold the current number. This just saves me from having to declare an extra variable.
About the lack of #include
Some will complain that omitting #include <stdio.h> is cheating. It is not at all. The given is a completely legal C program that will compile correctly on any C compiler I know of (albeit with warnings). Lacking prototypes for the stdio functions, the compiler will assume that they are cdecl functions returning int, and will trust that you know what arguments to pass. The return values are ignored in this program, anyway, and they are all cdecl ("C" calling convention) functions, and we do indeed know what arguments to pass.
Output
Output is as expected:
0 zero is four. four is magic.
1 one is three. three is five. five is four. four is magic.
4 four is magic.
20 twenty is six. six is three. three is five. five is four. four is magic.
21 twenty-one is nine. nine is four. four is magic.
* The previous version missed the mark on two parts of the spec: it didn't handle zero, and it took input on the command line instead of stdin. Handling zeros added characters, but using stdin instead of command line args, as well as a couple of other optimzations saved the same number of characters, resulting in a wash.
† The requirements have been changed to make clear that the number word should be printed on both sides of " is ". This new version meets that requirement, and implements a couple more optimizations to (more than) account for the extra size necessary.
Solution:6
J, 107
112 characters
:
(Newline for readability only)
Usage and output:
:12 12 is 6. 6 is 3. 3 is 5. 5 is 4. 4 is magic.
Solution:7
T-SQL, 413
451 499 chars
CREATE FUNCTION d(@N int) RETURNS int AS BEGIN Declare @l char(50), @s char(50) Select @l='0066555766',@s='03354435543668877987' if @N<20 return 0+substring(@s,@N+1,1) return 0+substring(@l,(@N/10)+1,1) + 0+(substring(@s,@N%10+1,1))END GO CREATE proc M(@x int) as BEGIN WITH r(p,n)AS(SELECT p=@x,n=dbo.d(@x) UNION ALL SELECT p=n,n=dbo.d(n) FROM r where n<>4)Select p,'is',n,'.' from r print '4 is magic.'END
(Not that I'm seriously suggesting you'd do this... really I just wanted to write a CTE)
To use:
M 95
Returns
p n ----------- ---- ----------- 95 is 10. 10 is 3. 3 is 5. 5 is 4. 4 is magic.
Solution:8
Java (with boilerplate),
308 290 286 282 280 characters
class A{public static void main(String[]a){int i=4,j=0;for(;;)System.out.printf("%d is %s.%n",i=i==4?new java.util.Scanner(System.in).nextInt():j,i!=4?j="43354435543668877988699;::9;;:699;::9;;:588:998::9588:998::9588:998::97::<;;:<<;699;::9;;:699;::9;;:".charAt(i)-48:"magic");}}
I'm sure Groovy would get rid of much of that.
Explanation and formatting (all comments, newlines and leading/trailing whitespace removed in count):
Reasonably straight forward, but
//boilerplate class A{ public static void main(String[]a){ //i is current/left number, j right/next number. i=4 signals to start //by reading input int i=4,j=0; for(;;) //print in the form "<left> is <right>." System.out.printf( "%d is %s.%n", i=i==4? //<left>: if i is 4 <left> will be a new starting number new java.util.Scanner(System.in).nextInt(): //otherwise it's the next val j, i!=4? //use string to map number to its length (:;< come after 9 in ASCII) //48 is value of '0'. store in j for next iteration j="43354435543668877988699;::9;;:699;::9;;:588:998::9588:998::9588:998::97::<;;:<<;699;::9;;:699;::9;;:".charAt(i)-48: //i==4 is special case for right; print "magic" "magic"); } }
Edit: No longer use hex, this is less keystrokes
Solution:9
Windows PowerShell: 152
153 184 bytes
based on the previous solution, with more influence from other solutions
$o="03354435543668877988"
for($input|sv b;($a=$b)-4){if(!($b=$o[$a])){$b=$o[$a%10]-48+"66555766"[($a-$a%10)/10-2]}$b-=48-4*!$a
"$a is $b."}
'4 is magic.'
Solution:10
C, 158 characters
main(n,c){char*d="03354435543668877988";for(scanf("%d",&n);n-4;n=c)printf("%d is %d.\n",n,c=n?n<19?d[n]-48:d[n%10]-"_,**+++)**"[n/10]:4);puts("4 is magic.");}
(originally based on Vlad's Python code, borrowed a trick from Tom Sirgedas' C++ solution to squeeze out a few more characters)
expanded version:
main(n, c)
{
    char *d = "03354435543668877988";
    for (scanf("%d",&n); n-4; n = c)
        printf("%d is %d.\n", n,
               c = n ? n<19 ? d[n]-48 : d[n%10] - "_,**+++)**"[n/10] : 4);
    puts("4 is magic.");
}
Solution:11
Python, 129
133 137 148 chars
As a warm-up, here is my first version (improves couple of chars over previous best Python).
PS. After a few redactions now it is about twenty char's shorter:
n=input()
while n-4:p=(922148248>>n/10*3&7)+(632179416>>n%10*3&7)+(737280>>n&1)+4*(n<1);print n,'is %d.'%p;n=p
print'4 is magic.'
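The magic constants here pack a small lookup table into a single integer, three bits per entry, and extract an entry with a shift and an `& 7` mask (plus a one-bit correction table). A small demo of that packing trick — the values below are arbitrary illustrations, not the golfed constants:

```python
# Pack small values (each 0..7, i.e. 3 bits) into one integer, lowest entry first.
vals = [4, 3, 3, 5, 4, 4, 3, 5, 5, 4]
packed = 0
for i, v in enumerate(vals):
    packed |= v << (3 * i)

def unpack(n, i):
    """Extract the i-th 3-bit field, exactly as the golfed code's >> and & 7 do."""
    return (n >> (3 * i)) & 7
```

With 3 bits per entry, a 30-bit constant holds ten table entries, which is why the golfed solution needs only a couple of integer literals instead of an array.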
Solution:12
C#: 210 Characters.
Squished:));}}
Expanded:) ); } }
Tricks this approach uses:
- Create a lookup table for number name lengths based on digits that appear in the number.
- Use character array lookup on a string, and char arithmetic instead of a numeric array.
- Use class name aliasing to shorten Console. to C.
- Use the conditional (ternary) operator (?:) instead of if/else.
- Use the \n escape code with Write instead of WriteLine.
- Use the fact that C# has a defined order of evaluation to allow assignments inside the Write function call.
- Use the assignment expressions to eliminate extra statements, and thus extra braces
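The string-lookup trick listed above recurs across many of these answers: letter counts are stored as ASCII digit characters (with ':', ';' and '<' standing in for 10, 11 and 12, since they follow '9' in ASCII), and recovered by subtracting 48, the code of '0'. A quick Python illustration of the idea:

```python
# Letter counts for "zero".."nineteen", one ASCII character per entry.
table = "43354435543668877988"

def lookup(n):
    # Subtract ord('0') == 48 to turn the character back into a number.
    return ord(table[n]) - 48

# Characters after '9' in ASCII encode values above nine, as the Java answer uses:
assert ord(':') - 48 == 10
assert ord(';') - 48 == 11
assert ord('<') - 48 == 12
```

A string literal costs two quote characters but no commas or brackets, which is why it beats a numeric array in almost every language here.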
Solution:13
Perl: 148 characters
(Perl:
233 181 212 206 200 199 198 185 179 149 148 characters)
- Moved exceptions hash into unit array. This resulted in my being able to cut a lot of characters :-)
- mobrule pointed out a nasty bug. Quick fix adds 31 characters, ouch!
- Refactored for zero special case, mild golfing done as well.
- Direct list access for single use rather than storing to array? Hell yes!
- SO MUCH REFACTORING for just ONE bloody character. This, truly, is the life of a golfer. :-(
- Oops, easy whitespace fix. 198 now.
- Refactored some redundant code.
- Last return keyword in r is unnecessary, shaved some more off.
- Massive refactoring per comments; unfortunately I could only get it to 149 because I had to fix a bug that was present in both my earlier code and the commenters' versions.
- Trying bareword "magic".
Let's get this ball rolling with a modest attempt in Perl.
@u=split'','4335443554366887798866555766';$_=<>;chop;print"$_ is ".($_=$_==4?0:$_<20?$u[$_]:($u[$_/10+18]+($_%10&&$u[$_%10]))or magic).".\n"while$_
Tricks:
Too many!
Solution:14
JavaScript 1.8 (SpiderMonkey) - 153 Chars
l='4335443554366887798866555766'.split('')
for(b=readline();(a=+b)-4;print(a,'is '+b+'.'))b=a<20?l[a]:+l[18+a/10|0]+(a%10&&+l[a%10])
print('4 is magic.')
Usage:
echo 42 | js golf.js
Output:
42 is 8. 8 is 5. 5 is 4. 4 is magic.
With bonus - 364 chars
l='zero one two three four five six seven eight nine ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen twenty thirty fourty fifty sixty seventy eighty ninety'.split(' ')
z=function(a)a<20?l[a]:l[18+a/10|0]+(a%10?' '+l[a%10]:'')
for(b=+readline();(a=b)-4;print(z(a),'is '+z(b)+'.'))b=z(a).replace(' ','').length
print('four is magic.')
Output:
ninety nine is ten. ten is three. three is five. five is four. four is magic.
Solution:15
Haskell, 224
270 characters
o="43354435543668877988"
x!i=read[x!!i]
n x|x<20=o!x|0<1="0066555766"!div x 10+o!mod x 10
f x=zipWith(\a b->a++" is "++b++".")l(tail l)where l=map show(takeWhile(/=4)$iterate n x)++["4","magic"]
main=readLn>>=mapM putStrLn.f
And little more readable -
ones = [4,3,3,5,4,4,3,5,5,4,3,6,6,8,8,7,7,9,8,8]
tens = [0,0,6,6,5,5,5,7,6,6]

n x = if x < 20 then ones !! x else (tens !! div x 10) + (ones !! mod x 10)

f x = zipWith (\a b -> a ++ " is " ++ b ++ ".") l (tail l)
  where l = map show (takeWhile (/=4) (iterate n x)) ++ ["4", "magic"]

main = readLn >>= mapM putStrLn . f
Solution:16
C++ Stdio version, minified: 196 characters
#include <cstdio>
#define P;printf(
char*o="43354435543668877988";main(int p){scanf("%d",&p)P"%d",p);while(p!=4){p=p<20?o[p]-48:"0366555966"[p/10]-96+o[p%10]P" is %d.\n%d",p,p);}P" is magic.\n");}
C++ Iostreams version, minified: 195 characters
#include <iostream>
#define O;std::cout<<
char*o="43354435543668877988";main(int p){std::cin>>p;O p;while(p!=4){p=p<20?o[p]-48:"0366555966"[p/10]-96+o[p%10]O" is "<<p<<".\n"<<p;}O" is magic.\n";}
Original, un-minified: 344 characters
#include <cstdio>

int ones[] = { 4, 3, 3, 5, 4, 4, 3, 5, 5, 4, 3, 6, 6, 8, 8, 7, 7, 9, 8, 8 };
int tens[] = { 0, 3, 6, 6, 5, 5, 5, 9, 6, 6 };

int n(int n) { return n<20 ? ones[n] : tens[n/10] + ones[n%10]; }

int main(int p)
{
    scanf("%d", &p);
    while(p!=4) {
        int q = n(p);
        printf("%i is %i\n", p, q);
        p = q;
    }
    printf("%i is magic\n", p);
}
Solution:17
Delphi: 329 characters
Single Line Version:.
Formatted:.
Probably room for some more squeezing... :-P
Solution:18
C#
314 286 283 274 289 273 252 chars.
Squished:
252
Normal:
using C = System.Console;
class P
{
    static void Main()
    {
        var x = "4335443554366877798866555766";
        int m, o, v = int.Parse(C.ReadLine());
        do
        {
            C.Write("{0} is {1}.\n", o = v,
                v == 4 ? (object)"magic"
                       : v = v < 20 ? x[v] - 48
                                    : x[17 + v / 10] - 96 + ((m = v % 10) > 0 ? x[m] : 48));
        } while (o != 4);
        C.ReadLine();
    }
}
Edit Dykam: Did quite some careful insertions and changes:
- Changed the l.ToString() into a cast of the string "magic" to object.
- Created a temporary variable o, so I could move the break outside the for loop, resulting in a do-while.
- Inlined the o assignment, as well as the v assignment, continuing by inserting the calculation of l into the function arguments altogether, removing the need for l. Also inlined the assignment of m.
- Removed a space in int[] x; int[]x is legit too.
- Tried to transform the array into a string, but the using System.Linq was too much to make this an improvement.
Edit 2 Dykam: Changed the int array to a char array/string, and added the proper arithmetic to correct this.
Solution:19
Lua, 176 Characters."
or."
Solution:20
C - without number words
180 175* 172 167 characters
All newlines are for readability and can be removed:.");}
Slightly unminified:."); }
* The previous version missed the mark on two parts of the spec: it didn't handle zero, and it took input on the command line instead of stdin. Handling zero added characters, but using stdin instead of command line args saved even more, resulting in a net savings.
Solution:21
perl,
123 122 characters
Just realized that there is no requirement to output to STDOUT, so output to STDERR instead and knock off another character.
@u='0335443554366887798866555766'=~/./g;$_+=<>;warn"$_ is ",$_=$_-4?$_<20?$u[$_]||4:$u[chop]+$u[$_+18]:magic,".\n"until/g/
And, a version that returns spelled out numbers:
279 278 276 280 characters
@p=(Thir,Four,Fif,Six,Seven,Eigh,Nine);@n=("",One,Two,Three,Four,Five,@p[3..6],Ten,Eleven,Twelve,map$_.teen,@p);s/u//for@m=map$_.ty,Twen,@p;$n[8].=t;sub n{$n=shift;$n?$n<20?$n[$n]:"$m[$n/10-2] $n[$n%10]":Zero}$p+=<>;warn$m=n($p)," is ",$_=$p-4?n$p=()=$m=~/\w/g:magic,".\n"until/c/
While that meets the spec, it is not 100% well formatted. It returns an extra space after numbers ending in zero. The spec does say:
"I don't care how you separate the word tokens, though they should be separated"
That's kind of weaselly though. A more correct version at
282 281 279 283 characters
@p=(Thir,Four,Fif,Six,Seven,Eigh,Nine);@n=("\x8",One,Two,Three,Four,Five,@p[3..6],Ten,Eleven,Twelve,map$_.teen,@p);s/u//for@m=map$_.ty,Twen,@p;$n[8].=t;sub n{$n=shift;$n?$n<20?$n[$n]:"$m[$n/10-2]-$n[$n%10]":Zero}$p+=<>;warn$m=n($p)," is ",$_=$p-4?n$p=()=$m=~/\w/g:magic,".\n"until/c/
Solution:22
Python:
#!/usr/bin/env python
# Number of letters in each part, we don't count spaces
Decades = ( 0, 3, 6, 6, 6, 5, 5, 7, 6, 6, 0 )
Smalls = ( 0, 3, 3, 5, 4, 4, 3, 5, 5, 4 )
Teens = ( 6, 6, 8, 8, 7, 7, 9, 8, 8 )

def Count(n):
    if n > 10 and n < 20:
        return Teens[n-11]
    return Smalls[n % 10] + Decades[n / 10]

N = input()
while N-4:
    Cnt = Count(N)
    print "%d is %d" % (N, Cnt)
    N = Cnt
print "4 is magic"
Solution:23
C++, 171 characters (#include omitted)
void main(){char x,y,*a="03354435543668877988";scanf("%d",&x);for(;x-4;x=y)y=x?x<19?a[x]-48:"_466555766"[x/10]+a[x%10]-96:4,printf("%d is %d.\n",x,y);puts("4 is magic.");}
Solution:24
Ruby, 164 characters."
decoded:."
Solution:25
Lua
185 190 199
added periods, added io.read, removed ()'s on last print.'
with line breaks.'
Solution:26
PhP Code
function get_num_name($num){ switch($num){ case 1:return 'one'; case 2:return 'two'; case 3:return 'three'; case 4:return 'four'; case 5:return 'five'; case 6:return 'six'; case 7:return 'seven'; case 8:return 'eight'; case 9:return 'nine'; } } function num_to_words($number, $real_name, $decimal_digit, $decimal_name){ $res = ''; $real = 0; $decimal = 0; if($number == 0) return 'Zero'.(($real_name == '')?'':' '.$real_name); if($number >= 0){ $real = floor($number); $decimal = number_format($number - $real, $decimal_digit, '.', ','); }else{ $real = ceil($number) * (-1); $number = abs($number); $decimal = number_format($number - $real, $decimal_digit, '.', ','); } $decimal = substr($decimal, strpos($decimal, '.') +1); $unit_name[1] = 'thousand'; $unit_name[2] = 'million'; $unit_name[3] = 'billion'; $unit_name[4] = 'trillion'; $packet = array(); $number = strrev($real); $packet = str_split($number,3); for($i=0;$i<count($packet);$i++){ $tmp = strrev($packet[$i]); $unit = $unit_name[$i]; if((int)$tmp == 0) continue; $tmp_res = ''; if(strlen($tmp) >= 2){ $tmp_proc = substr($tmp,-2); switch($tmp_proc){ case '10': $tmp_res = 'ten'; break; case '11': $tmp_res = 'eleven'; break; case '12': $tmp_res = 'twelve'; break; case '13': $tmp_res = 'thirteen'; break; case '15': $tmp_res = 'fifteen'; break; case '20': $tmp_res = 'twenty'; break; case '30': $tmp_res = 'thirty'; break; case '40': $tmp_res = 'forty'; break; case '50': $tmp_res = 'fifty'; break; case '70': $tmp_res = 'seventy'; break; case '80': $tmp_res = 'eighty'; break; default: $tmp_begin = substr($tmp_proc,0,1); $tmp_end = substr($tmp_proc,1,1); if($tmp_begin == '1') $tmp_res = get_num_name($tmp_end).'teen'; elseif($tmp_begin == '0') $tmp_res = get_num_name($tmp_end); elseif($tmp_end == '0') $tmp_res = get_num_name($tmp_begin).'ty'; else{ if($tmp_begin == '2') $tmp_res = 'twenty'; elseif($tmp_begin == '3') $tmp_res = 'thirty'; elseif($tmp_begin == '4') $tmp_res = 'forty'; elseif($tmp_begin == '5') $tmp_res = 'fifty'; 
elseif($tmp_begin == '6') $tmp_res = 'sixty'; elseif($tmp_begin == '7') $tmp_res = 'seventy'; elseif($tmp_begin == '8') $tmp_res = 'eighty'; elseif($tmp_begin == '9') $tmp_res = 'ninety'; $tmp_res = $tmp_res.' '.get_num_name($tmp_end); } break; } if(strlen($tmp) == 3){ $tmp_begin = substr($tmp,0,1); $space = ''; if(substr($tmp_res,0,1) != ' ' && $tmp_res != '') $space = ' '; if($tmp_begin != 0){ if($tmp_begin != '0'){ if($tmp_res != '') $tmp_res = 'and'.$space.$tmp_res; } $tmp_res = get_num_name($tmp_begin).' hundred'.$space.$tmp_res; } } }else $tmp_res = get_num_name($tmp); $space = ''; if(substr($res,0,1) != ' ' && $res != '') $space = ' '; $res = $tmp_res.' '.$unit.$space.$res; } $space = ''; if(substr($res,-1) != ' ' && $res != '') $space = ' '; if($res) $res .= $space.$real_name.(($real > 1 && $real_name != '')?'s':''); if($decimal > 0) $res .= ' '.num_to_words($decimal, '', 0, '').' '.$decimal_name.(($decimal > 1 && $decimal_name != '')?'s':''); return ucfirst($res); }
//////////// testing ////////////////
$str2num = 12;
while($str2num!=4){
    $str = num_to_words($str2num, '', 0, '');
    $str2num = strlen($str)-1;
    echo $str . '=' . $str2num .'<br/>';
    if ($str2num == 4) echo 'four is magic';
}
////// Results /////////
Twelve =6
Six =3
Three =5
Five =4
four is magic
Solution:27
Perl - 130 chars
5.12.1 (130 chars)
121 123 132 136 140
@u='4335443554366887798866555766'=~/./g;$_=pop;say"$_ is ",$_=$_-4?$_<20?$u[$_]:$u[$_/10+18]+(($_%=10)&&$u[$_]):magic,"."until/\D/
5.10.1 (134 chars)
125 127 136 140 144
@u='4335443554366887798866555766'=~/./g;$_=pop;print"$_ is ",$_=$_-4?$_<20?$u[$_]:$u[$_/10+18]+(($_%=10)&&$u[$_]):magic,".\n"until/\D/
Change History:
20100714:2223 - reverted change at the attention of mobrule, but ($_%10&&$u[$_%10]) → (($_%=10)&&$u[$_]), which is the same # of chars, but I did it in case someone might see a way to improve it
20100714:0041 - split//,'...' → '...'=~/./g
20100714:0025 - ($_%10&&$u[$_%10]) → $u[$_%10]
20100713:2340 - while$_ → until/\D/ + removed unnecessary parentheses
20100713:xxxx - $_=<>;chop; → $_=pop; - courtesy of mobrule
Note: I was tired of improving others' answers in comments, so now I'm being greedy and can just add my changes here :) This is a split off from Platinum Azure's answer - credit in part to Hobbs, mobrule, and Platinum Azure.
Solution:28
Shameless Perl with Number Words (329 characters)
Adapted fairly directly from P Daddy's C code, with some tweaks to
p() to make it do the same thing using Perl primitives instead of C ones, and a mostly-rewritten mainloop. See his for an explanation. Newlines are all optional.
@t=(qw(zero 4SmagicT)," is ",".\n"); sub p{local$_=$t[pop];1while s/[0-Z]/$t[-48+ord$&]/e; print;length}$_=<>;chop;while($_-4){ $_=($_>19?(p($_/10+18),$_&&print("-"),$_%=10)[0]:0)+p$_; p 35;p$_;p 36}p 34
Side note: it's too bad that perl
Solution:29
Ruby, 141 chars:
n=gets.to_i;m="4335443554366887798866555766";loop{s=n;n=n>20?m[18+n/10]+m[n%10]-96: m[n]-48;puts"#{s} is #{n==s ? 'magic': n}.";n==s &&break}
Solution:30
while(true) { string a; ReadLine(a) WriteLine(4); }
Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 1.8.5
- Fix Version/s: 1.8.6, 2.0-beta-3
- Component/s: None
- Labels: None
Description
As a side effect of GROOVY-5150, the class constant pool is used for non-final fields too:
I have this class:
public class A { public static int i = 5; }
compile it with javac and you get the following related to i (using javap):
public static int i; static {}; Code: Stack=1, Locals=0, Args_size=0 0: iconst_5 1: putstatic #2; //Field i:I 4: return
Compile it with groovyc and these days I seem to get this:
public static int i; Constant value: int 5
Now I thought the 'constant value' attribute for field objects was for proper constants (i.e. final fields), not for initialization values. That seems to be what it indicates here:
===
4.7.2 The ConstantValue Attribute
The ConstantValue attribute is a fixed-length attribute used in the attributes table of the field_info (§4.5) structures. A ConstantValue attribute represents the value of a constant field that must be (explicitly or implicitly) static;
===
Indeed javac will only use that attribute if I make the field final.
Now the JVM doesn't seem to be particularly strict here: the static does get the right value initially and allows it to be changed (at least on the VM version I'm using; I haven't tried any others). It just seems to be a slight abuse of the meaning of constant value. I guess I just wanted to check it was deliberate.
thanks,
Andy
JSON.Net, the Groovy way
On Ajaxian, the other day, I spotted an article about JSON.Net, a project aiming at simplifying the production and consumption of JSON data for the .Net world, and I wanted to contrast what I've read with what we are doing with Groovy and Grails. I rarely speak about the Microsoft world, but the latest features of C# 3 are very interesting and powerful, particularly the anonymous types, their closures (whatever they are called), and LINQ for querying relational or tree structured data.
For instance, here's how JSON.Net produces JSON content:
JObject o = JObject.FromObject(new
{
channel = new
{
title = "James Newton-King",
link = "",
description = "James Newton-King's blog.",
item =
from p in posts
orderby p.Title
select new
{
title = p.Title,
description = p.Description,
link = p.Link,
category = p.Categories
}
}
});
Here, we can see the new anonymous type feature, with the new {} construct to easily create new data structure, without requiring the creation of classes or interfaces. And in the item element, we notice LINQ at work providing an SQL like notation to select the posts ordered by title.
In Groovy and Grails land, we are reusing the map notation to create JSON content, that we then coerce to JSON using the as operator:
import grails.converters.JSON
def data = [
channel: [
title: "James Newton-King",
link: "",
description: "James Newton-King's blog.",
item:
posts.sort { it.title }.collect {
[
title: it.title,
description: it.description,
link: it.link,
category: it.categories
]
}
]
]
// then, if you want to render this structure as JSON inside a Grails controller:
render data as JSON
Unlike LINQ with its SQL-like notation, Groovy favors a more functional approach, using the sort() and collect() taking closures to do the filtering and aggregation. These methods are added by the Groovy Development Kit to the usual Java collection classes.
|
It is all very simple and uses only 6 pins to interface with!
Note: the image is not mine and comes from
Step 1: Parts needed
1x Arduino (any kind will do)
1x HD44780 character LCD
Lots of non-stranded wire
1x 10k potentiometer
Step 4: Test Code 1: Hello World
Second, copy and paste the file into the Arduino IDE.
Lastly, click the UPLOAD TO I/O board button, or CTRL+U
If everything went to plan, it should now say "Hello World!".
Step 5: Test Code 2: Using 2 lines
Open and upload this next program.
it will now say:
Hello
World
BYTE isn't declared in this scope :/
You can fix that by changing the line "lcd.print(1, BYTE);" to "lcd.write(byte(1));"
It should do the exact same thing
Thanks a lot man!! Really!!!
Thank you
I've just been creating my own symbols for use on a weather station I'm making.
I've found out that the arduino IDE (0.17 at least, not sure about 0.18) and the LiquidCrystal library that comes with it is only capable of assigning 8 custom characters.
Hopefully someone knows of a workaround or can supply us with a library.
Also, the 5x8 matrix is a technical limitation to do with how the LCD is made and controlled, completely different approach than graphical LCDs.
If you wanted to use a 6x10 matrix you'd have to make each char take up 2 charactors horizontally and 2 vertically. I guess it might be possible, but you'd end up with an 8x1 display.
Hope I've explained this clearly enough. The Cageybee
I just received my first arduino + 20 x 4 display 6 days ago, so don't shoot me, I'm just a beginner.
As a PC programmer I would normally define all characters in the setup part of my program, but I tried to define different characters in the loop part for my Arduino and... it works!
Just define all characters you want in several arrays and use those to recreate the characters you want while... you're in the loop. Every time you need a new character you'll have to recreate it. "Old" characters will be replaced by new ones, so... you'll have to recreate the old ones as well if you want to use those again. It's probably easiest to recreate every non-standard character each time you use it.
A small example which displays a smiley with and one without a nose using the same self defined character address . I've used the pins for the display (12, 11, 5, 4, 3, 2) as used in the arduino examples and you might need to change the setup part if you've got a different lcd-display.
#include <LiquidCrystal.h>
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
byte smiley[8] = {
B00000,
B10001,
B00000,
B00000,
B10001,
B01110,
B00000,
};
byte nose[8] = {
B00000,
B10001,
B00000,
B00100,
B10001,
B01110,
B00000,
};
void setup() {
lcd.begin(20, 4);
}
void loop() {
lcd.createChar(0, smiley);
lcd.write(0);
delay (1000);
lcd.createChar(0, nose);
lcd.write(0);
delay (1000);
}
I've connected the LCD as described on most other pages, which is a little different from what's described in this article.
One should replace my
LiquidCrystal lcd(12, 11, 5, 4, 3, 2); - line with
LiquidCrystal lcd(12, 11, 2, 7, 8, 9, 10); to use the setup of this instructable.
Next, it's possible to create more than 8 characters using my routine, but I guess there still is a limit. With 5x8 pixels for each character there are 2^(5*8) = 2^40 possible patterns, far too many to predefine.
Arduino will probably not be able to do that.
However, one should be able to create characters like the waveform characters of this instructable by converting the outcome of analog readings (they look analog to me) to self-defined characters; that won't use much memory.
Besides, most of those 1600 characters won't be very useful anyway.
icontexto.com/charactercreator/
www1.elfa.se/elfa~ex_en/b2b/catalogstart.do
Concept
At Elastic, we are constantly looking for ways to make it easy for new users to experience the magic of the Elastic Stack. How can we shorten the time from "I have heard about this Elasticsearch thing" to "Oh, drill downs in Kibana are so amazing"?
During the recent reorganization of our examples repo (contributions always welcome!!), we updated the legacy Docker examples to reflect the publication of our official images. And then we had a whacky idea. Can we provide examples for our new users to experience the full stack in a single command? One command. Uno.
Now, if you have been following the evolution of the Elastic Stack, you would know that recent releases have focused on simplifying the getting started experience. They build on the idea that simple things should be simple. Logstash modules and Beats modules are both steps in that direction, providing both the necessary ingest pipelines for parsing data, as well as supporting dashboards for common data sources. We wanted to simplify this even further for new users in the exploratory mode looking to simply "get a feel" for the capabilities of the stack. Remember, all down to one command.
Before continuing, keep in mind that this example is for exploration purposes ONLY and is not appropriate for production use or as a means of initiating a production architecture - it simply provides a quick and easy way for a user to experience a fully featured stack with little effort.
Technology
The above restrictions were relevant when deciding on the appropriate technology for this problem. Despite all the recent developments in orchestration tooling, we decided Docker Compose still represents the easiest way of formulating a full stack example targeted at a single machine. Compose is a tool for defining and orchestrating multiple Docker containers to form an application. The containers, and their respective interactions, are largely defined through YAML files. These YAML files can be executed by Compose with a single command, including the initialization and startup all of the containers defined. We assume the reader has basic knowledge of Docker before proceeding.
Architecture
On confirming the technology, we had to decide what specifically to include. Ideally, the Compose example would simply deploy a complete stack. On closer inspection this aim was a little unrealistic. Our Beats modules, especially Filebeat and Metricbeat, have grown rapidly, allowing a wide range of technologies to be monitored. For now, we have therefore focused on deploying a range of Beats modules only, whilst ensuring appropriate data sources are available and automatically ingested to populate any dashboards. We will, however, update the example on the release of Logstash modules in 5.6. We settled on the following architecture which captures and populates as much data as possible:
As illustrated above, as well as starting containers for Elasticsearch, Kibana and each of our Beats, we spin up instances of NGINX, Apache2 and MySQL. These provide interfaces for Metricbeat modules (apache, nginx, mysql) to monitor, as well as generating logs that can be consumed by equivalent modules in Filebeat (apache2, nginx, mysql). Furthermore, with some careful bind-mounting of local filesystem locations, Metricbeat can be used to monitor both the host's system statistics (via the system module) and the Docker containers themselves (via the docker module). Filebeat can additionally be used to collect and ingest the host's system logs using its equivalent of the system module, as well as the Docker JSON logs generated as a result of the containers sending their output to stdout. We use Packetbeat to collect and monitor any DNS, HTTP, or other layer-7 traffic occurring on the host, including MySQL transaction data. Although not illustrated above (to avoid a spider web of connecting lines), Heartbeat monitors all other containers via ICMP, performs health checks against Kibana, Elasticsearch, Apache2 and NGINX over HTTP, and against MySQL through a raw TCP socket connection.
All of the above provides a fairly comprehensive set of monitoring (and duplication for purposes of example), for an architecture you might deploy, whilst maximising the number of modules deployed and dashboards populated. A full list of the dashboards for which data will be available is listed here. Deploying more modules would unfortunately require a prohibitive number of containers for hosts with limited resources, but users with larger systems could easily add further functionality - see Customising and Contributing.
Usage
To use the example, simply download and extract the archive provided here (Linux/OSX) or here (Windows). This provides a Docker Compose file for each operating system, supporting configuration files for each of the Elastic Stack components and some small datasets for ingesting through Logstash. Ensure you have Docker installed locally. The example itself was tested on Docker version 17.07.0 which includes docker-compose by default on Windows and OSX.
A few port considerations
TCP ports 80 (NGINX), 8000 (Apache2), 5601 (Kibana), 9200 (Elasticsearch), 3306 (MySQL), 5000 and 6000 (both Logstash) are all mapped through to the host. Ensure these are available on the host and any existing services which might use them are stopped.
For those using Windows or OSX
For Linux, Docker uses native resource isolation features of the kernel such as cgroups, namespaces and a union-capable file system such as OverlayFS to provide Docker functionality. Docker for Windows (in Linux mode on Windows 10/2012) utilises a small Linux VM for the purposes of providing Docker functionality, powered by Hyper-V. OSX utilises a similar technique using HyperKit. Older implementations i.e. Docker Toolbox, utilised a Virtualbox VM. Supported versions are listed here.
The above has some important implications, specifically:
- The VM used by Windows and OSX will default to using only 2GB of memory. Given we assign 2GB to the Elasticsearch container alone, we strongly recommend increasing this to 4GB in the preferences. Further instructions for OSX here and Windows here.
- The Compose file provided relies on mounting several locations from the host operating system. These include:
- The "/private/var/logs" and "/var/logs" directories for OSX and Linux respectively, in order to access the system logs of the host rather than those of the Filebeat container. This is not supported on windows.
- "/proc" and "/sys/fs/cgroup" on Linux for the Metricbeat system module to report on the host memory, disk, network and CPU usage for the host machine rather than just the Metricbeat container. For OSX and Windows, this module will report the stats of the VM hosting Docker.
- "/var/run/docker.sock" to provide details of the Docker containers to the Metricbeat docker module. This should report the correct containers for all operating systems.
- The Packetbeat container binds itself to the host network in order to capture HTTP, DNS, ICMP and SQL traffic created by the user. For Windows and OSX, it appears the container only has visibility of network traffic within the host VM. Further investigation is underway, to see if this can be resolved, and contributions are welcome.
- For those using the older Docker implementation for Windows i.e. Docker Toolbox that utilises a Virtualbox VM, you will need to install the loopback adapter to allow communication with "localhost". Furthermore, you will need to configure port forwarding on the NAT interface for the Virtualbox VM. See here for additional details. Docker Toolbox for OSX has not been tested.
Important - In addition to the above, the Filebeat container needs access to the NGINX, Apache2 and MySQL logs. To achieve this, these containers mount a "./logs" volume to which their logs are written. Filebeat in turn also mounts this directory as read only. For OSX and Windows the user needs to ensure that this folder is bind-mounted and thus available to the Docker containers. Further instructions on achieving this - Windows, OSX. The appropriate OSX configuration panel is shown below.
Note: This step can be skipped if you extract the example into a subdirectory of "/Users" on OSX or "C:\Users" on Windows. These directories are bind mounted by default.
Deploying the Stack
Starting a terminal, navigate to the extracted folder full_stack_example. On Linux ensure all config files are owned by root:
chown -R root:root ./config
Simply run the following command, adjusting for your supported operating system - OSX, Windows or linux.
docker-compose -f docker-compose-<operating_system>.yml up
e.g.
docker-compose -f docker-compose-osx.yml up
For those not familiar with Docker, this command initiates deployment of the architecture described in the Compose file. In order to do this, it first needs to download the images for each container. Whilst we make an effort to minimise the size of these images for each stack component, they still require a base operating system (currently Centos 7), and hence this might be a good time to make a nice cup of tea.
To confirm the stack is fully deployed, issue the following command:
docker ps -a --format "{{.Names}}: {{.Status}}"
This should list the following containers:
- filebeat: Up 10 minutes
- packetbeat: Up 10 minutes
- heartbeat: Up 10 minutes
- metricbeat: Up 10 minutes
- logstash: Up 10 minutes
- configure_stack: Exited (0) 10 minutes ago
- kibana: Up 11 minutes (healthy)
- nginx: Up 11 minutes (healthy)
- mysql: Up 11 minutes (healthy)
- elasticsearch: Up 11 minutes (healthy)
- apache2: Up 11 minutes (healthy)
You may have noticed that the container "configure_stack" above has actually exited. This container, shown in the earlier diagram, is deliberately short-lived and is responsible for some configuration details - including setting a password for Elasticsearch, loading the Beats dashboards, and adding a default Kibana index pattern.
Further technical details and instructions can be found in our examples repository here.
Exploring the data
On completion of the deployment, navigate to the Kibana Dashboard view. For Docker for Windows you will need to use the url. The default credentials of "elastic" and "changeme" should apply unless these have been modified - see Customising and Contributing. The complete list of dashboards for which data is populated is significant - 22 out of 35 at the time of writing. Below we can see the "Metricbeat Docker" dashboard populated with the details of our containers.
Adding More data
The majority of these dashboards will simply populate due to inherent "noise" caused by the images. However, we do expose a few additional ports for interaction to allow unique generation. These include:
- MySQL - port 3306 is exposed allowing the user to connect. Any subsequent MySQL traffic will in turn be visible in the dashboards "Filebeat MySQL Dashboard", "Metricbeat MySQL" and "Packetbeat MySQL performance".
- NGINX - port 80. Currently we don't host any content in NGINX so requests will result in 404s. However, content can easily be added as described here.
- Apache2 - port 8000. Other than the default Apache2 "It works" pages the stack doesn't host any content. Again easily changed.
- Docker logs - Any activity to the Docker containers, including requests to Kibana, are logged. These logs are captured in JSON form and indexed into an index "docker-logs-<yyyy.mm.dd>".
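To give a feel for those captured log lines, Docker's json-file logging driver writes one JSON object per line; a minimal parse looks like this (the sample line below is illustrative, not taken from a real container):

```python
import json

# Illustrative sample of a line written by Docker's json-file logging
# driver under /var/lib/docker/containers/<id>/<id>-json.log.
line = '{"log":"GET /status 200\\n","stream":"stdout","time":"2017-01-01T12:00:00.000000000Z"}'

# Each line is a self-contained JSON object with the message, the stream
# it came from, and a nanosecond-precision timestamp.
entry = json.loads(line)
print(entry["stream"], entry["log"].strip())
```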
Customising & Contributing
Firstly, we welcome contributions! Obvious customisations might be to add containers for other products often used in conjunction with the Elastic Stack, for which modules exist, e.g. Kafka or Redis. As discussed earlier, we balanced the containers started against what could realistically be hosted on a single machine. As we release further modules for Beats, as well as Logstash, we will continue to enrich and maintain this example where possible.
Further details on customising this architecture, such as the Elasticsearch password, version or memory size, can be found here.
For new users to the Elastic Stack, hopefully the above has simplified your experience from hearing about Elasticsearch to getting started. Using Docker Compose we have shown how a full stack can be deployed in a single command, with data from Beats modules used to populate a wide range of rich and interactive dashboards.
In writing this blog, I would like to mention a special thanks to Toby McLaughlin and Dimitrios Liappis for the initial inspiration and guidance, as well as Jamie Smith for converting grey boxes into usable diagrams. Finally, thanks to Rathin Sawhney for acting as a Windows guinea pig.
1 The Docker logs are collected by the Filebeat container mounting the host directory
/var/lib/docker/containers. These JSON files are in turn collected and processed by a custom ingest pipeline.
Source: https://www.elastic.co/es/blog/a-full-stack-in-one-command
H2O AutoML in Python Comprehensive Tutorial
What is AutoML and Why AutoML?
- AutoML automates methods for model selection, hyperparameter tuning, and model ensembling. It does not help with feature engineering.
- AutoML works best for common cases including tabular data (66% of data used at work is tabular), time series, and text data. It does not work as well in deep learning, because deep learning requires massive computation and a proper layer architecture, which do not fit the hyperparameter-tuning part of AutoML.
- AutoML can simplify machine learning coding and thus reduce labor costs. If you are using common models such as GBM, Random Forest, or GLM on a simple dataset, AutoML is a great choice.
There are several popular platforms for AutoML including Auto-SKLearn, MLbox, TPOT, H2O, Auto-Keras. I will focus on H2O today. H2O AutoML is built in Java and can be applied to Python, R, Java, Hadoop, Spark, and even AWS. If you want to know more about other tools, check out this article.
How to use H2O in Python
I am going to use the classic dataset Titanic as an example here.
h2o installation: click here.
Notes: To run H2O you need to have a JDK, because H2O is based on Java. Currently, only Java 8–13 are supported.
Result: 13 useful lines lead to an AUC of 84.5%
------------------------Tutorial Starts Here------------------------
Initialize H2O & Import Data
h2o.init(max_mem_size='8G')
Initialization of H2O, in which you can set up maximum/minimum memory, set up the IP and Port. If you use H2O as a company, there are a lot more parameters to check out here. You will get a result similar to:
It is really useful to open the H2O connection URL to visualize the entire automation process. On that interface, you can select a model, check the training log, and run predictions without coding.
from h2o.automl import H2OAutoML
train = h2o.import_file("train.csv")
test = h2o.import_file("test.csv")
After setting up H2O, we read the data in. The train and test here are called "H2OFrame", which is very similar to a DataFrame. H2O is Java-based, so you will see the "enum" type, which represents categorical data in Python. Functions like "describe" are provided, and there are other parameters and functions including the "asfactor" we are going to use. Check them all here.
x = train.columns
y = "Survived"
train[y] = train[y].asfactor()
x.remove(y)
These four lines specify the features and the target. We use "asfactor()" because H2O reads the "Survived" column as "int", when it should instead be an "object".
H2O does not do feature engineering for you. If you want a better result, I suggest you use Python classic methods to do feature engineering instead of the basic manipulations provided by H2O.
Model Training
%%time
aml = H2OAutoML(max_models=20, max_runtime_secs=12000)
aml.train(x=x, y=y, training_frame=train)
Training Customization
nfolds=5, balance_classes=False, class_sampling_factors=None, max_after_balance_size=5.0, max_runtime_secs=None, max_runtime_secs_per_model=None, max_models=None, stopping_metric='AUTO', stopping_tolerance=None, stopping_rounds=3, seed=None, project_name=None, exclude_algos=None, include_algos=None, exploitation_ratio=0, modeling_plan=None, preprocessing=None, monotone_constraints=None, keep_cross_validation_predictions=False, keep_cross_validation_models=False, keep_cross_validation_fold_assignment=False, sort_metric='AUTO'
Massive parameters are provided for you to customize training. The most common ones are nfolds for cross-validation; balance_classes for imbalanced data (set it to True to apply resampling); max_runtime_secs; exclude_algos; and sort_metric.
Check the leaderboard
lb = aml.leaderboard
lb.head(rows=15)
In the leaderboard, you can check model performance by AUC, logloss, mean_per_class_error, RMSE, and MSE. You can set up the rank in the training process by specifying sort_metric.
aml.leader #Best model
This is one of the most important features provided by H2O AutoML. You can get the best model parameters, Confusion Matrix, Gain/Lift Table, Scoring History, and Variable Importance by this single line of code.
If your leader is an ensemble model:
metalearner = h2o.get_model(aml.leader.metalearner()['name'])
You can check the variable importance by:
aml.leader.varimp()
model = h2o.get_model("XRT_1_AutoML_20201030_001219")
model.varimp_plot(num_of_features=8)
To predict and get the result:
pred = aml.predict(test)
pred = pred[0].as_data_frame().values.flatten()
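For the Titanic competition specifically, the flattened predictions can then be written out as a submission file with just the standard library; a sketch (the passenger ids and predicted values below are made up for illustration):

```python
import csv

# Hypothetical ids from the test set, paired with flattened predictions
# like the ones produced above.
passenger_ids = [892, 893, 894]
pred = [0, 1, 0]

# Write a Kaggle-style two-column submission file.
with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["PassengerId", "Survived"])
    writer.writerows(zip(passenger_ids, pred))
```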
It is just that simple and convenient. For the best performance, you will have to tune more parameters. Thank you for reading, and if you like my article, please give it a thumbs-up.
Source: https://seanzhang-data.medium.com/h2o-automl-in-python-comprehensive-tutorial-f25001c11b80
How to get IP address of a URL in Python
In this article, we show you how to get the IP address of a URL or website in Python. In order to find the IP address of a URL or website, we can use the socket module that is available in Python. This tutorial will help you to learn about something new and useful.
Python program to get the IP address of a URL
First of all, you have to know about IP addresses and URLs. An Internet Protocol (IP) address is a unique string of numbers that identifies each computer using the Internet Protocol to communicate over a network. URL stands for Uniform Resource Locator. A URL is the address of a resource on the internet. Let's look at the following example.
import socket

url = "www.example.com"  # put the hostname you want to look up here
print("IP:", socket.gethostbyname(url))
In this example, we have imported the socket module to get the IP address of a URL in Python. A URL is assigned to a variable, and this variable acts as the argument for the gethostbyname() function. The gethostbyname() function takes a hostname as an argument and returns the IP address of that host.
- Syntax: gethostbyname(hostname)
- hostname- The name of the host system for which IP address resolution is needed.
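Note that gethostbyname() raises socket.gaierror when a name cannot be resolved, so it is worth wrapping the call; a small sketch (the resolve helper is our own name, and "localhost" is used so the example works without internet access):

```python
import socket

def resolve(hostname):
    """Return the IPv4 address for hostname, or None if resolution fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

print(resolve("localhost"))              # typically 127.0.0.1
print(resolve("no-such-host.invalid"))   # usually None (.invalid never resolves)
```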
There are two versions of IP that currently coexist on the global internet: IP version 4 (IPv4) and IP version 6 (IPv6). IPv4 addresses are 32 bits long and IPv6 addresses are 128 bits long. In this program, the returned IP address is an IPv4 address. Let us look at the sample input and its corresponding output.
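To see both address families for a host, socket.getaddrinfo() can be used instead of gethostbyname(); a short sketch (again using "localhost" so it works offline):

```python
import socket

# getaddrinfo returns one tuple per (family, socket type) combination;
# the resolved address is the first element of the sockaddr tuple.
for family, _, _, _, sockaddr in socket.getaddrinfo("localhost", None):
    if family == socket.AF_INET:
        print("IPv4:", sockaddr[0])
    elif family == socket.AF_INET6:
        print("IPv6:", sockaddr[0])
```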
Output:
IP:104.27.178.5
I hope that you have learned something useful from this tutorial.
Source: https://www.codespeedy.com/get-ip-address-of-a-url-in-python/
The basic class to represent a genetic sequence is called Sequence (quelle surprise). It stores a label, i.e., a name for the sequence, and its sites. The Sequence class itself is agnostic of the format/encoding of its content, that is, whether it stores nucleotide or amino acid or any other form of data. This offers flexibility when working with sequence data. There are however some functions that are specialized for e.g., nucleotide sequences; that is, they work with sequences of the characters
ACGT-.
A sequence comes rarely alone. A collection of sequences is stored in a SequenceSet. The sequences in such a set do not have to have the same length, i.e., it is not an alignment.
The code examples in this tutorial assume that you use
at the beginning of your code.
Reading is done using reader classes like FastaReader and PhylipReader. See there for details. Basic usage:
Writing is done the other way round, using e.g., FastaWriter or PhylipWriter:
All the readers and writers can also be normally stored in a variable, for example in order to change their settings and then use them multiple times:
Lastly, conversion between different sequence file formats is of course easily done by reading it in one format, and writing it in another.
Access to the sites of a Sequence is given via its member function sites().
It is also possible to directly iterate the Sequences in a SequenceSet and the single sites of a Sequence:
As printing a Sequence or a whole SequenceSet is common in order to inspect the sites, we offer a class PrinterSimple that does this more easily and with various settings:
It also offers printing using colors for the different sites (i.e., color each of the nucleotides differently). See the class description of PrinterSimple for details.
Furthermore, when dealing with many sequences, printing each character might be too much. For such large datasets, we offer PrinterBitmap, which prints the sites as pixels in a bitmap, each Sequence on a separate line, thus offering a denser representation of the data:
Often, it is desired to summarize a collection of Sequences into a consensus sequence. For this, Genesis offers a couple of different algorithms:
- One for nucleotide sequences (ACGT) that uses a threshold for the character frequency to determine the consensus at each site.
- One for nucleotide sequences (ACGT) that uses a similarity_factor to calculate a consensus with ambiguity characters.
- One for nucleotide sequences (ACGT) that uses the method by Cavener, 1987.
See the documentation of those functions (and their variants) for details.
Related to the calculation of consensus sequences is the calculation of the entropy of a collection of Sequences. The entropy is a measure of information contained in the sites of such a collection.
We offer two modes of calculating the Sequence entropy:
as well as the single-site functions site_entropy() and site_information().
Instead of a SequenceSet, they take a SiteCounts object as input, which is a summarization of the occurrence frequencies of the sites in a SequenceSet. See there for details.
Finally, we want to point out some other interesting functions:
There are more classes and functions to work with Sequences, see namespace sequence for the full list.
Source: http://doc.genesis-lib.org/tutorials_sequence.html
Ruby on Rails: Using Capistrano to deploy application and run custom task
Sometimes we need to deploy our Rails application to different stages (staging or production environments), which are probably remote machines, and we may need to deploy frequently: each time the application has a new version, we need to update the code in every stage environment. To save ourselves from manually repeating this work for every deployment, we can use Capistrano.
What is Capistrano?
Capistrano is a framework written in Ruby that provides automated deploy scripts. It supports the scripting and execution of arbitrary tasks, and includes a set of sane-default deployment workflows. Capistrano can be used to deploy web applications to any number of machines simultaneously, in sequence, or as a rolling set.
How to use it?
Put Capistrano in Gemfile
group :development do
gem "capistrano", "~> 3.10", require: false
gem "capistrano-rails", "~> 1.6", require: false
end
Install the gem and run the generator to create a basic set of configuration files
bundle install
bundle exec cap install
cap install will create the files and folders below
$project_root/
|
|--Capfile
|
|--config/
| |
| |--deploy.rb
| |
| |--deploy/
| |
| |--production.rb
| |--staging.rb
|
|--lib/
|
|--capistrano/
|
|--tasks/
Capfile is the document used for importing the third-party libraries used in deployment. Those third-party libraries already define many tasks, like db migration and sidekiq setup. We can simply integrate those tasks into our deployment by requiring the libraries here.
config/deploy.rb is the main configuration document for deployment. All the tasks we would like to run during deployment can be defined in this document.
config/deploy/*.rb are the stage configurations. Mostly they include the server IP and user login information used for each stage environment.
lib/capistrano/tasks is a folder where we place the script files of our custom tasks. A task script file should be a rake file.
Getting start
First of all, make sure you can SSH from the development system to the deployment system, because Capistrano uses SSH to deploy. Then, you can run the command line
bundle exec cap install to generate the files and folders we need for deployment. There are some things we need to change in the files below.
- modify
Capfile
require 'capistrano/setup'

# Include default deployment tasks
require 'capistrano/deploy'
require 'capistrano/scm/git'
install_plugin Capistrano::SCM::Git
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/puma'
install_plugin Capistrano::Puma
install_plugin Capistrano::Puma::Workers
require 'capistrano/sidekiq'

# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
- modify
config/deploy.rb
set :application, 'my_app' # my_app is your project name
set :repo_url, '<repo url>' # your project remote repository url
# it could be github project url
ask :branch, :master # Default deploy branch is :master
set :deploy_to, '/home/my_app' # deploy project to /home/my_app
- modify
config/deploy/staging.rb or
config/deploy/production.rb and set the IP address or the domain name of the machine where your code will be deployed, as well as the role of the machine. Capistrano uses the concept of a role to control which tasks are run on which servers. For example, you may want to apply a task only to a database server and skip it when deploying to a web server. The role acts as a filter that tells Capistrano to group tasks with the same role together before executing them on a server matching a specific role.
server '10.0.3.193', user: 'deployer', roles: %w[db web]
server '10.0.3.22', user: 'deployer', roles: %w[app]

set :default_env, path: '/usr/local/ruby-2.5.3/bin:$PATH'

set :puma_workers, 2
set :sidekiq_processes, 2
There are three common roles used in Capistrano tasks:
app role is used for tasks that run on an application server, a server that generates dynamic content. In Rails this is the
puma server. If using capistrano-sidekiq, this role is the default role for Capistrano to run the sidekiq server.
db role is used for tasks that interact with the database server. For example, the
deploy:migrate task for migrating the Rails database schema.
web role is used for tasks that deal with web servers that serve static content
We could also set a custom role for a specific machine. For example, if we would like to run a server only for Redis, we could set a role called redis_server for that machine, and run a custom task to start Redis while deploying to that specific machine. We will discuss how to run custom tasks in deployment later.
- Start the deployment using the Capistrano command line. After finishing the modification of the above configuration, we can finally start the deployment. Here are some command lines used to start it. Usually we would use
bundle exec cap 'environment' deploy to deploy our project to a specific stage environment.
# list all available tasks
$ bundle exec cap -T
# deploy to the staging environment
$ bundle exec cap staging deploy
# deploy to the production environment
$ bundle exec cap production deploy
The deploy log would look like
After the deployment, the structure in your destination server would look like below
├── current -> /home/my_app_name/releases/20210511115200/
├── releases
│ ├── 20210509115200
│ └── 20210511115200
├── repo
│ └── <VCS related data>
├── revisions.log
└── shared
└── <linked_files and linked_dirs>
current is a symlink pointing to the latest release.
releases holds all deployments in a timestamped folder. These folders are the target of the current symlink.
repo holds the version control system configured. In case of a git repository the content will be a raw git repository (e.g. objects, refs, etc.).
revisions.log is used to log every deploy or rollback. Each entry is timestamped and includes the executing user.
shared contains the linked_files and linked_dirs which are symlinked into each release. This data persists across deployments and releases. It should be used for things like database configuration files and static and persistent user storage handed over from one release to the next.
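The heart of this layout is the current symlink swap; it can be sketched with plain shell commands (illustrative relative paths, not an actual Capistrano invocation):

```shell
# Create a new timestamped release folder, as Capistrano does on each deploy.
mkdir -p demo_app/releases/20210512120000

# Repoint "current" at the newest release. -f replaces an existing link,
# -n avoids descending into the old symlink as a directory.
ln -sfn releases/20210512120000 demo_app/current

readlink demo_app/current   # -> releases/20210512120000
```

Because switching releases is a single symlink update, a rollback simply points current back at the previous release folder.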
Use custom role and run custom task
We could define a custom task file in
lib/capistrano/tasks and run it in deployment. For example, if we want to print
Hello World in deployment, we could create a task to print
Hello World and call it during deployment. The task script would look like the script below, and the
on roles method can be used in the script to instruct Capistrano to run the task only when logged in to a specific server.
namespace :greetings do
  task :hello do
    on roles(:app) do
      execute "echo Hello World"
    end
  end
end
We could print
Hello World before or after the deployment. If the print Hello World task needs to run after deployment, the code below should be put in
config/deploy.rb
after :deploy, 'greetings:hello'
After running
bundle exec cap 'environment' deploy, we would see Capistrano executing the task and showing Hello World in the deploy log. We would also see that
Hello World was printed only when Capistrano logged in to server 10.0.3.22
If you want
greetings.rake to run before deployment, then the code below should be put in
config/deploy.rb
before :deploy, 'greetings:hello'
The deploy log prints
Hello World at the start of the deployment
Besides setting the default roles (app, web, db) on specific machines, we could define a custom role for running a custom task. For example, if we need one of the staging servers to run only sidekiq, we could set a sidekiq_server role for that specific machine, so that sidekiq is started when Capistrano logs in to the machine during deployment. The custom role should be set in
config/deploy/staging.rb
server '10.0.3.193', user: 'deployer', roles: %w[db web]
server '10.0.3.22', user: 'deployer', roles: %w[app]
server '10.0.3.20', user: 'deployer', roles: %w[sidekiq_server]

set :default_env, path: '/usr/local/ruby-2.5.3/bin:$PATH'

set :puma_workers, 2
set :sidekiq_processes, 2
The custom task script should be set as below. The script indicates that sidekiq should be run when the role of the machine Capistrano logs in to is sidekiq_server.
namespace :sidekiq_server do
task :start do
on roles(:sidekiq_server) do
within current_path do
execute :bundle, "exec sidekiq --environment staging --logfile /home/my_app/shared/log/sidekiq.log --daemon"
end
end
end
end
If we want the task run after deployment, we could put below code in
config/deploy.rb
after :deploy, 'sidekiq_server:start'
Finally, we run
bundle exec cap staging deploy, and we can see the task run only after logging in to the sidekiq_server machine (10.0.3.20)
When we open the sidekiq web page, we can see the machine 10.0.3.20 running the sidekiq queue.
Reference
A remote server automation and deployment tool written in Ruby.
Capistrano extends the Rake DSL with methods specific to running commands on() servers. Capistrano is written in Ruby…
capistranorb.com
capistrano/rails
Rails specific tasks for Capistrano v3: Add these Capistrano gems to your application's Gemfile using require: false…
github.com
Working with Capistrano: Tasks, Roles, and Variables
Capistrano is a great tool for automating the deployment and maintenance of web applications. I use it to deploy Ruby…
piotrmurach.com
Capistrano 3: Passing Parameters to Your Task
The new way of passing parameters in Capistrano v3 is to use the same solution as Rake (in some sort Capistrano 3 is…
jtway.co
Ruby on Rails custom Capistrano tasks
Writing and using custom Capistrano tasks is very easy and automating your deployment process may save you a lot of…
Source: https://icelandcheng.medium.com/ruby-on-rails-using-capistrano-to-deploy-application-and-run-custom-task-e32ef93949be
alabaster 0.5.0
A configurable sidebar-enabled Sphinx theme
This theme is a modified "Kr" Sphinx theme from @kennethreitz (especially as
used in his [Requests]() project), which was itself
originally based on @mitsuhiko's theme used for
[Flask]() & related projects.
A live example of what this theme looks like can be seen on e.g.
[paramiko.org]().
Features (compared to Kenneth's original theme):
* Easy ability to install/use as a Python package (tip o' the hat to [Dave &
Eric's sphinx_rtd_theme]() for
showing the way);
* Style tweaks, such as better code-block alignment, Gittip and Github button
placement, page source link moved to footer, etc;
* Additional customization hooks, such as header/link/etc colors;
* Improved documentation for all customizations (pre-existing & new).
To use:
1. `pip install alabaster` (or equivalent command)
1. Enable the 'alabaster' theme + mini-extension in your `conf.py`:
```python
import alabaster
html_theme_path = [alabaster.get_path()]
extensions = ['alabaster']
html_theme = 'alabaster'
html_sidebars = {
'**': [
'about.html', 'navigation.html', 'searchbox.html', 'donate.html',
]
}
```
* Modify the call to `abspath` if your `_themes` folder doesn't live right
next to your `conf.py`.
* Feel free to adjust `html_sidebars` as desired - the theme is designed
assuming you'll have `about.html` activated, but otherwise it doesn't care
much.
* See [the Sphinx
docs]() for
details on how this setting behaves.
* Alabaster provides `about.html` (logo, github button + blurb),
`donate.html` (Gittip blurb/button) and `navigation.html` (a more
flexible version of the builtin `localtoc`/`globaltoc` templates); the
others listed come from Sphinx itself.
1. If you're using either of the image-related options outlined below (logo or
touch-icon), you'll also want to tell Sphinx where to get your images from.
If so, add a line like this (changing the path if necessary; see [the Sphinx
docs]()):
```python
html_static_path = ['_static']
```
1. Add one more section to `conf.py` setting one or more theme options, like in
this example (*note*: snippet doesn't include all possible options, see
following list!):
```python
html_theme_options = {
'logo': 'logo.png',
'github_user': 'bitprophet',
'github_repo': 'alabaster',
}
```
The available theme options (which are all optional) are as follows:
**Variables and feature toggles**
* `logo`: Relative path (from `$PROJECT/_static/`) to a logo image, which
will appear in the upper left corner above the name of the project.
* If `logo` is not set, your `project` name setting (from the top level
Sphinx config) will be used in a text header instead. This preserves a
link back to your homepage from inner doc pages.
* `logo_name`: Set to `true` to insert your site's `project` name under the
logo image as text. Useful if your logo doesn't include the project name
itself. Defaults to `false`.
* `logo_text_align`: Which CSS `text-align` value to use for logo text (if
there is any.)
* `description`: Text blurb about your project, to appear under the logo.
* `github_user`, `github_repo`: Used by `github_button` and `github_banner`
(see below); does nothing if both of those are set to `false`.
* `github_button`: `true` or `false` (default: `true`) - whether to link to
your Github.
* If `true`, requires that you set `github_user` and `github_repo`.
* See also these other related options, which behave as described
in [Github Buttons' README]():
* `github_button_type`: Defaults to `watch`.
* `github_button_count`: Defaults to `true`.
* `github_banner`: `true` or `false` (default: `false`) - whether to apply a
'Fork me on Github' banner in the top right corner of the page.
* If `true`, requires that you set `github_user` and `github_repo`.
* `travis_button`: `true`, `false` or a Github-style `"account/repo"`
string - used to display a Travis-CI build status button in the sidebar. If
`true`, uses your `github_(user|repo)` settings; defaults to `false`.
* `gittip_user`: Set to your [Gittip]() username if you
want a Gittip 'Donate' section in your sidebar.
* `analytics_id`: Set to your [Google
Analytics]() ID (e.g. `UA-#######-##`) to
enable tracking.
* `touch_icon`: Path to an image (as with `logo`, relative to
`$PROJECT/_static/`) to be used for an iOS application icon, for when pages
are saved to an iOS device's home screen via Safari.
* `extra_nav_links`: Dictionary mapping link names to link targets; these
will be added in a UL below the main sidebar navigation (provided you've
enabled `navigation.html`.) Useful for static links outside your Sphinx
doctree.
* `sidebar_includehidden`: Boolean determining whether the TOC sidebar
should include hidden Sphinx toctree elements. Defaults to `true` so you can
use `:hidden:` in your index page's root toctree & avoid having 2x copies of
your navigation on your landing page.
**Style colors**
These should be fully qualified CSS color specifiers such as `#004B6B` or
`#444`. The first few items in the list are "global" colors used as defaults
for many of the others; update these to make sweeping changes to the
colorscheme. The more granular settings can be used to override as needed.
* `gray_1`: Dark gray.
* `gray_2`: Light gray.
* `gray_3`: Medium gray.
* `body_text`: Main content text.
* `footer_text`: Footer text (includes links.)
* `link`: Non-hovered body links.
* `link_hover`: Body links, hovered.
* `sidebar_header`: Sidebar headers. Defaults to `gray_1`.
* `sidebar_text`: Sidebar paragraph text.
* `sidebar_link`: Sidebar links (there is no hover variant.) Applies to both
header & text links. Defaults to `gray_1`.
* `sidebar_link_underscore`: Sidebar links' underline (technically a
bottom-border.)
* `sidebar_search_button`: Background color of the search field's 'Go'
button.
* `sidebar_list`: Foreground color of sidebar list bullets & unlinked text.
* `sidebar_hr`: Color of sidebar horizontal rule dividers. Defaults to
`gray_3`.
* `anchor`: Foreground color of section anchor links (the 'paragraph' symbol
that shows up when you mouseover page section headers.)
* `anchor_hover_fg`: Foreground color of section anchor links (as above)
when moused over. Defaults to `gray_1`.
* `anchor_hover_bg`: Background color of above.
* `note_bg`: Background of `.. note::` blocks. Defaults to `gray_2`.
* `note_border`: Border of same.
* `footnote_bg`: Background of footnote blocks.
* `footnote_border`: Border of same. Defaults to `gray_2`.
* `pre_bg`: Background of preformatted text blocks (including code
snippets.) Defaults to `gray_2`.
* `narrow_sidebar_bg`: Background of 'sidebar' when narrow window forces it
to the bottom of the page.
* `narrow_sidebar_fg`: Text color of same.
* `narrow_sidebar_link`: Link color of same. Defaults to `gray_3`.
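As an example of the fallback behaviour described above, a `conf.py` might set the global grays once and override only a few granular colors (the color values here are arbitrary choices, not theme defaults):

```python
# Global colors -- many granular settings fall back to these.
html_theme_options = {
    'gray_1': '#444',
    'gray_2': '#EEE',
    'gray_3': '#AAA',
    # Granular settings override the globals where set.
    'link': '#004B6B',
    'link_hover': '#6D4100',
    'pre_bg': '#F8F8F8',
}
```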
## Additional info / background
* [Fabric #419]() contains a lot of
general exposition & thoughts as I developed Alabaster, specifically with a
mind towards using it on two nearly identical 'sister' sites (single-version
www 'info' site & versioned API docs site).
* Alabaster includes/requires a tiny Sphinx extension on top of the theme
itself; this is just so we can inject dynamic metadata (like Alabaster's own
version number) into template contexts. It doesn't add any additional
directives or the like, at least not yet.
- Author: Jeff Forcier
- Categories
- Package Index Owner: bitprophet
- DOAP record: alabaster-0.5.0.xml
Source: https://pypi.python.org/pypi/alabaster/0.5.0