The QCompleter class provides completions based on an item model. More...
#include <QCompleter>
Inherits QObject.
This class was introduced in Qt 4.2.
See also QAbstractItemModel, QLineEdit, QComboBox, and Completer Example.
Returns the completion model. The completion model is a read-only list model that contains all the possible matches for the current completion prefix. The completion model is auto-updated to reflect the current completions.
See also completionPrefix and model().
Returns the path for the given index. The completer object uses this to obtain the completion text from the underlying model.
The default implementation returns the edit role of the item for list models. It returns the absolute file path if the model is a QDirModel.
See also splitPath().
Returns the popup used to display completions.
See also setPopup().
Sets the current row to the row specified. Returns true if successful; otherwise returns false.
This function may be used along with currentCompletion() to iterate through all the possible completions.
See also currentRow(), currentCompletion(), and completionCount().
Sets the model which provides completions to model. The model can be a list model or a tree model. If a model has already been set and it has the QCompleter as its parent, it is deleted.
For convenience, if model is a QDirModel, QCompleter switches its caseSensitivity to Qt::CaseInsensitive on Windows and Qt::CaseSensitive on other platforms.
See also completionModel(), modelSorting, and Handling Tree Models.
See also popup().
Splits the given path into strings that are used to match at each level in the model().
The default implementation of splitPath() splits a file system path based on QDir::separator() when the sourceModel() is a QDirModel.
When used with list models, the first item in the returned list is used for matching.
See also pathFromIndex() and Handling Tree Models.
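The default splitting behaviour is easy to picture outside Qt. The sketch below is plain C++ (not Qt code) that mimics what a '/'-separated splitPath() produces: one string per level of the model. A real reimplementation would override QCompleter::splitPath() and return a QStringList instead:

```cpp
#include <string>
#include <sstream>
#include <vector>

// Illustrative stand-in for QCompleter::splitPath(): split a path into
// one matching string per level of the model.
std::vector<std::string> splitPath(const std::string &path) {
    std::vector<std::string> parts;
    std::istringstream stream(path);
    std::string part;
    while (std::getline(stream, part, '/')) {
        if (!part.empty())
            parts.push_back(part);
    }
    return parts;
}
```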
Returns the widget for which the completer object is providing completions.
See also setWidget().
Promises
Return a promise from your test, and Jest will wait for that promise to resolve. If the promise is rejected, the test will fail.
For example, let's say that fetchData returns a promise that is supposed to resolve to the string 'peanut butter'. We could test it with:
test('the data is peanut butter', () => {
  return fetchData().then(data => {
    expect(data).toBe('peanut butter');
  });
});
Async/Await
Alternatively, you can use async and await in your tests. To write an async test, use the async keyword in front of the function passed to test. For example, the same fetchData scenario can be tested with:
test('the data is peanut butter', async () => {
  const data = await fetchData();
  expect(data).toBe('peanut butter');
});
test('the fetch fails with an error', async () => {
  expect.assertions(1);
  try {
    await fetchData();
  } catch (e) {
    expect(e).toMatch('error');
  }
});
In these cases, async and await are effectively syntactic sugar for the same logic as the promises example uses.
caution
Be sure to return (or await) the promise - if you omit the return/await statement, your test will complete before the promise returned from fetchData resolves or rejects.
If you expect a promise to be rejected, use the .catch method. Make sure to add expect.assertions to verify that a certain number of assertions are called. Otherwise, a fulfilled promise would not fail the test:
test('the fetch fails with an error', () => {
  expect.assertions(1);
  return fetchData().catch(e => expect(e).toMatch('error'));
});
Callbacks
If you don't use promises, you can use callbacks. For example, let's say that fetchData, instead of returning a promise, expects a callback, i.e. fetches some data and calls callback(null, data) when it is complete. You want to test that this returned data is the string 'peanut butter'.
By default, Jest tests complete once they reach the end of their execution. That means this test will not work as intended:
// Don't do this!
test('the data is peanut butter', () => {
  function callback(error, data) {
    if (error) {
      throw error;
    }
    expect(data).toBe('peanut butter');
  }
  fetchData(callback);
});
The problem is that the test will complete as soon as fetchData completes, before ever calling the callback.
Instead, use a single argument called done. Jest will wait until the done callback is called before finishing the test:
test('the data is peanut butter', done => {
  function callback(error, data) {
    if (error) {
      done(error);
      return;
    }
    try {
      expect(data).toBe('peanut butter');
      done();
    } catch (error) {
      done(error);
    }
  }
  fetchData(callback);
});
If done() is never called, the test will fail (with timeout error), which is what you want to happen.
If the expect statement fails, it throws an error and done() is not called. If we want to see in the test log why it failed, we have to wrap expect in a try block and pass the error in the catch block to done. Otherwise, we end up with an opaque timeout error that doesn't show what value was received by expect(data).
Note: done() should not be mixed with Promises as this tends to lead to memory leaks in your tests.
Resolves / Rejects
You can also use the .resolves matcher in your expect statement, and Jest will wait for that promise to resolve:
test('the data is peanut butter', () => {
  return expect(fetchData()).resolves.toBe('peanut butter');
});
If you expect a promise to be rejected, use the .rejects matcher:
test('the fetch fails with an error', () => {
  return expect(fetchData()).rejects.toMatch('error');
});
None of these forms is particularly superior to the others, and you can mix and match them across a codebase or even in a single file. It just depends on which style you feel makes your tests simpler.
This document describes what actions you can perform to identify and
diagnose problems with your Zope site, particularly, how to recognize
when a crash has occurred and how to report it.
This document is for all Zope administrators; people who have installed
and run Zope on their servers.
Matthew T. Kromer (matt@zope.com)
April 1, 2002
Zope can crash for a number of reasons. Most of them are very esoteric;
the average Zope administrator will be very frustrated trying to do
blind diagnosis on Zope.
Generally, crashes occur when Zope or Python attempt to perform a
machine instruction which is not legitimate for the current state
of the processor. This includes attempting to dereference a NULL
pointer, or overwriting memory in storage.
Most user code cannot directly cause a crash, but it can trigger
bugs in the underlying implementation.
Zope is usually two processes working in combination. One process is
a controller, and is responsible for restarting the other process,
which is where the Zope application work is performed. If the normal
work process crashes, it should be automatically restarted by the
controlling process.
On the control panel, there are several pieces of important diagnostic
information. These are:
In particular, observing the Running For value will identify the
"uptime" of the current Zope process. If this time is much less than
what an administrator knows it should be, then a stability issue is
causing Zope to restart.
There were three recent causes of crashes in the Zope 2.4 and 2.5
series of code, all of which have been addressed (at the time of
this writing) by Zope 2.5.1b1 and Python 2.1.2. Those causes were:
Each of these problems is currently resolved to the best of our
knowledge.
Python 2.1.2 contains all known fixes to the Python run-time system,
and also contains checks to identify known problems with the Python
compiler package which were present in earlier versions of Python.
Zope 2.5.1b1 contains all fixes to Zope, including an updated Zope
compiler package, and a fix to the security machinery. Zope 2.5+
should be run with Python 2.1.2.
Zope 2.4.4b1 contains all backports of known fixes. Zope 2.4.4+
should be run with Python 2.1.2.
Other problems exist, often related to specific systems. Sometimes,
there is a workaround, sometimes there is not. For example, the
following list of conditions are known problems on some systems:
nohup
Usually in a crash, the problem is caused by a C module operating on
erroneous data. Most of the time, this means doing things like
releasing memory, then continuing to access it.
Any extra modules or components loaded into Zope which have compiled
components COULD be suspect when there is a crash. Often, these
include database adapters, or other special purpose modules.
Occasionally, normal Python can cause some recursion errors which
consume all available memory. This is highly unusual, but it can
happen.
There are a number of things to try when Zope is crashing to see if
the crashes can be contained, or parameterized to assist diagnosis
and corrective action. These things are:
For example, you can disable the Python garbage collector from a
Script (Python) containing:
import gc
gc.disable()
Enabling the following will allow you to capture supplemental
information about what Zope was doing when it crashed:
Sometimes, the easy workarounds don't fix the problem. Instead, it
becomes necessary to attach the debugger to Zope to find out where
Zope is crashing. Under unix systems, gdb is often installed and
available to perform diagnosis.
To attach gdb to a running Zope instance, first start Zope with the
parameters -t 1 -Z '' to run in single threaded mode without
running a separate monitor process. Obtain the process ID via ps
or by looking in the log file set by STUPID_LOG_FILE to see the
process ID reported.
Attaching gdb is a matter of issuing gdb python processid and
then hitting RETURN until you get a gdb prompt. Type c and press
RETURN to allow Zope to resume execution.
Use Zope until a crash occurs. When this happens, gdb will return to
the prompt and will identify where Zope process is at the point the
failure occurred. Use the "w" command to find out "where" the program
was at the time of failure.
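Put together, such a session might look like the transcript below. The
start command, process id and output are illustrative and will differ
per installation:

```text
$ python z2.py -t 1 -Z ''         # start Zope single-threaded, no monitor
$ ps ax | grep python             # note the process id, e.g. 12345
$ gdb python 12345
(gdb) c                           # continue; let Zope run until it crashes
Program received signal SIGSEGV, Segmentation fault.
(gdb) w                           # "where": show the stack at the crash
```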
Report to Zope Corporation via the Zope Collector the output of your
gdb session (copy and paste it to a file).
If Zope Corporation lacks knowledge about a problem, it will not be
able to provide remedies in a timely manner. To that end, the Zope
Collector exists to collect problem data.
When issuing a report to the collector, it is useful to use an editor
to compose a problem report. This problem report should contain:
We may ask you to include portions of your system log or the "big M
log."
It is very important to include your name and email address in the
issue itself if you file an anonymous report to the collector. Without
this information we cannot correspond with you.
Go to
and click "New Issue" in the
left hand actions box. If you have a Zope.org membership, log in before
submitting the issue.
While Zope Corporation does not require paid support contracts with
customers to resolve reported defects in the product, it also does not
rely on customer requirements for a solution without a paid support
contract in place. If the solution is critical to you to have addressed
on your schedule, you should examine the offerings listed at
to see if your needs may
be best addressed by purchasing a support contract.
The purchase of a support contract guarantees that engineering
resources will be assigned to your problem when you report it, and
may also entitle you to specific engineering work on a priority basis
for problem resolution. | http://old.zope.org/Members/matt/StabilityHOWTO/document_view | CC-MAIN-2016-30 | refinedweb | 976 | 62.98 |
Issue Links
- is duplicated by DERBY-5178: NPE in BaseDataFileFactory.jarClassPath(final Class cls) (Closed)
- is related to DERBY-4715: Write jvm information and path of derby.jar to derby.log (Closed)
Activity
Trying to make a repro, this snippet did work for me, though. Could you help me determine how your setup differs?
public class Foo {
    public static void main(String[] args) throws Exception {
        URL[] urls = {new URL("")};
        ClassLoader mycl = URLClassLoader.newInstance(urls);
        Class drc = Class.forName("org.apache.derby.jdbc.EmbeddedDriver",
                                  true,
                                  mycl);
        Driver drv = (Driver)drc.newInstance();
        Properties cp = new Properties();
        cp.setProperty("create", "true");
        Connection c = drv.connect("jdbc:derby:wombat", cp);
        c.close();
    }
}
In derby.log I saw:
Loaded from
The patch passed regression tests.
Regression related to DERBY-4715.
Thank you Michael.
I don't see addRepository method in the ClassLoader or URLClassLoader API, so assume it is a custom class loader subclass. Can you describe it in a bit more detail to help us modify Dag's attempt at a reproduction, so we can get a test case?
I wonder, too: is it ever possible for cls.getProtectionDomain() to return null, which might cause an NPE further up at:
cs = cls.getProtectionDomain().getCodeSource();
Hi Michael,
what Javadoc are you looking at to arrive at that conclusion? ("the apidoc of CodeSource implies that getLocation() can return null")
Looking at the Javadoc for Java 1.6[1], I see this:
> CodeSource#getLocation:
>
> public final URL getLocation()
>
> Returns the location associated with this CodeSource.
>
> Returns:
> the location (URL).
It seems there is no indication that the returned value could be null?
[1]
The javadoc for CodeSource#implies says:
"3. If this object's location (getLocation()) is not null, (...)"
"Note that if this CodeSource has a null location and a null certificate chain, then it implies every other CodeSource."
That seems to indicate that null is a valid return value.
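A null-safe version of that lookup would guard each step of the chain. The sketch below is illustrative only, not the committed patch:

```java
import java.net.URL;
import java.security.CodeSource;
import java.security.ProtectionDomain;

class CodeLocation {
    // Every step of this chain may legitimately be null, so each is checked.
    static String locationOf(Class<?> cls) {
        ProtectionDomain pd = cls.getProtectionDomain();
        if (pd == null) return null;
        CodeSource cs = pd.getCodeSource();
        if (cs == null) return null;     // e.g. classes from the bootstrap loader
        URL url = cs.getLocation();      // the javadoc permits null here too
        return (url == null) ? null : url.toString();
    }
}
```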
Hi Dag,
sorry for not being clear enough (obviously, the pun on "implies" did not work).
I am looking at the official javadoc you cited, but as I said: "it implies that...", not "it states that...".
The apidoc for boolean CodeSource.implies(CodeSource) says: "Note that if this CodeSource has a null location..."
Furthermore, looking into the source of java.security.CodeSource or java.lang.ClassLoader reveals a couple of locations where location is checked for null.
Patch committed to trunk as svn 1050000, resolving.
[bulk update] Close all resolved issues that haven't been updated for more than one year.
reopening for backport to 10.7 as requested.
Commit 1665680 from Myrna van Lunteren in branch 'code/branches/10.7'
[ ]
DERBY-4944; Embedded Derby does not start when derby.jar is dynamically uploaded / added to the classpath
merge -c of revision 1050000 from trunk
Closing again. Although the build at apache failed, that seemed to be because of a problem with ant, not with derby, and it worked fine in my environment when I ran suites.All.. | https://issues.apache.org/jira/browse/DERBY-4944 | CC-MAIN-2016-30 | refinedweb | 501 | 60.82 |
If you have ever wondered how deep learning frameworks work under the hood, this post is for you.
We are going to create a deep learning framework using Numpy arrays while we briefly study the theory of basic artificial neural networks. I won’t go into much detail with the theory, but you will find really good resources at the end of the post.
Why create this from scratch?
Well, firstly, it is cool. I am a big fan of doing things just because it is cool to do them.
Secondly, you will learn a lot by implementing things from scratch. For instance, the backpropagation algorithm can be a little tricky when you first study it. A project like this may help you to better understand what is going on inside a neural network.
Neural networks in a nutshell
This is a brief summary of an immensely big topic. If you are a visual learner, I recommend you to check Fast.ai, 3Blue1Brown or, if you speak Spanish, DotCSV.
You can also check the post From the neuron to the net, where aporras explains the basics of a neural network.
Neural networks are universal function generators, so a neural network is just a function that maps an input into an output.
$$ f:x \rightarrow y $$
The basic unit of a neural networks is the neuron, which is itself a linear regression
\( s \ (w, \theta) \) (usually called “weighted sum”) passed through a non-linear function like Sigmoid or Hyperbolic Tangent.
$$
y_{k} = \sigma \left( \sum_{j=1}^{n} w_{kj} \cdot x_{j} + \theta_{k} \right)
$$
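To make the formula concrete, here is a single neuron written with NumPy (the library the framework below is built on). The inputs, weights and bias are made-up numbers, not values from the article:

```python
import numpy as np

def sigmoid(s):
    return 1 / (1 + np.exp(-s))

def neuron(x, w, theta):
    """One neuron: the weighted sum s(w, theta) squashed by a sigmoid."""
    s = np.dot(w, x) + theta      # the weighted sum
    return sigmoid(s)             # non-linearity: output lies in (0, 1)

x = np.array([0.5, -1.0, 2.0])    # inputs
w = np.array([0.1, 0.4, -0.2])    # connection strengths
y = neuron(x, w, theta=0.3)       # here s = -0.45, so y is about 0.389
```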
Neurons are grouped in a higher topological level called layer. Basically, the way a layer is constructed, and how the layers of a net interact among themselves, determines the type of neural network you are working with.
In this case, we will create a framework to construct Linear (Dense in Keras) neural networks.
Each linear layer has its own weights and biases matrices. The weights
\( w_{jk} \) estimate the connection strength between 2 neurons while the biases \( \theta_{j} \) act as a threshold, being a measure of how active a neuron is.
Gradient descent and backpropagation
It is time we learn how neural networks are trained. Typically, the learning consists of reducing the error of the cost function used to evaluate the network. A common cost function may be
Mean Squared Error:
$$
J = \frac{1}{n} \sum_{i=1}^{n} \left(Y_{i} – \hat Y_{i} \right)^2
$$
How can we tune our parameters to reduce the loss? Well, we can find the error of each parameter and update it according to that value. That is, we want to know how each parameter contributes to the final error.
To do that, maths provides us with a very useful tool: derivatives. Let's say you want to know how much a change in the weights contributes to the final loss. We can do:
$$
\frac{\partial J}{\partial w}
$$
Now, taking into account that a neural network is just a function composition:
$$
J (y \ (s \ (w,\theta)))
$$
We can compute the \( w \) gradient by applying the chain rule. For example, the weights gradient of the last layer:
$$
\frac{\partial J}{\partial w} = \frac{\partial J}{\partial y} \cdot \frac{\partial y}{\partial s} \cdot \frac{\partial s}{\partial w}
$$
And the same for the biases:
$$
\frac{\partial J}{\partial \theta} = \frac{\partial J}{\partial y} \cdot \frac{\partial y}{\partial s} \cdot \frac{\partial s}{\partial \theta}
$$
If we want to compute the gradient for the previous layers we just have to repeat the process recursively:
$$
\frac{\partial J}{\partial w^{L-1}} = \frac{\partial J}{\partial y^{L}} \cdot \frac{\partial y^{L}}{\partial s^{L}} \cdot \frac{\partial s^{L}}{\partial y^{L-1}} \cdot \frac{\partial y^{L-1}}{\partial s^{L-1}} \cdot \frac{\partial s^{L-1}}{\partial w^{L-1}}
$$
Once we have the gradient, we can use it to optimize our parameters:
$$
w = w – \eta \cdot \nabla J(w, \theta)
$$
$$
\theta = \theta – \eta \cdot \nabla J(w, \theta)
$$
The \( \eta \) parameter is what we call the "learning rate", and it determines the step size of each iteration. These slides from the UCL may help you understand how changes in the learning rate value influence the learning process. The smaller the step size, the more steps you need.
There are other algorithms to optimize the parameters, but we will stick to the most basic one.
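To see the update rule on its own, here is a plain-Python sketch minimizing the toy function \( J(w) = (w - 3)^2 \), whose gradient is \( 2(w - 3) \); the function and numbers are arbitrary choices for illustration:

```python
w = 0.0                     # arbitrary starting point
eta = 0.1                   # learning rate: the step size of each iteration
for _ in range(100):
    grad = 2 * (w - 3)      # the gradient of J(w) = (w - 3) ** 2
    w = w - eta * grad      # the gradient descent update
# w ends up very close to the minimum at w = 3
```

A smaller \( \eta \) would need more iterations to get equally close, which is exactly the step-size trade-off described above.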
After backpropagating the error, we compute, again, the forward results of the net to check if the error has changed.
Each iteration over the dataset is called an epoch. With enough epochs, we will reduce the error and increase the accuracy of our predictions.
To understand backpropagation, we need to calculate the derivatives we were talking about.
Starting with the loss function:
$$
\frac{\partial J}{\partial y} =
\frac{2}{n} \sum_{i=1}^{n} \left(Y_{i} – \hat Y_{i} \right)
$$
Regarding the activation function, we will only solve for the sigmoid case (ReLU is on the code):
$$
\frac{\partial y}{\partial s} = \sigma(s) \cdot (1 - \sigma(s))
$$
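A derivative like this is easy to sanity-check numerically: a central finite difference of the sigmoid should agree with the formula to many decimal places. A plain-Python sketch (the evaluation point is arbitrary):

```python
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

def sigmoid_grad(s):
    sig = sigmoid(s)
    return sig * (1 - sig)        # sigma(s) * (1 - sigma(s)), the formula above

s, h = 0.7, 1e-5
numeric = (sigmoid(s + h) - sigmoid(s - h)) / (2 * h)   # central difference
analytic = sigmoid_grad(s)
# numeric and analytic match to roughly 10 decimal places
```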
Lastly, we have the weighted sum \( s \):
$$
\frac{\partial s}{\partial w} = x
$$
$$
\frac{\partial s}{\partial \theta } = 1
$$
We usually write \( \delta \) for
$$
\delta = \frac{\partial J}{\partial y} \cdot \frac{\partial y}{\partial s}
$$
Now, the algorithm for backpropagation can be written as follows:
$$
\delta^{L} = \frac{\partial J}{\partial y^{L}} \cdot \frac{\partial y^{L}}{\partial s^{L}}
$$
$$
\frac{\partial J}{\partial w^{L}} = \delta^{L} \cdot \left( y^{L-1} \right)^{T} \qquad \frac{\partial J}{\partial \theta^{L}} = \delta^{L}
$$
$$
\delta^{L-1} = \left( w^{L} \right)^{T} \cdot \delta^{L} \odot \sigma' \left( s^{L-1} \right)
$$
The last two steps are repeated layer by layer until the first layer is reached. Note that we have added the superscripts to clarify the inputs and outputs of each step.
Give me the code!
Here it is! A neural network will be created using the Model class.
class Model:
    def __init__(self):
        self.layers = []
        self.loss = []

    def add(self, layer):
        self.layers.append(layer)

    def predict(self, X):
        # Forward pass
        for i, _ in enumerate(self.layers):
            forward = self.layers[i].forward(X)
            X = forward
        return forward

    def train(self, X_train, Y_train, learning_rate, epochs, verbose=False):
        for epoch in range(epochs):
            loss = self._run_epoch(X_train, Y_train, learning_rate)
            if verbose:
                if epoch % 50 == 0:
                    print(f'Epoch: {epoch}. Loss: {loss}')

    def _run_epoch(self, X, Y, learning_rate):
        # Forward pass
        for i, _ in enumerate(self.layers):
            forward = self.layers[i].forward(input_val=X)
            X = forward

        # Compute loss and first gradient
        bce = BinaryCrossEntropy(forward, Y)
        error = bce.forward()
        gradient = bce.backward()
        self.loss.append(error)

        # Backpropagation
        for i, _ in reversed(list(enumerate(self.layers))):
            if self.layers[i].type != 'Linear':
                gradient = self.layers[i].backward(gradient)
            else:
                gradient, dW, dB = self.layers[i].backward(gradient)
                self.layers[i].optimize(dW, dB, learning_rate)

        return error
As you can see, the class Model has 3 methods: add, train and predict, which allow us to control the network behaviour.
The private method _run_epoch computes only one epoch. It does so by following this procedure:
- Compute forward pass.
- Calculate error and gradient on the last layer.
- Backpropagates the gradient.
Notice that we don’t actually need the error in backpropagation, just the gradient. We use the error to see how far we are from our objective.
You will find the code for the classes below:
import numpy as np


class Layer:
    """Layer abstract class"""

    def __init__(self):
        pass

    def __len__(self):
        pass

    def __str__(self):
        pass

    def forward(self):
        pass

    def backward(self):
        pass

    def optimize(self):
        pass


class Linear(Layer):
    def __init__(self, input_dim, output_dim):
        self.weights = np.random.rand(output_dim, input_dim)
        self.biases = np.random.rand(output_dim, 1)
        self.type = 'Linear'

    def __str__(self):
        return f"{self.type} Layer"

    def forward(self, input_val):
        self._prev_acti = input_val
        return np.matmul(self.weights, input_val) + self.biases

    def backward(self, dA):
        dW = np.dot(dA, self._prev_acti.T)
        dB = dA.mean(axis=1, keepdims=True)
        delta = np.dot(self.weights.T, dA)
        return delta, dW, dB

    def optimize(self, dW, dB, rate):
        self.weights = self.weights - rate * dW
        self.biases = self.biases - rate * dB


class ReLU(Layer):
    def __init__(self, output_dim):
        self.units = output_dim
        self.type = 'ReLU'

    def __str__(self):
        return f"{self.type} Layer"

    def forward(self, input_val):
        self._prev_acti = np.maximum(0, input_val)
        return self._prev_acti

    def backward(self, dJ):
        return dJ * np.heaviside(self._prev_acti, 0)


class Sigmoid(Layer):
    def __init__(self, output_dim):
        self.units = output_dim
        self.type = 'Sigmoid'

    def __str__(self):
        return f"{self.type} Layer"

    def forward(self, input_val):
        self._prev_acti = 1 / (1 + np.exp(-input_val))
        return self._prev_acti

    def backward(self, dJ):
        sig = self._prev_acti
        return dJ * sig * (1 - sig)
To calculate the error, we have a lot of options. Probably, the most basic one is the Mean Squared Error we saw earlier. I have added another one called Binary Cross-Entropy (the one that is in the code) because we will test our model using the latter in the following sections.
class MeanSquaredError(Layer):
    def __init__(self, predicted, real):
        self.predicted = predicted
        self.real = real
        self.type = 'Mean Squared Error'

    def forward(self):
        return np.power(self.predicted - self.real, 2).mean()

    def backward(self):
        return 2 * (self.predicted - self.real).mean()


class BinaryCrossEntropy(Layer):
    def __init__(self, predicted, real):
        self.real = real
        self.predicted = predicted
        self.type = 'Binary Cross-Entropy'

    def forward(self):
        n = len(self.real)
        loss = np.nansum(-self.real * np.log(self.predicted)
                         - (1 - self.real) * np.log(1 - self.predicted)) / n
        return np.squeeze(loss)

    def backward(self):
        n = len(self.real)
        return (-(self.real / self.predicted)
                + ((1 - self.real) / (1 - self.predicted))) / n
The layers can compute in 2 directions: forward and backward. This is a behaviour inherited from the computational graphs design, and it makes it computationally easier to calculate the derivatives. In fact, we could have split the Linear layer into "multiply" and "add" classes, as TensorFlow does.
The weights and biases are initialized using a uniform distribution. There are other ways to initialize these parameters, like kaiming initialization.
The forward pass of a linear layer just computes the formula of a neuron we saw previously. The backward pass is a little trickier to understand: once we compute the gradient on the last layer, we backpropagate it by multiplying the corresponding derivatives of the current layer with the incoming gradient of the following layer.
In the linear layer, we need to calculate the weights and biases gradients too.
The optimize method updates the weights and biases parameters with the local gradient of the layer if it is of linear type.
Christopher Olah’s post contains more information on computing derivatives using computational graphs.
The results
To check the results we will generate a dataset for binary classification using sklearn and a little help from pandas:
def generate_data(samples, shape_type='circles', noise=0.05):
    # We import in the method for the sake of simplicity
    import matplotlib
    import pandas as pd
    from matplotlib import pyplot as plt
    from sklearn.datasets import make_moons, make_circles

    if shape_type == 'moons':
        X, Y = make_moons(n_samples=samples, noise=noise)
    elif shape_type == 'circles':
        X, Y = make_circles(n_samples=samples, noise=noise)
    else:
        raise ValueError(f"The introduced shape {shape_type} is not valid. "
                         "Please use 'moons' or 'circles'")

    data = pd.DataFrame(dict(x=X[:, 0], y=X[:, 1], label=Y))
    return data


def plot_generated_data(data):
    ax = data.plot.scatter(x='x', y='y', figsize=(16, 12),
                           color=data['label'],
                           cmap=matplotlib.colors.ListedColormap(['skyblue', 'salmon']),
                           grid=True)
    return ax
The resulting data is shown in the following picture:
data = generate_data(samples=5000, shape_type='circles', noise=0.04)
plot_generated_data(data)
The creation and addition of layers to the model is very straightforward because it works pretty much the same as in Keras. Below you will find the code to create and train a classification model:
X = data[['x', 'y']].values
Y = data['label'].T.values

# Create model
model = Model()

# Add layers
model.add(Linear(2, 5))
model.add(ReLU(5))
model.add(Linear(5, 2))
model.add(ReLU(2))
model.add(Linear(2, 1))
model.add(Sigmoid(1))

# Train model
model.train(X_train=X.T, Y_train=Y, learning_rate=0.05, epochs=9000, verbose=True)
After training, we can plot the loss of the model:
plt.figure(figsize=(17, 10))
plt.plot(model.loss)
The loss curve is not ideal, but is good enough for our purposes.
Using a slightly modified version of this code, we can also visualize the decision boundary of our model:
Everything seems to work reasonably well, although we need a high number of epochs to converge. This is probably due to a lack of optimization when compared with professional frameworks like PyTorch.
The last test we can perform is to use a metric to test the classification result. I chose the ROC AUC score.
from sklearn.metrics import roc_auc_score

# Make predictions
predictions = model.predict(X.T).T

# Format the predictions
new_pred = []
for p in predictions:
    if p < 0.5:
        new_pred.append(0)
    else:
        new_pred.append(1)

# Calculate the score
roc_auc_score(y_true=Y, y_score=new_pred)
On average over 10 different runs (with the same data and model configuration), the ROC AUC score is 0.8061, which I consider a success.
Conclusions
We have seen how neural networks work in a nutshell, we have also learned how to create a really basic deep learning framework that we can use to test our knowledge about the topic and to play around.
Of course, we left a lot of interesting topics out of this post: regularization, batch size, recurrent nets, overfitting, cross-validation, etc. Maybe, in another post, we will cover some of them; but for now, you know the basics to keep researching on your own.
To go even deeper into the code, you can go to Github and check this repository, with the docstrings I omitted in this article and more activation functions to play with.
I hope you find this post useful. If you have any doubt, do not hesitate to leave a message below.
References
- Sebastian Ruder – An overview of gradient descent optimization algorithms.
- Chris Olah – Calculus on Computational Graphs: Backpropagation.
- 3Blue1Brown – Neural Networks.
- DotCSV – Aprendiendo Inteligencia Artificial.
- Fast.ai – Making neural nets uncool again.
- Justin Johnson – Backpropagation for a linear layer.
- Pedro Almagro Blanco – Algoritmo de Retropropagación.
- Terence Parr and Jeremy Howard – The matrix calculus you need for Deep Learning.
- Fei-Fei Li, Justin Johnson & Serena Yeung – Lecture 4: Neural Networks and Backpropagation. | https://quantdare.com/create-your-own-deep-learning-framework-using-numpy/ | CC-MAIN-2022-40 | refinedweb | 2,336 | 50.12 |
Provided by: libtickit-dev_0.2-5_amd64
NAME
tickit - Terminal Interface Construction KIT
SYNOPSIS
#include <tickit.h>

typedef struct Tickit Tickit;
DESCRIPTION
tickit is a library for building full-screen interactive programs that use a terminal interface.

A program using this library would start by creating a toplevel Tickit instance, from which one or more divisions of the terminal area, called "windows", are created. These form a hierarchical tree that subdivides the content area into independent regions that can be managed by different parts of the program structure. Each window can react to input events such as keyboard or mouse interaction.

As well as creating the initial root window, the toplevel Tickit instance also performs a few other jobs for the containing program. It can act as a containing event loop for the program, performing IO multiplexing tasks both for tickit's own needs and the needs of the program as a whole.
FUNCTIONS
A new toplevel instance is created by using tickit_new_stdio(3). A toplevel instance stores a reference count to make it easier for applications to manage its lifetime. A new toplevel instance starts with a count of one, and it can be adjusted using tickit_ref(3) and tickit_unref(3). When the count reaches zero the instance is destroyed.

The toplevel instance manages a tree of TickitWindow instances. The root of this tree is obtained by tickit_get_rootwin(3) and thereafter can be divided further by other functions on the window, described more in tickit_window(7).

The TickitTerm instance behind the toplevel instance can be obtained by tickit_get_term(3), and is described more in tickit_term(7).

Event handling callback functions can be installed to be called at a later time, by using tickit_timer_after_msec(3), tickit_timer_after_tv(3), or tickit_later(3).

The main IO event loop is controlled using tickit_run(3) and tickit_stop(3).
TYPICAL STRUCTURE
A typical program using this library would start by creating the toplevel instance, by calling tickit_new_stdio(3), then obtain its root window by calling tickit_get_rootwin(3). The program would then bind event handlers on this window (or on sub-windows created within it); at the very least it would handle the TICKIT_WINDOW_ON_EXPOSE event in order to draw its content, but might also wish to handle other kinds like geometry change for dynamic resizing, or keyboard or mouse to react to user input. Finally, once the initial window tree is created, the program would enter the main event loop by invoking tickit_run(3).
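That structure can be sketched as a small C program. This is illustrative only; it uses the function names cited on this page, but check tickit_window(7) for the exact event-handler signature and expose-event information before relying on it:

```c
#include <tickit.h>

static int on_expose(TickitWindow *win, TickitEventFlags flags,
                     void *info, void *data)
{
  /* Repaint the window content here, using the render buffer carried
   * in the expose event info structure (see tickit_window(7)). */
  return 1;
}

int main(void)
{
  Tickit *t = tickit_new_stdio();
  TickitWindow *root = tickit_get_rootwin(t);

  tickit_window_bind_event(root, TICKIT_WINDOW_ON_EXPOSE, 0, &on_expose, NULL);

  tickit_run(t);     /* blocks in the main IO event loop until tickit_stop() */
  tickit_unref(t);   /* drop our reference; the instance is destroyed at zero */
  return 0;
}
```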
COMMON TYPES
The flags argument to the various tickit_..._bind_event() functions should be zero, or a bitmask of the following constants.

typedef enum {
  TICKIT_BIND_FIRST,
  TICKIT_BIND_UNBIND,
  TICKIT_BIND_DESTROY,
} TickitBindFlags;

TICKIT_BIND_FIRST indicates that this handler should be inserted at the start of the list, rather than the default position at the end.

TICKIT_BIND_UNBIND indicates that this handler should also be invoked at the time it is unbound, either due to a specific call to the tickit_..._unbind_event() function, or because the bound object is being destroyed.

TICKIT_BIND_DESTROY indicates that this handler should also be invoked at the time that the bound object is being destroyed.
COMMON EVENTS
Every object instance that supports events supports the following type of event, in addition to the specific ones listed for that kind of object:

TICKIT_..._ON_DESTROY
Invoked when the object instance is being destroyed. This will be the last time the application can use the stored data argument; it may perform any resource reclaiming operations that are required at this time.
EVENT FLAGS
When an event handler function is invoked, it is passed a bitmask of flags to indicate the reason for its invocation.

typedef enum {
  TICKIT_EV_FIRE,
  TICKIT_EV_UNBIND,
  TICKIT_EV_DESTROY,
} TickitEventFlags;

TICKIT_EV_FIRE
This handler is being invoked because its associated event has occurred. The info pointer will point to a structure containing the relevant information.

TICKIT_EV_UNBIND
This handler is being invoked because it is being removed from the object. This will only be observed if it was bound with the TICKIT_BIND_UNBIND flag. The info pointer will be NULL.

TICKIT_EV_DESTROY
This handler is being invoked because the object instance itself is being destroyed. This will be observed if it was bound with the TICKIT_BIND_DESTROY flag, or because it is bound to the TICKIT_..._ON_DESTROY event. The info pointer will be NULL. Any event handlers for this event will be invoked in reverse order; the newest is run first and the oldest last.
SEE ALSO
tickit_window(7), tickit_term(7), tickit_pen(7), tickit_rect(7), tickit_rectset(7), tickit_renderbuffer(7), tickit_string(7), tickit_utf8_count(3) TICKIT(7) | http://manpages.ubuntu.com/manpages/eoan/man7/tickit.7.html | CC-MAIN-2020-34 | refinedweb | 708 | 53.92 |
TL;DR: In this article, we're going to continue developing the Kanban Board application from part 1 of this series to add basic data persistence, and Progressive Web Application features, such as offline support and adding to your mobile home screen.
The source code for this project is available in a GitHub repository.
Progressive Web Applications (PWAs) are normal web apps that exhibit a few important properties that aim to enrich the user experience of the application in a few different ways. Some of these are:
- Progressive - The application must gracefully degrade or enhance based on the capabilities of the user's browser
- Responsive - The application displays appropriately for a wide variety of screen sizes and devices
- Available everywhere - The application should work whether you have a great internet connection or no connectivity at all!
- Secure - The application must make use of HTTPS technology to help keep users safe
- App-like - The application should employ techniques and features that make it feel more like a regular mobile application, such as push notifications and home screen buttons
In this tutorial, we're going to take the Kanban Board application from part 1 and fulfill some of these criteria that we haven't already covered. In addition, we're going to add some basic data persistence that records the backlog items into Local Storage, so that the items are persisted whenever the page is refreshed. Not only is this quite easy to do, but it will make your life a lot easier when it comes to adding the other features and testing out your application.
If you haven't managed to complete part 1, the source code for part 1 of the tutorial is available on GitHub, so you can pick right up from here. To get started with the application, clone the GitHub repository to your local machine, and navigate your terminal to the project directory. You can then run the following commands to start the application:
$ npm install
To run the application:
$ npm run dev
You should then be able to open the application in your browser. Here's an example of the running application with some sample data:
Integrating Vuex with Local Storage
The first thing we're going to tackle is the ability to persist our data store so that when the page is refreshed not all of the data is lost. Since we've previously put all of our data storage logic in one place, this task is fairly trivial but wins us a lot of user experience points.
To do this, we're going to create a Vuex Plugin that will serialize our Vuex state into Local Storage. Then, we can easily register our plugin with our store.
Begin by creating a new folder called
plugins inside the
src folder, and then create a new file inside
plugins called
localStorage.js. Your folder structure should look something like this:
├── src
│   ├── App.vue
│   ├── assets/
│   ├── components/
│   ├── main.js
│   ├── plugins/
│   │   └── localStorage.js
│   ├── router/
│   ├── store.js
Then, populate
localStorage.js with the following:
// src/plugins/localStorage.js
export default store => {
  store.subscribe((m, state) => {
    // Save the entire state to Local Storage
    localStorage.setItem('boardState', JSON.stringify(state));
  });
};
Here we use the
subscribe method on the store to register an event handler function, which is executed every time the store's state is changed. The function is given two parameters:
m, which is the name of the mutation that caused the state to change, and
state, which is the store's current state. By handling this event, we can save the entire state of the store to Local Storage whenever it is changed. This is OK for this application since nothing particularly sensitive lives in the Vuex store: only what you as the user have typed in yourself when creating your backlog items on your kanban board.
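Since the subscribe handler fires on every single mutation, a larger application might want to debounce the writes and guard against storage quota errors. Here is a sketch of a more defensive variant; the `createPersistencePlugin` factory and its option names are illustrative, not part of the tutorial's code:

```javascript
// Sketch: a more defensive variant of the Local Storage plugin.
// `createPersistencePlugin` and its options are illustrative names.
const createPersistencePlugin = ({
  key = 'boardState',
  delay = 250,
  storage = typeof localStorage !== 'undefined' ? localStorage : null
} = {}) => {
  let timer = null;
  return store => {
    store.subscribe((mutation, state) => {
      // Debounce: collapse a burst of mutations into a single write
      clearTimeout(timer);
      timer = setTimeout(() => {
        try {
          storage.setItem(key, JSON.stringify(state));
        } catch (e) {
          // e.g. QuotaExceededError - persistence is best-effort here
          console.warn('Could not persist state:', e);
        }
      }, delay);
    });
  };
};
```

Registering this variant works exactly like the simple plugin: pass `createPersistencePlugin()` into the store's `plugins` array.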
For the plugin to take effect, it needs to be registered with the store. Open
src/store.js and modify the definition of the store so that it includes the plugin.
// src/store.js
import Vue from 'vue';
import Vuex from 'vuex';

// Import the plugin module here
import localStoragePlugin from './plugins/localStorage';

Vue.use(Vuex);

export default new Vuex.Store({
  // Next, register the plugin using the `plugins` property on the store
  plugins: [localStoragePlugin],

  // The rest of the store remains the same..
  state: { ... },
  mutations: { ... }
});
Now that we can save our backlog items, the next piece to implement is the ability to recall them when the application starts up. We're going to add a method to our store that will allow us to do that. Still, within
src/store.js, add a new mutation to the store which will read the data from local storage and overwrite the current state of the store:
// src/store.js
import Vue from "vue";
import Vuex from "vuex";
import localStoragePlugin from './plugins/localStorage';

Vue.use(Vuex);

/* eslint-disable no-param-reassign */
export default new Vuex.Store({
  // .. other store creation options
  mutations: {
    // .. other mutations

    // Add this mutation which allows us to load our state from the store
    initializeStore() {
      const data = localStorage.getItem('boardState');
      if (data) {
        this.replaceState(Object.assign(this.state, JSON.parse(data)));
      }
    }
  }
});
// ...
This new method will fetch the items from Local Storage, deserialize them using
JSON.parse then call
replaceState on the Vuex store.
replaceState is a Vuex API method which will replace the entire Vuex state with whatever data we give it. In this case, it will have the effect of overwriting the store data with whatever we just fetched from Local Storage.
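One subtlety worth noting: the `Object.assign` merge is shallow. Top-level keys that are absent from the saved data survive from the current state, but any nested object that does appear in the saved data replaces the current one wholesale. A quick illustration (the state shapes here are simplified, not the store's exact state):

```javascript
// Shallow-merge behavior of Object.assign, as used by initializeStore.
// The state shapes below are simplified for illustration only.
const current = { items: { todo: [], inProgress: [], done: [] }, uiTheme: 'dark' };
const saved = JSON.parse('{"items":{"todo":[{"id":1,"text":"task"}]}}');

const next = Object.assign(current, saved);
// `uiTheme` survives because `saved` has no such top-level key...
// ...but `items` is replaced entirely, so `inProgress` and `done` are gone.
```

This is fine for our application because we always serialize the entire state, so the saved object has the same shape as the live one.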
The last task is to call this new method at the right time, and we're going to do that once the Vue system has been initialized. Open
src/main.js and modify the view options to call our method when the
created lifecycle hook is called:
// src/main.js
new Vue({
  el: '#app',
  router,
  store,
  template: '<App/>',
  components: { App },
  // New code - initialize the store
  created() {
    store.commit('initializeStore');
  }
});
Because
initializeStore is a mutation, we "call" it by using the store's
commit method with the mutation name. Normally, mutations would be accompanied by data that describes how the store is to mutate but, in this case, we have no data to commit since the mutation itself will supply the data.
If you run the application now, you should find that you can add in new backlog items, refresh the page, and see that your items are still there. Great! One final thing I'm going to cover is how to delete items, as we will quickly start to build up a set of backlog items and it could get a bit messy if we had no way at all to delete them.
Deleting Backlog Items
We can begin by adding a new mutation into our store that will allow us to remove items as and when we need. This mutation will take the item to be deleted as its single argument. The implementation is as follows:
// src/store.js
export default new Vuex.Store({
  // .. other options
  mutations: {
    // .. other mutations

    // Add this mutation which removes an item from the backlog, given the item id
    removeItem(state, item) {
      [state.items.todo, state.items.inProgress, state.items.done].forEach(
        array => {
          const indexInArray = array.findIndex(i => i.id === item.id);
          if (indexInArray > -1) {
            array.splice(indexInArray, 1);
          }
        }
      );
    }
  }
});
The code may look a little confusing at first, but let me explain what it's doing. As a quick recap, the store works by keeping three arrays to track the backlog items:
todo,
inProgress and
done, where a 'todo' item will appear in the
todo array, items in progress in the
inProgress array, and so on. In order to delete an item, we need to know which of the three arrays the item is in. We can do that by putting these three arrays into a new array, iterating over them, and calling
findIndex on each one — an index of zero or greater means that array contains the item. Once we have found that array, we can splice the item out of it.
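To see that logic in isolation, here is the same removal routine extracted into a plain helper. The `removeFromLanes` name is mine for illustration; the store keeps this logic inline in the mutation:

```javascript
// The removal logic from the mutation, as a standalone helper.
// `removeFromLanes` is an illustrative name; the store inlines this.
const removeFromLanes = (lanes, item) => {
  lanes.forEach(array => {
    const indexInArray = array.findIndex(i => i.id === item.id);
    if (indexInArray > -1) {
      array.splice(indexInArray, 1);
    }
  });
};
```

Note that an unknown id is simply a no-op: `findIndex` returns `-1` for every lane and nothing is spliced.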
As this is a mutation, we need to call it from somewhere. We'll do this by adding a new "delete" button to each item in the backlog, which will allow the user to delete that particular item.
Let's start by modifying the
src/components/Backlog.vue template so that it includes the new button. Be careful as I've also moved the position of the badge element slightly. Your component should end up looking something like this:
<!-- src/components/Backlog.vue -->
<template>
  <div class="backlog-view">
    <new-item></new-item>
    <div class="card" v-for="item in items" :key="item.id">
      <div class="card-block">
        <h5 class="card-title"><span class="text-muted">#{{item.id}}</span>
          {{item.text}}
          <!-- NEW - button to delete the item -->
          <button type="button" class="close-button pull-right" @click="removeItem(item)">
            <span>×</span>
          </button>
          <span :class="badgeClass(item)">{{badgeText(item)}}</span>
          <!-- /NEW -->
        </h5>
      </div>
    </div>
  </div>
</template>
The button we've added works by invoking the
removeItem method whenever it's clicked, passing along
item. We can implement that method on the component now. Further down the same code file, modify the component code so that it includes the new
removeItem method:
// src/components/Backlog.vue
export default {
  // .. other component options
  methods: {
    // .. other methods
    removeItem(item) {
      this.$store.commit('removeItem', item);
    }
  }
};
As you can see, the method itself simply executes the mutation that we have already implemented on our store, giving it the item that we want to delete. At this point, you should be able to run the app, see the delete button, and begin to remove items from your backlog!
Finally, you'll notice that the styling is a little off. We can put in some minor fixes here to make the application look a little more pleasing. To start, find the
badgeClass method inside the component script and add in the
pull-right class from Bootstrap. To do this, I've reworked the string into a template literal, just to make it a little easier to work with:
// src/components/Backlog.vue
badgeClass(item) {
  const lane = this.itemLane(item);
  return `${badgeDetail[lane].class} pull-right`;
}
Then, inside the
<style> tag of the
App component, add in these rules:
/* src/App.vue */
.card-title {
  margin-bottom: 0;
}

h5 {
  margin-bottom: 0;
}

.close-button {
  background: transparent;
  border: 0;
  margin: 0 0 0 20px;
  padding: 0;
  color: white;
  opacity: 0.3;
}

.close-button:hover {
  cursor: pointer;
}
When you look at the page now, everything should be nicely aligned and a little more pleasing to the eye.
Now that we can persist and remove our backlog items, let's look at how we can make the whole application work offline!
An Introduction to Service Workers
Service workers are pieces of JavaScript, registered by your application, that execute in a thread separate from your main JavaScript thread. They have a few limitations, most notably that they can't access the DOM, local storage, or session storage. In addition, service workers can only be used on pages served over HTTPS (localhost is treated as a secure origin, which is what allows local development to work). All of these limitations serve to make using service workers safe and secure.
However, they do have access to other important resources, such as the cache API, and IndexedDB. In addition, they are able to intercept network requests that are generated from the browser and return modified responses. Together with strong browser support, these capabilities make Service Workers extremely suitable for helping us provide offline support to our app users.
"Learn how to use Service Workers to add offline capabilities to an existing web app"
Implementing a Service Worker for Offline Access
We're going to implement a service worker which will allow us to cache all the assets that we need in order for the application to work offline. These assets are:
- The web page that serves the JavaScript and CSS assets
- The compiled JavaScript files as output by Webpack
- The stylesheets and fonts that make up our application's look and feel
These will be cached in two stages:
- We will first pre-cache the web page and the application JavaScript files, as we know what those are up-front
- We will then cache the stylesheets and fonts in real time as the requests are made in the browser. These are cross-origin requests (subject to CORS) that we can cache as and when they come through
Relating to this particular project, there are a few requests that we don't want to bother caching — mainly relating to Webpack's hot module replacement feature — but we will get to that soon. I also want to mention the fact that if you were to create a new Vue project using the CLI tool, there is a built-in template that allows you to create a PWA out of the box without having to write your own service worker. However, we're not using that here since writing the service worker for offline access is part of the point of this article!
With that in mind, let's begin to implement our service worker. We need to do two things:
- Create our service worker script
- Configure Vue to load it for us into our HTML template
Creating the service worker script
Start by creating a new file in the
src directory, called
sw.js. We can start by implementing the ability to pre-cache assets that we know about:
// src/sw.js

// ESLint global registration
/* global serviceWorkerOption: false */

const cacheName = 'kanban-cache';
const isExcluded = f => /hot-update|sockjs/.test(f);

const filesToCache = [
  ...serviceWorkerOption.assets.filter(file => !isExcluded(file)),
  '/',
  // The Bootstrap and FontAwesome CDN URLs linked from index.html also
  // belong in this array (the exact URLs are elided in this listing)
];

// Cache known assets up-front
const preCache = () =>
  caches.open(cacheName).then(cache => cache.addAll(filesToCache));

// Handle the 'install' event
self.addEventListener('install', event => {
  event.waitUntil(preCache());
});
The first thing we do is set up some constants and functions that we're going to use in our service worker. One is the name of the cache, then we have a helper method which allows us to determine whether or not the request — based on the URL — should be excluded from the cache or not.
The next thing we do is to create a list of the URLs and files that we know we want to cache. For us, this includes everything in the
serviceWorkerOption.assets array (filtered using our exclusion function), the root URL
/, and our Bootstrap and FontAwesome assets. It would be beneficial to cache these assets up-front, so that we can make sure the user doesn't see an unstyled application if they happen to go offline immediately after hitting the application.
serviceWorkerOptionis an object given to us by a Webpack plugin that we will install in the next section.
Finally, we use
addEventListener to handle the
install event and use that opportunity to call
preCache, which caches all of the resources that we know about up-front. This involves opening the cache using a given name, adding all of the files to it that we want and then wrapping it in
event.waitUntil, which waits until
preCache is finished.
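As a quick sanity check, the exclusion filter really does drop the dev-only traffic while keeping real assets. The example URLs below are typical of the webpack dev server, not taken from the project:

```javascript
// The exclusion filter from sw.js, exercised against some typical URLs.
// The example paths are illustrative.
const isExcluded = f => /hot-update|sockjs/.test(f);

// Dev-server noise we do NOT want in the cache:
//   /0.abc123.hot-update.js, /sockjs-node/info?t=1
// Real assets we DO want:
//   /app.js, /static/manifest.json
```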
Integrating our service worker
At this point, we have a basic service worker which caches our known assets and the root URL. Although we haven't quite finished our service worker implementation, let's see how we can integrate it into our application.
Normally we integrate the service worker by referencing it inside a
script tag on our HTML template and then writing some code to register it with the browser. However, we have the added problem that the assets we want to cache are dynamically generated by our build system — Webpack. So how do we achieve this?
There is a Webpack plugin that can help us here:
serviceworker-webpack-plugin. This plugin provides a small API to help us register our service worker as well as the files we need to cache. Remember
serviceWorkerOption from the previous code snippet? This plugin is what provides this list of files to us, based on the assets that have flowed through the Webpack compilation pipeline. This is extremely convenient since Webpack could be configured to output files that contain hashes that you have no way of calculating up-front. A Webpack configuration can also emit different file names depending on whether you are in production or development mode, for example. This way the service worker simply gets told what the file names are, by making
serviceWorkerOption available to us inside the service worker script.
To get this working, first install the
serviceworker-webpack-plugin dependency using the command line.
$ npm install -D serviceworker-webpack-plugin@0.2.3
Note that I'm installing version
0.2.3, even though the latest version (at the time of writing) is
1.0.1. This is because
0.2.3 supports Webpack 3, which is how our project is configured by default. If you use a different template based on Webpack 4, or have upgraded your project to Webpack 4, then you will want to install the latest version of
serviceworker-webpack-plugin.
Next, we need to modify the Webpack configuration to load this new plugin. Open
build/webpack.base.conf.js and make the following changes:
const utils = require('./utils')
const config = require('../config')
const vueLoaderConfig = require('./vue-loader.conf')
// NEW - include the plugin
const ServiceWorkerWebpackPlugin = require('serviceworker-webpack-plugin')

//...

// Modify the exports to include a new 'plugins' key containing
// the ServiceWorkerWebpackPlugin configuration
module.exports = {
  context: path.resolve(__dirname, '../'),
  entry: { ... },
  output: { ... },
  resolve: { ... },
  module: { ... },
  node: { ... },
  plugins: [
    new ServiceWorkerWebpackPlugin({
      entry: path.join(__dirname, '../src/sw.js')
    })
  ]
}
The
plugins key won't exist in the default template, so it should be added inside
module.exports. The only thing we need to do with it is to specify where our service worker script is, and the plugin will take care of the rest.
The last thing we need to do is register the service worker when the application starts up. We can do this from our main startup script, so open
src/main.js and make the following changes:
// src/main.js
import Vue from 'vue';
// NEW
import runtime from 'serviceworker-webpack-plugin/lib/runtime';

// ...

Vue.config.productionTip = false;

// NEW - register the service worker
if ('serviceWorker' in navigator) {
  runtime.register();
}
Verifying service worker installation
At this point, you will be able to start the application and find that the service worker has been registered and that two requests have been cached:
/app.js, and
/. Using Chrome, let's see how we can verify the installation of the service worker and that the URLs have been cached.
Start the application using
npm run dev in the command line, or restart the application if it's already running, and then browse to the app. Once the page has been loaded, open up the Chrome Developer Tools and open the Application tab. If you click the Service Workers option on the left, you'll see information about the service workers that have been registered for this application. If you have followed the steps until now, you should see that our service worker has been registered and is running.
There are a couple of important options at the top of this screen which aid the development of your service worker:
- Offline — simulates the disconnection of your network, allowing you to test that your application works when it is offline
- Update on reload — normally the service worker is only reloaded if it has changed, and you have started a new browser session (i.e. closed and reopened the browser window or tab). Checking this box means that, to speed up development, you can get your service worker to update whenever you reload the page.
You may also see service workers for your application that have been stopped on this screen. These are old service worker instances that have been replaced by new versions whenever you update the service worker script, and will be removed once the browser tab is closed.
Next, we can inspect the cache to see if the URLs we expect are there. Further down the Application pane, expand the Cache Storage item and click the 'kanban-cache' item. This will bring up the list of requests that are currently in the cache, and you should be able to see our two items in there.
Let's see whether our application works offline with what we have. Go back to the Service Worker tab inside Chrome Developer Tools, and check the "offline" option at the top. As far as your application is concerned, you now have no internet connection. When you refresh the page, you'll find that you're still unable to access the app; whilst we have written the code to pre-cache our known files, we still have to write the logic for retrieving items from the cache. Let's do that now.
Implementing the fetching strategy
A service worker is incredibly flexible in that you can completely control which assets are cached, and when. For this reason, the implementation of a service worker can grow to be quite complex.
Let's start by handling the
fetch event, which is the event that is raised when a resource needs to be fetched from somewhere. You can put this code underneath the code we already have in
src/sw.js:
// src/sw.js
self.addEventListener('fetch', event => {
  event.respondWith(
    fetchFromNetwork(event.request).catch(() => fetchFromCache(event.request))
  );
});
Hopefully, the code here is fairly self-descriptive. Our basic strategy for getting our application to work offline is to try fetching from the network first — as the browser would normally do — and if that fails, try to fetch from the cache. The hidden detail here is that
fetchFromNetwork will place things into the cache as it receives the responses. Let's implement that now. Add the following method above our
preCache method we defined earlier:
// src/sw.js
const fetchFromNetwork = request =>
  new Promise((resolve, reject) => {
    fetch(request).then(response => {
      if (!isExcluded(request.url) && response) {
        updateCache(request, response.clone()).then(() => resolve(response));
      } else {
        resolve(response);
      }
    }, reject);
  });

const preCache = ...
Here we fetch the resource from the network and assuming we receive a valid response, and it does not match our exclusion filter, then we can put it into the cache. The whole operation is wrapped inside another
Promise so that we can more precisely control exactly when the promise is fulfilled or rejected.
You'll also notice in the call to
updateCache that, instead of passing
response to the function, we pass a clone of the response (through
response.clone). We do this because when we're dealing with responses, we're actually dealing with streams of data. Once a stream has been read, it can't be used again. In our case the response is ultimately used twice — once to put it into the cache, and again to return it to the browser client. Therefore, we must clone the response so that we can have two copies of it and fulfill both actions.
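You can see the one-shot nature of response bodies directly. This small demonstration runs in browsers and in Node 18+, where `Response` is a global:

```javascript
// A Response body is a one-shot stream: clone BEFORE consuming it,
// then each copy can be read independently.
async function readTwice() {
  const res = new Response('hello');
  const copy = res.clone(); // must happen before res is read
  const first = await res.text();   // consumes the original body
  const second = await copy.text(); // the clone has its own body
  return { first, second, originalUsed: res.bodyUsed, cloneUsed: copy.bodyUsed };
}
```

If you tried to call `res.clone()` after `res.text()`, it would throw, because the original body has already been consumed.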
Two more functions to implement; next is the
fetchFromCache function:
// src/sw.js
const fetchFromNetwork = ...

// Try to fetch existing responses from the cache
// Implement this between fetchFromNetwork and preCache
const fetchFromCache = request =>
  caches.match(request).then(response => response || Promise.reject('failed'));

const preCache = ...
This is a very simple case of just calling the
match method on the Cache API, which will return a response. The response could be valid or
undefined if nothing was found.
Finally, let's implement
updateCache. This will take the request and response objects, and again make a call into the Cache API to save those items into the cache:
// src/sw.js

// Store a response inside the cache
// Implement this above fetchFromNetwork
const updateCache = (request, response) =>
  caches.open(cacheName).then(cache => cache.put(request, response));

const fetchFromNetwork = ...
Testing offline access
With these functions in place, you should now have the capabilities to run your application offline! To test this out, make sure your assets appear in the cache, and then do one of these things:
- Use the 'offline' checkbox at the top of the Application window inside Chrome Developer Tools
- Simply stop your server running in the command line (Ctrl+C)
You should find that you can refresh the page and your application still appears and works as normal!
As a further test of the caching strategy, make a change to your
index.html file in the root of the project so that the changes would be visible in the browser (e.g. add some text or a heading). When you refresh the app while it's still offline, your changes will of course not appear. However, come back online by restarting the application (or untick 'Offline' in Chrome Developer Tools, depending on which method you used earlier) and your changes should now appear immediately.
At this point, you now have an application which will work when there is no internet connection (or the server is down), as well as one that prefers the most up-to-date content if it is available.
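For contrast, the opposite "cache-first" strategy, which many tutorials use for immutable static assets, prefers cached content and only hits the network on a miss. A sketch, not part of this project's worker; `cacheFirst` is an illustrative name:

```javascript
// Cache-first: serve a cached response when one exists, otherwise fetch
// from the network and store the result for next time.
const cacheFirst = (request, cacheName) =>
  caches.open(cacheName).then(cache =>
    cache.match(request).then(
      cached =>
        cached ||
        fetch(request).then(response => {
          cache.put(request, response.clone());
          return response;
        })
    )
  );
```

The trade-off is staleness: with cache-first, users keep seeing the cached copy until the cache is invalidated, which is why this tutorial prefers the network-first approach for an app whose content changes often.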
Note: At the time of writing, support for offline access varies across the different platforms. For example, service workers are currently supported on Chrome for Android, but not Chrome for iOS (Chrome for iOS uses WKWebView). However, Safari 11.4+ on iOS does now support service workers. Remember that, when testing on mobile, the application must be served from an HTTPS endpoint. That means either generating a self-signed certificate or hosting your application on a platform that supports HTTPS.
Adding to the Home Screen
To make this web application appear more integrated and "app-like", we can further enhance the experience by providing the ability to add the application to the home screen of your mobile phone. To get started, we need to create a Web App Manifest file. This file describes particular aspects of your application, e.g. its full name, a short name, icons, colors, and a few other things. Devices can use this file to best describe your application when it comes to running it from your home screen.
Creating the application manifest file
Let's create a manifest file for our application. We can do this by creating a static JSON file and then linking to it inside our
index.html page. Create the file
static/manifest.json and populate it with the following content:
{ "name": "Kanban Board", "short_name": "Kanban Board", "start_url": "/", "background_color": "#263e52", "theme_color": "#263e52", "orientation": "portrait-primary", "display": "standalone", "lang": "en-US", "description": "A Kanban Board written in Vue.js, with Progressive Web App features", "icons": [ { "src": "/static/app_icon_512.png", "sizes": "512x512", "type": "image/png" }, { "src": "/static/app_icon_192.png", "sizes": "192x192", "type": "image/png" } ] }
Some of these properties are straightforward, but let me explain some of them:
- start_url - the URL that should be loaded first when starting the application from the home screen
- background_color - the color of the background that should be used on the splash screen
- theme_color - Some browsers use this to set the color of the chrome around the web page
- orientation - the primary orientation of the application. Other values include ones that allow you to specify that your application should be viewed in landscape mode
- icons - various sizes of icons that different devices use. Google specifies that, for an application to be considered a PWA and for certain features to work, at least 192x192 and 512x512 icons must be specified
- display - indicates the preferred display mode for the app. Valid values are: 'fullscreen', 'standalone', 'minimal-ui' and 'browser'
I've used icons here courtesy of Anton Sapturo, and you can download them to your repository by running the following commands in your terminal from the project's root directory:
$ curl -o static/app_icon_192.png -O
$ curl -o static/app_icon_512.png -O
$ curl -o static/CREDITS.md -O
Now, open
index.html in the root of the project and add in the link to the manifest file. While we're here, we're also going to set an additional meta tag which affects the color of the browser chrome for Chrome on Android, just to make it look that little bit more integrated:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <!-- NEW - specify the theme color for Chrome on Android -->
    <meta name="theme-color" content="#263e52">
    <title>Kanban Board</title>
    <!-- stylesheet links (Bootstrap, FontAwesome) elided -->
    <!-- NEW - specify the manifest file -->
    <link rel="manifest" href="/static/manifest.json">
  </head>
  <body>
    <div class="container">
      <div id="app"></div>
    </div>
    <!-- built files will be auto injected -->
  </body>
</html>
With this in place, you should be able to add the application to the home screen of your mobile device and have it pick up some of the metadata present in the manifest file. To reiterate, support for some or all of the metadata right now is patchy across different devices, and your experience may differ slightly from that outlined here.
Note: In order to connect to your running application from your mobile device, you may have to start the application so that it binds to the address
0.0.0.0. You can do this from the command line:
HOST=0.0.0.0 npm run dev
Then you should be able to connect to the app using the IP address of the machine that the app is running on.
To illustrate the mobile features, on Safari on iOS, you can add your application to the home screen by hitting the 'share' button at the bottom of the screen.
Then you have the opportunity to tweak the application name:
Finally, running the application from the home screen removes all of the browser chrome, making it feel more like an integrated app.
In Chrome, there is a further way to test that the manifest file is being registered correctly. With the application loaded in Chrome, open the Chrome Developer Tools, switch to the Application tab and click the Manifest tab on the left. All the information about your manifest file and any registered icons should appear here. Chrome will even warn you if something doesn't look quite right.
Creating an Install Prompt
Chrome on Android goes a little bit further in that it can raise DOM events inside your application which you can use to prompt your users to install your application, increasing engagement and providing a better user experience. This event has limited support right now and it's not on the standards track, so do not rely on its availability. However, if you are looking to integrate a similar feature into your application, let's see how it's done.
The general workflow is as follows:
- If the application can be installed locally, the browser raises the beforeinstallprompt event on the window object
- We present some UI to the user, prompting them to install the application
- If they choose to install the application, we call the prompt method on the event object given to us in step 1, which causes the browser to do its thing and install the app
This is known as the deferred flow. This flow is necessary because
prompt can only be called as a result of the user performing some gesture or action (such as a button click) and cannot be called on behalf of the user without some kind of interaction from them.
It's worth noting at this point that the
beforeinstallprompt event is only fired if the browser is satisfied that the application can be installed locally. In the case of Chrome, a few properties of your manifest file must be set for the application to be considered installable. These are:
- It must have name or short_name populated
- It must have at least 512px and 192px icons
- start_url must be set
- The display field must be set to standalone, fullscreen, or minimal-ui
In addition to the manifest file criteria, the event is only fired if the following are also true:
- You are serving the application over HTTPS
- The application is not already installed
- You have a service worker that handles the
fetch event
Creating the install prompt component
Let's create this as a new component and display it at the top of the application. Create a new file in the
components folder called
InstallPrompt.vue. Start populating it by adding the template:
<!-- src/components/InstallPrompt.vue -->
<template>
<div class="alert alert-dismissible alert-info" v-if="showInstallBanner">
  <button type="button" class="close" data-dismiss="alert">&times;</button>
  Do you want to <a href="#" @click.prevent="install">add this app to your home screen?</a>
</div>
</template>
This is just a standard Bootstrap alert box. The real functionality is encapsulated in the
install function, which executes when the user clicks the link. Note that this alert is only shown to the user if
showInstallBanner is true; this value is driven by our event handler when the beforeinstallprompt event is raised by the browser.
Underneath the template, add the functionality for installing the application inside a
script tag:
<script>
// src/components/InstallPrompt.vue
let installEvent;

export default {
  name: 'installPrompt',
  data() {
    return {
      showInstallBanner: false
    };
  },
  created() {
    window.addEventListener('beforeinstallprompt', e => {
      e.preventDefault();
      installEvent = e;
      this.showInstallBanner = true;
    });
  },
  methods: {
    install() {
      this.showInstallBanner = false;
      installEvent.prompt();
      installEvent.userChoice.then(() => {
        installEvent = null;
      });
    }
  }
};
</script>
This component works by handling
beforeinstallprompt when it is raised on the
window object. When that event fires, we first prevent the default action, save the event object for later, and then set
showInstallBanner to true (note that this flag is
false by default). This mechanism of saving the event object for later is part of the deferred flow we talked about earlier; calling
prompt on this object now would not work as we're not currently in the context of a user action or gesture.
At some point later, when the user clicks the link to install the application, the
install method is called which in turn calls
installEvent.prompt(). At this point the browser will show its own dialog, asking the user if they really want to install this application. We then get to handle the result of that action by waiting on the
userChoice promise. In our case, we can just set
installEvent to null and carry on with our business.
The last thing to do is put this new component on the page somewhere. Open up
src/App.vue and modify it to include our new
InstallPrompt component:
<!-- src/App.vue -->
<template>
  <div id="app">
    <div class="page-header">
      <!-- NEW - put the InstallPrompt component on the page -->
      <install-prompt></install-prompt>
      <h1>Kanban Board</h1>
      <p class="lead">An example of a Kanban board, written in VueJS</p>
    </div>
    <menu-bar></menu-bar>
    <router-view/>
  </div>
</template>
<script>
import MenuBar from '@/components/MenuBar';
// NEW - import the new component
import InstallPrompt from '@/components/InstallPrompt';
// NEW - register the InstallPrompt component
export default {
  name: 'app',
  components: {
    'menu-bar': MenuBar,
    InstallPrompt
  }
};
</script>
<style>
.page-header h1 {
font-weight: 300;
}
/* NEW - added a bit of padding to the top of the screen */
body {
padding-top: 1rem;
}
</style>
These changes just amount to the normal component registration changes that you've done before. Note that I've tweaked the styles a little just to add a bit of padding to the top of the screen.
With the component in place, you should be able to load the application in the browser, although the banner will not yet be visible. There are a couple of ways you can test its functionality: one is to load the site on an Android phone (you'll need to host it with an SSL certificate); the other is to use Chrome Desktop.
"I just added an 'install to desktop' feature to my Vue.js application!"
Testing using Chrome Desktop
To test using Chrome Desktop, we need to turn on a Chrome Flag to allow it to install applications locally. To do this, browse to
chrome://flags in the address bar and search for 'Desktop PWAs'. Once you've found the flag, make sure it is enabled and then click the button to relaunch the browser.
Now, when you refresh our Kanban Board application in the browser, the banner to install the application should appear and allow you to install the application to your machine.
Clicking the link to install the application will invoke Google Chrome's own UI for installing the application to the user's machine as a Chrome App.
The application then becomes available in the normal place for Chrome Apps on your operating system.
When looking at your application, you should no longer see the install prompt at the top of the page: as the application is now installed, the beforeinstallprompt event will not fire. To test that flow again, you will first have to uninstall the application so that the event fires, causing the install prompt to appear once more. To uninstall the application, browse to
chrome://apps, right-click on the application icon and select "Remove from Chrome".
Using Lighthouse to Test Your PWA
As a final note, I also wanted to mention a very useful tool that exists as part of the Chrome Developer Tools: Lighthouse. It can be used to examine various aspects of any website, including SEO, accessibility, performance, and Progressive Web App capabilities. It gives you a score in each of these areas, with suggestions for improvements or things that should be fixed in order to increase your score.
To run it, browse to the Kanban Board application locally, open Chrome Developer Tools and switch to the Audits tab. You will be able to toggle the various tests that you might be interested in, as well as whether the CPU is throttled during the test. You might want to use CPU throttling if you're interested in seeing how your application performs on slower devices.
Clicking the "Run Audits" button will start the tests, and after a few seconds, you should have your results.
In the screenshot above, we have clear indicators that we've done a good job of implementing PWA features and SEO and have followed some good practices, but not such a good job on performance or accessibility. Drilling down into each section gives you more detail about the score, showing which audits passed and which didn't. Let's dig into the PWA score to see how we could improve it.
While we have a good score of 88 on the PWA front, the audit has flagged two things that would improve our score, both of which are fairly easy to solve:
- Serving content over HTTPS
- Providing a fallback when JavaScript is not available (even if it's just a message that says "JavaScript must be enabled")
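The second item on that list is usually a one-line fix: a noscript fallback in the host page. For this project that would go in index.html (a sketch; the exact wording is up to you):

```html
<!-- index.html -->
<noscript>
  <p>JavaScript must be enabled to use this application.</p>
</noscript>
```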
What's also interesting about this report is the level of detail it goes into when checking your application for PWA compliance, as you can see from the "Passed audits" report, as well as the manual checks that it suggests you perform to make sure that your application is as compliant as it can be.
Wrapping Up
Over the course of this article, we improved the original Kanban Board application to include persistent data storage using local storage, and added the facility to delete items from the data. We then moved on to service workers and how we could implement one to allow our application to work offline using the Cache API. Finally, we looked at creating a manifest file, techniques for installing our application to local devices, and testing our application for correctness as a PWA.
Further Reading and References
- Service Workers: an Introduction
- Mozilla Service Worker Cookbook
- Web App Manifest
- App Install Banners
- Lighthouse Tool Reference
- Can I Use: Service Workers | https://auth0.com/blog/vuejs-kanban-board-adding-progressive-web-app-features/ | CC-MAIN-2020-16 | refinedweb | 6,566 | 60.35 |
#include <CmdMol.h>
Inheritance diagram for CmdMolSmoothRep:
Definition at line 422 of file CmdMol.h.
[inline]
Definition at line 429 of file CmdMol.h.
References Command::MOL_SMOOTHREP, repn, whichMol, and winsize.
[protected, virtual]
Virtual function which is called when a text version of the command must be created to be printed to the console or a log file.
Reimplemented from Command.
Definition at line 203 of file CmdMol.C.
References Command::cmdText, repn, whichMol, and winsize.
Definition at line 427 of file CmdMol.h.
Referenced by CmdMolSmoothRep, and create_text.
Definition at line 426 of file CmdMol.h.
Definition at line 428 of file CmdMol.h. | http://www.ks.uiuc.edu/Research/vmd/doxygen/classCmdMolSmoothRep.html | crawl-003 | refinedweb | 108 | 54.69 |
Dungeon!
Publisher: Wizards of the Coast
Cost: Between $15.99-$19.99
Release Date: 10/16/2012
Get it Here: Amazon.com
When I was a small child and our whole family got together for winter holidays or summers at our lake cabin, the two board games we played and loved the most were Championship Baseball by Milton Bradley and Dungeon! by TSR. I was playing Dungeon! before I ever played an actual role playing game, and I remember the fantasy characters and monsters captured my imagination from a young age. Even my relatives that never got into fantasy or RPGs (which is to say all of them except my cousin Scott) loved Dungeon!. Recently, for my 35th birthday I picked up a MIB, still shrink-wrapped copy of Championship Baseball as a reminder of my carefree childhood days. I decided not to go for a vintage Dungeon! game to accompany it though, as I knew Wizards of the Coast was bringing out a Fourth Edition version of Dungeon! and that they’d be sending me a copy to review. I wasn’t sure what to expect. My personal favorite was the 1975-1988 version of the game. The 1989 and 1992 remakes offered more character classes, but they just weren’t as fun. I was crossing my fingers that this fourth edition of Dungeon! would be as awesome as I remembered it, but I also knew WotC would be making some changes. After all, this fourth edition of Dungeon would have to mirror Fourth Edition Dungeons & Dragons (it even has the D&D logo on the game, which previous versions did not), whereas my version of Dungeon! was from the era of first edition red box advance-less D&D. So how does the remake hold up? Honestly, pretty well. I’m not a fan of some of the changes, especially the artwork, but those are minor quibbles at best. What you will find as you read this review, is that Dungeon! is still an amazingly fun budget priced board game that anyone can pick up and fall in love with.
First of all, the package Dungeon! comes in is a fraction of the size previous versions were boxed in. The board is roughly the same, though, in terms of layout, although the artwork isn’t as good and things are MUCH shinier and more colourful. The previous versions of the game had a much more realistic and dingy looking dungeon along with artwork of monsters here and there on the board. The layout is still very similar, complete with Level 5 being the hardest (and least rewarding) to get to. The game also has basic rules printed on the side of the board, including what levels are best for each character class. I was a little sad to see how much more hand holding this version of the game is, as even in single digits I instantly got how to play the game and even make house rules for it, but I have to admit having the basic rules on the board is nice for when children invariably lose the rulebook.
Let’s talk character classes, by the way. The original game has Elf, Hero, Superhero and Wizard. Late versions of Dungeon! would change things up and have six different characters: Elf, Warrior, Wizard, Paladin, Dwarf and Thief. This latest version of the game changes things again. We’ve back to four basic character classes, but they are now called Rogue, Cleric, Fighter and Wizard. The Wizard Class is untouched from the original game, the Fighter is the Superhero, Cleric is the original Hero and Rogue is the original Elf class. The Rogue is the weakest class, physically, in the game, but has a 50% chance of finding a secret door instead of the two-in-six chance the other three classes have. I do remember that we used to play with the Elf being able to cast one of each of the three spells in the game to more mimic their “red box” rules, but that was definitely a house rule rather than an “official” one. It was the only way to get someone to play an Elf. The Cleric is just a basic fighter in this game, so don’t look for it to have any spells or healing abilities. The Warrior is exactly the same as the Cleric, except it has a better chance of killing monsters than the Cleric. The Wizard is not very strong, but it can cast powerful magic spells. These spells are limited, and once exhausted, the Wizard has to return to the start space to recharge his or her spells. One thing worth noting is that the magic spells in this newest version of Dungeon! are far more powerful. In the original they gave the Wizard a slightly better chance of success. Here it’s far easier.
So with all this in mind, you’re probably wondering why anyone would play a Cleric or Rogue. God knows we never played as Elves or Heroes as kids, except on rare occasions, because we wanted the toughest and most powerful classes. The answer is simple. To balance out their weaker chance to hit and defeat enemies, Rogues and Clerics only need to amass a total of 10,000 Gold Pieces in loot to win the game. Warriors need 20,000 GP and Wizards need a whopping 30,000! This means Rogues and Clerics can hang out in the easier levels of the dungeon (1-3) where enemies are weaker but there is also less loot. Warriors and Wizards will have to go deeper into the dungeon to face tougher enemies and deal with the greater risk and reward. If all four characters stuck only to Level 1, the Rogues and Clerics would almost be assured a win, as they would collect their totals at a faster pace, even though they are the less powerful characters. So basically, things are balanced out with the more powerful classes having to travel farther, face tougher enemies AND collect more treasure in exchange for more powerful abilities. In fact, with all this in mind, if you played according to the rules, the Cleric, with no special abilities or powerful attacks, actually stood the best chance of winning the game. Of course, I’ve never known anyone that played by the official rules. Everyone I’ve ever talked to had some house rule variant going on for this game, which is part of what has made it so popular and endearing over the decades.
Enemies and Treasure are different from previous versions of the game, but mostly in superficial ways. There are some new treasures along with new artwork of a decidedly lesser quality. The monsters have been completely reworked. There are a lot of new monsters like Dracolitches and Driders, and the rolls for what kills a monster are tweaked as well. How Magic Swords work has changed too. In the original versions of the game, a Magic Sword had a set bonus to your die roll. The further into the dungeon you went, the more likely you were to find a +2 or +3 weapon. Levels closer to the surface were almost always +1 weapons. In the new version of Dungeon!, when you find a magic sword, you roll two dice. You check the result with what the card says, and if you roll high enough, you get a +2 weapon. Otherwise it’s a +1 weapon. I don’t like the randomization, and there are also FAR less Magic Swords in this edition than in other games, with only a single one appearing in Levels 5 or 6. Again, this is a minor quibble that only long time anal fans of the original version will notice or care about.
Let’s take a look at some monsters to better understand how combat works. A sample Level 1 monster is the Goblin. A Rogue needs a 3 or higher (on 2d6) to kill it. A Cleric needs a 4, a Warrior needs a 2, a Wizard needs a 5, a Fireball spell needs 2 and a Lightning Bolt, oddly, needs a 6 or higher. At Level 3, you might encounter an Ogre. Here a Rogue needs an 8, a Cleric a 9, a Warrior a 6, a Wizard an 8, a Fireball a 4 and a Lightning bolt a 5. In the foulest recesses of the dungeon (Level 6), you might be unlucky enough to come across a Blue Dragon. Here a Rogue doesn’t even get a CHANCE to kill it. Nor does a Lightning Bolt. Clerics and Wizards need a 12 and a Warrior needs a 10 or higher. A Fireball needs a 7 or higher, but still, the odds are against everyone here. Of course, with risk comes reward. A sample Level 1 treasure is a 250GP “Sack of Loot.” At Level 3, you might find a Silver Cup worth 1,000GP. At Level 6? 5,000GP emeralds are not uncommon. Again, this balances out the harsher requirements put on the more powerful classes.
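If you're curious how punishing those target numbers actually are, the 2d6 odds are easy to work out (a quick back-of-the-envelope calculation of mine, not anything from the game itself):

```python
from itertools import product

def p_at_least(target, dice=2, sides=6):
    """Probability that the sum of `dice` fair dice is at least `target`."""
    rolls = list(product(range(1, sides + 1), repeat=dice))
    hits = sum(1 for roll in rolls if sum(roll) >= target)
    return hits / len(rolls)

# Against an Ogre: the Warrior's 6+ lands about 72% of the time,
# the Cleric's 9+ only about 28% -- hence the Cleric's cheaper win condition.
print(f"{p_at_least(6):.0%}", f"{p_at_least(9):.0%}")  # → 72% 28%
```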
Although the game doesn’t contain any of the house rules that have been accumulated and popularized over the past three and a half decades, it does contain some solo rules for playing a single person version of Dungeon! such as “Treasure Hunt,” where you try to survive long enough to find a specific treasure, “Timed Game,” where you try to see how much gold you can amass in a specific time period, and “Become the Hunted,” where a Level 6 monster chases you around the dungeon trying to kill you before you get the allotted amount of treasure you need.
Overall, I’m happy with the game. I’m glad they got rid of the new classes and PvP rules in the 1989 and 1992 versions of the game which really bogged things down. This is a return (for the most part) to the original late seventies and eighties version of the game that was awesome just the way it was. I still think the art here isn’t as good as the earliest versions of the game (although it IS better than 1992′s variant) and I don’t like the change to Magic Swords, but it’s still a fantastic game more than thirty-five years after its original release. Playing this definitely brings back memories of getting hit in the back of the head with snowballs by cousins a foot taller and several years older than me or getting dunked in a chigger filled lake. Wait no, those are BAD childhood memories. Dungeon! recalls the good ones – like the Battle Lake flea market and finding old G.I. Joes and Transformers for cheap, or staying up crazy late to play this weird game we called “Shaderack.” Dungeon! is still probably best left in the hands of younger gamers, but even older ones can have fun with this very simple and streamlined dungeon crawl. With a price tag of less than twenty dollars, this is definitely a game any fantasy fan should be on the lookout for – especially if you played one of the earlier editions as a child. Nostalgia abounds here.
Tags: Dungeon!, Dungeons & Dragons
Anyone know what replaced:
IMessagingService
This is going back to 1.1-1.3 I think. I cannot find it. I think it was under:
using Orchard.Settings;
I was trying to do this:
public class EmailMessagingService : IMessagingService
I see an IMessagingChannel, but no IMessagingService.
Thanks for confirming that. I could only see the source for 1.4 here.
Tree::Simple - A simple tree object
use Tree::Simple;

# make a tree root
my $tree = Tree::Simple->new("0", Tree::Simple->ROOT);

# explicitly add a child to it
$tree->addChild(Tree::Simple->new("1"));

# specify the parent when creating
# an instance and it adds the child implicitly
my $sub_tree = Tree::Simple->new("2", $tree);

# chain method calls
$tree->getChild(0)->addChild(Tree::Simple->new("1.1"));

# add more than one child at a time
$sub_tree->addChildren(
    Tree::Simple->new("2.1"),
    Tree::Simple->new("2.2")
);

# add siblings
$sub_tree->addSibling(Tree::Simple->new("3"));

# insert children at a specified index
$sub_tree->insertChild(1, Tree::Simple->new("2.1a"));

# clean up circular references
$tree->DESTROY();
This module is a fully object-oriented implementation of a simple n-ary tree.
I consider this module to be production stable; it is based on a module which has been in use on a few production systems for approx. 2 years now with no issues. The only difference is that the code has been cleaned up a bit, comments added and thorough tests written for its public release. I am confident it behaves as I would expect it to, and is (as far as I know) bug-free. I have not stress-tested it under extreme duress.
getUID method.
This method accepts only Tree::Simple objects or objects derived from Tree::Simple; an exception is thrown otherwise. This method will append the given
$tree to the end of its children list, and set up the correct parent-child relationships. This method is set up to return its invocant so that method call chaining can be possible. Such as:
my $tree = Tree::Simple->new("root")->addChild(Tree::Simple->new("child one"));
Or the more complex:
my $tree = Tree::Simple->new("root")->addChild(
    Tree::Simple->new("1.0")->addChild(
        Tree::Simple->new("1.1")
    )
);

The insertion index is bounds checked; if this condition fails, an exception is thrown.
When a child is removed, it results in the shifting up of all children after it, and the removed child is returned. The removed child is properly disconnected from the tree and all its references to its old parent are removed. However, in order to properly clean up any circular references the removed child might have, it is advised to call its
DESTROY method. See the "CIRCULAR REFERENCES" section for more information.
The
addSibling,
addSiblings,
insertSibling and
insertSiblings methods pass along their arguments to the
addChild,
addChildren,
insertChild and
insertChildren methods of their parent object respectively. This eliminates the need to overload these methods in subclasses which may have specialized versions of the *Child(ren) methods. The one exception is that if an attempt is made to add or insert siblings to the ROOT of the tree, an exception is thrown.
NOTE: There is no
removeSibling method as I felt it was probably a bad idea. The same effect can be achieved by manual upwards traversal.
This returns the value stored in the object's node field.
This returns the unique ID associated with this particular tree. This can be custom set using the
setUID method, or you can just use the default. The default is the hex-address extracted from the stringified Tree::Simple object.
Here is an example of a traversal function that will print out the hierarchy as a tabbed-in list.
$tree->traverse(sub {
    my ($_tree) = @_;
    print(("\t" x $_tree->getDepth()), $_tree->getNodeValue(), "\n");
});
Here is an example of a traversal function that will print out the hierarchy in an XML-style format.
$tree->traverse(sub {
    my ($_tree) = @_;
    print((' ' x $_tree->getDepth()), '<', $_tree->getNodeValue(), '>', "\n");
}, sub {
    my ($_tree) = @_;
    print((' ' x $_tree->getDepth()), '</', $_tree->getNodeValue(), '>', "\n");
});
Returns the total number of nodes in the current tree and all its sub-trees.
This method has also been deprecated in favor of the
getHeight method above, it remains as an alias to
getHeight for backwards compatibility.

Cloning can be an extremely expensive operation for large trees, so we provide two options for cloning: a deep clone and a shallow clone.
When a Tree::Simple object is cloned, the node is deep-copied in the following manner. If we find a normal scalar value (non-reference), we simply copy it. If we find an object, we attempt to call
clone on it, otherwise we just copy the reference (since we assume the object does not want to be cloned). If we find a SCALAR, REF reference we copy the value contained within it. If we find a HASH or ARRAY reference we copy the reference and recursively copy all the elements within it (following these exact guidelines). We also do our best to assure that circular references are cloned only once and connections restored correctly. This cloning will not be able to copy CODE, RegExp and GLOB references, as they are pretty much impossible to clone. We also do not handle
tied objects, and they will simply be copied as plain references, and not re-
tied. happens is that the tree instance that
clone is actually called upon is detached from the tree, and becomes a root node, all if the cloned children are then attached as children of that tree. I personally think this is more intuitive then to have the cloning crawl back up the tree is not what I think most people would expect. necessary circular references. In the past all circular references had to be manually destroyed by calling DESTROY. The call to DESTROY would then call DESTROY on all the children, and therefore cascade down the tree. This however was not always what was needed, nor what made sense, so I have now revised the model to handle things in what I feel is a more consistent and sane way.
Circular references are now managed with the simple idea that the parent makes the decisions.
By default, you are still required to call DESTROY in order for things to happen. However I have now added the option to use weak references, which alleviates the need for the manual call to DESTROY and allows Tree::Simple to manage this automatically. This is accomplished with a compile time setting like this:
use Tree::Simple 'use_weak_refs';
And from that point on Tree::Simple will use weak references to allow for perl's reference counting to clean things up properly.
For those who are unfamiliar with weak references, and how they affect the reference counts, here is a simple illustration. First is the normal model that Tree::Simple uses:
+---------------+
| Tree::Simple1 |<---------------------+
+---------------+                      |
| parent        |                      |
| children      |-+                    |
+---------------+ |                    |
                  |                    |
                  |  +---------------+ |
                  +->| Tree::Simple2 | |
                     +---------------+ |
                     | parent        |-+
                     | children      |
                     +---------------+
Here, Tree::Simple1 has a reference count of 2 (one for the original variable it is assigned to, and one for the parent reference in Tree::Simple2), and Tree::Simple2 has a reference count of 1 (for the child reference in Tree::Simple1).
Now, with weak references:
+---------------+
| Tree::Simple1 |.......................
+---------------+                      :
| parent        |                      :
| children      |-+                    : <--[ weak reference ]
+---------------+ |                    :
                  |                    :
                  |  +---------------+ :
                  +->| Tree::Simple2 | :
                     +---------------+ :
                     | parent        |..
                     | children      |
                     +---------------+
Now Tree::Simple1 has a reference count of 1 (for the variable it is assigned to) and 1 weakened reference (for the parent reference in Tree::Simple2). And Tree::Simple2 has a reference count of 1, just as before.
None that I am aware of. The code is pretty thoroughly tested (see "CODE COVERAGE" below) and is based on an (non-publicly released) module which I had used in production systems for about 3 years without incident. Of course, if you find a bug, let me know, and I will be sure to fix it.
I use Devel::Cover to test the code coverage of my tests, below is the Devel::Cover report on this module's test suite.
---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           stmt branch   cond    sub    pod   time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
Tree/Simple.pm                 99.6   96.0   92.3  100.0   97.0   95.5   98.0
Tree/Simple/Visitor.pm        100.0   96.2   88.2  100.0  100.0    4.5   97.7
---------------------------- ------ ------ ------ ------ ------ ------ ------
Total                          99.7   96.1   91.1  100.0   97.6  100.0   97.9
---------------------------- ------ ------ ------ ------ ------ ------ ------

hierarchy, this module does an excellent job (and plenty more as well).
I have also recently stumbled upon some packaged distributions of Tree::Simple for the various Unix flavors. Here are some links:
There are a few other Tree modules out there, here is a quick comparison between Tree::Simple and them. Obviously I am biased, so take what I say with a grain of salt, and keep in mind, I wrote Tree::Simple because I could not find a Tree module that suited my needs. If Tree::Simple does not fit your needs, I recommend looking at these modules. Please note that I am only listing Tree::* modules I am familiar with here, if you think I have missed a module, please let me know. I have also seen a few tree-ish modules outside of the Tree::* namespace, but most of them are part of another distribution (HTML::Tree, Pod::Tree, etc) and are likely specialized in purpose.
This module seems pretty stable and very robust with a lot of functionality. However, Tree::DAG_Node does not come with any automated tests. Its test.pl file simply checks that the module loads and nothing else. While I am sure the author tested his code, I would feel better if I was able to see that. The module is approx. 3000 lines with POD, and 1,500 without the POD. The sheer depth and detail of the documentation and the ratio of code to documentation is impressive, and not to be taken lightly. But given that it is a well known fact that the likeliness of bugs increases along side the size of the code, I do not feel comfortable with large modules like this which have no tests.
All this said, I am not a huge fan of the API either, I prefer the gender neutral approach in Tree::Simple to the mother/daughter style of Tree::DAG_Node. I also feel very strongly that Tree::DAG_Node is trying to do much more than makes sense in a single module, and is offering too many ways to do the same or similar things.
However, of all the Tree::* modules out there, Tree::DAG_Node seems to be one of the favorites, so it may be worth investigating.
I am not very familiar with this module, however, I have heard some good reviews of it, so I thought it deserved mention here. I believe it is based upon C++ code found in the book Algorithms in C++ by Robert Sedgwick. It uses a number of interesting ideas, such as a ::Handle object to traverse the tree with (similar to Visitors, but also seem to be to be kind of like a cursor). However, like Tree::DAG_Node, it is somewhat lacking in tests and has only 6 tests in its suite. It also has one glaring bug, which is that there is currently no way to remove a child node.
It is a (somewhat) direct translation of the N-ary tree from the GLIB library, and the API is based on that. GLIB is a C library, which means this is a very C-ish API. That doesn't appeal to me, it might to you, to each their own.
This module is similar in intent to Tree::Simple. It implements a tree with n branches and has polymorphic node containers. It implements much of the same methods as Tree::Simple and a few others on top of that, but being based on a C library, is not very OO. In most of the method calls the
$self argument is not used and the second argument
$node is. Tree::Simple is a much more OO module than Tree::Nary, so while they are similar in functionality they greatly differ in implementation style.
This module is pretty old: it has not been updated since Oct. 31, 1999 and is still on version 0.01. It also seems (from the limited documentation) to be a binary and a balanced binary tree, whereas Tree::Simple is an n-ary tree and makes no attempt to balance anything.
Is a wrapper around a C library; again, Tree::Simple is pure Perl. The author describes FAT-trees as a combination of a tree and an array. It looks like a pretty mean and lean module, and good if you need speed and are implementing a custom data-store of some kind. The author points out too that the module is designed for embedding and there is no default embedding, so you can't really use it "out of the box".
getUID and setUID methods.
Stevan Little, <stevan@iinteractive.com>
Rob Kinyon, <rob@iinteractive.com>
Ron Savage <ron@savage.net.au> has taken over maintenance as of V 1.19.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~rsavage/Tree-Simple-1.23/lib/Tree/Simple.pm | CC-MAIN-2014-35 | refinedweb | 2,128 | 61.26 |
Make your Data Talk!
Matplotlib and Seaborn are two of the most powerful and popular data visualization libraries in Python. Read on to learn how to create some of the most frequently used graphs and charts using Matplotlib and Seaborn.
By Puneet Grover, Helping Machines Learn.
This article is one of the posts from the
Tackle category, which can be found on my github repo here.
Index
- Introduction
- Single Distribution Plots (Hist, KDE, -[Box, Violin])
- Relational Plots (Line, Scatter, Contour, Pair)
- Categorical Plots(Bar, +[Box, Violin])
- Multiple Plots
- Interactive Plots
- Others
- Further Reading
- References
NOTE:
This post goes along with the Jupyter Notebook available in my Repo on Github: [HowToVisualize]
1. Introduction
What is data, nothing but numbers. If we are not visualizing it to get a better understanding of the world inside it, we are missing out on lots of things. I.e. we can make some sense out of data as numbers, but magic happens when you try to visualize it. It makes more sense and it suddenly it becomes more perceivable.
We are sensual beings, we perceive things around us through our senses. Sight, Sound, Smell, Taste and Touch. We can, to some extent, distinguish things around us according to our senses. For data, Sound and Sight seems to be the best options to represent it as it can be easily transformed. And we mostly use Sight as a medium to perceive data because probably we are accustomed to differentiating different object through this sense and also, though in lower level, we are also are accustomed to perceiving things in higher dimensions through this sense which comes in handy in multivariate data sets.
In this post, we look into two of the most popular libraries for visualization of data in Python and use them to make data talk, through visualization:
1.1 Matplotlib
Matplotlib was made keeping MATLAB’s plotting style in mind, though it also has an object oriented interface.
1. MATLAB style interface: You can use it by importing
pyplot from matplotlib library and use MATLAB like functions.
When using this interface, methods will automatically select current figure and axes to show the plot in. It will be so (i.e. this current figure will be selected again and again for all your method calls) until you use
pyplot.show method or until you execute your cell in IPython.
2. Object Oriented interface: You can use it like this:
import matplotlib.pyplot as plt figure, axes = plt.subplots(2) # for 2 subplots # Now you can configure your plot by using functions available for these objects.
It is low level library and you have total control over your plot.
1.2 Seaborn
Seaborn is a higher level library for visualization, made on top of matplotlib. It is mainly used to make quick and attractive plots without much hassle. Though seaborn tries to give some control over your plots in a fancy way, but still you cannot get everything you desire from it. For that you will have to use matplotlib’s functionality, which you can use with seaborn too (as it is built on matplotlib).
2. Distribution Plots
Distribution plots (or
Probability plots) tells us how one variable is distributed. It gives us probability of finding a variable in particular range. I.e. if we were to randomly select a number from total range of a variable, it gives us probabilities of this variable being in different ranges.
Distribution plots should be
Normally distributed, for better results. This is one of the assumptions of all Linear models, i.e. Normality.
Normal distribution looks like a medium hump on middle with light tails.
Note: If TL;DR (Too Long; Don’t wanna Read), just read initial function used to plot the sub-topic plot and then read through Tips. Eg: here Tip #1 and plt.hist, below.
(Tip #1)
1) You can get away with using
matplotlib.pyplot's function's provided parameters for your plots, in most cases. Do look into function's parameters and their description.
2) All
matplotlib's functions and even
seaborn's functions returns all components of your plot in a dictionary, list or object. From there also you can change any property of your components (in
matplotlib’s language
Artists).
Box Plots and Violin Plots are in Categorical Section.
- Histograms and Kernel Density Estimate Plots (KDEs):
# Simple hist plot _ = plt.hist(train_df['target'], bins=5, edgecolors='white')
# with seaborn _ = sns.distplot(train_df['target'])
(Tip #2)
3) For giving some useful information with your plot or drawing attention to something in plot you can mostly get away with either
plt.text()or
plt.annotate().
4) Most necessary parameter for a plot is‘
label’, and most necessary methods for a plot are ‘
plt.xlabel’, ‘
plt.ylabel’, ‘
’, and ‘’, and ‘
plt.title
plt.legend’.
A] To effectively convey your message you should remove all unwanted distractions from your plot like right and top axis, and any other unwanted structure in your plot.
import matplotlib.pyplot as plt _ = plt.hist(data, bins=10, color='lightblue', label=lbl, density=True, ec='white') plt.legend() plt.title("Target variable distribution", fontdict={'fontsize': 19, 'fontweight':0.5 }, pad=15) plt.xlabel("Target Bins") plt.ylabel("Probability");
<prop fontsize='18' color='blue'> hard to predict.<\prop>");
# Thats all! And look at your plot!!
3. Relational Plots
Relational plots are very useful in getting relationships between two or more variables. These relationships can help us understand our data more, and probably help us make new variables from existing variables.
This is an important step in
Data Exploration and
Feature Engineering.
a) Line Plots
b) Scatter Plots
c) 2D-Histograms, Hex Plots and Contour Plots
d) Pair Plots
a) Line Plots:
Line Plots are useful for checking for linear relationship, and even quadratic, exponential and all such relationships, between two variables.
(Tip #3)
5) You can give an aesthetic look to your plot just by using parameters ‘
color’ / ‘
c’, ‘
alpha’ and ‘
edgecolors’ / ‘
edgecolor’.
6)
Seabornhas a parameter ‘
hue’ in most of its plotting methods, which you can use to show contrast between different classes of a categorical variable in those plots.
B] You should use lighter color for sub parts of plot which you do want in plot but they are not the highlight of the point you want to make.
plt.plot('AveRooms', 'AveBedrms', data=data, label="Average Bedrooms")
plt.legend() # To show label of y-axis variable inside plot
plt.title("Average Rooms vs Average Bedrooms")
plt.xlabel("Avg Rooms ->")
plt.ylabel("Avg BedRooms ->");
You can also color code them manually like this:
plt.plot('AveRooms', 'AveBedrms', data=data, c='lightgreen') plt.plot('AveRooms', 'AveBedrms', data=data[(data['AveRooms']>20)], c='y', alpha=0.7) plt.plot('AveRooms', 'AveBedrms', data=data[(data['AveRooms']>50)], c='r', alpha=0.7)
plt.title("Average Rooms vs Average Bedrooms")
plt.xlabel("Avg Rooms ->")
plt.ylabel("Avg BedRooms ->");
# with seaborn _ = sns.lineplot(x='AveRooms', y='AveBedrms', data=train_df) hard to predict.<\prop>"); # That's all! And look at your plot!!
b) Scatter Plots:
Not every relationship between two variables is linear, actually just a few are. These variables too have some random component in it which makes them almost linear, and other cases have a totally different relationship which we would have had hard time displaying with linear plots.
Also, if we have lots of data points, scatter plot can come in handy to check if most data points are concentrated in one region or not, are there any outliers w.r.t. these two or three variables, etc.
We can plot scatter plot for two or three and even four variables if we color code the fourth variable in 3D plot.
(Tip #4)
7) You can set size of your plot(s) in two ways. Either you can import
figurefrom
matplotliband use method like: ‘
figure(figsize=(width, height))’ {it will set this figure size for current figure} or you can directly specify
figsizewhen using Object Oriented interface like this:
figure, plots = plt.subplots(rows, cols, figsize=(x,y)).
C] You should be concise and to the point when you are trying to get a message across with data.
from matplotlib.pyplot import figure figure(figsize=(10, 7)) plt.scatter('AveRooms', 'AveBedrms', data=data, edgecolors='w', linewidths=0.1)
plt.title("Scatter Plot of Average Rooms and Average Bedrooms")
plt.xlabel("Average Bedrooms ->")
plt.ylabel("Average Rooms ->");
# With Seaborn from matplotlib.pyplot import figure figure(figsize=(10, 7)) sns.scatterplot(x='AveRooms', y='AveBedrms', data=train_df, label="Average Bedrooms");
| https://www.kdnuggets.com/2019/06/make-data-talk.html | CC-MAIN-2019-30 | refinedweb | 1,419 | 57.87 |
105
The following forum message was posted by fabioz at:
The problem may be that the pydev debugger does not handle zip files properly
in a debugging session (i.e.: it's accessing C:\Program
Files\AutoDesk\Maya2008\bin\python25.zip... If you have the python install
extracted in your machine, you should be able to edit
eclipse361\plugins\org.python.pydev.debug_1.6.5.2011012519\pysrc\pydevd_file_utils.py
to translate the paths from the .zip to the actual python install.
Now, you can take a look at that .zip (C:\Program
Files\AutoDesk\Maya2008\bin\python25.zip) and if it contains the actual .py
files (not only .pyc files), you can extract it to a folder with the name same
of the .zip (i.e.: rename the python25.zip to python25.old.zip and create a
folder named python25.zip with the .zip contents -- but that'll only work if
it actually holds the .py files).
Cheers,
Fabio
The following forum message was posted by fabioz at:
Actually, the problem seems to be that you have to set the PATH variable when
you do the run (not the pythonpath). I think that if you have the PATH variable
configured in the same shell you execute Eclipse, it should already work (note
that you have to restart eclipse if it still wasn't configured).
Another choice is editing the PATH variable in the run configuration or in the
interpreter configuration related to that run.
Cheers,
Fabio
The following forum message was posted by sdox1234 at:
Hi all,
I'm trying hard to become a pydev-eclipse convert. I can get all my basic python
programs to run fine, but I'm trying to get some CUDA gpu programming working
using pyCuda and am having path errors linking to nvcc that I'm not sure how
to fix. I'm running in Ubuntu 10.10 on a 64-bit machine. When trying to run
the pycuda example from: I get the following
error:
...
File "/usr/local/lib/python2.6/dist-packages/pycuda-0.94.2-py2.6-linux-x86_64.
egg/pycuda/compiler.py", line 47, in compile_plain
checksum.update(get_nvcc_version(nvcc))
File "<string>", line 2, in get_nvcc_version
File "/usr/lib/pymodules/python2.6/pytools/__init__.py", line 140, in memoize
result = func(*args)
File "/usr/local/lib/python2.6/dist-packages/pycuda-0.94.2-py2.6-linux-x86_64.
egg/pycuda/compiler.py", line 21, in get_nvcc_version
% (nvcc, str(e)))
OSError: nvcc was not found (is it on the PATH?) [error invoking 'nvcc --version':
[Errno 2] No such file or directory]
typing nvcc --version at a terminal gives:
nvcc: NVIDIA (R) Cuda compiler driver
Built on Wed_Nov__3_16:16:57_PDT_2010
Cuda compilation tools, release 3.2, V0.2.1221
which nvcc ... produces
/usr/local/cuda/bin/nvcc
I have included the paths: /usr/local/cuda/bin and /usr/local/cuda/lib64
to the Window->Preferences->Pydev->Interpreter-Python->PYTHONPATH but still
get the error. Executing the same example at the terminal works fine, so I'm
missing something in how pyDev is looking for nvcc?
any help here would be appreciated.
The following forum message was posted by bntheman at:
Ok,
so this is real annoyiing. When i go to debug the above script in Eclipse, i
am back to getting that cannot find error again.
Could not copy all of the files it could not find, since I do not have an internet
connection with my PC, but here are some of them...
[code]
pydev debugger: CRITICAL WARNING: This version of python seems to be incorrectly
compiled (internal generated filenames are not absolute)
pydev debugger: The debugger may still function, but it will work slower and
may miss breakpoints.
pydev debugger: Unable to find the real location of 'C:\Program
Files\AutoDesk\Maya2008\bin\python25.zip\threading.py
pydev debugger: Unable to find the real location of 'C:\Program
Files\AutoDesk\Maya2008\bin\python25.zip\stat.py
Also, I found out some things that I was doing wrong.
1. I noticed that mayapy.exe is really picky with which version of python you
are suppose to use. You cannot have Python 2.5 or 2.5.4. Nope, must be Python
2.5.1.
2. Once I fixed that, I then noticed that for each script I run, I must change
the current working directory to 'C:\...\Maya2008\bin'. So now my script looks
like this....
import pydevd; pydevd.settrace()
import maya.standalone as ms
ms.initialize(name='python')
import maya.cmds as cmds
results = cmds.polyplane(name = 'myPlane', ch =1, w = 1, cuv = 2)
print results[0]
[/code]
...but that was not the end of it. When I go to run this script I still get
a traceback error, and the really annoying part is the fact that my script works
in Python's IDLE. So what I did was, I printed the results of cmds in Python
IDLE and it points to...
"C:\Program Files\Autodesk\Maya2008\Python\Libs\site-packages"
.... and when I run the script in Eclipse it is pointing to somewhere else,
I forget where. However, the weird part is that I made sure that C:\Program
Files\Autodesk\Maya2008\Python\libs\site-packages was the last thing listed
in my system libraries path. In spite of this, and I am not sure if this would
be the correct way of doing this, but I had setup a Run configuration, and entered
the above correct path in the Envrioment Tab like so...
I set the variable to PYTHONPATH
and the value to C:\Program Files\Autodesk\Maya2008\Python\libs\site-packages
.... and that fixed that problem, but I am not sure why when I go to debug mode,
I get those cannot find errors.
The following forum message was posted by bntheman at:
hi Fabio, I'm sorry I haven't gotten back to, but I was having some problems
with my pc continuesly rebooting by itself. I think everything is ok now.
So getting back to my remote debugging problem, since you have mentioned that
Maya needs to be openned, that fixed the "cannot find the real location for..."
problem as well as the pydev settrac() to work. However, this still does not
fix the problem with getting my script to work.
... sorry but my ps3 limits me with texting. I will finish in the next thread.
The following forum message was posted by bntheman at:
... So here is my script.
import pydev; pydev.settrac()
import maya.standalone as ms
ms.linitialize()
import maya.cmds as cmds
results = cmds.polyPlane(name = 'myPlane', ch=1, w=1, h=1, cuv=1)
print results[0]
so... when I run this, I get a TraceBack error....
AttributeError 'noneType' something something is unscriptable sorry but I couldn't
remember the exact error, but from a python point of view, that error means
that the variable, results, was never assigned to anything. So, if I am setting
results to be ='ed to the results of polyPlane(...), and polyPlane(...) is suppose
to return the name of the new object, then why is results ='ed to none?
The following forum message was posted by ekondrashev at:
Hi
I'm developing eclipse plugin and faced with situation when i need update my
project python path programmatically.
Is there a way of doing this from the eclise plugin?
Perhaps there is some extension point or something else?
Thanks,
Eugene
Pydev finds out about the interpreter by running
python_executable_provided
plugins/org.python.pydev_xxx/PySrc/interpreterInfo.py
And gets the actual executable in that script accessing 'import
sys;print(sys.executable)"... in your case, your custom interpreter
should have the sys.executable as itself, but it appears it's pointing
to /usr/bin/python instead (so, your custom interpreter should be
fixed to handle that).
Cheers,
Fabio
On Fri, Jan 21, 2011 at 5:30 PM, Michael Wand <michael.wand@...> wrote:
>ydev-users mailing list
> Pydev-users@...
>
>
The following forum message was posted by fabioz at:
It shouldn't be hard. Please create a feature request for that.
Cheers,
Fabio
The following forum message was posted by repgahroll at:
Thanks man. But the problem was PyDev uses sitecustomize by default. It's necessary
to add it in PYTHONPATH to run some programs.
The following forum message was posted by fabioz at:
You can see the command line in: toolbar > run > run configurations, choose
the run that's working for you, select the 'interpreter' tab, press 'see resulting
command line for the given parameters'. it should give you the command line
to use and the pythonpath you need to set for it to run.
Cheers,
Fabio
The following forum message was posted by repgahroll at:
Hello there.
My apps doesn't run from terminal, only from PyDev. I wonder how i can reproduce
the PyDev parameters/settings in order to get my apps running without it.
I've tried to fix the code in order to achieve that, but i fix one problem and
another appears, and so on. And as the apps run perfectly from PyDev, I want
to simply reproduce that without it.
Can someone help me please? (I'm on Linux, Python 2.6)
Thank you. thedm at:
As you told me, I created a ticket here:
g-loading-bundles
The following forum message was posted by microo8 at:
Hi, i have a problem with PyDev and its code completion of method params.
when i write:
def method_in_my_module1(param_that_is_a_class_from_my_module2):
param_that_is_a_class_from_my_module2. #and here when i pres CRTL+Space
it popups a box with sugestions and they are not good for that
"param_that_is_a_class_from_my_module2"
in would be good that i write this method and explicitly write a comment on
it, there would be the description of the params and PyDev will know what class
is the param and the code-completion would be good for that class.
sorry for my english :)
The following forum message was posted by lyquid at:
Hi there
I'm having a problem with import statements. The module I need has been added
as a library but due to our versioning system it's a symlink:
.../moduleIWant.py -> versions/someAwkwardNameWithNoDotPyExtension
Seems Eclipse is dereferencing the link to the full canonical path which makes
no sense to it, especially because it doesn't end with .py
My temp solution has been to manually copy the modules to another location which
forces them to become real files with the correct name. The problem is that
this is no a longer a live reference to our latest libraries.
Any thoughts much appreciated
Mark
eclipse 3.5.1
pydev 1.5.4
centos 3
The following forum message was posted by bntheman at:
Ok,
I think I know what. Is going on. Just as I was about to give up, I thought
to myself, why couldn't I try rrunning maya standalone in Python IDLE. At first
I was simply getting an import error no module named maya.standalone. I searched
this error and the best answer I could find was to make sure that my system
enviroment pythonpath was set to
c:\program files\autodesk\maya2008\python\lib\site-packages
I did that, and I still got the same import error. So I did some more searching,
and one person had asked almost the same question at autodesk's forum. Although
no one had replied, the person asking the question said that he tried copying
the standalone.pyd file to the main site-packages folder, and it worked. So
rather than copying the file, I decided to use pythonKs os module to change
the current working directory like so...
Import os
os.chdir("c:\program files\autodesk\maya2008\python\lib\site-packages\maya")
I then tried
Import standalone
standalone.initialize(name='python')
I did not get an error, but it did not take as long to import as if I was doing
this in mayapy.exe, and sure enough, when I tried importing maya.cmds the import
worked, but the ls command spit out an AttributeError module object has no attribute
'ls'
...so it sounds like to me that my enviroment varaibles are being ignored, and
possibly answers why I am getting all sorts of errors trying to use eclipse.
If so, is there a fix for this?
The following forum message was posted by at:
This is a usability issue only. After a fresh install of Eclipse Classic (3.6.1)
on Ubuntu and PyDev 1.6.4.2011010200, the debug menu often does not include
"Debug As... PyDev: Django". If I right-click on my project in the project
explorer and visit the Django submenu, without clicking on anything, I can go
back to the debug menu and "Debug As... PyDev: Django" is there.
Also, if you right-click on PyDev Django in the Eclipse Debug Configurations
popup, and select "New", the New_configuration created is totally blank and
it would be quite complicated to fill in all the values. I found the only practical
way is to run "Debug As... PyDev: Django once so it automatically creates a
useful debug config (all values filled in); you'll see that new configuration
in the Debug Configurations popup and you can "Duplicate" it to make your own.
Perhaps all of this is as per design... in which case the manual
[url][/url] might be improved.
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/pydev/mailman/pydev-users/?viewmonth=201101 | CC-MAIN-2017-26 | refinedweb | 2,262 | 65.73 |
New to javaFx and wanting to use scenebuilder for GUI development, i've come across an issue with no luck searching the website nor the web in general for my problem, although similar questions have been asked, thought a different perspective could be needed. I am trying to load an FXML file through Netbeans after a quick build to test functionality so the code is simple, but i cannot get the root file to be set in the controller. my code is the following public class Divergex extends Application {
@Override public void start(Stage stage) throws Exception { Parent root = FXMLLoader.load(getClass().getResource("DivergexGUI.fxml")); Scene scene = new Scene(root); scene.setRoot(root); stage.setScene(scene); stage.show(); }
Ive tried suggestions in changing fxroot to a Vbox with no luck, i continue to get a Load exception on the compile :
Exception in Application start method... Caused by: javafx.fxml.LoadException: Root hasn't been set. Use method setRoot() before load.
yet when i use
scene.setRoot(root);
the same exception is experienced
i've narrowed the issue down to the fact that my FXML file is unable to be set as a root in the Parent object but have had no luck in tackling this. Any suggestions would be great thanks.
<fx:root> specifies a "dynamic root" for your FXML file; this means the root of the FXML file is an object that is set on the loader prior to loading the file. This is typically used for custom controls, where you want the control to be a subclass of
Node that can be instantiated using regular Java code, but want to define its layout using FXML. Proper use of
<fx:root> (or at least an example of how it can be used) is shown in the standard documentation. In particular, if you use
<fx:root> you must:
FXMLLoaderinstance, instead of using the static convenience
FXMLLoader.load(URL)method
For standard FXML use, you just use a regular instance declaration as the root. Almost every example available works this way: probably the best place to start is the official tutorial. In your case, since you want a
VBox, you probably just need
<VBox xmlns="javafx.com/javafx/8"; xmlns: <!-- ... --> </VBox>
Edit If Netbeans is giving you issues, I recommend using Eclipse with the e(fx)clipse plugin. There's a very barebones, but pretty much all you need, tutorial.
uncheck id::root in scence builder or change id::root to vbox | https://javafxpedia.com/en/knowledge-base/23729277/javafx-fxml-load-file-issues-with-setting-root | CC-MAIN-2020-50 | refinedweb | 411 | 61.67 |
An easy to use ini file handler library
Ini Handler is a simple and small Python library for reading and writing .ini setting files.
Ini Handler makes implementing user customisable settings painless and simple. It achieves this by limiting the number of methods you have to remember to be able to do what you want to do.
For example:
from ini_handler.vbini import Ini
ini_file = Ini() ini_file[‘NewSetting’] = ‘Simple!’
print(ini_file[‘NewSetting’])
Output:
‘Simple!’
Just like that we have created and retrieved a new setting!
Go here for the documentation:
Source
The source files can be found on GitHub:
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/Ini-Handler/ | CC-MAIN-2017-43 | refinedweb | 120 | 68.26 |
/* * Copyright def HAVE_CONFIG_H #include "config.h" #endif #ifndef lint static const char rcsid[] _U_ = "@(#) $Header: /tcpdump/master/tcpdump/print-vjc.c,v 1.15 2004-03-25 03:31:17 mcr Exp $ (LBL)"; #endif #include <tcpdump-stdinc.h> #include <pcap.h> #include <stdio.h> #include "interface.h" #include "addrtoname.h" #include "slcompress.h" #include "ppp.h" /* * XXX - for BSD/OS PPP, what packets get supplied with a PPP header type * of PPP_VJC and what packets get supplied with a PPP header type of * PPP_VJNC? PPP_VJNC is for "UNCOMPRESSED_TCP" packets, and PPP_VJC * is for COMPRESSED_TCP packets (PPP_IP is used for TYPE_IP packets). * * RFC 1144 implies that, on the wire, the packet type is *not* needed * for PPP, as different PPP protocol types can be used; it only needs * to be put on the wire for SLIP. * * It also indicates that, for compressed SLIP: * * If the COMPRESSED_TCP bit is set in the first byte, it's * a COMPRESSED_TCP packet; that byte is the change byte, and * the COMPRESSED_TCP bit, 0x80, isn't used in the change byte. * * If the upper 4 bits of the first byte are 7, it's an * UNCOMPRESSED_TCP packet; that byte is the first byte of * the UNCOMPRESSED_TCP modified IP header, with a connection * number in the protocol field, and with the version field * being 7, not 4. * * Otherwise, the packet is an IPv4 packet (where the upper 4 bits * of the packet are 4). * * So this routine looks as if it's sort-of intended to handle * compressed SLIP, although it doesn't handle UNCOMPRESSED_TCP * correctly for that (it doesn't fix the version number and doesn't * do anything to the protocol field), and doesn't check for COMPRESSED_TCP * packets correctly for that (you only check the first bit - see * B.1 in RFC 1144). * * But it's called for BSD/OS PPP, not SLIP - perhaps BSD/OS does weird * things with the headers? * * Without a BSD/OS VJC-compressed PPP trace, or knowledge of what the * BSD/OS VJC code does, we can't say what's the case. 
* * We therefore leave "proto" - which is the PPP protocol type - in place, * *not* marked as unused, for now, so that GCC warnings about the * unused argument remind us that we should fix this some day. */ int vjc_print(register const char *bp, u_short proto _U_) { int i; switch (bp[0] & 0xf0) { case TYPE_IP: if (eflag) printf("(vjc type=IP) "); return PPP_IP; case TYPE_UNCOMPRESSED_TCP: if (eflag) printf("(vjc type=raw TCP) "); return PPP_IP; case TYPE_COMPRESSED_TCP: if (eflag) printf("(vjc type=compressed TCP) "); for (i = 0; i < 8; i++) { if (bp[1] & (0x80 >> i)) printf("%c", "?CI?SAWU"[i]); } if (bp[1]) printf(" "); printf("C=0x%02x ", bp[2]); printf("sum=0x%04x ", *(u_short *)&bp[3]); return -1; case TYPE_ERROR: if (eflag) printf("(vjc type=error) "); return -1; default: if (eflag) printf("(vjc type=0x%02x) ", bp[0] & 0xf0); return -1; } } | http://opensource.apple.com/source/tcpdump/tcpdump-28/tcpdump/print-vjc.c | CC-MAIN-2016-18 | refinedweb | 482 | 69.21 |
AI is a vast field of theories and methods, and to be perfectly clear: despite advertising exaggeration and hype, machine learning as it is today amounts to only a small part of AI. One significant practical problem today is that the endeavor to advertise products dramatically distorts scientific information on the subject by equating machine learning with AI. For example, if you Google “artificial intelligence frameworks” you will get hits on Theano and TensorFlow. You did not Google “machine learning frameworks,” but you received a list of machine learning frameworks anyway, because people are in the business of selling this software, and they don’t care what you call it. These products are not comprehensive AI frameworks; they are limited to problems of machine learning, which are largely pattern recognition problems. Conflating these terms is disadvantageous.
Recently, many products are even advertised as autonomous! To get an idea of a truly autonomous self-driving car, imagine one that suddenly decides on its own to leave San Jose and drive up to Vancouver to take some nice pictures of the mountains in the Fall, and disappears! That is autonomy. Hopefully, we can look at a few of the research horizons of AI and quickly realize that machine learning does not even approach such lofty goals.
Although ML can do excellent pattern recognition and therefore defeat a human chess expert, it will never enjoy the game of chess nor satisfactorily emulate the emotions of a human player, which give rise to both the priority and significance of the game. It is not apparent to most humans that logic has dependencies! Logic itself arises from the presupposition of priorities and values. Indeed, although the broader field of AI proposes theories for constructing such machines, AI cannot escape Gödel’s incompleteness: logical-mathematical systems include statements that are accepted as true but which cannot be proven! Machine learning is little more than old linear regression renamed.
Examining the “difference between machine learning and deep learning” again we find that the latter is a refined and advanced subset of the former. If we are talking about the “difference between machine learning and neural networks” we can see that the neural network is a method of deep learning. Cognitive neural networks are a further specialized subset. Furthermore, an effort to delineate the “difference between deep learning and neural networks” reveals once again that neural networks are a specific set of methods of deep learning, and so AI contains all of these by proxy.
Something a bit different happens when we study the “difference between data science and machine learning” because now we are looking at two broad fields of inquiry; and we find that data science includes AI to an extent and also shares many methods with machine learning. The difference between data mining and machine learning is more ambiguous because many techniques and methods of pattern recognition are commonly and almost equally labeled as one or the other.
Today, the difference between AI and machine learning is the single most important source of confusion in popular technical journals. Misconceptions about the difference between machine learning and deep learning are the second most common. Toward the goal of straightening out all these misconceptions, we will explore all three of these fields with an in-depth look at each.
After distinguishing the relationships among the various fields of inquiry as above, the associated development frameworks of each likewise fall into a similar order. A framework is an environment for creating applications, and usually includes an IDE, one or more standard language interpreters and compilers, and a vast array of standard libraries and modules for advanced coding. Clarifying the differences among popular frameworks results in a hierarchical structure similar to the one above, but we now need to include language interfaces and point out the difference between library and framework. In this article we will look at the best-of-class frameworks.
When we talk about a library in the context of a programming language like Python, we include libraries like SciPy, NumPy, and pandas, each of which contains a set of functions and methods to make coding projects efficient and convenient. Pandas, for example, contains the DataFrame object, which makes it convenient to represent data in a form similar to MS Excel. Many of the same methods used in Excel are replicated in pandas. The NumPy library includes functions for efficient matrix math, which is important to all the methods of machine learning.
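For instance, a few lines show the Excel-like DataFrame from pandas alongside NumPy’s matrix math; the column names here are invented purely for illustration:

```python
import numpy as np
import pandas as pd

# A DataFrame represents tabular data much like an Excel sheet.
df = pd.DataFrame({
    "height_cm": [150, 160, 170, 180],
    "weight_kg": [50, 60, 70, 80],
})

# Familiar spreadsheet-style operations are one method call away.
print(df["weight_kg"].mean())   # column average: 65.0

# NumPy supplies the efficient matrix math that underpins machine learning.
X = df.to_numpy(dtype=float)
gram = X.T @ X                  # 2x2 Gram matrix, as used in linear regression
print(gram.shape)               # (2, 2)
```

Operations like the Gram matrix above are exactly what the ML frameworks discussed below compute at scale.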
A quick deep learning framework comparison reveals the common overlap in concepts and produces hits including TensorFlow, Keras, PyTorch, and MXNet. Here we can see that the library and framework are two terms used interchangeably, and for practical purposes, there is no problem with doing so. Just keep in mind that you will likely see SciKitLearn described as a Python library as well as an ML framework. Likewise, a machine learning framework comparison will produce almost identical hits.
As for the most popular deep learning framework, the truth is perhaps impossible to discover, because proprietary frameworks have absorbed open-source frameworks, effectively concealing the frequency of their implementation. Also, the success of a deep learning project will depend on the accurate choice of machine learning methods for a specific set of data features rather than a framework. This is true because all of the popular frameworks contain the same popular methods, like K nearest neighbors, random forests, and deep belief neural networks. For instance, Wolfram’s machine learning framework absorbed MXNet and it is then distributed within a paid application. Not surprisingly, developers move from one company to another, taking their design methods with them, and after a few years of this mixing, all distributions look alike.
To speak of a “python machine learning framework” is to imply SciKit Learn machine learning library or another library designed to work with the Python language. Python is not inherently designed for AI, but many libraries and frameworks for AI, machine learning, and deep learning are easily implemented with Python. Our purpose here is to delineate this terminology and to provide a fundamental example as an illustration. Beyond Python, many languages are capable of implementing models of AI and machine learning including Lisp, Prolog, Java, and more. Now let’s look at the implementation of these tools.
To escape the hype of advertising and drive closer to more diverse research in true AI, we need to expand our vocabulary. The problem with Google is that you cannot search for something unless you know what it’s called; You can’t browse the internet the way you can browse a traditional library. This shortcoming makes us wonder why this app is called a browser! But fortunately, things can be discovered through serendipity. Lisp is the second-oldest high-level language in use today. After its inception became the preferred coding language for artificial intelligence research apps. Common Lisp is currently the most popular dialect. And Caveman2 is an open-source framework for Common Lisp with support for creating web applications. Caveman2 is free open-source software and is available through the Quicklisp package manager. Quicklisp is a package manager intended to support Common Lisp modules and libraries. Quicklisp implements a simple compatibility layer allowing Quicklisp to run with most Common Lisp implementations. It can also facilitate the download of some 1,400 related modules and libraries.
We want to demonstrate the use of a true AI framework with a code sample that pertains to an endeavor relevant in today’s AI context. Although it is not in the scope of this article to explain the syntax and functionality of Lisp, we can delineate some salient features and thus distinguish it from the throng of machine learning frameworks blaring in previously subtle places.
Coined from the phrase “list processing,” Lisp is practical for AI apps because of its great prototyping capabilities and facility for symbolic expression. Lisp is used in AI projects such as DART, Macsyma – one of the first symbolic algebra apps which were originally developed at MIT, and CYC, and is often used in medical diagnosis apps, one of the most difficult problems in machine intel. Here is an example List program to reverse the order of characters in a list:
(defun iterative-reverse (1st) ;; function reverse list with iteration (prog (temp) ;; temp local variable initialized to NIL LOOP (cond ((null 1st) (return temp))) ;; check for termination of 1st, add first element to temp (setq temp (cons (car 1st) temp)) ;; remove first element from 1st before looping (setq 1st (cdr 1st)) (terpri) (princ ' J Temp = ]) (princ temp) ;; print result (go LOOP)))
The parenthetical syntax is remarkable in Lisp. A feature annoying to some and quintessential to others, this aspect is little different than brackets in C++. The objects in Lisp are called atoms, and anything can be an atom. Likewise, functions and recursion are similar to other languages. In fact, most languages of today are capable of implementing logical and symbolic features similar to Lisp. For example, Reddit news was originally written in Common Lisp, but it was later rewritten in Python.
A typical implementation in Caveman2 to reference a JSON file HTTP request looks like this:
(defun char-vector-to-string (v) (format nil "~{~a~}" (mapcar #'code-char (coerce v 'list)))) (defun remote-json-request (uri) "Pull in remote JSON. Drakma returns it as a large vector of character codes, so we have to parse it out to string form for cl-json." (let* ((json-response-raw (http-request uri)) (json-response-string (char-vector-to-string json-response-raw)) (json (decode-json-from-string json-response-string))) json)
Caveman2 in conjunction with Quicklisp now provides functionality for developing web apps equivalent to Python and MXNet, for example. The nuances in the choice of language and framework are largely nominal today. One enterprise may favor Lisp because of a legacy of established code and engineers already familiar with the existing libraries. Freedom of choice is greatest at the point of the initial design.
As we have clearly established, the most popular machine learning frameworks today are equivalent in that all include every popular method of ML and DL. This is imperative because of competition and is facilitated by the widespread availability of research documents that demonstrate the implementation of the methods. Conflating ML and DL is a trivial error, but artificial intelligence is a superset of both. The list of competing for ML and DL frameworks grows every month, and it is astonishing that freeware programs should compete at all, which may serve to illustrate the wild popularity of the subject. Although the following list of frameworks is far from exhaustive it will demonstrate the point:
And we have not even mentioned the paid frameworks. The real choice of which method to choose to achieve the greatest accuracy in a given project must be based on a mathematical appraisal of the objectives. For example, cognitive neural network methods are better for natural language processing. Of the many regression methods here are four noteworthy varieties::
The artificial neural networks are pattern matching algorithms used for regression as well as classification problems. To name a few:
Deep Learning methods constitute more recent improvements and innovations in classical ML methods. Targets are Big Data in the context of decreasing hardware costs. DLLs intend to increase the depth of ordinary neural networks and extend them to larger datasets. However, the methods are fundamentally the same, and include these popular examples::
Choosing any two frameworks from the above list and exploring a few example scripts will eventually prove that they are all fundamentally the same down to the pith. In this example, we will generate a small dataset with Shogun and Python:
from numpy import * from numpy.random import randn dist=0.499 trainingdata_real = concatenate((randn(2,1000)-dist1, randn(2,1000)+dist), axis=1) testdata_real = concatenate((randn(2,1000)-dist1, randn(2,1000)+dist), axis=1) train_labels = concatenate((-ones(1000), ones(1000))) test_labels = concatenate((-ones(1000), ones(1000)))
We first import numpy, and then generate a type of real-valued training and test data split based on a Gaussians distribution. Next, generate two Gaussian sets that are “dist” apart. We insert the data in a matrix with each column describing an object. Finally, we add labels. This simple setup can be replicated in an almost identical form in all of the frameworks listed.
All machine learning algorithms run fastest on GPU hardware because ML is modeled on matrix math and GPUs are optimized for matrix math. It’s the perfect match for software and hardware. NVIDIA Deep Learning SDK runs deep learning algorithms with this match in mind. Advanced deep neural networks use algorithms in conjunction with big data with the power of the GPU for apps like self-driving cars, where speed is crucial. Let’s look at an example of a PyTorch algorithm that accelerates n-dimensional tensors on GPUs. In the following Python code, we take advantage of PyTorch Tensors:
import torch dtype01 = torch.FloatTensor N, D_in, H, D_out = 64, 1000, 100, 10 x = torch.randn(N, D_in).type(dtype01) y = torch.randn(N, D_out).type(dtype01) w01 = torch.randn(D_in, H).type(dtype01) w02 = torch.randn(H, D_out).type(dtype01) learning_rate = 1e-6 for t in range(500): h = x.mm(w01) h_relu = h.clamp(min=0) y_pred = h_relu.mm(w02) loss = (y_pred - y).pow(2).sum() print(t, loss) grad_y_pred = 2.0 * (y_pred - y) grad_w02 = h_relu.t().mm(grad_y_pred) grad_h_relu = grad_y_pred.mm(w02.t()) grad_h = grad_h_relu.clone() grad_h[h < 0] = 0 grad_w01 = x.t().mm(grad_h) w01 -= learning_rate * grad_w01 w02 -= learning_rate * grad_w02
We begin by creating a random dataset to test the function. Next, we initialize weights, bias, and calculate the loss. We then use backpropagation with the weights, and finally, we refine the weights with each loop through gradient descent. This is a real breakthrough when running in the GPU.
The core concept in managing datasets in the Caffe2 Framework is Blobs. The purpose is a fundamental reorg through naming data chunks as tensors.
from caffe2.python import workspace, model_helper import numpy as np01 x = np01.random.rand(4, 3, 2) print(x) print(x.shape) workspace.FeedBlob("my x val", x) x2 = workspace.FetchBlob("my x val") print(x2)
In the above code sample, we demonstrate the capability to initialize a tensor in 3-space with a random dataset (a similar concept to the previous Torch sample). Next, we need to demo the Net object. Caffe2 nets are operator graphs, a unique mechanism to mitigate input blobs and output blobs through the learning model. Look at this example:
# Input data: data = np01.random.rand(16, 100).astype(np01.float32) # Label data as integers [0, 9]. label = (np01.random.rand(16) * 10).astype(np01.int32) workspace.FeedBlob("data - ", data) workspace.FeedBlob("label - ", label)
Next, we create the model, and initiate weights and biases:
m = model_helper.ModelHelper(name="Caffe2 net:") weight01 = m.param_init_net.XavierFill([], 'fc_w', shape=[10, 100]) bias01 = m.param_init_net.ConstantFill([], 'fc_b', shape=[10, ])
Finally, we implement the model as:
fc_1 = m.net.FC(["data", "fc - w", "fc - b"], "fc1") pred = m.net.Sigmoid(fc - 1, "pred") softmax, loss = m.net.SoftmaxWithLoss([pred, "label"], ["softmax", "loss"])
Caffe2’s implementation uses a standard Softmax regression to generate the model parameters. The frameworks establish unique features, but in the end, nearly all of them use the highly efficient Softmax for the regression.
Hopefully, it is now apparent that machine learning is not intelligent. However, the more practical point is a realistic appraisal of the difference between artificial intelligence as an advanced field of computer science and the implementations of machine learning which are commonplace in open source frameworks now. Today a common task of ML can be generalized as, “inferring a function to describe hidden structures from the unlabelled data.” This strategy may reveal surprising forecasts that a company can profit from immediately purchasing a fleet of vehicles, but it will not cause the machine to choose the decision not to buy the vehicles on the longer-term basis that it is destructive to the environment; the former is a common machine learning task and the latter requires actual intelligence. Field and subfield are thus differentiated. The latter is a level of intelligence not yet on the AI horizon. It is speculative.
Humans are now backward-adaptive, which means that humans are changing their goals and altering their behavior to compensate for the inadequacies of machine intelligence! Humans are effectively lowering the standard definition of intelligence to equivocate the current definition of machine intelligence. As humans develop increasingly intelligent systems while simultaneously backward-adapting themselves the result may be that machines and humans of the future meet somewhere in between today’s concepts of natural and artificial intelligence, through an unfortunate evolutional proxy. What the true future of AI holds in the store may depend more on the advent of the quantum computer than on the development of AI algorithms! | https://bytescout.com/blog/best-ml-dl-and-ai-frameworks-in-2018.html | CC-MAIN-2022-40 | refinedweb | 2,830 | 52.7 |
S
ScoutKirkOlson
@ScoutKirkOlson
0Reputation
4Posts
7Profile views
0Followers
0Following
Best posts made by ScoutKirkOlson
This user hasn't posted anything yet.
Latest posts made by ScoutKirkOlson
- RE: Q-Tree How to tick all nodes posted in Help
- Q-Tree How to tick all nodes.
- RE: Full solution to Dynamic color scheme + extras
@CodeGrue said in Full solution to Dynamic color scheme + extras:
import { colors } from “quasar”;
Your VueX solution works for the Quasar stylus variables only, not for custom added variables does it?
Is there way to dynamically change those as well?
- Linearicons
Hello all,
Does anyone have experience integrating Linearicons () in the Quasar framework? | https://forum.quasar-framework.org/user/scoutkirkolson | CC-MAIN-2020-24 | refinedweb | 105 | 50.77 |
IZotope.Ozone.6.Advanced.v6.00.Incl.Emulator-R2R Full Version
Clever Pilots Flying Weather Station 4 Pro v1.5 Incl Keygen-R2R. Ozone 8 Advanced v8.00. Incl Emulator-R2R.IK Multimedia.Master Collection . Izotope Ozone 8 Advanced v8.00 Incl Emulator R2R.Q:
Setting a default foreign key in Django
I am trying to learn python and django by building a simple blog application.
So the app has two models – Post and Category.
So far I have used a one-to-one relationship to build a model.
So my Post Model is:
class Post(models.Model):
#some other fields
category = models.ForeignKey(Category, on_delete=models.CASCADE)
My Category Model is:
class Category(models.Model):
#some other fields
category = models.CharField(max_length=255)
Now I wish to add a field to Post that will act as a default FK to the Category (ie. whenever I would create a new Post, I could choose a category from a drop down) but I cannot figure out how to set a default to the Category when I create a new Post.
Any help would be great!
A:
You can specify default argument as a keyword argument in the Category model’s __init__:
class Category(models.Model):
#some other fields
category = models.CharField(max_length=255)
def __init__(self, *args, **kwargs):
super(Category, self).__init__(*args, **kwargs)
self.category = kwargs.get(‘default_cat’, None)
Marcin Szatura
Marcin Szatura (born 4 November 1981 in Wrocław) is a Polish football midfielder.
Career
He finished his career in 3rd Divizia A, at Śląsk Wrocław.
References
External links
Category:1981 births
Category:Living people
Category:Polish footballers
Category:Sportspeople from Wrocław
Category:Ś
Download and play Major music, movies, TV shows and more with the free 100. Support for Audiobooks, Podcasts, and Sports radio with easy-to-learn. Find all the latest PC games including free. IZotope Stutter Edit v1.
Ozone.6.Advanced.v6.00.Incl.Emulator-R2R torrent, iso Trainer and clean Download software, games and apps.
iZotope Stutter Edit v1 Full Version, iTunesKeygen, FullCrack.. 1,0,1,1,1,0,1,1,1,0,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0..
DigitalAudio Blog Deadly Rooms of Death Download With Crack. The Last Chapter of the Legacy of Kain – Chapter 14: The Final Battle – Hints & Cheats.
iZotope Ozone 6 Advanced | iZotope RX 6 | Download. IZotope Stutter Edit v1.03-full-crack-1.0. Code – Advanced Features – Word Wrap.
Zero Click – ER2R
1. Playlist for iZotope Ozone 6 Advanced v6.00. Incl. Emulator-R2R. Playlist for iZotope Oxygen.4-3.10.10539-Full.Crack.Win..
Discord chat download for latest android zynga poker machines Download iZotope Ozone.Roland Déricourt
Roland Déricourt, born Roland Jaurès-Déricourt (April 15, 1914, Brussels – November 25, 1982, Leuze-en-Hainaut), was a Belgian resistance fighter, who joined the Belgian army in September 1939 and was part of the military resistance until 1944. During World War II, he took part in many operations.
In March 1940, he was asked to join the SOE, but refused. In May 1940, he joined, under the name “Poulot” to the Belgian intelligence agency Service de Renseignement et d’Action (SRDA). He sent many missions in relation with the Vichy and Nazi regime. In November 1942, he created the Ancienne
3e33713323 | https://www.batiksukses.com/izotope-ozone-6-advanced-v6-00-incl-emulator-r2r-full-version/ | CC-MAIN-2022-33 | refinedweb | 557 | 52.15 |
Hi
I want to know how to speed up the dataloader. I am using torch.utils.data.DataLoader(8 workers) to train resnet18 on my own dataset. My environment is Ubuntu 16.04, 3 * Titan Xp, SSD 1T.
The log shows that the dataloader takes at least 50% time of the training process. So I want to speed up the training process by reducing the time for dataloader.
I analyses the time for the datalayer get_item()
total time: 0.02
load img time: 0.0140, 78.17%
random crop and resize time: 0.0001, 0.68%
random flip time: 0.0001, 0.40%
other time: 22.36%
It shows that the reading image takes the most of time (I read 3 images in this section). I uses the codes from torch.utils.data.Dataset
def pil_loader(path):
# open path as file to avoid ResourceWarning ()
with open(path, ‘rb’) as f:
with Image.open(f) as img:
return img.convert(‘RGB’)
Therefore, I tried Rookie ask: how to speed up the loading speed in pytorch . I save the images as strings with pickle in lmdb. Then I load them back. I found that it doesn’t speed up too much. Maybe pickle.loads() still costs too much time.
Now I have no idea to speed up the dataloader. Any hints will help me much.
Thanks. | https://discuss.pytorch.org/t/how-to-speed-up-the-data-loader/13740 | CC-MAIN-2019-30 | refinedweb | 224 | 88.02 |
Please
to post a new message or reply to an existing one. If you are not registered, please
NOTE: Some forums may be read-only if you are not currently subscribed to
our technical support services.
I would like manifest my unsatisfied with your support.i?m waiting for more 30 days for support about doubts that I sent to you and get now return at now.If motive delay is your issues of bug fix to future release, please, tell and I will waiting without problem. I just need know wath is happen with my application to decide what course I will give to my desgin project. If you need, I send all historical emails.Again, I list my problems below:1) I have a problem with a treegridwnd in header rows and left and right collumn like screenshots and source project sent to you tree times;2) I have a problem with a multi-printpreview like screenshot and source project sent to you tree times;3) I migrated to windows vista home premium 32 bits (VS 2005) where I use VTK and when I initialize this view I get a crash. This crash occur when I?m interact with same treegridwnd.I create a new project without prof-uis wizard, same problems happen to, but I don’t need initialize the PC.4) I still cant post my questions in support forum on the prof-uis site because the area type the questions do not appear. So, I would like if there are some problems with my licence, listed below:Prof-UIS. Prof-UIS One User License with One Year Technical Support.02/22/2007 06:18:27, Status: CompletedGet Invoice Product: Prof-UIS v2.64 License Type: Prof-UIS One User License with One Year Technical Support Quantity: 1 Amount: $445.00Discount: $133.50Total Amount: $311.50
We are really sorry. It is our fault. We tried to compile your AIEGer project but encountered the following error in the stdafx.h file where the following line refers to an un-existing header file:
#include "..\\Comum\\ado2.h" | http://www.prof-uis.com/prof-uis/tech-support/general-forum/30-says-waiting-for-support-without-response-57357.aspx | CC-MAIN-2018-39 | refinedweb | 350 | 73.07 |
The QlikView Management API is a web based service through which you can issue a wide range of commands on data reports for automating the management activities. These activities can include the following:
>> Creating new tasks
>> Data Access Permission and its Modification
>> Licence updating
>> Server setting and its Modifications
Learn how to use QlikView, from beginner basics to advanced techniques, with online video tutorials taught by industry experts. Enroll for Free QlikView Training Demo!
The QlikView Management Service (QMS) API available at by default. All communication are nade through HTTP- SOAP protocol. A "service key" must be supplied as a header of the HTTP request of each operation call, . The HTTP header containing the service key is called X-Service-Key.
For example, the header with a value can look like this:
X-Service-Key: rPnBL6zlbvNr5k2nowI919EJkkOeHsi8
There are different security layers provided to control the access to data and their operations.
First layer of security requires a membership of a local group called QlikView Management.
The second layer of security has the local groups QlikView Administrator, QlikView EDX and the Document Folder Administrator role.
>> QlikView Administrator provides the highest access level on data
>> QlikView EDX the lowest access level on the document data
>> The Document Folder Administrator have access to those operations that require QlikView EDX membership, and a QlikView Administrator has access to operations that require Document Folder Administrator membership.
The user requests to make calls to Management Service by specifying the group of members on the Olik View Server. This group is initially fromed by QlikView Installer by providing specific user details added to it. The user also required to define the parameters appropriate to the type of application they want the QlikiView BI Tool to run.
Like
Open Visual Studio to start a new project. Connect the QMS API to visual studio by adding a Service Reference number/ID as shown below.
Add the service reference dialogue enter the following UR address - - click the Go button and it will connect and validate the QlikView service Server. Provide a meaningful name for this reference, for example QMSAPIService. It should look as below
A “service key” representing the specific user session within the QlikView server must be injected into every request he made. QlikView follows a .net project coding in Visual Studio platform. The steps are as follows:
>> In Visual Studio create a new folder called ServiceSupport in the root of the projects folder structure.
>> Download the attached "
ServiceSupportFiles.zip" file and extract thiose files starting with “
ServiceKey…cs”.
>> Now right click the folder the folder you created above and click “Add | Existing Item” browse to where you saved the files,
>> select all the files saved and click Add.
The structure of your project should now look like the below.
Next each of these and immediately after this paste the below entry
Notice in the code there are TWO references to the namespace for the code we added above, make sure BOTH of these match the namespace of your project.
Finally locate the following block in the config file
Before the closing tag add behaviorConfiguration="ServiceKeyEndpointBehavior"
Save and close the config file.
Frequently Asked QlikView Interview Questions & Answers.
1 /15 | https://mindmajix.com/qlikview-management-api | CC-MAIN-2022-27 | refinedweb | 531 | 53.21 |
Need some help, pleeaaase!!
popshopper
Greenhorn
Joined: Nov 15, 2001
Posts: 1
posted
Nov 15, 2001 08:49:00
0
I keep getting a null pointer exception from the paint method.
It seems that there is a null in the player.display, but I can't find it. any help would be great. Sorry, but the style seems not to have made the transition.
Thanks
B.
<font size=2.5>import java.awt.*; import java.awt.event.*; import java.applet.*; public class Nim extends Applet implements ActionListener { private NimBoard theBoard; private Button reset, row1, row2, row3, newseries; private TextField amountRemoved; private Label title1, title2; private int matches= 0; private boolean exceptionerror, numbererror; public Player currentPlayer, otherPlayer; public boolean checkname; private Player player1, player2; public void init() { theBoard = new NimBoard(12); Player player1 = new Player("Scarface"); Player player2 = new Player("The Hustler"); currentPlayer = player1; otherPlayer= player2; reset= new Button("New Game"); add(reset); reset.addActionListener(this); row1= new Button("Row 1"); add(row1); row1.addActionListener(this); row2= new Button("Row 2"); add(row2); row2.addActionListener(this); row3= new Button("Row 3"); add(row3); row3.addActionListener(this); title1= new Label("Enter the amount of matchsticks you wish removed, and then select row:"); add(title1); amountRemoved= new TextField (2); add(amountRemoved); amountRemoved.addActionListener(this); newseries = new Button("New Series"); add(newseries); newseries.addActionListener(this); repaint(); } // end of init private void playerSwitch(){ if (checkname){ currentPlayer= player2; otherPlayer= player1;} else currentPlayer= player1; otherPlayer= player2; } public boolean checkname (){ boolean current = false; if (currentPlayer.getname().equals (player1.getname())) current = true; return current; } public void actionPerformed (ActionEvent e) { if (e.getSource()== reset) theBoard= new NimBoard(12); if (e.getSource() == newseries) repaint(); try { if (e.getSource() == row1){ matches= Integer.parseInt(amountRemoved.getText()); if (theBoard.moveAllowed(1,matches)) theBoard.play(1, matches); else{ numbererror = true;} } if (e.getSource() == row2){ matches= Integer.parseInt(amountRemoved.getText()); if (theBoard.moveAllowed(2, matches)){ theBoard.play(2, matches);} else{ 
numbererror = true;} } if (e.getSource() == row3){ matches= Integer.parseInt(amountRemoved.getText()); if (theBoard.moveAllowed(3, matches)){ theBoard.play(3, matches);} else{ numbererror = true;} } amountRemoved.setText(""); } // end of try catch (NumberFormatException err) { exceptionerror = true;} if (theBoard.gameOver()){ repaint(); amountRemoved.setText("");} amountRemoved.setText(""); repaint(); } // end of actionPerformed public void paint( Graphics g ) { //if (Play //g.drawString("It is The Hustler's turn next", 50, 400); //else //g.drawString("It is Scarface's turn next", 50, 400); if (theBoard.gameOver()){ g.drawString("Game Over!!! Player "+ currentPlayer.getname()+" has won!!!", 50, 100); ;} else theBoard.display(g); player1.display(g); if((exceptionerror) | | (numbererror)){ g.drawString("Illegal Move", 50, 300); exceptionerror = false; numbererror = false;} else theBoard.display(g); } // end of paint } // end of the applet class Nim // The class NimBoard is to represent the state of a board at // any time, and to provide methods for modifying a board and // displaying a board. 
class NimBoard { public boolean allowed, gameover = false; private int firstRow, secondRow, thirdRow; // private variables to hold the numbers // of matchsticks in the three rows public NimBoard(int rowLength) { // constructor method, used to initialise a game board firstRow= rowLength; secondRow= rowLength; thirdRow= rowLength; } // end of constructor NimBoard public void play(int row, int matchsticks) { // method to "register" a move if (row == 1){ firstRow= firstRow-matchsticks;} if (row == 2){ secondRow= secondRow-matchsticks;} if (row == 3){ thirdRow= thirdRow-matchsticks;} } // end of play public boolean moveAllowed(int row, int matchsticks) { allowed= true; if (firstRow+secondRow+thirdRow <2) { allowed = false;} if (matchsticks <= 0){ allowed = false;} if (row == 1){ if (matchsticks > firstRow) allowed = false;} if (row == 2){ if (matchsticks > secondRow) allowed= false;} if (row == 3){ if (matchsticks > thirdRow) allowed= false;} return allowed; } // end of moveAllowed public boolean gameOver() { // method to check whether the game is over if(firstRow +secondRow+ thirdRow<2) { gameover= true;} return gameover; } // end of gameOver private void drawRow(Graphics g, int matchsticks, int y) { // method to draw one row of the game board (private!) 
// parameter y is for the y-coordinate of the row int x; int counter; counter= matchsticks; x= 50; // sets initial x value for each row while (counter>0) { g.fillRect(x,y,5,30); x= x+10; counter= counter-1; } } // end of drawRow public void display(Graphics g) { // method to display the three rows of the game board drawRow(g,firstRow, 100); drawRow(g,secondRow,150); drawRow(g,thirdRow, 200); } // end of display } // end of the class NimBoard class Player { private String name; private int creditLeft = 5; private int playercredit = 0; public Player (String playerName){ String name = playerName; }// end of player construction method public String getname(){ return name; } public int getcredit(){ return creditLeft; } public int playercredit(){ playercredit = creditLeft; return playercredit; } public boolean creditout(){ if (creditLeft == 0) return true; else return false; } public void display(Graphics g){ g.drawString(name, 250, 400); g.drawString("Credit's left: "+playercredit, 275, 400); } }
(code tags added by Marilyn for readability)
[This message has been edited by Marilyn deQueiroz (edited November 15, 2001).]
Brian MacDonald
Greenhorn
Joined: Nov 15, 2001
Posts: 4
posted
Nov 15, 2001 09:06:00
0
Hi,
I've just realised that my username was incorrect
oops,
sorry.
Paul Stevens
Ranch Hand
Joined: May 17, 2001
Posts: 2823
posted
Nov 15, 2001 09:40:00
0
Brian could you edit your post and put code tag around it.
Brian MacDonald
Greenhorn
Joined: Nov 15, 2001
Posts: 4
posted
Nov 15, 2001 09:49:00
0
Sorry,
I am a complete beginner at this!!!
Also, I forgot to say. This is an assignment so I wouldn't like
help in the way of actual code. But of the life of me, I can't
see where I've went wrong. Any help any one could give me would be great, thanks
Sorry once again.
Paul Stevens
Ranch Hand
Joined: May 17, 2001
Posts: 2823
posted
Nov 15, 2001 10:32:00
0
Look at player1, player2 and name. Look where you create instance variables and then where you use new. See anything wrong?
Brian MacDonald
Greenhorn
Joined: Nov 15, 2001
Posts: 4
posted
Nov 15, 2001 11:43:00
0
I'm not sure, we've only started the course, and the commands I'm
using are the same as those in the book. Should I create instance
variables outside init? I'll continue to
muddle through. I'm getting the variable
coming up as null now, so is definitely just
a matter of the way I'm creating them. I've
been at this all day (I'm Scottish!!!) and
I've been stuck for a while now, I'm sure I'll get it.
thanks
[This message has been edited by Brian MacDonald (edited November 15, 2001).]
Cindy Glass
"The Hood"
Sheriff
Joined: Sep 29, 2000
Posts: 8521
posted
Nov 15, 2001 11:51:00
0
The problem is that you declared them outside of init - which is fine, but then you RE-declared them inside the init() method. This effectively gave you NEW variables local only to the init() method, which you then did stuff with.
When init() was over - poof! the local variables are gone and now all that is left are the original variables which were never initialized with any values
.
If you just USE the variables - without redeclaring them (by naming the variable "type"), the code in init() will use the variabled declared outside.
Player player1 = . . . // declares a variable
player1 = . . . // uses a variable without redeclaring it
[This message has been edited by Cindy Glass (edited November 15, 2001).]
"JavaRanch, where the deer and the Certified play" - David O'Meara
Paul Stevens
Ranch Hand
Joined: May 17, 2001
Posts: 2823
posted
Nov 15, 2001 12:04:00
0
By the way Brian, very noble of you to only want a little help and not have someone else code it for you. In case you didn't get what Cindy pointed out.
In your Player class you define name as an instance variable.
Private
String
name;
Within your constructor you do:
public Player (String playerName){ String name = playerName; }// end of player construction method
You created another variable called name used only within the constructor. So your name instance variable is null. That is why you get the error. The other methods attempted to use name (instance variable) which was still null.
Just say
name = playerName;
Same goes for player1 and player2. I know you just wanted hints but I think now you might understand what happened.
Brian MacDonald
Greenhorn
Joined: Nov 15, 2001
Posts: 4
posted
Nov 15, 2001 12:18:00
0
oops!!
Thank you.
I agree. Here's the link:
subject: Need some help, pleeaaase!!
Similar Threads
Loop for players
help with actionPerformed method.
problem with program hanging
Help with method
synchronization.. and threads !!!
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/390348/java/java/pleeaaase | CC-MAIN-2015-35 | refinedweb | 1,447 | 63.7 |
This page covers the use of external scripting languages with MathProg.
Passing values via the command-lineEdit
Please see the section on passing values via the command-line. The method involves a creating a simple shell script (or batch file).
Parametric studies and MathProgEdit
The MathProg language does not offer control structures in the interests of simplicity and efficiency (a restriction that does not apply to compiled language programming with the GLPK API). This lack of control structures prevents the direct implementation of parametric studies within MathProg — where one would like to perform a simple loop over a range of parameter values with the MathProg solve statement nested inside. A parametric study is the same as a sensitivity analysis.
One work-around is to use a scripting language to generate and run a number of GLPK model instances. A more sophisticated approach would be to use a relational database for the storage of interim solutions.
GLPSOL and AWKEdit
AWK is a well-established scripting language with a C-like syntax. GNU AWK is present by default on most Linux distros, otherwise install the gawk package. Windows users can obtain GNU AWK binaries from gnuwin32.sourceforge.net/packages/gawk.htm.
AWK makes it possible to create GMPL data files and then invoke glpsol on these newly-created files. This means AWK is a good choice for undertaking parametric studies in association with GMPL. It is normally better to use one volatile data file for your scanned parameters and another stable data file for the rest of your model data (glpsol supports multiple data files).
The rudimentary example below is presented as a stub for developing your own parameter scanning scripts. This example repeatedly writes the parameter iter to a volatile data file test.dat and then calls glpsol to display the value of this parameter. This script also generates the model file test.mod at the outset, but normally your model file would preexist. Windows users will need to replace the Linux remove command rm --force with del.
# AWK script # provides a starting point for developing custom parameter scanning scripts BEGIN { modfile = "test.mod"; datfile = "test.dat"; system("rm --force " modfile); # Linux remove call printf ("Writing model file\n"); printf ("# model file\n") > modfile; printf ("param iter;\n") > modfile; printf ("solve;\n") > modfile; printf ("display iter;\n") > modfile; printf ("end;\n") > modfile; close(modfile); for (i = 1; i <= 5; i++) { system("rm --force " datfile); # Linux remove call printf("\n\nIteration %i\n", i); printf ("Writing data file\n"); printf("# data file %i\n", i) > datfile; printf("data;\n") > datfile; printf("param iter := %i;\n", i) > datfile; printf("end;\n") > datfile; close(datfile); system("glpsol --model " modfile " --data " datfile); } exit; }
Save this script as text file scan.awk.
Ensure that lines are separated by line feeds (0x0A). Beware, some OS X editors use carriage returns (0x0D).
Run the script from the command-line:
$ awk -f scan.awk
Edited terminal output from iteration 2 is as follows:
Iteration 2 Writing data file GLPSOL: GLPK LP/MIP Solver, v4.44 Reading model section from test.mod... Reading data section from test.dat... Model has been successfully generated GLPK Simplex Optimizer, v4.44 0 rows, 0 columns, 0 non-zeros ~ 0: obj = 0.000000000e+00 infeas = 0.000e+00 OPTIMAL SOLUTION FOUND Display statement at line 4 iter = 2 Model has been successfully processed
The same basic idea can be implemented in virtually any scripting language, from bash upwards. In addition, astute readers may notice that altered model (as opposed to data) files can also be constructed on-the-fly using similar scripting methods.
GLPSOL and Visual Basic ScriptEdit
Visual Basic Script is programming language delivered with Microsoft Windows.
Visual Basic Script makes it possible to create GMPL data files and then invoke glpsol on these newly-created files.
The example below demonstrates how this can be done.
Create a model file test.mod which will only print parameter p.
param p; printf "Parameter p = %d\n", p; end;
Create a script test.vbs
Const ForWriting = 2 Set wshShell = WScript.CreateObject ("WSCript.shell") Set fso = CreateObject("Scripting.FileSystemObject") Set sout = WScript.StdOut For i = 1 To 3 'Write data file Set MyFile = fso.OpenTextFile("test.dat", ForWriting, True) MyFile.WriteLine "data;" MyFile.WriteLine "param p := " & i & ";" MyFile.WriteLine "end;" MyFile.Close 'Execute glpsol Set oExec = wshShell.exec("glpsol -m test.mod -d test.dat") 'Copy output of glpsol to console used by script While Not oExec.StdOut.AtEndOfStream sout.Write oExec.StdOut.Read(1) Wend Next
Run the script with
cscript test.vbs
GLPSOL and Visual Basic for ApplicationsEdit
Visual Basic Script is programming language delivered with Microsoft Office.
VBA makes it possible to create GMPL files and then invoke glpsol on these newly-created files.
The example below demonstrates how this can be done.
Option Explicit Private Declare Function WaitForSingleObject Lib "kernel32" ( _ ByVal hHandle As Long, _ ByVal dwMilliseconds As Long) As Long Private Declare Function OpenProcess Lib "kernel32.dll" ( _ ByVal dwDesiredAccess As Long, _ ByVal bInheritHandle As Long, _ ByVal dwProcessId As Long) As Long Private Declare Function CloseHandle Lib "kernel32" ( _ ByVal hObject As Long) As Long Private Const SYNCHRONIZE = &H100000 Private Const INFINITE = -1& Private Const MinimizedNoFocus = 6 ' Model file Private Const modfile = "C:\TEMP\test.mod" ' Result file Private Const resfile = "C:\TEMP\test.res" Public Sub parametricStudy() Dim p As Double Dim r As String For p = 0.5 To 2.5 Step 0.2 r = r & runGLPK(p) & vbCrLf Next p MsgBox r, vbOKOnly, "Result" End Sub Private Function runGLPK(p As Double) As String Dim f As Integer Dim s As String Dim pid As Long Dim h As Long Dim x As String Dim y As String ' Convert double to string s = p s = Replace(s, ",", ".") ' Create model file f = FreeFile() Open modfile For Output As f Print #f, "param f, symbolic := """ & resfile & """;" Print #f, "var x, >=0;" Print #f, "var y, >=0;" Print #f, "maximize obj : x + "; s; " * y; " Print #f, "s.t. c1 : x + y <= 4;" Print #f, "s.t. 
c2 : x + 2 * y <= 6;" Print #f, "solve;" Print #f, "printf ""%f\n"", x > f;" Print #f, "printf ""%f\n"", y >> f;" Print #f, "end;" Close f ' Delete result fle If Dir(resfile) <> "" Then Kill resfile End If ' Start glpsol pid = Shell("""C:\Program Files\GLPK\glpk-4.47\w32\glpsol.exe"" -m " & modfile, MinimizedNoFocus) If pid = 0 Then MsgBox "Failure to start glpsol.exe", vbCritical, "Error" Exit Function End If ' Wait for glpsol to end h = OpenProcess(SYNCHRONIZE, 0, pid) If h <> 0 Then WaitForSingleObject h, INFINITE CloseHandle h End If ' Check if result file written If Dir(resfile) = "" Then MsgBox "No result from glpsol.exe", vbCritical, "Error" Exit Function End If ' Output result Open resfile For Input As f Line Input #f, x Line Input #f, y Close f runGLPK = "p = " & s & " => x = " & x & ", y = " & y End Function
Python and PyMathProgEdit
If shell-command-based scripting (using AWK) is not flexible enough, then the Python language and the PyMathProg package provide a more powerful alternative. PyMathProg allows one to write linear and mixed-integer programming models — in a form very much like GMPL — using Python. A succinct example of how PyMathProg can be used to implement a subtour elimination heuristic is given here.
Python and SageEdit
Comment: this material should be extended.
Sage is an open source mathematics environment, offering both symbolic and numerical calculation and good visualization. Sage supports the Python language and GLPK is available through the Sage optimization module.
A mixed-integer model is first built using an instance of class MixedIntegerLinearProgram — and then solved using its solve method, with the solver set to GLPK:
sage: p = MixedIntegerLinearProgram(maximization=True) sage: x = p.new_variable() ... sage: p.solve(solver="GLPK", log="filename.log")
The overhead for installing Sage is apparently quite high, but the environment works well.
Suppressing terminal output under PythonEdit
Terminal output may be suppressed as follows:
import subprocess capture = subprocess.check_output(["glpsol", "--math", "noisy.mod", "--output", "noisy.out"]) print ("complete")
Save the above scripting to a file named quiet.py and then execute it:
> python quiet.py complete
GLPSOL's normal output is stored in capture for later use. The model's solution is saved as file noisy.out. And only the explicit print statement is sent to the console.
Similar techniques could be applied to other scripting languages, including Bash and Perl. Moreover, command-line arguments could be passed through to the final call to aid flexibility. | https://en.m.wikibooks.org/wiki/GLPK/Scripting_plus_MathProg | CC-MAIN-2015-35 | refinedweb | 1,413 | 56.35 |
My Journey.
Interaction Testing
As the name suggests, interaction testing tests interactions with React components. It can be thought of as unit testing for React components. Your tests will pretend to be the user — interacting with the component by typing stuff, clicking buttons, etc — and check that whatever should happen, happens.
Take this simple Counter component as an example.
function Counter() { const [count, setCount] = useState(0); return ( <div> <p>Count: {count}</p> <button onClick={() => setCount(count + 1)}> Increment </button> { count > 0 && ( <button onClick={() => setCount(0)}> Decrement </button> ) } </div> ); }
The user can perform two interactions with this component — increment the counter or decrement the counter (if count is greater than 0). Once the respective buttons are clicked, the component will display the new value.
We'll use this example component to go through how you can use React Testing Library for interaction testing.
Rendering
So, let's begin understanding the tools that React Testing Library provides. The first fundamental to React component testing is rendering. This is as simple as calling the render function included in React Testing Library.
import { render } from '[@testing]()-library/react'; test('render', () => { render(<Counter />); });
And voilà! That's literally it. Seriously. Who knew?
Querying
Okay, the first hurdle is over. Now that we have rendered our component, we have to get the HTML element(s) to interact with. Fortunately for us, React Testing Library provides a lot of simple and neat queries via the render function response. In our test, getByText will help query the button for us to click.
test('queries existence', () => { const { getByText } = render(<Counter />); const increment = getByText('Increment'); });
A similar thing can be done to get the decrement button if rendered. If the not (when the count is 0),getByText('Decrement') will throw an error causing the test to automatically fail, even though we're not testing anything yet! When this is the case, we can use queryByText to try and query the button. If the element can't be found, queryByText will return null.
test('queries non-existence', () => { const { queryByText } = render(<Counter />); const decrement = queryByText('Decrement'); });
Firing Events
Time to interact! React Testing Library provides a fireEvent function that includes support for almost all DOM events — keyboard, mouse, animation, etc. Since we have defined onClick for our increment and decrement buttons, we'll use the click event.
import { fireEvent, render} from '[@testing]()-library/react'; test('fireEvent', () => { const { getByText } = render(<Counter />); const increment = getByText('Increment'); fireEvent.click(increment); });
It's important to fire the correct event — otherwise, your expected interaction will not happen. The event fired must be the same type as the event listener attached to the element. If not, the event listener will not be triggered. In our Counter component, the buttons have the onClick event listener attached and therefore will only be triggered with click events. This is different from browser implementations where click events also trigger mousedown, mouseup, and other events.
Validating
Congratulations, you're now an expert in React Testing Library! Rendering, querying, and firing events is pretty much all the specific React Testing Library tooling that you really need to know.
Wait but our test is not done! Yes, but now you have learned all the tools you need to complete our test.
Let's revisit the interactions that we need to handle:
Click the increment button and our counter will increase
Click the decrement button and our counter will decrease
We now know how to fire the click event on the buttons but how do we validate whether our counterchanged as expected? More queries!
test('validation existence', () => { const { getByText } = render(<Counter />); const increment = getByText('Increment'); fireEvent.click(increment); expect(getByText('Count: 1')).toBeTruthy(); const decrement = getByText('Decrement'); fireEvent.click(decrement); expect(getByText('Count: 0')).toBeTruthy(); });
Just like how we queried for the button, we can query the component to see whether our expected update occurred. After we click the increment button, our component is updated to display the new count. By querying for what we expect the new count value to be, we can check its existence to determine whether the Counter behaves as expected.
Great! We just tested to make sure the buttons work as expected. But, for full test coverage of our Counter component, we also need to make sure that the decrement button is rendered only when count is greater than 0. Using the same technique but with queryByText, we can test for this.
test('validation non-existence', () => { const { getByText, queryByText } = render(<Counter />); expect(queryByText('Decrement')).toBeNull() const increment = getByText('Increment'); fireEvent.click(increment); expect(queryByText('Decrement')).toBeTruthy() });
We first make sure the decrement button doesn't exist by validating that the queryByText response is null. Then we increment the counter and validate that the decrement button exists.
And that's it! Our Counter component is fully tested and we can be very confident that it works exactly as expected.
Screen Logging
Oh wait! There's still one more valuable tool that React Testing Library provides: screen.debug. This function will log the DOM structure. By default, it will log document.body.
import { render, screen } from '[@testing]()-library/react'; test('debug default', () => { render(<Counter />); screen.debug(); // output: // <body> // <div> // <div> // <p> // Count: // 0 // </p> // <button> // Increment // </button> // </div> // </div> // </body> });
You can also pass in a DOM element to specifically log that.
import { fireEvent, render, screen } from '[@testing]()-library/react'; test('debug element', () => { const { getByText } = render(<Counter />); const increment = getByText('Increment'); screen.debug(increment); // output: // <button> // Increment // </button> });
With this tool, you can visually inspect and understand the DOM structure of your component so you can formulate the queries that are needed to test the component.
Final Thoughts
React Testing Library vastly simplifies and, for lack of a better term, dumbs it down to something you and I can understand and work with. The four tools it provides — rendering, querying, firing events, and screen logging — covers the basics of interaction testing. Now that you've learned this, go out and make sure your React components are bug-free! | https://plainenglish.io/blog/interaction-testing-with-react-testing-library | CC-MAIN-2022-40 | refinedweb | 995 | 56.25 |
/*
* Resolver.java February.util;
import java.util.AbstractSet;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
/**
* This is used to store <code>Match</code> objects, which can then be
* retrieved using a string by comparing that string to the pattern of
* the <code>Match</code> objects. Patterns consist of characters
* with either the '*' or '?' characters as wild characters. The '*'
* character is completely wild meaning that is will match nothing or
* a long sequence of characters. The '?' character matches a single
* character.
* <p>
* If the '?' character immediately follows the '*' character then the
* match is made as any sequence of characters up to the first match
* of the next character. For example "/*?/index.jsp" will match all
* files preceeded by only a single path. So "/pub/index.jsp" will
* match, however "/pub/bin/index.jsp" will not, as it has two paths.
* So, in effect the '*?' sequence will match anything or nothing up
* to the first occurence of the next character in the pattern.
* A design goal of the <code>Resolver</code> was to make it capable
* of high performance. In order to achieve a high performance the
* <code>Resolver</code> can cache the resolutions it makes so that if
* the same text is given to the <code>Resolver.resolve</code> method
* a cached result can be retrived quickly which will decrease the
* length of time and work required to perform the match.
* The semantics of the resolver are such that the last pattern added
* with a wild string is the first one checked for a match. This means
* that if a sequence of insertions like <code>add(x)</code> followed
* by <code>add(y)</code> is made, then a <code>resolve(z)</code> will
* result in a comparison to y first and then x, if z matches y then
* it is given as the result and if z does not match y and matches x
* then x is returned, remember if z matches both x and y then y will
* be the result due to the fact that is was the last pattern added.
* @author Niall Gallagher
public class Resolver<M extends Match> extends AbstractSet<M> {
/**
* Caches the text resolutions made to reduce the work required.
*/
protected final Cache cache;
* Stores the matches added to the resolver in resolution order.
*/
protected final Stack stack;
* The default constructor will create a <code>Resolver</code>
* without a large cache size. This is intended for use when
* the requests for <code>resolve</code> tend to use strings
* that are reasonably similar. If the strings issued to this
* instance are dramatically different then the cache tends
* to be an overhead rather than a bonus.
*/
public Resolver(){
this.stack = new Stack();
this.cache = new Cache();
}
* This will search the patterns in this <code>Resolver</code> to
* see if there is a pattern in it that matches the string given.
* This will search the patterns from the last entered pattern to
* the first entered. So that the last entered patterns are the
* most searched patterns and will resolve it first if it matches.
*
* @param text this is the string that is to be matched by this
* @return this will return the first match within the resolver
public M resolve(String text){
List<M> list = cache.get(text);
if(list == null) {
list = resolveAll(text);
}
if(list.isEmpty()) {
return null;
return list.get(0);
* @return this will return all of the matches within the resolver
public List<M> resolveAll(String text){
if(list != null) {
return list;
char[] array = text.toCharArray();
if(array == null) {
return resolveAll(text, array);
* @param array this is the character array of the text string
private List<M> resolveAll(String text, char[] array){
List<M> list = new ArrayList<M>();
for(M match : stack) {
String wild = match.getPattern();
if(match(array, wild.toCharArray())){
cache.put(text, list);
list.add(match);
}
return list;
* This inserts the <code>Match</code> implementation into the set
* so that it can be used for resolutions. The last added match is
* the first resolved. Because this changes the state of the
* resolver this clears the cache as it may affect resolutions.
* @param match this is the match that is to be inserted to this
* @return returns true if the addition succeeded, always true
public boolean add(M match) {
stack.push(match);
return true;
* This returns an <code>Iterator</code> that iterates over the
* matches in insertion order. So the first match added is the
* first retrieved from the <code>Iterator</code>. This order is
* used to ensure that resolver can be serialized properly.
* @return returns an iterator for the sequence of insertion
public Iterator<M> iterator() {
return stack.sequence();
* This is used to remove the <code>Match</code> implementation
* from the resolver. This clears the cache as the removal of
* a match may affect the resoultions existing in the cache. The
* <code>equals</code> method of the match must be implemented.
* @param match this is the match that is to be removed
* @return true of the removal of the match was successful
public boolean remove(M match) {
cache.clear();
return stack.remove(match);
* Returns the number of matches that have been inserted into
* the <code>Resolver</code>. Although this is a set, it does
* not mean that matches cannot used the same pattern string.
* @return this returns the number of matches within the set
public int size() {
return stack.size();
* This is used to clear all matches from the set. This ensures
* that the resolver contains no matches and that the resolution
* cache is cleared. This is used to that the set can be reused
* and have new pattern matches inserted into it for resolution.
public void clear() {
cache.clear();
stack.clear();
* This acts as a driver to the <code>match</code> method so that
* the offsets can be used as zeros for the start of matching for
* the <code>match(char[],int,char[],int)</code>. method. This is
* also used as the initializing driver for the recursive method.
* @param text this is the buffer that is to be resolved
* @param wild this is the pattern that will be used
private boolean match(char[] text, char[] wild){
return match(text, 0, wild, 0);
* This will be used to check to see if a certain buffer matches
* the pattern if it does then it returns <code>true</code>. This
* is a recursive method that will attempt to match the buffers
* based on the wild characters '?' and '*'. If there is a match
* then this returns <code>true</code>.
* @param off this is the read offset for the text buffer
* @param pos this is the read offset for the wild buffer
private boolean match(char[] text, int off, char[] wild, int pos){
while(pos < wild.length && off < text.length){ /* examine chars */
if(wild[pos] == '*'){
while(wild[pos] == '*'){ /* totally wild */
if(++pos >= wild.length) /* if finished */
return true;
}
if(wild[pos] == '?') { /* *? is special */
if(++pos >= wild.length)
for(; off < text.length; off++){ /* find next matching char */
if(text[off] == wild[pos] || wild[pos] == '?'){ /* match */
if(wild[pos - 1] != '?'){
if(match(text, off, wild, pos))
return true;
} else {
break;
}
}
if(text.length == off)
return false;
if(text[off++] != wild[pos++]){
if(wild[pos-1] != '?')
return false; /* if not equal */
if(wild.length == pos){ /* if wild is finished */
return text.length == off; /* is text finished */
while(wild[pos] == '*'){ /* ends in all stars */
if(++pos >= wild.length) /* if finished */
return true;
return false;
* This is used to cache resolutions made so that the matches can
* be acquired the next time without performing the resolution.
* This is an LRU cache so regardless of the number of resolutions
* made this will not result in a memory leak for the resolver.
*
* @author Niall Gallagher
private class Cache extends LimitedCache<List<M>> {
/**
* Constructor for the <code>Cache</code> object. This is a
* constructor that creates the linked hash map such that
* it will purge the entries that are oldest within the map.
*/
public Cache() {
super(1024);
* This is used to store the <code>Match</code> implementations in
* resolution order. Resolving the match objects is performed so
* that the last inserted match object is the first used in the
* resolution process. This gives priority to the last inserted.
private class Stack extends LinkedList<M> {
* The <code>push</code> method is used to push the match to
* the top of the stack. This also ensures that the cache is
* cleared so the semantics of the resolver are not affected.
*
* @param match this is the match to be inserted to the stack
*/
public void push(M match) {
cache.clear();
addFirst(match);
* The <code>purge</code> method is used to purge a match from
* the provided position. This also ensures that the cache is
* cleared so that the semantics of the resolver do not change.
* @param index the index of the match that is to be removed
public void purge(int index) {
cache.clear();
remove(index);
* This is returned from the <code>Resolver.iterator</code> so
* that matches can be iterated in insertion order. When a
* match is removed from this iterator then it clears the cache
* and removed the match from the <code>Stack</code> object.
*
* @return returns an iterator to iterate in insertion order
public Iterator<M> sequence() {
return new Sequence();
* The is used to order the <code>Match</code> objects in the
* insertion order. Iterating in insertion order allows the
* resolver object to be serialized and deserialized to and
* from an XML document without disruption resolution order.
* @author Niall Gallagher
*/
private class Sequence implements Iterator<M> {
/**
* The cursor used to acquire objects from the stack.
*/
private int cursor;
* Constructor for the <code>Sequence</code> object. This is
* used to position the cursor at the end of the list so the
* first inserted match is the first returned from this.
*/
public Sequence() {
this.cursor = size();
* This returns the <code>Match</code> object at the cursor
* position. If the cursor has reached the start of the
* list then this returns null instead of the first match.
*
* @return this returns the match from the cursor position
public M next() {
if(hasNext()) {
return get(--cursor);
}
return null;
}
* This is used to determine if the cursor has reached the
* start of the list. When the cursor reaches the start of
* the list then this method returns false.
* @return this returns true if there are more matches left
public boolean hasNext() {
return cursor > 0;
* Removes the match from the cursor position. This also
* ensures that the cache is cleared so that resolutions
* made before the removal do not affect the semantics.
public void remove() {
purge(cursor);
}
}
} | http://simple.sourceforge.net/download/stream/report/cobertura/org.simpleframework.xml.util.Resolver.html | CC-MAIN-2017-13 | refinedweb | 1,753 | 65.12 |
>
when this code is run, the variable mydirection says 90 degrees, instead of the 0 degrees that it should show. Why is this? I have already spent days hitting my head against the wall, so any input would be great.
AssemblyCSharp.locData[] myList = new AssemblyCSharp.locData[7];
//A list of an object called locData
// x z
AssemblyCSharp.locData holla = new AssemblyCSharp.locData(7.57,-.84);
myList[0] = holla;
//the part below is where I think the problem is, did I misuse the Vector3.Angle() method?
mydirection = Vector3.Angle((new Vector3((float)myList[stop].getx(),0f,(float)myList[stop].getz())-transform.position),transform.up);
The first thing that is wrong with it, is lacking format...fixed that for you.
Answer by Bunny83
·
Sep 08, 2012 at 03:00 PM
Why would you expect 0?
Your vector, from which you subtract your position, is only in the x-z-plane, so it's naturally exactly 90° to the upvector which is (0, 1, 0) for a nonrotated object.
I guess direction is a float? What is it supposed to store? The direction as angle? if so you're doing it totally wrong ;) Are you sure that you actually need the angle? Usually you work with vectors all the time.
Vector3.Angle always returns a positive angle since there is no clear sign between two arbitrary vectors in 3D space. You might want to use Mathf.Atan2:
Vector3 dir = new Vector3((float)myList[stop].getx(),0f,(float)myList[stop].getz()) - transform.position;
float angle = Mathf.Atan2(dir.z, dir.x) * Mathf.Rad2Deg;
ps: I'm a bit confused about your custom class "locData". Besides that the type starts with a lowercase letter, it seems strange that you have to cast it to float and that you have to query x and z seperately... Any insight what this class (or struct?) is used for?
Is the containing namespace / class really called "AssemblyCSharp" ?
Thankyou for your reply, the y upvector does show the angle on the x-z plane so it shouldn't be 90%.
I also just found out there is a moveto method so yeah,i guess I am doing it wrong.
to do it I would need the angle so I go in the right direction, and I guess I should of used vector3 instead of locdata, but that and the double instead of float is because im a bit new to unity.
all you need to know about locdata is that it returns an x and z. The only way I could use helper classes was to use AssemblyCSharp in the namespace.
I already tried using Atan but it doesn't work as it gives you (angle%90)
thankyou Philipp jutzi for formatting it and Bunny83.
Vector3.Angle returning wrong values for vectors with small components
2
Answers
What is the 3rd point of Vector3.Angle ?
1
Answer
Rotate vector around vector?
1
Answer
how to find direction between two points
2
Answers
Constructing a Vector in 2-space from an angle for camera panning
1
Answer | https://answers.unity.com/questions/315025/what-is-wrong-with-my-vector3angle-code.html | CC-MAIN-2019-26 | refinedweb | 506 | 66.54 |
Revision as of 06:50, 19 May 2021
Proposal for Improvements to simpPRU
About
Student: Archisman Dey
Mentors: Abhishek, Pratim Ugale, Andrew Henderson
Code:
Wiki:
GSoC:
Status
This project has been selected for GSoC 2021.
Add the ability to use hexadecimal numbers to initialize ints
Currently, only decimal numbers are supported. This will be helpful for writing drivers for I2C devices, for example.
Syntax:
int a := 0xF; /* 15 */
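On the implementation side, this mostly means teaching the lexer to recognise the 0x prefix and convert the token with base 16 (in C, for example, via strtol with base 16). A small Python sketch of the intended conversion — the function name is illustrative, not simpPRU's actual code:

```python
def parse_int_literal(token: str) -> int:
    """Convert a decimal or hexadecimal literal token to its value."""
    if token.lower().startswith("0x"):
        return int(token, 16)  # hexadecimal: "0xF" -> 15
    return int(token, 10)      # plain decimal

print(parse_int_literal("0xF"))   # 15
print(parse_int_literal("0xff"))  # 255
```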
Add support for the modulo operator and the bitshift operators
Currently, four arithmetic operators are supported: +, -, *, /. This project will add support for the modulo (%) operator.
int a := 17 % 4; /* 1 */
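One design detail worth pinning down: since simpPRU generates C, the obvious choice is to inherit C's truncated-division semantics for %, where the result takes the sign of the dividend. This only matters for negative operands — a sketch of the difference (Python's own % is floored, so the C behaviour is modelled explicitly here):

```python
def c_mod(a: int, b: int) -> int:
    """% with C semantics: truncated division, result keeps the dividend's sign."""
    return a - int(a / b) * b

print(c_mod(17, 4))    # 1 (same as Python's 17 % 4)
print(c_mod(-17, 4))   # -1 in C, whereas Python's -17 % 4 is 3
```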
Also, three bitwise operators are supported right now: ~, &, and |. This project will add support for two more: left shift (<<) and right shift (>>).
int a := 128;
int b := a << 1; /* 256 */
int c := a >> 1; /* 64 */
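Together with the existing &, | and ~ operators, shifts enable the usual register bit-field idioms, which is what driver code for I2C devices needs. A sketch (the register value and field position are made up for illustration):

```python
reg = 0xA5                 # 0b10100101, a hypothetical 8-bit status register
field = (reg >> 4) & 0x3   # extract bits 5..4: shift down, then mask
print(field)               # 2 (0b10)
```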
Update the grammar so that control statements (break/continue) can only be called inside loops
Currently, break and continue can be called inside any compound statement such as conditionals, which will throw an error while compiling the generated C code. After this project, they can be called inside loops only:
loop_for | loop_while {
    statement_list;
    conditional_statement {    /* if / elif / else */
        statement_list;
        break | continue;
    }
    statement_list;
}
For example, after updating the grammar:
/* this code will work */
int a := 10;
while: true {
    if: a < 0 {
        break;
    }
    a := a - 1;
}
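Although simpPRU enforces this restriction directly in the grammar, the same check is often done semantically with a loop-depth counter while walking the AST. A Python sketch of that equivalent idea (my illustration, not simpPRU's actual implementation):

```python
# Sketch (not simpPRU's implementation): enforce "break/continue only inside
# loops" with a loop-depth counter over a toy AST of (kind, children) tuples.
def check_control_statements(node, loop_depth=0):
    """Raise SyntaxError on break/continue outside a loop."""
    kind, children = node
    if kind in ("break", "continue"):
        if loop_depth == 0:
            raise SyntaxError(f"{kind} outside a loop")
        return
    # entering a loop body increases the depth; conditionals do not
    child_depth = loop_depth + 1 if kind in ("while", "for") else loop_depth
    for child in children:
        check_control_statements(child, child_depth)

# a break inside an if inside a while is fine:
check_control_statements(("while", [("if", [("break", [])])]))

# a break inside a bare if is rejected:
try:
    check_control_statements(("if", [("break", [])]))
except SyntaxError as e:
    print(e)  # break outside a loop
```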
Earlier, this had to be written as:
def func: int: int a {
    int b;
    if: a < 22 {
        b := 1;
    } else {
        b := 0;
    }
    return b;
}

its value might get altered when assigning from int.
char c;          /* declaration without assignment */
c := 'A';        /* assignment from single quoted character */
c := 65;         /* assignment from numeric */
int a := 65;
c := a;          /* 0 <= 65 <= 255, so works correctly */
int a := 282982;
c := a;          /* 282982 > 255, so does not work correctly */
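If char is lowered to an unsigned 8-bit value in the generated C (an assumption; the proposal only says the value "might get altered"), the altered value would simply be the low byte of the integer. A quick model in Python, not simpPRU:

```python
# Model (assumption, not simpPRU itself): a char backed by a uint8_t keeps
# only the low 8 bits of whatever is assigned to it.
def char_assign(value: int) -> int:
    return value & 0xFF  # truncate to one unsigned byte

print(char_assign(65))      # 65  -> in range, stored exactly ('A')
print(char_assign(282982))  # 102 -> out of range, value gets altered
```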
Arithmetic and Comparison operators:
- All comparison operators (>, <, ==, !=, >=, <=) will work correctly between char/char, char/int and int/char.
char i := 65;
int j := 65;
if: i == j { ... } /* true */
- Arithmetic operators (+, -, *, /, %) will work with chars and ints, but the char will get automatically converted to an int. If the result is assigned to a char, the rules for assigning to a char from an int apply.
char c1 := 45;
int c2 := 62;
char c3 := c1 + c2; /* 107 */
char c4 := c1 * c2; /* 2790 > 255, so the value will get altered */
- Bitwise operators (~, &, |, <<, >>) will also work on chars.
This project will make simpPRU more complete and robust, which will help beginners learning to use the PRU as well as experienced users prototyping something on it.
Quotes:
"simpPRU will simplify programming the PRU, and probably make it easy even for a kid to program the PRU, which is a big plus point of this project."
- Vedant Paranjape (@vedant16)
Misc
Link to pull request: #146 | https://www.elinux.org/index.php?title=BeagleBoard/GSoC/2021_Proposal/simpPRU_Improvements&curid=144181&diff=551531&oldid=548271 | CC-MAIN-2021-31 | refinedweb | 454 | 56.18 |
Look Ma, javac tells me I am overriding static method wrongly!!
By sundararajan on Oct 21, 2009
// File: SuperClass.java
public class SuperClass {
    public static int func() {
        return 0;
    }
}
// File: SubClass.java
public class SubClass extends SuperClass {
    public static boolean func() {
        return false;
    }
}
$ javac -fullversion
javac full version "1.6.0_15-b03-226"
$ javac SuperClass.java SubClass.java
SubClass.java:2: func() in SubClass cannot override func() in SuperClass; attempting to use incompatible return type
found   : boolean
required: int
    public static boolean func() {
                          ^
1 error
The subclass uses a different return type for the same-named method with the same argument types. So it is overloading SuperClass.func(), and the overloading SubClass.func() differs only in return type. But I am not sure about the error message....
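For contrast, when the return types do match, the same shape of code is legal: the subclass method hides (rather than overrides) the superclass one, and calls resolve against the compile-time type. A small sketch with hypothetical class names:

```java
// Hiding, not overriding: with an identical signature and return type this
// compiles fine. Change Child.id()'s return type to boolean and javac
// rejects it, which is the error discussed above.
class Parent {
    static String id() { return "Parent"; }
}

class Child extends Parent {
    static String id() { return "Child"; }   // hides Parent.id()
}

public class HidingDemo {
    // Static calls through an instance resolve on the declared type,
    // so the "hidden" Parent.id() is what actually runs here.
    static String viaParentReference() {
        Parent p = new Child();
        return p.id();   // compiler warning: static method called via instance
    }

    public static void main(String[] args) {
        System.out.println(Parent.id());           // Parent
        System.out.println(Child.id());            // Child
        System.out.println(viaParentReference());  // Parent
    }
}
```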
Hey dude,
Firstly return type never helps in overloading.
Secondly here you are overriding method.
Third static methods can never be overridden.
Thanks
Vinod
Posted by Vinod Kumar Kashyap on October 21, 2009 at 04:20 AM IST #
Hi Vinod,
Thanks for commenting! Of course, I know that return type is not considered for overloading and I do know static methods cannot be overridden. This blog entry is about the appropriateness (or lack of it) of the error message from the Java compiler. BTW, I didn't deliberately write such code -- the JavaFX compiler uses javac in the back-end. It ended up generating such (wrong) code for a specific case and the resulting error message looked odd (this is the next version of the JavaFX compiler that is being developed, more or less a rewrite)
Posted by A. Sundararajan on October 21, 2009 at 04:38 AM IST #
Actually you can override a static method, it's just that normally you call it with an explicit class. You can however call it with an implicit class, as per usual non-static methods.
SuperClass obj = new SubClass();
obj.func(); //What's the return value?
Regardless, it's pretty bad that JavaFX generated erroneous code.
Posted by Ryan on October 21, 2009 at 11:52 AM IST #
Ryan: Just a clarification: That bug I mentioned with JavaFX compiler is \*not\* in the released product versions of the JavaFX compiler. Compiler code was at that temporary wrong state during the development of new compiler - which is still being development. And it has been fixed since then. So, no real harm done to any JavaFX code out there.
Posted by A. Sundararajan on October 21, 2009 at 12:26 PM IST #
I agree that the javac error message is confusing. Perhaps it should say "func() in SubClass cannot hide func() in SuperClass", using the terminology of JLS 8.4.8.3.
Posted by Eamonn McManus on October 21, 2009 at 03:17 PM IST #
I presume you have filed a bug against javac...
Posted by Jonathan Gibbons on October 27, 2009 at 08:51 PM IST # | https://blogs.oracle.com/sundararajan/entry/look_ma_javac_tells_me | CC-MAIN-2015-32 | refinedweb | 474 | 65.83 |
CHI::Driver::File - File-based cache using one file per entry in a multi-level directory structure
version 0.58
use CHI;

my $cache = CHI->new(
    driver         => 'File',
    root_dir       => '/path/to/cache/root',
    depth          => 3,
    max_key_length => 64,
);
This cache driver stores data on the filesystem, so that it can be shared between processes on a single machine, or even on multiple machines if using NFS.
Each item is stored in its own file. By default, during a set, a temporary file is created and then atomically renamed to the proper file. While not the most efficient, it eliminates the need for locking (with multiple overlapping sets, the last one "wins") and makes this cache usable in environments like NFS where locking might normally be undesirable.
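The write-to-a-temp-file-then-atomic-rename idiom described above is generic; here is a sketch of the idea in Python (for illustration only, since the module itself is Perl):

```python
# Generic sketch of the write-temp-then-atomic-rename idiom: with overlapping
# sets, the last rename wins, and readers never see a partially written file.
import os
import tempfile

def atomic_set(path, data: bytes):
    # write to a temp file in the same directory so the rename stays atomic
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    os.rename(tmp, path)  # atomic on POSIX: old or new contents, never partial

atomic_set("entry.dat", b"value")
print(open("entry.dat", "rb").read())  # b'value'
```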
By default, the base filename is the key itself, with unsafe characters escaped similar to URL escaping. If the escaped key is larger than "max_key_length" (default 248 characters), it will be digested. You may want to lower "max_key_length" if you are storing a lot of items as long filenames can be more expensive to work with.
The files are evenly distributed within a multi-level directory structure with a customizable "depth", to minimize the time needed to search for a given entry.
When using this driver, the following options can be passed to CHI->new() in addition to the general options described in CHI.
root_dir

The location in the filesystem that will hold the root of the cache. Defaults to a directory called 'chi-driver-file' under the OS default temp directory (e.g. '/tmp' on UNIX). This directory will be created as needed on the first cache set.
depth

The number of subdirectories deep to place cache files. Defaults to 2. This should be large enough that no leaf directory has more than a few hundred files. Each non-leaf directory contains up to 16 subdirectories (0-9, A-F).
Permissions mode to use when creating directories. Defaults to 0775.
Permissions mode to use when creating files, modified by the current umask. Defaults to 0666.
Extension to append to filename. Default is .dat.
Returns the full path to the cache file representing $key, whether or not that entry exists. Returns the empty list if a valid path cannot be computed, for example if the key is too long.
Returns the full path to the directory representing this cache's namespace, whether or not it has any entries.
By default, during a set, a temporary file is created and then atomically renamed to the proper file. This eliminates the need for locking. You can subclass and override method generate_temporary_filename to either change the path of the temporary filename, or skip the temporary file and rename altogether by having it return undef.
Jonathan Swartz <swartz@pobox.com>
This software is copyright (c) 2012 by Jonathan Swartz.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~haarg/CHI/lib/CHI/Driver/File.pm | CC-MAIN-2014-23 | refinedweb | 488 | 65.12 |
EDAgraffiti

Paul McLellan, with a foreword by Jim Hogan

ISBN 978-1-452-85327-7

Contents
Foreword: Jim Hogan
Introduction
Foreword: Jim Hogan

Paul has assembled an extraordinary number of opinion pieces in a relatively short period of time. I frankly find it amazing he can come up with that many ideas on the subject of EDA. I have maybe two or three ideas a year that are worth sharing with voices outside of my own head. Just amazing!

Paul and I have been frequent collaborators. Our partnership started soon after Paul joined Cadence via the Ambit merger. At the time Cadence had recently lost Joe Costello as their CEO. As anyone who has heard Joe knows, Mr. Costello has a unique gift for spinning a vision and getting people to believe not only that it was possible, but that in fact it was actually already a reality. When Joe left Cadence, there was a huge void for all of EDA, not just Cadence. Cadence had been through one of their acquisition cycles. The company was left with a collection of interesting products and solutions, but had difficulty welding the pieces into a cohesive whole.

Paul and I were tasked to bring that story to our investors, customers and employees. Fortunately for Cadence and myself, Paul pulled it together and it made sense. He and I each took a team and hit the road for a month to communicate the vision. I got to know and appreciate Paul's unique gifts during this period. I am proud to say I was smart enough to hang on to his coat tails then, and I plan to continue to.

Over a decade has passed, and what I realize more than ever is that Paul has the brains, curiosity and willingness to take on complex subjects. He also has the unique ability to explain these ideas in terms and an approach that are understandable whatever the maturity of the technical readership. I like to think of this as seeing the arc of the story. How do you get everyone to your desired conclusion? Great story-tellers know this implicitly; Paul has this gift and you will see it demonstrated in the EDA Graffiti collection.

Paul has done his best to offer his space to alternative views and express a balance in reporting his findings. Blogs by nature are
Introduction

This book is an outgrowth from the blog EDAgraffiti on EDN Magazine Online. Although the basis of the book is the original blog entries, there is new material and the old material has been extensively revised. In particular, I've reordered the content so that each chapter covers a different area: Semiconductor, EDA marketing, Investment and so on.

I've tried to write the book that I wish I'd been able to read when I started out. Of course it would have been impossible to write back then, but in a real sense this book contains 25 years of experience in semiconductor, EDA and embedded software.

Not many books cover as wide a spectrum of topics as this. I've ended up in marketing and executive management but I started out my career in engineering. For a marketing guy I'm very technical. For a technical guy I'm very strong on the business and marketing side. It is an unusual combination. I started in semiconductor, worked in EDA but also in embedded software. I'm a software guy by background but spent enough time in a semiconductor company that I have silicon in my veins.

There was no EDA industry really 25 years ago and I worked on software at VLSI Technology when we, along with LSI Logic, were essentially inventing the ASIC methodology. I remained at VLSI for nearly 16 years, first in the US, then in France and then back in the US. By then we had spun out Compass Design Automation as a separate company. I ended up running Compass and I finally left VLSI by sawing off the branch I was sitting on when we sold Compass to Avant!

I then went to Ambit as VP Engineering. Our revenues multiplied by over 10 times in the year I was there so it was like sitting on a rocket ship. Cadence acquired us and I stayed at Cadence for three years. I moved into marketing and management, initially working on strategy since I was one of the few people in the company who understood the entire product line. Then I ran the Custom IC business unit, Cadence's largest, and worked as one of
Paul McLellan
San Francisco, April 2010
Email: paul@greenfolder.com
Chapter 1. Semiconductor industry

This chapter looks at the semiconductor industry, primarily from a business rather than a technology point of view. One theme is that almost everything that happens in the semiconductor industry (and by extension in industries like EDA that feed into it) is driven by the economics of semiconductor manufacturing. Semiconductor manufacturing is highly capital intensive, and gets more so at every process generation. It is also a mass-production process with a lot of the cost incurred up-front, especially adding in the cost of design. The first chip costs $50M and then they are $2 each after that, so you'd better want a lot of them. In fact another issue is indeed working out what the cost of a chip is, given yield and setup issues.

One specific submarket of the semiconductor world is the microprocessor market. In fact, since so much of an electronic system is implemented using software these days, embedded microprocessors are the heart of many systems-on-chip. They are also interesting from a business point of view because software compatibility provides strong lock-in to a particular manufacturer. The most obvious cases of this are Intel's domination of the PC market, and ARM's of the cell-phone market.
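The "$50M for the first chip, then $2 each" observation is just up-front cost amortization; a one-function sketch using those two numbers from the text (the function itself is my illustration):

```python
# NRE amortization with the chapter's illustrative numbers: $50M to get the
# first chip out, $2 per unit afterwards.
def cost_per_chip(volume, nre=50e6, unit_cost=2.0):
    return nre / volume + unit_cost

print(cost_per_chip(1_000_000))   # 52.0 -> still dominated by the up-front cost
print(cost_per_chip(50_000_000))  # 3.0  -> "you'd better want a lot of them"
```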
[1] Have you noticed that people like to write or say 1 MIP as if MIPS were plural? But, of course, the S stands for "seconds". End of nitpicking.
Fab 5

For some time I have been talking about the semiconductor industry as the Fab 5, since there have been five process "clubs". A few players hedge their bets and are in more than one club. The fab five are Intel (a club on its own), UMC (along with Xilinx and Texas Instruments), IBM (along with Samsung, ST, Infineon, AMD, Sony, Freescale and Chartered), Japan Inc (Renesas, Toshiba, Fujitsu, OKI, Sharp, Sanyo, Matsushita) and the big one, TSMC (with AMD, TI, NXP, ST, LSI, Sony, Qualcomm). Japan Inc in particular is messy, with Toshiba tied closely to NEC (in the TSMC club but now merging into Renesas) but also to Sony (in the IBM club too); Renesas and Fujitsu are still sort of going it alone. Japanese politics would indicate that they will all get together somehow.

Changes are afoot. Here are some of the things going on. ST, NXP and Ericsson wireless are all merged together into a new company (called, yawn, ST-Ericsson). Nokia has also sold its wireless unit to ST so it is presumably in there somewhere. Toshiba looks like it is going to really join Japan Inc (as if there was any doubt). TI and Freescale are both trying to find a home for their wireless groups but nobody wants them at a price they want to sell. The IBM club has deepened its technology agreements and ARM (although fabless) seems to be sort of joining the IBM club to help create energy-efficient SoCs, with Samsung both building and consuming the volume (and so I hereby rename the IBM club the Samsung club).

What about everyone else? AMD, ATI (also in AMD for now), MIPS, nVidia, UMC, NXP, Infineon, Motorola, Texas Instruments, Freescale were all bleeding cash even before the downturn got really bad, and they are reducing their footprints. All of Japan Inc except maybe Toshiba were also bleeding money (and Toshiba would have been except for all that flash going into phones and iPods, and is now hurting more after losing Xilinx to Samsung over price).

So based simply on financial strength it looks like the 3 fabs are going to be TSMC, Intel and Samsung (taking over the name badge for the IBM club) long-term. Of course other people like ST won't lose their fabs overnight, but they won't be able to afford to keep up. And it is unclear how many of the memory houses will make it through the current downturn. Qimonda is clearly comatose already and isn't going to wake up.

So the Fab 5 will become the Fab 3. For EDA this just emphasizes that there are too many EDA companies, as I've said before. Or maybe that EDA will go internal again, which is a discussion for another day.

Who would have predicted 20 years ago, when TSMC was a small foundry with a non-competitive Philips process, that it would be the dominant player? Kind of like predicting that Ringo would be the last Beatle of the Fab 4… oh wait, maybe that's going to happen too.
[2] Classical fact of the day: this is the opening line of Virgil's Aeneid.
first ARM was designed. The lead designer who would use them was Jamie Urquhart, who eventually went on to be CEO of ARM for a time.

Acorn fell on hard times as the PC market consolidated and it was acquired by Olivetti (yes, the typewriter people from Italy, although by then they were in electronics too).

Then a big change occurred. In 1989, Apple decided to build the Newton. The back-story is actually much more complicated than this. Larry Tesler of Apple looked around the various processors that they might use and decided that the ARM had the best MIPS per watt, which was really important since battery life was critical. The Newton wouldn't be any use at all if its battery only lasted an hour, but the computation needed to do handwriting recognition was significant. But they also decided they couldn't use it if the design team and compiler teams were all buried inside a minor division of Olivetti.

So ARM was spun out as a joint venture between Acorn/Olivetti, Apple and VLSI Technology. I had to fly from France, where I was by then living, to a mysterious meeting in Cambridge. I wasn't even allowed to know what it was about until I got there. VLSI provided all the design tools that the nascent company needed in return for some equity, 5 or 10% I think, and also built the silicon. Remember, at this stage the idea was not to license the ARM processor widely, but rather to sit on the rocket-ship of the Newton as Apple created an explosively growing PDA industry. John Sculley, Apple's CEO, was publicly saying the market for PDAs and content would reach $3 trillion. VLSI would sell ARM chips (this was just before a processor was small enough to be embedded) to other companies for other products and we would pay ARM a royalty, plus pay them engineering fees to design the next generation. Or something like that, I forget the details.

Well, we all know how the Newton story played out.

Back then, microprocessors were not licensed except in extremely controlled ways. They would be second-sourced, since large customers didn't want to depend on a single semiconductor
equity that was the payment for this, or Compass would have ended up being wildly profitable. It fell to me to renegotiate the terms with Tudor Brown (now President of ARM). It was difficult for both sides to arrive at some sort of agreement. ARM, not unreasonably, expected the price to continue to be $0 (which was what they had in their budget) and Compass wanted the deal to be on arm's-length(!) commercial terms. It was an over-constrained problem and Compass never got anything like the money it should have done from such an important customer.

I eventually left Compass (I would return later as CEO) and ended up back in VLSI, where one of my responsibilities was re-negotiating the VLSI contract with ARM for future microprocessors. It is surprising to realize that even by 1996 ARM was still not fully accepted; I remember we had to pay money, along with other semiconductor licensees, to create an operating system club so that ARM in turn could use the funds to pay Wind River, Green Hills and others to port their real-time operating systems to the ARM processor. Today they could probably charge for the privilege.

The business dynamics of ARM have certainly changed a lot between my first involvement and today.
PowerPC

At DAC 09, I happened to bump into Kaveh Massoudian of IBM, who is also the CTO of power.org, the consortium that deals with all things PowerPC. I previously met him when I was at Virtutech, during which era power.org was formally established. A little bit of history: PowerPC was created in 1991 jointly by IBM, Freescale (then Motorola Semiconductor) and Apple (Macs would all become PowerPC-based before the switch to Intel architecture a few years ago). So the PowerPC was always a multi-company effort. It was designed as a 64/32-bit scalable architecture from the beginning. power.org was created in 2004 to pull together the whole ecosystem around the architecture.

PowerPC is really the third important microprocessor architecture, along with Intel and ARM. Their high-level strategy is to let Intel own the PC market, let ARM own the wireless market ("let" as in admit that it is game-over in those markets) and try and own as much as possible of everything else: video games, aerospace, military, networking, base-stations, automotive etc. Did you know that the Wii, the Xbox 360 and the PlayStation game consoles are all based on PowerPC? Of course MIPS is still around, as are other processors (especially in automotive) but they are largely confined to certain segments. For instance, MIPS is dominant in the set-top-box market (all those DVRs).

The challenge that PowerPC faces is that, outside of video game consoles, most of these markets are not that large individually. To design an SoC really requires the possibility of shipping 100M units, and if the market is going to be shared with other competitors then that means a market of, perhaps, 500M units. There just aren't that many markets that big outside of PC, wireless and video-game.
dropping our home Internet service since we get all that with our cell-phone.

Next up is the netbook space (or whatever they end up being called; apparently "netbook" is a Psion trademark). If all the intelligence is in the cloud, we can get away with lower-powered machines at our end. Although there are some interesting technical and business issues (Atom vs ARM, Linux vs Android vs Windows vs Symbian vs iPhone OS), I think the most interesting challenge is to decide how big we want our devices to be. For now the iPad seems to be taking off, but it is still too early to tell whether it will have the legs that the iPhone did.

I had a Palm for years, from back when they were still called Palm Pilot and were made by US Robotics. Eventually I switched my Treo for an iPhone, but the screen is still too small for lots of things. I have a Kindle, great for reading but no color and a crappy keyboard. I have a MacBook but it is heavy and doesn't fit in my pocket, and it's not a great screen for reading a book on. I don't have the big Kindle DX but the one person I know who does loves it. As screen and compute technology improve, the human interaction will be the limiting factor. Voice recognition seems to be pretty solid now, Nintendo Wii-type technology works fine, and there are demos out there of the same sort of thing without needing a controller at all, just a camera to watch you.

It is going to be fascinating to find out what I actually want.
Mac and PC

The PC market is obviously one of the huge markets for semiconductors. I think that the semiconductor content in cell-phones (in aggregate) is now greater than in PCs, but I can't find the reference.

I was at Google I/O last year. One thing a friend had told me was that essentially all web development is now done on Mac. It seemed to be true. I would guess that only about 5% of the machines that I saw over those two days were Windows PCs; the rest were all Macs. Of course Apple is riding high with the iPod
on a wafer, and what the cost per wafer is. The first part is fairly easy to calculate based on defect densities and die size and is not controversial.

In fabs that run only very long runs of standard products there may be a standard wafer price. As long as the setup costs of the design are dwarfed by other costs, since so many lots are run in a row, then this is a reasonable reflection of reality. Every wafer is simply assumed to cost the standard wafer price.

In fabs that run ASIC or foundry work, many runs are relatively short. Not every product is running in enormous volume. For a start, prototypes run in tiny volumes and a single wafer is way more than is needed, although it used to be, and may still be, that a minimum of three wafers is run to provide some backup against misprocessing of a wafer and make it less likely to have to restart the prototype run from scratch.

Back when I was at VLSI we initially had a fairly simple cost model and it made it look like we were making money on all sorts of designs. Everyone knew, however, that although the cost model didn't say it explicitly, the company made lots of money if we ran high volumes of wafers of about 350 mils on a side, which seemed to be some sort of sweet spot. Then we had a full-time expert on cost models and upgraded the cost model to be much more accurate. In particular it did a better job with the setup cost of all the equipment when switching from one design to the next, which happened a lot. VLSI brought a design into production on average roughly daily and would be running lots of designs, and some prototypes, on any given day. The valuable fab equipment spent a lot of the day depreciating while the steppers were switched from the reticles for one design to the next. Other equipment would have to be switched to match the appropriate process, because VLSI wasn't large enough to have a fab for each process generation, so all processes were run in the same fab (for a time there were two, so this wasn't completely true). Intel and TSMC and other high-volume manufacturers would typically build a fab for each process generation and rarely run any other process in that fab.
The new cost model shocked everyone. Finally it showed that the sweet spot of the fab was high-volume runs of 350 mils on a side: large enough that the design was complex and difficult (which we were good at) but small enough not to get into the part of the yield curve where too many die were bad. But the most shocking thing was that it showed that all the low-volume runs, I think about 80% of VLSI's business at the time, lost money.

This changed the ASIC business completely, since everyone realized that, in reality, there were only about 50 sockets a year in the world that were high enough volume to be worth competing for, and the rest were a gamble: a gamble that they might be chips from an unknown startup that became the next Apple or the next Nintendo. VLSI could improve its profitability by losing most of its customers.

Another wrinkle on any cost model is that in any given month the cost of the fab turns out to be different from what it should be. If you add up the cost of all the wafers for the month according to the cost model, they don't total to the actual cost of running the fab if you look at the big picture: depreciation, maintenance, power, water, chemicals and so on. The difference is called the fab variance. There seemed to be two ways of handling this. One, which Intel did at least back then in the early 1990s, was to scale everyone's wafer price for the month so it matched the total price. So anyone running a business would have wafer prices that varied from one month to the next depending on just how well the fab was running. The other is simply to take the variance and treat it as company overhead, the same way as other company overhead. In the software group of VLSI we used to be annoyed to have our expenses miss budget due to our share of the fab variance, since not only did we have no control over it (like everyone else), it didn't have anything to do with our business at all.
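The two ways of handling the variance can be seen with some made-up monthly numbers (mine, purely illustrative):

```python
# Illustrative numbers (mine, not VLSI's). Two ways to absorb a monthly fab
# variance: scale every wafer price so the month's totals match (the Intel
# approach described above), or book the difference as company overhead.
modeled = {"product_a": 40_000.0, "product_b": 60_000.0}  # per the cost model
actual_fab_cost = 110_000.0                               # what the fab really cost

variance = actual_fab_cost - sum(modeled.values())   # 10,000 over the model
scale = actual_fab_cost / sum(modeled.values())
scaled = {k: v * scale for k, v in modeled.items()}  # option 1: prices float monthly

print(round(variance))              # 10000  -> option 2 books this as overhead
print(round(sum(scaled.values())))  # 110000 -> option 1 makes the totals match
```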
spectrum where we can focus light with lenses, and into the part where we essentially have X-rays that go straight through the lens, and through pretty much anything else too. The next step looks like it will have to be e-beam lithography, where a beam of electrons is steered in the same way as in an old TV. This is well understood technically, but it has a very slow write speed which, so far, makes the whole process uneconomical for mass production.

But being stuck at 193nm means we have a new problem. We have feature sizes on chips that are much less than 193nm (which is around 0.18um, which was many process nodes ago). All sorts of optical effects happen due to wave interference of light, and we needed to put very different patterns on the mask from the original layout in order to get the eventual feature on the die to match what we first thought of. It became anything but WYSIWYG.

There is a whole gamut of techniques that have come to be known as RET, for resolution enhancement technologies. Optical proximity correction (OPC) changes the shape of what is on the mask so that what ends up on the wafer is what is required. For example, corners have extra lumps added so that they don't get etched away. Phase shift masking (PSM) etches the reticle by fractions of a wavelength so that the interference that results is desirable. The generic name for putting these extra features onto the mask is known as RET decoration. Since this might multiply the billion or so shapes on a layer by a factor of ten, it is computationally very expensive.

A whole subsegment of EDA grew up when this first became important, under the generic name of DFM, design for manufacturability. Many companies were started in the segment and it is instructive to look at this since it is the most recent
Chapter 2. EDA industry

The EDA industry (which, of course, stands for Electronic Design Automation, but I'd be extremely surprised if you are reading this book and don't know that) provides the software tools that are essential to design semiconductors.

EDA is interesting at two levels. Since every few process nodes requires a complete reengineering of the design, the industry is one of the fastest changing in the world. On the other hand, many of the technology requirements can be "read off" the semiconductor process roadmap years ahead.

Despite being so key to the semiconductor industry, however, EDA feels like the Rodney Dangerfield of the ecosystem—it gets no respect. Unlike information technology (IT), which became strategic with the arrival of the Internet, EDA really has never managed to become a true strategic partner to the semiconductor industry, instead being pushed down into the CAD organization as a cost to be managed, as was IT in an earlier era.

Going forward the challenge is how to fold in system-level design and software development to keep productivity increasing at the extraordinary rate it has done for the past three decades.
[3] By the way, Gerry Hsu is sometimes portrayed as a bit of a buffoon. But he was certainly extremely smart, and very perceptive. Just somewhat ethically challenged. I worked for him for 8 hours!
Ferrari vs Formula 1

It used to be received wisdom that the way to get a good design flow was for a semiconductor company to purchase best-in-class point tools and then integrate them together themselves. I think there were two reasons for this. First, the EDA companies had grown from a lot of acquisitions, so that's what they had for sale: good point tools that were poorly integrated. Second, they were selling to CAD groups in an era when semiconductor was doing well, and CAD groups liked to justify their existence by doing lots of evaluation (which point tool is best?) and then integrating them (need lots of people).

For most people, this was actually not the best way to get a productive environment matched to their needs. It is as if we all had to buy cars the way a Formula-1 team does, buying the best engine, the best brakes, the best gearbox and making everything work well together ourselves at great expense. If you really need to win a Formula-1 race then this is the only way to go. Even a top-of-the-line Ferrari is simply way too slow. But for most of us, a Honda Accord is just fine, easier to use, cheaper to acquire, and orders of magnitude less expensive to get and keep on the road.

Back in that era I was at VLSI Technology. When we spun out Compass we had a Ferrari in a marketplace where people thought they wanted to build their own Formula-1 racecar. Potential customers only wanted to benchmark point tools and wouldn't even attempt to benchmark an entire design flow. I'm not even sure how you would. I don't know how much better the design flows that CAD groups assembled out of Cadence and Synopsys point tools (along with a seasoning of stuff from startups) really were. And neither does anyone else. They were certainly incredibly expensive in comparison. Before the spinout, I made several visits to semiconductor companies whose CAD groups were bigger than VLSI's Design Technology group. But Design Technology developed all the tools, wrote all the source code for synthesis, simulation, timing analysis, place and route, physical verification, designed all the standard cell libraries, created the memory compilers and the datapath compiler. Soup to nuts. I think the only external tool in wide use was for gate-array place and route, an area where VLSI was never that competitive anyway (if you really wanted a gate-array, you went to LSI Logic).

Magma was the first and only EDA company to build an integrated environment. A CAD manager friend of mine told me that they used Magma for everything they could. For the most difficult designs they used Cadence's Silicon Ensemble, but they could train someone on Magma in a day (and they weren't immediately hired away by the competition once they'd been expensively put through training).

At last year's EDAC forecast meeting, Aart de Geus said that for years he has been preaching that an integrated flow is important. One difference he is noticing in the 2009 downturn, he said, is that this time executives are listening. Chi-Ping Hsu of Cadence told me the same thing about the Cadence PFI initiative, which was well received by power-sensitive customers (is there another sort of customer?). PFI's main thread, the CPF standard, pulled together tools from across Cadence's product line along with standards that allowed external tools to play in the flow too. Synopsys UPF does the same thing on their side of the standards-war trench. People had managed to put together power-aware flows before, lashing together point tools with lots of their own scripts. But they were very buggy, and many chips failed due to trivial things like missing isolators or not getting the timing right in multi-voltage blocks.

This seems to be a thing of the past now, although most designs are still on the basic end of power saving (fixed voltage islands, power-down) and not yet attempting the really tricky things like dynamic voltage and frequency scaling (lowering the voltage and slowing the clock when there is not much to do).

In the current hyper-cost-sensitive environment I think that the pendulum will swing back the other way towards these more pre-
IP, assign the software to processors and so on. But that is the direction we need to move in. The mismatch between fragmented end-markets and high costs of design is potentially disruptive and thus an opportunity to change the way that design is done.
EDA press

Listen to all those marketing engines revving up to a fever pitch, waiting for the green light. But who should they pitch to? Customers, obviously; you eventually have to win the ground war. But what about the air war? There isn't really a press following EDA any more, but there are lots of us bloggers and some newsletters, and without really planning it we've become one of the channels that marketing can potentially use to reach their customers.

But it's a new game and nobody knows how to play yet. I've been approached by several PR agencies and marketing folk about product announcements, interviews and so on. Individual product announcements are not interesting to me, and I'm assuming you readers wouldn't want to wade through them all anyway. There are other places for that. But product announcements in aggregate are interesting: What are the new trends? Which new areas are hot? Which new startups are interesting in those areas? What hard problems are getting cracked?

It is a major challenge for a smaller company to get its message out in this brave new world. Big companies like Cadence and Synopsys have their own internal tradeshows and regularly meet customer executives to brief them. Somebody commented on one of my blog entries about a TSMC engineer saying "I don't go to DAC any more; if I want to talk to an EDA company I make them come to us." That's fine as long as you know about the company, but if you take that attitude you'll never find out early about hot new technology that might turn out to be important.
Remember Bill Joy's law: no matter where you are, the smartest people are somewhere else. You just don't know what is going to turn out to be important, so you need to look at it all. But it is increasingly difficult to immerse yourself in the stream of raw information that might allow you to spot something. In its heyday, when Richard Goering, Mike Santarini and others were there, not much happened in EDA that you'd miss if you read EEtimes each week. Now, not so much.

That's one reason that, for the time being, I think DAC remains strong. It's the only place for that kind of serendipity. Everyone has a story of some major customer finding them by chance at DAC. Not the big companies of course ("Synopsys. I didn't know you had any synthesis products!") but startups. When I was at VaST we acquired Intel as a customer (or "a large Santa Clara based microprocessor company," since I don't think Intel likes anyone claiming them as a customer) when a couple of engineers happened to pass by the booth.

2009 was the first DAC I've been to where I was officially classified as "press." I got in for free as press, I got invited to various press/analyst events (but not all of them), and I got invited to various other events since I'm on the press list. "I have seen the future and it is us." In some ways it feels like EDA has been abandoned by the traditional press, so we'd better just do it ourselves, and with our deeper knowledge do it better. I don't know if I succeed, but that's certainly part of what I try to do on this blog.

It's not clear what the channels to reach customers are going to morph into. To tell the truth, since it is so unmeasurable, it was always unclear even how much EDA customers were reading the right articles in EEtimes versus us EDA insiders keeping an eye on the competition.

Of course what is happening in the EDA trade press is mirroring what is going on in the wider world of journalism in general. Even the New York Times is struggling financially and probably will not make it in its present form. The San Francisco Chronicle's days are almost certainly limited. Time and
things but it's not quite fair to compare a complex turbine blade with a single transistor and count both as one part.

But here's the thing I thought of last night that I've never articulated before. Having designed the 787 on the computer, you press a button and an amazing automated assembly plant takes a couple of months to manufacture one. And then you put it on the end of the runway, put the throttles up to full and expect it to take off first time, using engines that have never run before and flight surfaces that have never flown before. Which it had better do, since it is already scheduled to come into service in November, ready for the holiday market.

Then, unlike Boeing, the plane will be obsolete in 6 or 12 months. Next Christmas the 797 will be required, even bigger and more complex. But it will need to fly first time too.
Chapter 3. Silicon Valley

EDA is not entirely within Silicon Valley, of course. And, despite the name, there are no longer any semiconductor fabs in Silicon Valley. Intel closed their technology development fab, which was the last one, in early 2009.

But EDA is driven by the Silicon Valley culture even when it is in other countries or other areas of the country. The ecosystem of many small companies and four large ones (three based in Silicon Valley, one based in Oregon but with a big Silicon Valley footprint) has historically been driven by west coast venture capital. A large number of the engineers (and investors) in EDA don't come from Silicon Valley or even California, and so EDA is intimately bound up in immigration policy and employs a very diverse mixture of nationalities.
Einstein, Bob Hope, John Muir, Carlos Santana and many others. That doesn't happen much in China or Mexico. The mayor of Vienna is not an American immigrant; Arnold Schwarzenegger came in the other direction.

On a personal note, I'm very grateful for the opportunity that the US gave me.

Most discussion of immigration centers on illegal immigration of poorly educated Mexicans, but all the evidence seems to be that while poor Americans may lose slightly through increased competition for low-paid jobs, they gain even more from things like lower cost food. But as a strategic issue for the US I don't think this is all that big a deal. The US economy doesn't stand or fall on the basis of how many Mexicans work here.

Much more important is the idiotic legal immigration policy we have for educated people. The most insane part is allowing students to come here for PhDs (55% of engineering PhDs are foreign-born) and expelling them when they are done, since there is no automatic route to stay here. Plus we make it harder than necessary to come here to study in the first place. First loss, these are just the kind of people that we need here in the US to drive technology businesses. Second loss, even if students go back to their home countries, they go back with a positive image of the US to counter the negative views of people who know little about the country.

The H-1 visa quota for each year opens up on the 1st of April and closes immediately, since twice as many applications are received that day as are available for the entire year. But those are for visas starting October 1st. When I came to the US either there was no quota or it was much higher than the number of applicants. If a company wanted to hire a qualified candidate from overseas (me) then it applied for a visa, waited about 6 weeks and got it, then the person could start. Today it is impossible to hire someone on that basis, since the delay is about 9 months on average until the next October 1st after the next April 1st, and then there is only a 50:50 chance of getting a visa anyway. Companies can't be bothered with such a lengthy, uncertain process.
The result is that H-1 visas have become a way for overseas consulting companies, especially Indian, to apply for large numbers of visas knowing some will get through and their employees can then come here months later. This is not necessarily bad, but it also squeezes out everyone else: every talented person that an American company wants to hire from overseas, every student who wants to stay on once they have their doctorate, and so on. The best solution, if it is politically unacceptable to do the sensible thing and remove the cap, would be to 'auction off' the visas. But I don't mean by paying bids to the government but by using the salary that the employee would receive. The higher the salary paid, the easier to get a visa for that employee. The Indian job shops would be 'outbid' by PhDs.

I can do no better than to quote James Fallows, an editor at Atlantic Monthly who currently lives in China (and used to live in Japan during its heyday in the late 80s). Here he is talking about an Irishman who lived in southern California but had to move to China because he couldn't get a visa to remain here:
Visa. Priceless

The current downturn has led to renewed focus on the H-1B visa cap, not to mention xenophobic restrictions slipped into the TARP bills to make the US even less welcoming. I think we have the worst of all worlds right now. The caps are so low that companies cannot use H-1 visas to hire talented people from overseas to work for them; they have become only a way for Asian subcontractors to get people into the country and nothing much else. The entire year's supply of visas goes in a day, so the old model no longer works. It is no longer possible to find a talented person overseas, hire him or her, get a visa and set the start date a few weeks later. That is how I came to the US in the early 1980s. Now, the only model that works for a person like that is to hire them onto your overseas subsidiary (so don't be a
startup or you won't have one) and after they have worked for a year it is possible to transfer them on an L-1 visa.

But people always tend to focus on the lowest level people and debate whether or not a person with an H-1 visa is taking a job away from an equally qualified American. In the old days the answer was certainly "no", but now I'm not so sure. They are for sure taking a job away from an almost certainly more talented overseas employee who cannot get hired under the current visa system and who would be an unquestionable gain to the US as an immigrant.

However, immigrants create a lot of jobs for Americans too by their skill at founding or managing companies. In EDA, for example, Aart de Geus (CEO of Synopsys) came from Switzerland, Lip-Bu Tan (CEO of Cadence) came from Singapore, Rajeev Madhavan (CEO of Magma) came from India. As far as I know, Wally Rhines (CEO of Mentor) is American born and bred.

I'm guessing that most of the immigrants originally came to this country either as students (so on an F-1 visa) or on an H-1 visa. Today we make it much too hard for the next generation of talented individuals overseas to come here and stay.

I think that over the next few years the problem with the US is just as likely to be immigrants leaving the country, especially to return to India or Taiwan/China. This is already happening to some extent. Growth there is more attractive than here, and the infrastructure in the US for starting a business, though better, is no longer so superior to everywhere else.

I think that the US's capability to absorb talented individuals and make them successful is a competitive advantage no other country has. Everyone else must love the way we are handicapping ourselves these days. We are our own April fool joke, but not even mildly humorous.
Downturn

Superficially, the present downturn is similar to the "technology" crash of 2001. I put technology in quotes since very little of that first Internet boom involved true technology, and many people who called themselves programmers were writing plain HTML. As somebody, I forget who, said to me at the time: "surely one day technology will count again." Of course some companies, like Amazon, Webvan, eBay or Napster, had a differentiated technology foundation to go with what was mainly a business model play, but most did not.

But undeniably the boom of investment created a huge number of jobs. When the crash finally came, large numbers of them were destroyed. A lot of those people had come to the bay area attracted by the boom, and when their jobs went away they went home again. The SoMa warehouses in San Francisco emptied out
as fast as they had filled and apartment rents came back down. Many people who had left the EDA industry to make their fortune returned to a place where their knowledge was still valued. As is often the case, the people in EDA (at least the ones I know) who made it big in the Internet companies were people who left early, before it was obvious that it was a good idea. People who joined Yahoo before it was public, who formed eBay's finance organization or founded small companies that built important pieces of the plumbing.

This downturn seems different. Many of the people being laid off (and I don't just mean in EDA, in Silicon Valley in general) are people who have been here for decades, not people who came here in the last few years as part of the gold rush. Of course, veterans have been laid off before and then immediately re-hired when the eventual upturn came.

But again this downturn seems different. I don't think that many of these jobs are coming back again. Ever. EDA in particular is undergoing some sort of restructuring, as is semiconductor. We can argue about precisely what we will see when the dust settles, but I don't think many of us expect to see the 2007 landscape once again.

I've pointed out before that it is obvious that EDA technology is required, since you can't design chips any other way. But the EDA industry as it was configured will not be the way that tools continue to be delivered. It is hard to imagine that Cadence will employ 5000 people again any time soon, to pick the most obvious example.

The many dozens of EDA startups that used to employ significant numbers of people in aggregate aren't coming back either. Any startups that do get formed will be extremely lean, with just a handful of people. Partially this is driven by technology: with modern tools and open infrastructure, it doesn't take an EDA startup half a dozen people and a year or two to build (duplicate) the infrastructure they need on which to create differentiated technology. It takes a couple of guys a couple of months. Partially size is driven by investment. With public markets closed
included) largely work in Silicon Valley but live in the city. The traffic is still more jammed entering the city than leaving, but it's getting close. Bauer, who used to just run limos I think, now has a huge fleet of buses with on-board WiFi that they contract out to bring employees down to the valley from San Francisco. They cram the car-pool lane between all those Priuses making the not-so-green 40 mile trip.

San Francisco seems to have a very anti-business culture. Anything efficient and profitable is bad. So if, like me, you live in San Francisco you have to drive for 15 minutes and give your tax dollars to Daly City if you want to go to Home Depot. They finally gave up trying to open a store in San Francisco after 9 years of trying. Of course a Walmart, Ikea or Target is unthinkable. And even Starbucks has problems opening new stores since they (big) compete too effectively against local coffee shops (small, thus good by definition). The reality is that some small coffee shops (like Ritual Roasters) are among the best in the US, and a Starbucks next door wouldn't do well; and for some a Starbucks in the area would be an improvement. But in any case it makes more sense to let the customers of coffee shops decide who is good rather than the board of supervisors trying to burnish their progressive credentials.

Those two things together (much commerce is out of the city, many inhabitants work outside the city) are warnings that San Francisco is not heeding. San Francisco already has one big problem (as do many cities): housing is really expensive (at least partially due to economically illiterate policies like rent control and excessive political interference in the planning process making it difficult to build any new housing) and the public schools are crappy. So when a resident has a family, they have to be rich to afford a large enough house and a private school, or they move out. So every year San Francisco can close some schools, since there are ever fewer children in the city; famously there are more dogs than kids.

The trend, which is not good, is for San Francisco to depend increasingly on three things: out of town rich people who live elsewhere (often in Nevada due to California's taxes) but like to
Patents

The basic "tradeoff" in having a patent system is that without the promise of some sort of state-sanctioned monopoly, innovation would be underprovided. Let's not argue about that dubious point, and just take it as a given. Another positive for the system is that requiring the inventor receiving the monopoly to disclose the details of the invention means that once the monopoly period ends, the details are freely available for everyone to copy.

Let's see how that seems to work in practice in the two industries I know well, EDA and semiconductors.

I knew nothing about patents until the mid-1980s. I was at VLSI Technology and we didn't bother patenting stuff, since we were small and patenting was expensive. Once VLSI reached about $100M in revenue, other semiconductor companies with large patent portfolios (IBM, Motorola, TI, AT&T, Philips, Intel and so on) came knocking on our door with a suitcase of patents, saying we probably infringed some of them and would we please pay several million dollars in licensing fees. We probably were infringing some of them (who was even going to bother to try and find out?), so that first year the only negotiation was how much we would pay. VLSI started a crash program to patent everything we could, especially in EDA where we were ahead of the work going
Patent trolls

CDMA is also another interesting oddity from a patent point of view. Most patents are tiny pieces of incremental innovation that form the many little pieces you need to build complex technological products. You can't build a semiconductor without violating thousands if not millions of patents. For example, Motorola (Freescale now, I suppose) owned a patent on the idea of filtering photoresist, which surprisingly passed the non-obvious test. This used to be a minor annoyance since the patents were owned by other semiconductor companies, and the problem could be resolved with a manageable payment or royalty and a cross-license. After all, you don't need to be in the business for long before they can't build anything without your patents. Now that a huge number of patents are owned by so-called patent trolls, people who have purchased patents for the explicit purpose of trying to generate disproportionate licensing revenue, the cross-licensing approach won't always work and, as a result, the patent system is effectively broken for technologies like semiconductor (and EDA for that matter) that stand on the shoulders of those who went before in ways too numerous to even take time to examine.

Patents were a problem for GSM phone manufacturers since companies like Philips and Motorola managed to design their own patents into the standard. GSM had the concept of essential and non-essential patents. An essential patent was one that you couldn't avoid: if you were compliant with GSM you were violating the patent, something that "shouldn't happen." However, the essential patent owners preferred to keep their heads down for political reasons (don't want those European governments telling us off in public) and keep quiet about what patents they owned until businesses were rich enough to be worth suing. For example, Philips owned the patent on the specific vocoder (voice encoder) used in GSM. Not the general idea of a
People are smarter these days about making sure that patents don't get designed into standards. Look at the fuss over Rambus. However, it is still a grey area. After all, nobody knows what even their own company's patent portfolio really covers. If you've read a patent, you know how hard it is to tell what it really says. You can only read the word "plurality" a limited number of times before your eyes glaze over. And at the company level, nobody knows the whole portfolio. If you are the representative from, say, Nokia on some standardization committee, then you can't really guarantee that any particular standard doesn't violate any Nokia patents, and you are certainly not going to sign up for guaranteeing never to sue, say, Samsung over a patent violation. Especially as you are not the corporate counsel; you are some middle level engineer assigned to a standardization committee that may or may not turn out to be strategically important.

But CDMA was a complete patent-protected technology, more like a blockbuster drug formula. You couldn't do anything in CDMA without licensing a portfolio of patents from Qualcomm on whatever terms they felt like giving you. They invented the entire technology and patented it before anyone else really knew it was feasible. They sued Broadcom, they sued Ericsson, they sued everyone and pretty much established that there was no way around this no matter what. In 2G this wasn't a big issue since GSM doesn't depend in any way on CDMA. But W-CDMA and all the later technologies use various aspects of CDMA, and so Qualcomm is in the happy position of having a tax on every cell phone.
Chapter 4. Management

EDA and semiconductor companies need managers just like any company, of course. But EDA moves so fast and is so technical that it is not, perhaps, like managing in any other industry. It has a large number of startups and a few large companies, making for some contrasts. Plus it is a software business, which has essentially zero cost of goods (it doesn't cost any extra to ship an extra copy of a tool), making for some dynamics that are hard to manage and tempting to mismanage.

EDA also has a high number of acquisitions, some successful and some not so much. But acquisitions bring their own set of problems in integrating the team, dealing with the politics of product overlap and accounting for what happened.

One especially lonely job is being CEO. I've been CEO a couple of times, and managing change is one of the biggest challenges. If a company is in a static industry and running well then change may not be necessary. In EDA, change comes with the territory.
Three envelopes

Can there be any subject more boring than revenue recognition for software? If you listen to the conference calls of the public EDA companies, you'll either hear them discuss or get asked about how much of their business is ratable versus term. What does this mean? Should you care? Also, what does it matter how long the term is; isn't longer more money and so better?

When Jack Harding was CEO of Cadence, he lost his job because of these details. Cadence had been selling permanent licenses (for historical reasons I'll maybe go into at some point, EDA had a hardware business model). The sales organization had come up with the concept of a FAM, which stood for flexible access model. The basic idea was great. Instead of selling a permanent license valid forever, sell a license valid for only 3 years for not much less. Then, three years later sell the same license again. The
any pressure. So eventually the wheels came off again. There was even some restatement of revenue associated with, surprise, whether some deals were correctly recognized as term or ratable. So Mike Fister got to prepare his three envelopes and now we know it is Lip-Bu Tan who will open them. Now the big reset, blaming Mike for all the terrible deals he left behind, and lots of talk about starting with a clean sheet.
Being CEO

What does being a CEO entail? I think all senior management jobs consist of two separate dimensions that have two separate skill sets. I call these management and leadership. Some people are good at both, some are good at only one.

Management is the basic operational management of the company. Presumably you already know how to do this, at least in your own domain (engineering, marketing, sales, etc) or you probably wouldn't have been promoted. When you get more senior you have a new challenge: you have to manage people not from your own domain. If you are an engineer, it's like salespeople are from another planet and you don't understand what makes them tick. If you are a salesperson you may think the same about engineering. If the company is medium sized things are not so bad, since you'll have a sales manager and an engineering manager to insulate you. But if the company is small then you'll have to manage the aliens directly. My recommendation is to get some advice. If you've never set up a sales commission plan before, don't assume that because you are a smart engineer who knows Excel that you can just wing it. If you don't know a friendly VP sales who can give you free advice, find a consultant and pay them. It's a lot cheaper than making major mistakes.

As CEO you may have only an accountant (or maybe nobody) to support you in finance. I think it makes sense to get a "CFO for a day" consultant to help you unless you are very comfortable with all the finance issues and already have a good feel for how to put
the proverb that "bad news travels fast," inside a company bad news travels really slowly, so you need to make a special effort to discover it. In the early stages it is good to have someone in engineering who is a personal friend who will not hide bad news. Later on, you need someone in sales like that who'll tell you what is really happening when the company tries to sell the product. You can't sit in your CEO office and believe everything that you are told. You have to get out and dig.
Board games

The board in any company really has two main functions. One is to advise the CEO, since the board often has complementary experience. For example, older venture capital investors have probably seen before something very similar to any problem that may come up, or board members with industry experience may have a more "realistic" view on how taking a product to market is likely to turn out than the spreadsheets describing the company's business plan.

The second, and most important, job of the board is to decide when and whether to change the CEO. In one way of looking at
things, this is really the only function of the board. The CEO can get advice from anywhere, not just the board. But only the board can decide that the company leadership needs to change. It is the rare CEO that falls on his own sword, and even then it is the board that decides who the new CEO is going to be.

Usually there is some controversy that brings a crisis to a head. The CEO wants to do one thing. There is some camp, perhaps in the company, or perhaps outside observers, or perhaps on the board itself, that thinks that something else should be done. The issues may be horribly complicated. But in the end the board has a binary choice. It can either support the CEO 100%, or it can change the CEO. It can't half-heartedly support the CEO ("go ahead, but we don't think you should do it"). It can't vote against the CEO on important issues ("let's vote down making that investment you proposed as essential for the future").

I was involved in one board level fight. I was about to be fired as a vice-president even though the board supported my view of what the company needed to do and told me that they wouldn't let the CEO fire me. But in the end, they only had those two choices: support the CEO, or fire the CEO. The third choice, don't fire the CEO but don't let him fire me, didn't actually exist. So I was gone. And the new CEO search started that day, and the old CEO was gone within the year.

Boards don't always get things right, of course. I don't know all the details, but there is certainly one view of the Carly Fiorina to Mark Hurd transition at H-P that Carly was right, and Mark has managed to look good since all he had to do was manage with a light hand on the wheel as Carly's difficult decisions (in particular the Compaq merger) started to bear fruit. If she had been allowed to stay, she'd have got the glory in this view.

Almost certainly, Yahoo's board got things wrong with the Microsoft acquisition offer. Jerry Yang wanted to refuse it, and did. The board supported him. Their only other choice was to find a new CEO, which they eventually did.

When Apple's board fired Gil Amelio and brought Steve Jobs back, hindsight has shown that it was a brilliant decision. But in fact it was extraordinarily risky. There are very few second acts in business, where CEOs have left a company (and remember, an earlier Apple board had stripped Steve Jobs of all operational responsibility, effectively driving him out of the company) and then returned to run them successfully later. Much more common is the situation at Dell or Starbucks, where the CEO returns when the company is struggling and the company continues to struggle.
afar and don't even wait to see if the CEO can handle it before hitting the eject button. They knew when they founded the company that they would change the CEO. Sometimes they even make it a condition of funding, to make the process less traumatic when it happens.
Twelve o’clock high

Three or four times in my life I’ve been given divisions or companies to run that have not been performing. Although it seems like an opportunity like that would be a poisoned chalice, it was actually a no-lose situation. If things went badly then I was drafted in too late. If things went well then I would be credited with the improvement. When expectations are so low it is not that hard to exceed them. Which is not at all the same thing as saying that improvement or success are easy.

When overnight I found myself as CEO of Compass Design Automation, one of my staff gave me the movie Twelve O’Clock High, in which Gregory Peck takes over a bomber squadron during the Second World War and turns it around. The previous commander had become too close to his men to be effective as a commander. It won some Oscars and is still worth watching today.

It is a lot easier to make changes to an organization as a newly-drafted boss than it is to make those changes if you were the person responsible for the early decisions. Everyone is human and we don’t like admitting that we made a mistake. We get emotionally attached to our decisions, especially to parts of the business that we rose up through or created. Nobody wants to kill their own baby. If you’ve ever fired someone that you hired or promoted, you probably discovered everyone around you thought, “what took you so long?” Reversing decisions that you made yourself tends to be like that.

As a newly drafted boss, morale will usually improve automatically just as a result of the change. Everyone knows lots of things that need to be changed and that were unlikely to be changed under the previous regime. It is a bit like the old joke
107EDAgraffiti
them an offer, the more likely they are to accept. Firstly, they won’t have had time to interview with anyone else equally attractive, and secondly they won’t have had time to start to get to the sour-grapes stage of rationalizing why you haven’t given them an offer already. One advantage startups have over bigger companies is that they can make people an offer very fast. It can make a big difference: when I first came to the US I was promised an offer from Intel and H-P. But VLSI Technology gave me an offer at the end of the day I interviewed, so I never even found out what the others might have offered (Intel had a hiring freeze before they'd have been able to get me an offer, as it happened). Don’t neutralize the fast offer advantage that startups have by being indecisive.

The second problem about hiring is hiring the wrong people. Actually, not so much hiring them. It goes without saying that some percentage of hires will turn out to be the wrong person, however good your screening. The problem comes when they start work. They turn out to be hypersmart, but think actually delivering working code is beneath them. They interview really well but turn out to be obnoxious to work with. They don’t show up to work. They are really bright but have too much still to learn. Whatever. Keeping such people is one of the reasons startups fail or progress grinds to a halt.

Firing people is an underrated skill that rarely gets prominence in books or courses on management. Even in large companies, by the time you fire someone, everyone around you is thinking, “what took you so long?” In a startup, you only have a small team. You can’t afford to carry deadweight or, worse, people who drag down the team. It doesn’t matter what the reason is, they have to go. The sooner the better. One thing to realize is that it is actually good for the employee. They are not going to make it in your company, and the sooner they find a job at which they can excel, the better.
You don’t do them any favors by keeping them on once you know that they have no future there.

It may be the first time that you’ve fired someone in your life, which means that it will be unpleasant and unfamiliar for you. Whatever you do, don’t try and make that point to the employee
Emotional engineers

People sometimes say that salespeople are emotional, unlike engineers. I think what they mean is that salespeople are (stereotypically) extrovert, so if you mess with them they’ll make a noise about it. Whereas engineers are introvert and will just brood (“How can you tell if an engineer is extrovert? He stares at
When sales start, engineering is like the first child. They go from having all the attention to having to share it. And to make it worse, the second child, sales, has a very effective strategy for getting all the attention they need: explain the reasons they are not closing business until their needs are satisfied. To make things worse still, the reason they are not closing business is probably related to deficiencies in the early immature product, which means that what little attention engineering does get is negative.

This is a very tough emotional transition. Engineering is at the start of a path from being almost 100% of the company, declining to perhaps 20% of the company as it moves towards maturity. Engineering will hold headcount relatively flat as other parts of the company seem to explode. Engineering goes from being the star of the show to a supporting role.

The most important thing to do about handling this is to make sure everyone understands that it is going to happen, like telling your 4-year-old about the new baby. And, what is more, make sure everyone realizes that it is a symptom of success, a rite of passage. When all that anyone cares about is engineering, it means that the company isn’t selling anything. When management cares about other things, that's the first taste of victory. It’s engineering’s job to get out of the glare of attention as quickly as they can, and let sales start taking the heat.

After all, how much fun was it when the CEO was analyzing engineering’s embarrassingly inaccurate schedules in great detail. Every day.
People talk about the “risk” of joining a startup, but the main risk, unless you are vice-president level or you are joining before the company is funded, is simply that you’ll waste your time. You get paid pretty much the going rate for an engineer or a product marketing person or whatever you do. And you have some stock that will be worth a significant capital gain if the company is successful, or nothing otherwise. If you are an executive, you get paid a lot less than the going rate in a big company. On the other hand, you have a lot of stock, 1-3% of the company for a vice-president, more for a hired-in CEO. Founders may have more than this depending on how much financing they end up needing to bring in. So the senior people really are losing something more than just time working for a startup.

Startups have two different dynamics from larger companies. The first is simply that they employ fewer people, pretty much by definition. Secondly, everyone’s personal and financial success, especially the management’s, is bound up in the success or otherwise of the company.

Employing fewer people means that in a startup there is nowhere to hide. Everyone knows everyone else and it is clear who is performing and who, if anyone, is trying to free-ride on everyone else’s efforts. In an environment like that, everyone is under pressure to perform. A startup can’t afford much headcount, and if you are not going to perform at a high level, or for some other reason are not a good match, then it is best for the startup to find someone else who will.

The second dynamic, that everyone’s success is bound up with the company’s success, means that people naturally are working towards the same goal. Startups often struggle as to what that goal should be, and different management teams do more or less well at communicating it, but it is not necessary on a daily basis to micromanage everyone’s priorities.
The natural DNA of a company that makes it operate in a particular way, which can be such a weakness in an Innovator’s Dilemma situation, is a benefit here. If you don’t tell people what to do, there is a good chance they’ll do what they should do anyway.
Strategic errors

In the time I was at VLSI, we made a couple of strategic errors relating to EDA. It is perhaps unfair to characterize them this way since it is only with hindsight that the view is clear.

First a bit of history. VLSI was created in the early 1980s to do what came to be called ASIC designs. To do that we had internal tools, and they made VLSI pretty successful, first in ASIC and later in standard product lines for PC chipsets and GSM phones. VLSI was a pre-IPO startup when I joined and it grew to a $600M company that was eventually acquired by Philips Semiconductors (now called NXP) in a $1B hostile takeover. In 1991 VLSI spun out its internal software as a new company, Compass Design Automation, which never really achieved success. It grew to nearly $60M and eventually (by then I was CEO of Compass) was sold to Avant! in 1997 for about $90M, depending on how you count.

But let’s go back a bit. In the mid 1980s, VLSI had a major problem. It didn’t have enough money. It didn’t have enough money to build a 1um fab, and it didn’t have enough money to fund TD (technology development, meaning development of the
goals were really important strategically but there was only one set of engineers.

Balancing these two conflicting requirements is probably the hardest aspect to manage in a typical EDA acquisition. It is really important, not just for financial reasons, to maintain the leadership position of the technology in the marketplace. At the same time, it’s just as important to integrate that leadership technology so that it is available under-the-hood in other parts of the product line which, in the end, is probably how it will mostly get into customers’ hands. Preserve the differentiation while doing the integration.
responsible for the failed strategy that Apple had been pursuing. The Next managers could implement their strategy much more easily if they didn’t have another set of managers arguing with them about every decision.

Everybody knows that the big time sink in mergers is where products overlap. My advice? Move your customers to the new product as soon as your engineers can do so. But the best way to handle this is to make sure that the managers of the successful, acquired, product are in charge of those decisions and not the managers of the failed product. This doesn’t make the problem go away completely; after all, the customers of the existing product cannot typically simply be upgraded painlessly to the new product, but at least it means that the winning product will be the acquired one, which is essentially the decision that senior management had already determined is what they wanted to have happen when they decided to do the acquisition.

Not all mergers are like this, of course. Sometimes the new product line is completely complementary with no overlap. But more often than not, under the hood, there is more overlap than is obvious. When Cadence acquired Ambit, Cadence was already ahead of the curve because their internal synthesis product, Synergy, was doing so badly that they had killed it off six months before they acquired us. But one reason for acquiring Ambit was for its timing engine, which seemed to be the best in existence at that time, yet the existing timing team at Cadence still controlled timing strategy. It took months to arrive at the foregone conclusion that the Ambit timing engine should “win” and become the Cadence timing engine, a decision that would have taken 5 minutes if Ambit’s timing team had been put in charge on day 1.

It is very difficult to keep innovation going after an acquisition, especially if it is done at a high price so that many individuals have made significant money and are really hanging around largely to vest the rest of their stock.
Keeping a competing team around, and one that already is better connected politically, almost guarantees that innovation will stop and that the acquisition will be much less successful than it could have been.
the $2 million per sales team level. This is where companies die, though. If the sales teams are added too early, then they will burn all the cash. If the product is not ready for the mainstream, then the sales guys will not make it to the $2 million level and burn all the cash. But if everything is in place, then the company can get to $10 million rapidly. The first year I was at Ambit, we did $840K in revenue; the second year, $10.4 million.

This is the point at which a company is very attractive for acquisition. It has traction in the market ($10 million in sales and growing). The technology is proven (people are using lots of it—look, $10 million in sales). The acquisition price hasn’t got too expensive yet (only $10 million in sales). It is probably the market leader in its niche ($10 million in sales and growing). Of course if the company continues to grow, it will typically take in more investment at this point in order to grow even faster. The value of a software company is some multiple of forward earnings, and the greater the growth, the greater the multiple.
Interview questions

A friend of mine was interviewing for a marketing position at an EDA startup. I’m leaving names anonymous to protect the innocent. He (or maybe it was she) asked me what would be good questions to ask.

There are two reasons for asking questions in an interview, when you are the candidate. One is that the type of questions you ask reveals that you are already thinking about the important issues affecting the company. And the other is that you genuinely want to know. In most cases, the questions serve both ends. In fact most questions you ask should help you decide if the company is going to be successful and whether you have the right skillset to improve those chances.

When you interview for a position at a startup, it is important to realize that you are interviewing the company as much as they are interviewing you. The point of working for a startup is that the stock they give you will be valuable—otherwise, go do
in a job about three years. The first year, you don’t know how to do the job and you are learning a lot. The second year, you are getting the hang of it. By the third year, you have become good at the job. But being good at the job typically means that you don’t have much more to learn from the job because you are continuing to do it. That’s when it’s time to move on.

When I say it’s time to move on I don’t mean that you need to move to another company, although that is certainly one option. If you move to work on a new product, you’ll be learning stuff again. If you relocate to Japan, you’ll be learning stuff again. If you move from application engineering to product marketing, you’ll be learning again.

In particular, if you get promoted, your job will change and you’ll be learning stuff again. This is especially acute the first time you are promoted into management. Typically, you are the best engineer or salesperson or whatever on the team and so you get promoted. Now, you have to learn about management, a subject that, previously, you may not have taken much interest in. It is an especially difficult transition since your comfort zone is not to manage at all, just do everyone’s jobs for them. (After all, you were the best on the team so you are better than they are.) It is a hard lesson to learn that as a manager, your output is not what you do personally, it is the output of your group. It is not a positive that you did a lot of the work yourself; that means you are not doing a good job of nurturing the people in your group, not training them to be as good as you are.

People will often move on to another company anyway if they are bored, since there might not be an appropriate position to move into, or a promotion to be had. This is especially true of new graduates who get fed up with some aspects of the company bureaucracy or culture and move to a new company to escape. However, the new company is typically the same (although different in details).
It’s just the nature of companies that they don’t always do just what you think they ought to. The result of this phenomenon is that I think the best value people you can possibly hire are people who have already worked for at least one company and have three to five years’ experience. At that point,
that, usually, the contestant would arrive just after the train started to move and either just catch it or just miss it by seconds.

The career path train, however, isn’t like that. It doesn’t stop at the station every day and, when it does stop, you have to decide whether or not to get on. When you want a change of job for some reason, there doesn’t seem to be a train. And when you aren’t really looking for anything, the train shows up and you have the opportunity to board while it is in the station. But it won’t be in the station again tomorrow; you have to decide right now.

It’s especially hard to decide if the opportunity takes you out of the comfort zone of what you have been used to in your career so far, or if it involves relocating. Two special times that the career path train stopped for me were, “would you like to go to France and open up an R&D center for us?” and “would you like to return to California and run all of R&D?” There’s always some sort of tradeoff in a promotion, not just more money for doing what you are already doing.

Big companies usually have dual career ladders for engineers, with a management track and a technical track. However, it’s a bit of an illusion, since only the strongest technical contributors really have a sensible option of staying completely technical and continuing to advance. I think dual career ladders are mostly important because they institutionalize the idea that a senior technical person might be paid more than their manager, sometimes a lot more. In more hierarchical eras, that didn’t happen.

But the fact that only the strongest engineers can keep advancing as engineers means that, at some point, most of them will have to transition into management or into some other role that makes use of their engineering background along with other skills to do something that is more highly leveraged than simply doing individual contributor engineering.
It’s a big step that will require you to learn stuff you’ve not had to do before.

But people are often not keen to take that critical step out of their comfort zone. I’ve sometimes been surprised at how reluctant
A-team behavior. You can’t always get what you want using your own personal charisma; sometimes you actually need your boss to do some tackling for you to leave the field clear.

One rule I’ve always tried to follow is not to produce big surprises. Of course, things can go wrong and schedules (for example) can slip. But these things don’t go from being on time to being 6 months late overnight, without the slightest earlier hint of trouble. It is better to produce a small surprise and warn your boss that things might be getting off track (and have a reputation for being honest) than to maintain the image of perfection until the disaster can no longer be hidden. Just like the salesman’s mantra of “underpromise and overdeliver,” your boss is a sort of customer of yours and should be treated the same way.

Lawyers are advised never to ask a witness a question that they don’t already know the answer to. Getting decisions that cut across multiple parts of a company can be a bit like that too. Never ask for a decision when you don’t already know exactly what everyone on the decision-making panel thinks. Ideally, they all buy into your decision, but the middle of a meeting is not the time to find out who is on your side and who isn’t. Your boss can be invaluable in helping to get his peers on board and finding out what they think in a way that you, being more junior, perhaps cannot.

In some ways, this sounds like office politics, but actually I’m talking about getting the company to make the correct decision. Often someone junior is the best-placed person to know the right technical solution, but they are not senior enough to drive the implementation if it requires other groups to co-operate. That’s when managing your boss comes into the picture.

If you are CEO, you have some of the same issues managing your board. But your board is not one person and they all have different capabilities that you can take advantage of.
Just as in the decision committee scenario above, if you need a decision from the board, make sure that everyone is bought into the decision already, or at least have some of the board ready to counterbalance any skeptics.
Chapter 5. Sales

Sales in EDA is different from sales in many other areas. Firstly, like any software business, the cost of goods is essentially zero; selling one more copy of a place and route tool doesn’t involve manufacturing costs like selling one more cell-phone does. Furthermore, since a license can be shipped by email, it really is possible to make a deal late on the last day of the quarter and ship it for revenue that quarter. These two facts make for a brinkmanship between sales and the buyers in the customer companies approaching the end of a license period. The customer has to make a deal (or they lose their licenses) but they know the salesman needs the deal this quarter (today) and not next quarter (in a few hours’ time).

But EDA moves a lot faster than most enterprise software (think of Oracle or SAP) and so the software never gets mature. Or rather, the software for leading edge processes never gets mature because the leading edge process keeps changing. This makes EDA sales extremely risk-averse since they are making most of their money renewing what they sold last time.
deal means that a startup must fund a sales team for the quarter, they close a deal in the last week, and the company receives cash in the middle of the following quarter. That time-lag, between the investment in the team and collecting the cash, is one of the main things for which series B investment money is needed. VCs have a phrase, “just add water,” meaning that since the product is proven, the customer will buy at the right price. It should be a simple case of adding more money, using it as working capital to fund a bigger sales team and to cover the hole before the bigger sales team produces bigger revenue and pays for itself.

Where does this $2 million rule come from? A successful EDA company should make about 20% profit and will require about 20% of revenue to be spent on development. Of course, it is more in the early stage of a startup, most obviously before the product is even brought to market, but even through the first couple of years after that. Let’s take another 20% for marketing, finance, the CEO and so on. That leaves 40% for sales and application engineers. The other rule of thumb is that a salesperson needs two application engineers, either a dedicated team or a mixture of one dedicated and one pulled from a corporate pool. If a salesperson brings in $2 million, then that 40% for sales and applications amounts to $800K. A fully loaded application engineer (salary, bonus, benefits, travel) is about $250K. A fully loaded salesperson is about $300K (more if they blow away their quota). So the numbers add up. If the team brings in much less than $2 million, say $1.5 million, then they don’t even cover their share of the costs of the rest of the company, let alone leave anything over for profit.

One consequence of the $2 million rule is that it is hard to make a company work if the product is too cheap, at least in the early days before customers will consider large volume purchases. How tough? To make $2 million with a $50K product, if you only sell two licenses at a time, you have to book one order every two or three weeks. But, in fact, all the orders come at the end of the quarter, meaning that the salesperson is trying to close five deals with new customers at the end of each quarter, likely an impossibility.
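The arithmetic behind the rule can be sketched as a quick back-of-the-envelope check. This is only an illustration using the rule-of-thumb figures quoted above (40% of revenue for the sales channel, a $300K salesperson, $250K application engineers), not real data:

```python
# Back-of-the-envelope check of the "$2 million per sales team" rule.
SALES_SHARE = 0.40                  # 40% of revenue covers sales + application engineers
TEAM_COST = 300_000 + 2 * 250_000   # fully loaded salesperson + two AEs per year

def team_surplus(bookings):
    """What a sales team contributes (or loses) after covering its own cost."""
    return SALES_SHARE * bookings - TEAM_COST

print(team_surplus(2_000_000))   # 0.0 -- $2M exactly covers the team
print(team_surplus(1_500_000))   # -200000.0 -- below $2M the team is underwater

# A $50K product sold two licenses at a time means $100K per order,
# so hitting $2M takes 20 orders a year, roughly one every two and a half weeks:
print(2_000_000 / (2 * 50_000))  # 20.0
```

The 20 orders a year, bunched into quarter ends as the text describes, is where the five-deals-per-quarter impossibility comes from.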
EDA companies didn’t really plan this effect. They bundled large portfolios of tools (Cadence called them FAMs, for flexible access model) as a way to increase market share, and for a time it was very effective. By the late 1990s, for example, Cadence roughly took in $400 million per quarter and dropped $100 million to the bottom line. Having difficulty in doing the accounting afterwards was just an unintended consequence.

However, the reason that this doesn’t really work is that the list prices don’t reflect value to the customer. The customer and the sales team don’t really look at them. They think of the deal as delivering a certain design capability for a certain number of engineers, for a certain sum of money. Nobody wastes any time arguing that their Verilog simulation price is too high but they would be prepared to pay a bit more for synthesis, when the answer is going to be a wash in any case. That’s both the strength and the weakness of bundling, or what is often but misleadingly called “all you can eat.”

The biggest problem of this sort of accounting for EDA companies is that they lose price and market signals. Cadence didn’t realize that it was losing its Dracula franchise to Mentor’s Calibre until it was too late, since this trend never showed up in the numbers. Customers would simply refuse to pay so much for Dracula but the number of licenses in the deal wouldn’t actually get adjusted, so the allocation of the portion of the deal to Dracula hid what was going on.

During the heyday of Synopsys’s Design Compiler in the late 1990s, it was hard for them to know how much revenue to allocate to other products in the deal that might have been riding on its coattails. That’s without even considering the fact that Synopsys would want to spread the revenue out as much as possible to look less like a one-product company to both customers and investors.

This problem is not unique to EDA. I talked to a VP from Oracle whom I happened to meet and he told me that they have the same issue. Without getting signals from the market, it is very hard to know where they should invest engineering resources. EDA has it
Channel choices

Should a separate product be sold through a separate channel? If a new product is pretty much more of the same, then the answer is obviously “no.” If the new product is disruptive, sold to a different customer base, or requires different knowledge to sell, then the answer is less clear. There seem to be several main inputs into the decision: cost, reach, conflict, transition and disruption.

First, cost. Each channel costs money. Obviously a separate direct sales force, as Cadence once had for the Alta Group (its system level tools), is expensive. Less obviously, even a distributor or reseller has a cost: upfront cost in training its people; and ongoing cost in supporting them and in the portion of each sale that they retain. At the very least, the separate channel needs to be more productive than it would be to simply sell through the existing channel. By productive, I mean delivering more margin dollars. Sales might be higher with the separate channel, but sales costs might be even higher still, making a separate channel unattractive. That is one reason that, typically, when an acquisition is made, most of the sales force from the
acquired company is folded into the sales force for the acquiring company.

The second issue is reach. The existing salesforce sells to certain customers, and, in fact, to certain groups within those customers. It will be hard for an existing sales force to sell a new product if it has different customers or even completely different groups within those customers. Their “rolodex” (or CRM system) isn’t any use. They are not already on the right aircraft, they are not already going to the right meetings. In this case, that argues strongly for having a separate channel.

The third issue is conflict. So-called “channel conflict” occurs when a customer might be able to purchase the same product through more than one channel, specifically more than one type of channel, such as direct from the company or via some sort of reseller. This has an impact on pricing in a way that might have downsides. For example, go up to Napa Valley and visit a winery. For sure, the winery will be very happy to sell you a few bottles of wine. Since they don’t have any middlemen and have a huge amount of inventory (they don’t just have the few bottles in the store, they have hundreds of barrels of the stuff in the back), surely they will sell you the wine for less than anyone else. But, in general, they will sell you the wine for the highest price anywhere. If they sold it for less, they would make more money at the winery but they would risk having distributors and restaurants refuse to carry it. In EDA, if there is a product available through distribution and direct, then the direct channel cannot routinely undercut the distribution or the distributor will soon stop actively selling.

The fourth reason to have a separate channel is when the market demands, or the company decides, that it must transition its sales from one channel to another.
Maybe they decide to move from direct sales to only doing telesales or only taking online orders. Or perhaps they decide that the day of a standalone product has gone, and they will only be able to sell it in an integrated bundle through a partner going forward. The channel must switch from their former selling method to simply relying on the partner to sell their product and getting their share of those sales (and,
Application Engineers

Application engineers are the unsung heroes of EDA. They have to blend the technical skills of designers with the interpersonal skills of salespeople. Most AEs start out as design engineers (or software engineers for the embedded market). But not all design engineers make it as AEs, partially because, as I’m sure you’ve noticed, not all design engineers have good interpersonal skills! There’s also another problem, memorably described to me years ago by Devadas Varma when we were both at Ambit: “they’ve only been in the restaurant before; now they’re in the kitchen, they’re not so keen on what it takes to prepare the food.” Being an AE means cutting more corners than being a design engineer, and some people just don’t have that temperament. An AE usually has to produce a 95% solution quickly; a design engineer has to take whatever time it takes to produce a 100% solution.

AEs have a lot of options in their career path. As they become more senior and more experienced they have four main routes that they can take. They can remain as application engineers and become whatever the black-belt AEs are called in that company—be the guy who has to get on a plane and fly to Seoul to save a multi-million dollar renewal. They can become AE managers, and run a region or a functional group of AEs. They can move into product marketing, which is always short of people who actually know the product. Or they can move into sales and stop resenting the fact that when the deal closes, for which they feel they did all the work, the salesperson makes more than they do (and usually discover sales is harder than they expected).

In a startup, in particular, the first few AEs hired can be the difference between success and failure. The first release of a product never works properly, never quite matches what the
their salary policies were too inflexible. Good AEs are like gold, and if you don’t have them you don’t get any gold.
Customer support

Customer support in an EDA company goes through three phases, each of which actually provides poorer support than the previous phase (as seen by the long-term customer who has been there since the beginning) but which is at least scalable to the number of new customers. I think it is obvious that every designer at a Synopsys customer who has a problem with Design Compiler can’t simply call a developer directly, even though the developer would provide the best support.

There is actually a phase zero, which is when the company doesn’t have any customers. As a result, it doesn’t need to provide any support. It is really important for engineering management to realize that this is actually happening. Any engineering organization that hasn’t been through it before is completely unaware of what is going to hit them once the immature product gets into the hands of the first real customers who attempt to do some real work with it. They don’t realize that new development is about to grind to a complete halt for an extended period. “God built the world in six days and could rest on the seventh because he had no installed base.”

The first phase of customer support is to do it out of engineering. The bugs being discovered will often be so fundamental that it is hard for the customer to continue to test the product until they are fixed, so they must be fixed fast and new releases gotten to the customer every day or two. By fundamental I mean that the customer library data cannot be read, or the coding style is different from anything seen during development and brings the parser or the database to its knees. Adding other people between the customer engineer and the development engineer just reduces the speed of the cycle of finding a problem and fixing it, which means that it reduces the rate at which the product matures.
Running a salesforce

If you get senior enough in any company, then you'll eventually have salespeople reporting to you. Of course if you are a salesperson yourself, this won't cause you too much of a problem; instead, you'll have problems when an engineering organization reports to you and appears to be populated with people from another planet.

Managing a salesforce when you've not "carried a bag" yourself is hard when you first do it. This is because salespeople typically have really good interpersonal skills and are really good negotiators. You want them to be like that so that they can use those skills with customers. But when it comes to managing them, they'll use those skills on you.

When I first had to manage a salesforce (and, to make things more complicated, this was a European salesforce with French, German, English and Italians) I was given a good piece of advice by my then-boss. "To do a good job of running sales you have to pretend to be more stupid than you are."

Sales is a very measurable part of the business because an order either comes in or doesn't come in. Most other parts of a business are much less measurable and so harder to hold accountable. But
all their salaries without the cash from the business they are generating that quarter to offset those expenses.

Each salesperson needs two application engineers to be effective. Or at least one and a half. This means that a sales team costs approximately $800K per year in salaries, travel and so on, which is $200K per quarter, perhaps a little less if you don't have the full two AEs per salesperson.

As for sales productivity, at capacity a sales team brings in $2 million/year. If you put in much more than this, then you are simply being unrealistic. If you put in much less you'll find that the business never gets cash-flow positive.

EDA tends to have a six-month sales cycle. So normally a new salesperson won't close business in less than six months, and probably nine months. Figure on three months to understand the product and set up initial meetings, six months of sales cycle. I like to use a ramp of $0, $0, $250K, $250K, $500K for the first five quarters, which assumes a salesperson sells nothing for two quarters, is at half speed for two quarters and then hits the full $2 million/year rate. Later, this may be conservative since a new salesperson can inherit some funnel from other existing salespeople in the same territory and so hit the ground if not running then at least not at a standing start. In the early days, it might be optimistic since I've assumed that the product really is ready for sale and it is just a case of adding sales teams. But realistically it probably isn't.

So those are the variables. In five years you need to be at $50 million, which means about 25 sales teams at the end of year four (because only those sales teams really bring in revenue in year five). Some may be through distribution, especially in Asia, but it turns out not to make all that much difference to the numbers.

In the meantime, the rest of the company has to be paid for and they don't directly bring in orders. So if you ramp sales too slowly, the rest of the company will burn more money in the meantime.
This makes the model less sensitive than you would expect to the rate at which you hire salespeople, within reason.
If you hire people too fast on day one, then the hole gets huge before your first teams start to bring in any money to cover the cost of the later guys. You need to get to about $7 million of bookings before things get a bit easier and the early salespeople are bringing in enough to cover the costs of the rest of the company. However, if you bring in people too slowly, then you will not get to a high enough number in the out years. The trick is to hire in a very measured way early and then accelerate hiring later. This will give a hole of about $4-5 million, meaning you should raise about $6 million to give yourself some cushion to cover all the inevitable delays.
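The arithmetic above can be sketched as a small model. The per-team cost and the bookings ramp come from the text; the hiring schedule and the flat burn rate for the rest of the company are hypothetical assumptions for illustration, not figures from this chapter.

```python
# A minimal sketch of the sales-ramp model described above (figures in $K).
# TEAM_COST and RAMP follow the text; the hiring schedule and the flat
# burn rate for the rest of the company are assumptions for illustration.

TEAM_COST = 200          # fully loaded cost per sales team per quarter ($800K/yr)
RAMP = [0, 0, 250, 250]  # bookings in a team's first four quarters
RUN_RATE = 500           # steady-state bookings per quarter ($2M/yr)

def bookings(age):
    """Bookings for a team that has been on board for `age` quarters."""
    if age < 1:
        return 0
    return RAMP[age - 1] if age <= len(RAMP) else RUN_RATE

def cash_curve(hires_per_quarter, other_burn=500):
    """Cumulative cash position at the end of each quarter."""
    cash, curve, starts = 0, [], []
    for q, hires in enumerate(hires_per_quarter, start=1):
        starts += [q] * hires  # record the start quarter of each new team
        inflow = sum(bookings(q - s + 1) for s in starts)
        cash += inflow - TEAM_COST * len(starts) - other_burn
        curve.append(cash)
    return curve

# Measured hiring early, accelerating later (a hypothetical schedule):
print(cash_curve([1, 1, 2, 2, 3, 3, 4, 4]))
```

The deepest point of the printed curve is the "hole" the text talks about; playing with the schedule shows why the model is fairly insensitive to hiring rate within reason, but blows up at either extreme.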
Chapter 6. Marketing

Marketing in EDA is not really about enlarging the market in the way that a lot of consumer marketing is. No matter how good the marketing, how appropriate the price and how clever the product name, there is simply not an untapped reservoir of people who never knew they wanted to do RTL synthesis.

On top of that, there are only relatively few decision makers in the customer companies. But those decision makers are influenced by the designers in their companies.

On the inbound side, marketing is about trying to understand the customer designers' future needs and iterate them into the product. On the outbound side, it is about getting to the customer engineers so that they know the capabilities of your product. With very limited EDA press to speak of, both these tasks are getting harder.
Standards

I was once at a standardization meeting many years ago when a friend of mine leaned over and said, "I tend to be against standards, they just perpetuate other people's mistakes." I think this is really a criticism of standardizing too early. You can only standardize something once you already know how to do it well.

In many businesses, the winner needs to be clear before the various stakeholders will move. Standards are one way for a critical mass of companies to agree on the winner. For example, Philips and Sony standardized the CD for audio and, since it was the only game in town, it was adopted immediately by vendors of CD players. The record labels knew which format to put discs out in, the people building factories to make the CDs knew what to make. A few years earlier, there had been the first attempt to make videodiscs, but there were three or more competing formats. So everyone sat on their hands waiting for the winner to emerge. In the meantime everything failed. When everyone tried again a few years later, the DVD standard was hammered out, it was the winner before it shipped a single disc, and the market took off. This was a lesson that seemed to have been lost in the HD-DVD vs Blu-ray wars, although, by then, discs were starting to be irrelevant—downloading and streaming movies is clearly going to be the long-term winner.

EDA is an interesting business for standards. Since you can only standardize something you already know how to do, standards are useless for anything leading edge. By the time we know how to do something, the first batch of tools is out there using whatever interfaces or formats the initial authors came up with. Standardization, of the IEEE variety, lags far behind and serves to clean up the loose ends on issues where there are already de facto
Old standards

About 12 years ago, I attended a three-day seminar about the wireless industry presented by the wonderfully-named Herschel Shosteck. (Unfortunately, he died of cancer last year although the company that bears his name still runs similar workshops.) It was held at an Oxford college and since there were no phones in the rooms, they didn't have a way to give us wake-up calls. So we were all given alarm clocks. But not a modern electronic digital one. We were each given an old wind-up brass alarm clock. But there was a message behind this that Herschel had long espoused: old standards live a lot longer than you think and you can't ignore them and hope that they will go away.

In the case of the wireless industry, he meant that despite the then-ongoing transition to GSM (and in the US to CDMA and US-TDMA) the old analog standards (AMPS in the US, a whole hodge-podge of different ones in Europe) would be around for a long time. People with old phones would expect them to continue to work and old base stations would continue to be a cheap way of providing service in some areas. All in all, it would take a lot longer than most people were predicting before handset makers could drop support for the old standard and before base stations would not need to keep at least a few channels reserved for the old standard. Also, in particular, before business models could fold in the cost saving from dropping the old standard.

My favorite old standard is the automobile "cigarette lighter" outlet. According to Wikipedia, it is actually a cigar lighter receptacle (hence the size, much larger than a cigarette). The current design first started appearing in vehicles in 1956.
better that customers will pay for similar tools that they already own.

I had lunch with Paul Estrada (a.k.a. Pi). He is COO of Berkeley Design Automation (which is obviously located in…Santa Clara). They produce a SPICE-accurate circuit simulator, AFS, that is 5 to 10 times faster and has higher capacity than the big company SPICE tools. For designers with really big simulations, that is a pretty compelling value proposition (over lunch instead of overnight). But for designers with smaller simulations and access to unlimited big company SPICE simulators, it is harder to convince them to even take a look, never mind open their wallets. However, those slow big company simulators still tie up hardware—and circuit simulators are both CPU- and memory-intensive, so need the good hardware—and they keep expensive designers busy waiting.

So Berkeley recently introduced a block-level SPICE tool, AFS Nano, that sells for only $1,900. This literally saves customers enough in hardware to justify the purchase, even if they have a pile of big company SPICE simulators stacked up on the shelf. Oh yeah, and those expensive designers can get back to work. It is not quite the freemium business model (which would require giving AFS Nano away) but it is close. As with the other models, Berkeley hopes the near-freemium AFS Nano will get customers interested in their big tools.

Another interesting book is What Would Google Do? by Jeff Jarvis. He examines lots of businesses and wonders what they would look like if you largely gave away everything to make the user experience as good as possible, and then found alternative ways to monetize the business.

EDA software is notoriously price-inelastic. It doesn't matter how cheap your tool is, it has a relatively small number of potential users. You might steal some from a competitor, but overall the number of customers is not driven by the price of the tools in the same way as, say, iPods.
So a free business model is unlikely to work unless there is a strong payment stream from somewhere else such as a semiconductor royalty. There is also a
with simulation. Finding all the right boards and cables would take at least a couple of weeks.

I was at a cellphone conference in the mid-1990s where I talked to a person in a different part of Ericsson. They had a huge business building cell-phone networks all over the world. He did system modeling of some sort to make sure that the correct capacity was in place. To him a system wasn't a chip, wasn't even a base-station. It was the complete network of base-stations along with the millions of cell-phones that would be in communication with them. He thought on a completely different scale to most of us.

His major issues were all at the basic flow levels. The type of modeling he did was more like fluid dynamics than anything electronic. The next level down, at the base-station, the biggest problem was getting the software correctly configured for what is, in effect, a hugely complex multi-processor mainframe with a lot of radios attached. Even on an SoC today, more manpower goes into the software than into designing the chip itself.

And most chips are built using an IP-based methodology, some of which is complex enough to call a system in its own right. So it's pretty much "turtles all the way down".
they wanted; they designed what they wanted to and then found a carrier willing to take it largely unseen. Of course lots of people were involved in the iPhone design, not just CEO Steve Jobs and chief designer Jonathan Ive (another Brit, by the way, referring back to my post about the benefits of easier immigration) but it was designed with a conceptual integrity rather than a list of tick-the-box features. The first version clearly cut a lot of corners that might have been fatal: no 3G data access, no GPS, no cut-and-paste, no way to send photos in text messages, only a couple of applications that honored landscape mode. The second version came with 3G and GPS. Most of the rest of the initial peeves are now fixed in the 3.0 version of the operating system (which, as a registered iPhone developer, I already have installed). But the moral is that they didn't ask their customers to produce a feature list, and they didn't make an attempt to implement as much of that list as possible.

When I was at Cadence, we were falling behind in place and route. So we decided to build a next-generation place and route environment including everything the customers wanted. It was to be called Integration Ensemble. We asked all our customers what the requirements should be. So, of course, it ended up as a long list of everything every customer group had ever wanted, with little conceptual integrity. In particular, for example, customers insisted that Integration Ensemble should provide good support for multiple voltages, which were just going mainstream at that time, or they wouldn't even consider it. We speced out such a product and started to build it. With so many features, it would take longer to build than customers would want to wait, but customers were insistent that anything less than the full product would be of no use. Then these same customers all purchased Silicon Perspective since what they really needed was good placement and fast feedback, which was not at the top of their list.
Silicon Perspective did not even support multiple voltage supplies at that point. The end of that story was that Cadence expensively acquired Silicon Perspective and Integration Ensemble was quietly dropped. The customers got what they wanted even though they never asked for it.
Brand name counts for very little in EDA. To the extent it counts for anything in this context, it stands for a large organization of application engineers who can potentially help adoption. It certainly doesn't stand for rock-solid reliability. The speed of development means that every large EDA company has had its share of disastrous releases that didn't work and products that never made it to market. There are no Toyotas and Hondas in EDA with a reputation for unmatched quality. I don't think anyone knows how it would be possible to create one without it also having a reputation for the unmatched irrelevance of many of its products due to lateness.

So there are a few theories. Like all stories after the fact, they are plausible but it is not clear if they are the real reason. But the facts are clear: traditional marketing, such as advertising, doesn't work for EDA products.
Test cases

One critical aspect of customer support is the internal process by which bugs get submitted. The reality is that if an ill-defined bug comes in, nobody wants to take the time to isolate it. The AEs want to be out selling and feel that if they just throw it over the wall to engineering, then it will be engineering's job to sort it out. Engineering feels that any bug that can't easily be reproduced is not their problem to fix. If this gets out of hand, then the bug languishes, the customer suffers and, eventually, the company does too. As the slogan correctly points out, "Quality is everyone's job."

The best rule for this that I've ever come across was created by Paul Gill, one of the founders of Ambit. To report a bug, an application engineer had to provide a self-checking test case, or else engineering would not consider it. No exceptions. And he was then obstinate enough to enforce the "no exceptions" rule.

This provides a clear separation between the AE's job and the development engineer's job. The AE must provide a test case that illustrates the issue. Engineering must correct the code so that it
is fixed. Plus, when all that activity is over, there is a test case to go in the regression suite.

Today, most tools are scripted with TCL, Python or Perl. A self-checking test case is a script that runs on some test data and gives a pass/fail test as to whether the bug exists. Obviously, when the bug is submitted the test case will fail (or it wouldn't be a bug). When engineering has fixed it, then it will pass. The test case can then be added to the regression suite and it should stay fixed. If it fails again, then the bug has been re-introduced (or another bug with similar symptoms has been created).

There are a few areas where this approach won't really work. Most obvious are graphics problems: the screen doesn't refresh correctly, for example. It is hard to build a self-checking test case since it is too hard to determine whether what is on the screen is correct. However, there are also things that are on the borderline between bugs and quality-of-results issues: this example got a lot worse in the last release. It is easy to build the test case. But what should be the limit? EDA tools are not algorithmically perfect so it is not clear how much worse should be acceptable if an algorithmic tweak makes most designs better. But it turns out that for an EDA tool, most bugs are in the major algorithms under control of the scripting infrastructure and it is straightforward to build a self-checking test case.

So when a customer reports a bug, the AE needs to take some of the customer's test data (often they are not allowed to ship out the whole design for confidentiality reasons) and create a test case, preferably a small and simple one, that exhibits the problem. Engineering can then fix it. No test case, no fix.

If a customer cannot provide data to exhibit the problem (the NSA is particularly bad at this!), then the problem remains between the AE and the customer.
Engineering can’t fix aproblem that they can’t identify.With good test infrastructure, all the test cases can be runregularly, and since they report whether they pass or fail, it iseasy to build a list of all the failing test cases. Once a bug has
been fixed, it is easy to add its test case to the suite and it will automatically be run each time the regression suite is run.

That brings up another aspect of test infrastructure. There must be enough hardware available to run the regression suite in reasonable time. A large regression suite with no way to run it frequently is of little use. We were lucky at Ambit that we persuaded the company to invest in 40 Sun servers and 20 HP servers just for running the test suites.

A lot of this is fairly standard these days in open-source and other large software projects. But somehow, it still isn't standard in EDA, which tends to provide productivity tools for designers, without using state-of-the-art productivity tools themselves.

On a related point, the engineering organization needs to have at least one very large machine too. Otherwise, customers inevitably will run into problems with very large designs where there is no hardware internally, even to attempt to reproduce the problem. This is less of an issue today when hardware is cheaper than it used to be. It is easy to forget that ten years ago, it cost a lot of money to have a server with eight gigabytes of memory; few hard disks were even that big back then.

And for another approach, there's always XKCD on test cases.
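The self-checking pass/fail pattern described above can be sketched as follows. The "tool" here is a stand-in function and the bug number is invented; in a real flow the script would drive the actual EDA tool through its TCL or Python interface on a small design derived from the customer's data.

```python
# A minimal sketch of a self-checking test case of the kind described above.
# tool_worst_slack() is a hypothetical stand-in for the tool under test;
# the netlist, bug number, and golden value are illustrative, not real.

def tool_worst_slack(netlist):
    """Stand-in for the tool under test: report worst slack in ps."""
    return min(netlist.values()) - 10   # hypothetical analysis

def test_bug_1234():
    """Fails while the reported bug exists; passes once engineering fixes it."""
    netlist = {"u1": 40, "u2": 35}      # tiny stand-in for customer data
    expected = 25                        # golden result agreed with the AE
    return "PASS" if tool_worst_slack(netlist) == expected else "FAIL"

# A FAIL goes to engineering with the test case attached; once it prints
# PASS, the same script joins the regression suite and keeps the bug fixed.
print(test_bug_1234())
```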
can’t create demand even though they could have money to spendif it were effective.EDA used to be rich enough that it would advertise anyway, atleast to get the company name out in front of people (rememberall those in-flight magazine ads for Cadence and the “curfewkey” and suchlike). But as times got tighter, EDA stoppedadvertising since it was ineffective. In turn, the books that used tocover EDA, like EE Times and EDN, cut back their coverage andlaid off their specialist journalists like Richard Goering and MikeSantarini. To be honest, I think for some time before that, themajor readers of EDA coverage were the other EDA companies,not the customers. I don’t have any way to know, but I’m sure thereadership of my blog is the same.Trade shows seem to be a dying breed too, and not just in EDA.DATE seems to be dead, as a tradeshow, with almost noexhibitors any more. I wouldn’t be surprised if this year, 2010, ithas almost no visitors any more either, and gives up next year.EDA seems like it can support one real tradeshow, which isDAC. It is mainly for startups for whom it is really the only wayto get discovered by customers outside of having a half-reasonable website. The large EDA companies run their owntradeshows in an environment that leverages their costs betterthan paying a ridiculous rate for floor space, paying rapaciousconvention center unions to set up the booth, and putting up withwhatever restrictions show management has chosen for this year(“you can’t put a car on the booth, just because” was onememorable one that I ran into once).The large EDA companies, with some justification, feel that a bigpresence at DAC is subsidizing their startup competitors as wellas not being the most cost-effective way to reach their customersto show them the portfolio of new products. 
The best is to avoid the political noise by at least showing up, but the booths with 60 demo suites running continuously with a 600-employee presence are gone.

That leaves websites and search engines as the main way that customer engineers discover what is available. So you'd think
Licensed to bill

In every sizeable EDA company that I've worked at, a huge percentage, 30-50%, of all calls to the support hotline have to do with license keys. Why is this so complicated? Are EDA software engineers incompetent?

Most of these problems are not directly with the license key manager (the most common, almost universal, one is FlexLM). Sometimes, there are direct issues because customers want to run a single license server for all the EDA tools they have from all their vendors, something that the individual EDA companies have a hard time testing since they don't have access to everyone else's tools. More often, license problems are simply because licenses are much more complicated than most people realize.

All sorts of license problems can occur, but here is a typical one. The customer wants some capability and discusses it with the salesperson, who provides a quote for a particular configuration. Eventually, the customer places an order and a license key is cut for that configuration. At this point, and only then, it turns out
expect that the tool will behave gracefully when their paucity of licenses comes to light and the run is deep in the innards of the tool when the customer finds out that the tool cannot continue. Interactive tools are worse still. Do you claim a license in order to show the capability on a menu? Or do you show a menu item that may fail due to lack of a license when you click it? Do you behave the same if the customer has licenses, but all are currently in use, versus the customer not having any licenses to that product at all?

None of these problems typically affects the engineers developing the product or their AEs. Usually all employees have a "run anything" license. The license issues often only come to light when customers run into problems. After all, they may be the only site in the world running that particular configuration. Some testing can be done easily, but exhaustive testing is obviously impossible.

EDA companies want to create incremental revenue for new capabilities, so they don't want to simply give them to all existing customers even though they may want to make sure that all new customers are "up to date." This drives an explosion of license options that sometimes interact in ways that nobody has thought of.

Until some poor engineer, in the middle of the night, tries to simulate a design containing two ARM processors. That's when they discover that nobody thought about whether two ARM simulations should require two licenses or one. The code claims another license every time an ARM model is loaded—in effect, it says two. Marketing hadn't considered the issue. Sales assured the customer that one license would be enough without asking anyone. Nobody had ever tried it before. "Hello, support?"
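The two-ARMs ambiguity above comes down to where the checkout call sits in the code. This sketch uses a hypothetical license-manager class (it is not FlexLM's real API); it only counts checkouts to contrast claiming per model instance loaded versus per distinct model type.

```python
# A sketch of the two-ARMs licensing ambiguity described above. The
# LicenseServer class is hypothetical, not FlexLM's real interface.

class LicenseServer:
    def __init__(self, available):
        self.available = available
        self.in_use = 0

    def checkout(self):
        if self.in_use >= self.available:
            raise RuntimeError("ARM model license exhausted")
        self.in_use += 1

def load_per_instance(server, models):
    """What the code actually did: claim a license for every model loaded."""
    for _ in models:
        server.checkout()

def load_per_feature(server, models):
    """What the customer assumed: one license per distinct model type."""
    for _ in set(models):
        server.checkout()

load_per_feature(LicenseServer(1), ["ARM", "ARM"])       # fine: one distinct model
try:
    load_per_instance(LicenseServer(1), ["ARM", "ARM"])  # "Hello, support?"
except RuntimeError as err:
    print(err)
```

Neither placement is wrong in the abstract; the support call happens because marketing, sales and engineering each assumed a different one.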
DAC

Trade shows in general are probably gradually dying. I doubt we'll be going to them in ten years' time. But rumors of their
the simulation vendors, which meant all the big guys anyway. So they invited everyone. Denali was under 10 employees in this era, not well-known; they were more worried about holding a party and nobody coming than the opposite. But never underestimate the gravitational attraction of an open bar.

They expected about 100, maybe 150, people would attend. One thing that they hadn't anticipated was that the AEs from the big guys weren't able to get into their own parties (the execs and sales guys went with their customers; AEs need not apply). So they showed up in large numbers. In the end, well over 500 people came for at least some of the evening. At midnight, the venue management told them they had to stop the party since the entire night's alcohol budget was already gone. So they gulped, wrote a large check, and kept the party going for another hour. Shutting down a party as early as midnight in New Orleans and throwing their customers out didn't exactly laissez les bons temps rouler.

They realized that the party had been something special, and not just for their customers. The entire EDA community had shown up since Denali was neutral ground. Nobody from Cadence went to the Synopsys party and vice versa. But Denali, as the Switzerland of EDA, welcomed everyone. So next year, it seemed like it would be a good idea to do it again. And so it has been for many years.

I think it has turned out, somewhat fortuitously, to have been a great way to market themselves. We are in an era when it is really hard to get your name out in front of customers and partners. Denali doesn't have that problem, plus it has a lot of goodwill from the entire EDA community since the Denali party isn't exclusive. You don't have to be a customer of Denali to get in; you can even be a competitor.

So here we are a decade later. Everyone knows who Denali is, and they are a much bigger company now. They are still private, so just how big they are is largely a guess.
But nobody cares about their revenue; the financial answer everyone wants to know is "how much does the Denali party cost?" I slipped a shot of vodka into Mark's Diet Coke but he still wasn't talking.
Value propositions

I spent some time recently giving someone a bit of free consulting about value propositions in EDA. If you take the high-level view, then there seem to be three main value propositions in EDA: optimization, productivity and price.

Optimization means that your tool produces a better result than alternatives. A place and route tool that produces smaller designs. A synthesis tool that produces less negative slack. A power-reduction tool that reduces power. This is the most compelling value proposition you can have since the result from using your tool—as opposed to sticking with the status quo—shows through in the final chip, affecting its price, performance or power. The higher the volume the chip is expected to run at, the higher the value of optimizing it.

Productivity means that your tool produces an equivalent result to the alternatives, but does it in less time. My experience is that this is an incredibly difficult value proposition to sell unless the productivity difference is so large that it is a qualitative change: 10X, not just 50% better. Users are risk-averse and just won't move if they have "predictable pain." It may take an extra week or an extra engineer, but it is predictable and the problem is understood and well-controlled. A new tool might fail, causing unpredictable pain, and so the productivity gain needs to be enormous to get interest. Otherwise, the least risky approach is to spend the extra money on schedule or manpower to buy predictability.

The third value proposition is that you get the same result in the same time but the tool is cheaper. For something mission-critical this is just not a very interesting value proposition, sort of like being a discount heart surgeon. Only for very mature product spaces where testing is easy is price really a driver: Verilog simulation, for example.
The only product I can think of that strongly used price as its competitive edge was the original ModelSim VHDL simulator, and even then, it was probably simply the best simulator and the low price simply left money on the table.
On the other hand, many startups fail because they are too early to market. In EDA, technologies tend to be targeted at certain process nodes which we can see coming down the track. There's little upside in developing technologies to retrofit old design methodologies that, by definition, already work. Instead, the EDA startup typically takes the Wayne Gretzky approach of going where the puck is going to be. Develop a technology that is going to be needed and wait for Moore's Law to progress so that the world does need it. The trouble with this is that it often underestimates the amount of mileage that can be gotten out of the old technologies.

Since process nodes come along every couple of years, and even that is slowing, getting the node wrong can be fatal. If you develop a technology that you believe everyone needs at 45nm but it turns out not to be needed until 30nm, then you are going to need an extra two years of money. And even then, it may turn out not to be really compelling until the 22nm node, after you've gone out of business. All the OPC (optical proximity correction) companies were too early to market, supplying technology that would be needed but wasn't at that point in time. Even companies that had good exits, like Clearshape, were basically running out of runway since they were a process generation ahead of when their technology became essential.

The windows paradigm was really developed at Xerox PARC (yes, Doug Engelbart at SRI had a part to play too). Xerox is often criticized for not commercializing this windows concept, but in fact they did try. They had a computer, the Xerox Star, with all that good stuff in it. But it was way too expensive and failed because it was too early. The next attempt was Apple. Not Macintosh, Lisa. It failed. Too early and too expensive. One can argue the extent to which the first Macs were too early, appealing only to hobbyists, at first, until the laser printer (also invented at PARC) came along.
There are other dynamics in play than just timing but Microsoft clearly made the most money out of commercializing those Xerox ideas, coming along after everyone else.
Barriers to entry

Whenever I look around at DAC, one thing that is in some ways surprising is that, given the poor growth prospects of the EDA industry, there are so many small EDA companies.

If you are a technologist of some sort, then it seems like the challenge of getting an EDA company going is insurmountable. After all, there are probably only a couple of dozen people in the world who have deep enough knowledge of the esoteric area of design or semiconductor to be able to create an effective product. That seems like it should count as a high barrier.

But, in fact, technology is the lowest of barriers if you are in a market where technology counts for something. Designing and building chips is something that races along at such a breakneck pace that the whole design ecosystem is disrupted every few years and new technology is required. It has to come from somewhere. As a result, brand name counts for very little and small companies with differentiated important technology can be successful very quickly.
Other industries are not like that; nowhere else does technology move so fast. What was the last big innovation in automotive? Probably hybrid powertrains. Most cars still don't have them and it is now ten-year-old technology.

Let's think of an industry with just about the least amount of technology, so pretty much at the other end of the scale from EDA and semiconductor: bottled water. Do you think that your bottled water startup is going to do well because you have better water technology? Do you think that the customer who chose Perrier rather than Calistoga could actually taste the difference anyway? Bottled water is selling some sort of emotional aspirational dream.

You've obviously noticed that if you go to a bar and get upscale water then you typically end up with something from Europe (San Pellegrino, Perrier, Evian) and not something from California (Crystal Geyser, Calistoga). It has to be bottled in Europe and shipped here. Why don't they ship it in bulk and bottle it here? For the same reason as wine is bottled before it is shipped: nobody would trust what was in the bottle. One thing that surprised me when I was in Japan a couple of years ago is that the Crystal Geyser water we turn down as being insufficiently upscale is what they drink over there. It comes from California, the other side of the Pacific, how exotic is that? I don't know if the third leg of the stool exists, people in Europe drinking water from Asia: bottled from a spring on Mount Fuji, how zen is that?

In between are lots of companies and industries where there is obviously a technical component, and an emotional component. BMW may be the ultimate driving machine, but most people who buy one couldn't tell you what a brake-horsepower is, even if they know how many their car has. And almost nobody actually uses all that horsepower, running their car at the redline on the tach all the time. Yes, there's technology but mostly it's an emotional sell.

In the commercial world, think of Oracle.
Do you think you are going to displace Oracle because your little startup has some
203EDAgraffiti
Chapter 7. Presentations

Presentations are a primary way that we communicate with others. The heart of what sales needs to sell a product is a presentation. The heart of what an investor wants to hear is embodied in a presentation.

Most people make a mistake with their presentations and try and make them do too many things: be the backdrop to what you are saying but also be the teleprompter for what you should be saying. Breaking that habit is essential to improve the quality of your presentations.
In the consulting work I do, I find that not getting these two things right is very common: presentations where the basic message is not clear, and presentations that do not flow from beginning to end. Not to mention people trying to get through 20 slides in 10 minutes.

If you are presenting to foreigners who don't speak good English, you must make sure that everything important is on the slides since you can assume they will not catch everything that you say (maybe anything you say). You will also need to avoid slang that non-Americans might not understand (although you'd be surprised how many baseball analogies Europeans use these days without knowing what they really mean in a baseball context). I remember the people at a Japanese distributor being confused by "low-hanging fruit." They thought it must have some sort of sexual connotation!

So make sure you know the main point, and make sure that the presentation tells a story that starts from and finishes with the main point.

Oh, and here is another rule of thumb. Print out your slides. Put them on the floor. Stand up. If you can't read them, the type is too small. Or go with Guy Kawasaki's rule of using a minimum font size at least half the age of the oldest person in the room.
But just like Steve Jobs or the TED presenters, to carry this off well you need to rehearse until you have your speech perfect, either basically memorizing it or doing it from notes. Whatever you do, don't write it out word for word and read it. The slides are not going to help you remember what to say; they are another complication for you to make sure is synchronized with your speech. So rehearse it without the slides until you have that perfect. Then rehearse it with the slides. Then rehearse it some more. Like a good actor, it takes a lot of repetition to make ad libs look so spontaneous.

This approach will not work presenting to foreigners who don't speak fluent English. There is simply not enough context in the visuals alone, and your brain has a hard time processing both visuals and speech in a second language. If you know a foreign language somewhat, but are not bilingual, then watch the news in that language. It is really hard work, and you already know the basic story since they cover the same news items as the regular network news.

If you are giving a keynote speech, then this is the ideal style to use. You don't, typically, have a strong "demand" like you do when presenting to investors (fund my company) or customers (buy my product). Instead you might want to intrigue the audience, hiding the main point until late in the presentation. So instead of opening with a one-slide version of the whole presentation, you should try and find an interesting hook to get people's interest up. Preferably not that Moore's Law is going to make our lives harder, since I think we've all heard that one.

I find that the most difficult thing to achieve when giving speeches to large rooms of people is to be relaxed, and be myself. If I'm relaxed, then I'm a pretty good speaker. If I'm not relaxed, not so much. Also, my natural speed of speaking is too fast for a public speech, but again if I force myself to slow down, it is hard to be myself.
This is especially bad if presenting to foreigners since I have to slow down even more.

I also hate speaking from behind a fixed podium. Sometimes you don't get to choose, but when I do I'll always take a wireless lavalier (lapel) mike over anything else, although the best ones are not actually lapel mikes but go over your ear so that the mike comes down the side of your head. That leaves my hands free, which makes my speaking better. Must be some Italian blood somewhere.

Another completely different approach, difficult to carry off, is what has become known as the Lawrence Lessig presentation style, after the Stanford law professor who originated it. Look, for example, for presentations where he talks about copyright and gets through 235 slides in 30 minutes, or watch a great presentation on identity with Dick Hardt using the same approach. Each slide is on the screen for sometimes just fractions of a second, maybe containing just a single word. I've never dared to attempt a presentation like this. The level of preparation and practice seems daunting.
music itself and they can only make money by selling stuff associated with music that is harder to copy: clothing, concert performances and so on. In just the same way, hardware companies ride on products like Linux, recovering any development they do for the community (if any) through their hardware margin. Nonetheless, the opportunity is to move EDA from just plain IC design up to these higher levels and find a business model that makes it work.

Finally, the fourth opportunity is to look still further afield and take in the entire design process, in a similar way as PLM companies like IBM, PTC and Dassault do for mechanical, but with considerably less technology on the design side. Take the "E" out of "EDA." By taking on the entire design problem, the business model issues associated with software might be side-stepped. And all four challenges are really about software.

In summary, the challenge is to expand from EDA as IC design (which is the most complex and highest-priced part of the market) to design in general, in particular to take in the growing software component of electronic systems. It's a technology problem for multicore, but most of the rest is a business challenge
might be much more than the container trucks, but the legacy stuff all comes in containers. It just doesn't do to look only at the total carrying capacity.

A company I'm on the board of, Tuscany Design Automation, has a product for structured placement. In essence, the design expert gives the tool some manual guidance. But people are worried at how difficult this is, since they've never used a tool that made it easy. It really is hard in other tools, where all you get is to edit a text file and you don't get any feedback on what you've done. The analogy I've come up with is that it is like computer typesetting before Macs and PageMaker and Word. You had text-based systems where you could put in arcane instructions and make it work, but it was really hard and best left to specialists. Once the whole desktop publishing environment came along, it turned out that anyone (even great aunt Sylvia) could produce a newsletter or a brochure. It was no longer something that had to be left to typesetting black belts. And so it is with structured placement. Once you make it easy, and give immediate feedback, and people can see what they are doing, then anyone can do it.
Books on presentations

A few days ago I was asked by a friend what books I would recommend on putting together presentations. There are lots out there and I don't claim to have looked at all of them. But here are five that I think are especially good.

The first book isn't actually about presentations specifically but is about one aspect of a few slides in some presentations. It is Edward Tufte's book The Visual Display of Quantitative Information. He has two more books which are also worth reading but they go over much of the material in this first book in more detail. Anyone who has ever put a graph or a table in a presentation (or in an article or white paper for that matter) should read this book. It is full of wonderful examples of appalling presentation of data as well as some exemplary ones. Too many books on presentations show you some good ones without being brave enough to call out presentations that don't work.

The next book is very analytical and contains a lot of data about what works and what does not in presentations. It is Andrew Abela's Advanced Presentations by Design. One key finding is that if you put the points you want to make in bullets in your presentation, then when you present it (so you are speaking as well as showing the slides), it is actually less effective than simply showing the presentation and shutting up, or giving the speech and not showing the slides.

Next, two books that really are about putting together presentations. Garr Reynolds has a book called Presentation Zen and Nancy Duarte has one called slide:ology. These two books somewhat cover the same material with slightly different perspectives. In fact, the blurb on the back of each book is written
by the other author. You probably don't need both of them but you'll need to look at them both to decide which one you feel most comfortable with. Both books carry on from the analysis I mentioned above, emphasizing that a presentation should be designed to reinforce visually what you are saying, not repeat it textually. A presentation is not a crutch for the presenter, not a sort of teleprompter for not having rehearsed enough.

Finally there is Jerry Weissman's Presenting to Win. This is complementary to the other books in that it focuses much less on the visual aspect of a presentation and much more on how to make a presentation tell a story. His track record and focus is putting together presentations for IPO roadshows, which are probably the type of presentation that has more money riding on it than anything else. But most of what he says is appropriate for other types of presentations.

With these books, you get instruction on how to create a compelling narrative in a presentation, how to maximize the visual impact of your presentation, how to display quantitative information compellingly, and more analysis than you probably care to read about what works and what doesn't in presentations.

Two other resources that I think are good for presentations: any Steve Jobs keynote speech (look at the iPhone announcement if you only look at one) and many of the speakers at TED, which has a 20-minute time limit and so forces speakers to maximize their impact and focus on the most important messages.
Chapter 8. Engineering

The heart of any EDA company, especially a startup, is engineering. In fact, in a startup, engineering may be all there is apart from a CEO.

Developing EDA software is different from many types of software since the developer is not a user. A lot of software is developed by one or more engineers to scratch an itch they have: most Internet software starts like that. But good software engineers generally don't know how to design chips and vice versa. So, in addition to the challenge of writing the code, there is the challenge of understanding the customer need. Of course, a lot of other software is like that too (dentist office scheduling software, for example), but that sort of software is also usually much easier to understand because it is simpler and because the underlying technology is not racing along with major changes every year or two.
Internal development

One potential change to the way chips are designed is for EDA to become internal to the semiconductor companies. In the early days of the industry, it always was.

Until the early 1980s there wasn't really any design automation. There were companies like Calma and Applicon that sold polygon-level layout editors (hardware boxes in those days), and
programs like Spice and Aspec that were used for circuit simulation (and usually ran on mainframes). Also there were a couple of companies supplying DRC software, which also typically ran on mainframes.

In the early 1980s, companies started to develop true design automation internally. This was implemented largely by the first set of students who'd learned how to design chips in college as part of the Mead and Conway wave. Hewlett-Packard, Intel and Digital Equipment, for example, all had internal development groups. I know because I interviewed with them. Two startups from that period, VLSI Technology (where I ended up working when I first came to the US) and LSI Logic, had ambitious programs because they had a business of building chips for other people. Until that point, all chips were conceived, designed and manufactured internally within semiconductor companies. VLSI and LSI created what we initially called USICs (user specific integrated circuits) but what eventually became known, less accurately, as ASICs (application specific integrated circuits). It was the age of democratizing design. Any company building an electronic product (modems, Minitel, early personal computers, disc controllers and so on) could design their own chips. At this stage a large chip was a couple of thousand gates. The EDA tools to accomplish this were supplied by the semiconductor company and were internally developed.

First, front-end design (schematic capture and gate-level simulation) moved out into a third-party industry (Daisy, Mentor, Valid), and then more of design did, with companies like ECAD, SDA, Tangent, Silicon Compilers, Silicon Design Labs and more moving out from the semiconductor companies into the EDA industry.

At first, the quality of the tools was almost a joke. I remember someone from the early days of Tangent, I think it was, telling me about visiting AT&T. Their router did very badly set against the internal AT&T router.
But there was a stronger focus and a bigger investment behind theirs and it rapidly overtook the internal router. Since then, almost all EDA investment moved into the third-party EDA industry. ASIC users, in particular, were
There is no real market today for tools for FPGA design. The tools are all (OK, mostly) internally developed. But the economics wouldn't work when there are only two or three FPGA vendors. It is more economic for each vendor to develop their own suite (not to mention that it better fits their business model).

One future scenario is that all semiconductor design becomes like microprocessor design and FPGA design: too few customers to justify an external EDA industry, too specialized needs in each customer to make a general solution economic. Design moves back into the semiconductor companies. I don't have much direct knowledge of this happening, but industry analyst and pundit Gary Smith is always pointing out that it is an accelerating trend, and he sees much better data than I do.

One other issue is that for any design tool problem (such as synthesis or simulation), there is only a small number of experts in the world and, by and large, they are not in the CAD groups of semiconductor companies; they are in the EDA companies. I predicted earlier that the world is looking towards a day of three semiconductor clubs. In that environment, it is much more like the FPGA world and so it is not farfetched to imagine each club needing to develop their own tool suite. Or acquiring it. Now how many full-line EDA companies are there for the three clubs? Hmm.
Groundhog Day

You've probably seen the movie Groundhog Day, in which Bill Murray's self-centered weatherman character is stuck in a time warp, waking up every morning to exactly the same day until, after re-examining his life, he doesn't. Taping out a chip seems to be a bit like that, iterating and trying to simultaneously meet budgets in a number of dimensions: area, timing and power. And, of course, schedule. Eventually, the cycle is broken and the chip tapes out.
The simulator needs to know that to get the timing right. But Vdd and Vss don't occur explicitly in the netlist. This is mainly for historical reasons since they didn't occur explicitly in schematics either. Besides, back then, there was only one of each so there wasn't the possibility of ambiguity.

The CPF and UPF standards were the most recent EDA standards war. It looks like another Verilog/VHDL standoff where both sides sort of win, and tools will need to be agnostic and support both. Both standards are really a way of documenting power intent for the techniques for power reduction that advanced design groups have struggled to do manually. CPF (common power format, but think of the C as Cadence, although it is officially under Si2 now) seems slightly more powerful than UPF (unified power format, but think of the U as Synopsys, Magma and Mentor, although it is officially under Accellera now and is on track to becoming an IEEE standard, P1801). CPF and UPF attempt to separate the power architecture from everything else so that changes can be made without requiring, in particular, changes to the RTL.

Both standards do a lot of additional detailed housekeeping, but one important job that they do is to define for each group of gates which power supply it is attached to, so that all tools can pick the correct performance, hook up the correct wires, select the right library elements during synthesis, know when a block is turned off and so on.

The detailed housekeeping that the standard formats take care of acknowledges that the netlist is not independent of the power architecture. For example, if two blocks are attached to power supplies with different voltages, then any signals between the two blocks need to go through level shifters to ensure that signals switch properly.
But they don't appear explicitly in the netlist. Since those level shifters will eventually be inserted at place and route, any earlier tools that analyze the netlist need to consider them too or they will be confused.

The purpose of the CPF and UPF formats is to make explicit what these changes to the netlist are so that all tools in the flow
make the same decision and are not surprised to find, say, an isolation cell in the layout that doesn't correspond to anything in the input netlist. Or, indeed, an isolation cell missing in the layout, which should have been inserted despite the fact that it doesn't appear in the input netlist either.

You can learn a lot about low-power techniques by reading the tutorial documents and presentations on the various websites associated with these two important standards.
Multicore

As most people know, power is the main reason that PC processors have had to move away from single-core chips with increasingly fast clock rates and towards multicore chips. Embedded chips are starting to go in the same direction too; modern cell-phones often contain three or more processors even without counting any special-purpose ones used for a dedicated purpose like mp3 decode. The ARM Cortex is multicore.

Of course, this moves the problem from the IC companies (how to design increasingly fast processors) to the software side (how to write code for multicore chips). The IC companies have completely underestimated the difficulty of this.

The IC side of the house has assumed that this is a problem that just requires some effort for the software people to write the appropriate compilers or libraries. But, in fact, this has been a research problem for over forty years: how do you build a really big powerful computer out of lots of small cheap ones? It is unlikely to be solved immediately, although clearly a lot more research is going on in this area now.

There are some problems, traditionally known as "embarrassingly parallel," which are fairly easy to handle. Far from being embarrassing, the parallelism is so simple that it is easy to make use of large numbers of processors, at least in principle. Problems like ray-tracing, where each pixel is calculated independently, are the archetypal example. In fact nVidia and ATI graphics processors are essentially multicore processors for calculating
The other big problem is that most code already exists in libraries and in legacy applications. Even if a new programming paradigm is invented, it will take a long time for it to be universally used. Adding a little multi-threading is a lot simpler than completely rewriting Design Compiler in a new, unfamiliar language, which is probably at least a hundred man-years of effort even given that the test suites already exist.

There are some hardware issues too. Even if it is possible to use hundreds of cores, the memory architecture needs to support enough bandwidth of the right type. Otherwise, most of the cores will simply be waiting for relatively slow access to the main memory of the server. Of course, it is possible to give each processor local memory, but if that is going to be effective those local memories cannot be kept coherent. And programming parallel algorithms in that kind of environment is known to be something only gods should attempt.

I've completely ignored the fact that it is known to be a hard problem to write parallel code correctly, and even harder when there really are multiple processors involved, not just the pseudo-parallelism of multiple threads or processes. As it happens, despite spending my career in EDA, I've got a PhD in operating system design, so I speak from experience here. Threads and locks, monitors, message passing, wait and signal, all that stuff we use in operating systems is not the answer.

Even if the programming problem is solved with clever programming languages, better education and improved parallel algorithms, the fundamental problems remain: Amdahl's Law limits speedup, the bottleneck moves from the processor to the memory subsystem, and there's the need to dynamically handle the parallelism without introducing significant overhead. They are all hard problems to overcome.
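To make Amdahl's Law concrete: if a fraction p of the work parallelizes perfectly and the rest is serial, the best possible speedup on n cores is 1/((1-p) + p/n). Here is a minimal sketch (the 95% figure is illustrative, not a measurement of any particular tool):

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the runtime
    is perfectly parallelizable across n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel workload is capped at 20x, no matter how many cores:
for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
```

The serial 5% dominates long before core counts get large, which is exactly why "just add cores" does not rescue tools like simulators with significant serial sections.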
Meanwhile, although the numbers are small now, the number of cores per die is increasing exponentially; it just hasn't got steep yet.

Our brains manage to be highly parallel though, and without our heads melting, so there is some sort of existence proof of what is
slightly out of focus, the metal was slightly over-etched. But with OPC, identical transistors may get patterned differently on the reticle, depending on what else is in the neighborhood. This means that when the stepper is slightly out of focus it will affect identical transistors (from the designer's point of view) differently.

Treating worst-case timing as an absolutely solid and accurate barrier was always a bit weird. I used to share an office with a guy called Steve Bush who had a memorable image of this. He said that treating worst-case timing as accurate to fractions of a picosecond is similar to the way the NFL treats first down. There is a huge pile of players. Somewhere in there is the ball. Eventually people get up and the referee places the ball somewhere roughly reasonable. And then they get out the chains and see to fractions of an inch whether it has advanced ten yards or not.

Statistical static timing analysis (SSTA) allows some of this variability to be examined. There is a problem in static timing of handling reconvergent paths well, so that you don't simultaneously assume that the same gate is both fast and slow. It has to be one or the other, even though you need to worry about both cases.

But there is a more basic issue. The typical die is going to be at a typical process corner. But if we design everything to worst case, then we are going to have chips that actually have a much higher performance than necessary. Now that we care a lot about power this is a big problem: chips consume more power than necessary, giving us all that performance we cannot use. There has always been an issue that the typical chip has performance higher than we guarantee, and when it is important, we bin the chips for performance during manufacturing test.
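The gap between a worst-case corner and the statistical view is easy to see with a toy Monte Carlo model: sum independent per-stage delays along a path and compare the distribution's tail to the corner number. The stage count, mean and sigma below are made up for illustration, not taken from any real library:

```python
import random

def path_delay_samples(stages=10, mean=1.0, sigma=0.1, n=10000, seed=42):
    # Sum independent Gaussian stage delays along one path, n trials.
    rng = random.Random(seed)
    return [sum(rng.gauss(mean, sigma) for _ in range(stages))
            for _ in range(n)]

samples = path_delay_samples()
worst_case = 10 * (1.0 + 3 * 0.1)   # every stage at its 3-sigma corner: 13.0
print(max(samples), worst_case)     # the observed tail sits well below 13.0
```

Independent variations partially cancel (the sigma of the sum grows as the square root of the stage count, not linearly), so stacking every stage at its corner guards against a die that essentially never occurs.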
But with increased variability, the range is getting wider, and when power (rather than timing) is important, too fast is a big problem.

One way to address this is to tweak the power supply voltage to slow down the performance to just what is required, along with a commensurate reduction in power. This is called adaptive voltage
CDMA tales

It is very rare for a company to develop a new standard and establish it as part of creating differentiation. Usually companies piggyback their wares on existing standards and attempt to implement them better than the competition in some way. There were exceptions with big companies. When AT&T was a big monopoly it could simply decide what the standard would be for, say, the modems of the day or the plug your phone would use. IBM, when it was an effective monopoly in the mainframe world, could simply decide how magnetic tapes would be written. I suppose Microsoft can just decide what .NET is and millions of enterprise programmers jump.

Qualcomm, however, created the basic idea of CDMA, made it workable, owned all the patents, and went from being a company nobody had heard of to being the largest fabless semiconductor company; it has even broken into the list of the top 10 largest semiconductor companies.

The first time I ran across CDMA, it seemed unworkable. CDMA stands for code-division multiple access, and the basic technique relies on mathematical oddities called Walsh functions. These are functions that everywhere take either the value 0 or 1 and are essentially pseudo-random codes. But they are very carefully constructed pseudo-random codes. If you encode a data stream (voice) with one Walsh function and process it with another at the receiver, you get essentially zero. If you process it with the same Walsh function, you recover the original data. This allows everyone to transmit at once using the same frequencies, and only the data stream you are trying to listen to gets through. It is
who built chips (even if they only sold them to people who already had a Qualcomm phone license). They were hated by everyone. Now that's differentiation. The royalty rates were too high for us and we ended up walking away from the deal.

I was in Israel two days from the end of a quarter when I got a call from Qualcomm. They wanted to do a deal, but only if all royalties were non-refundably pre-paid up front in a way they could recognize that quarter. Sounds like an EDA license deal! We managed to do a deal on very favorable terms. (I stayed up all night two nights in a row, after a full day's work, since I was 10 hours different from San Diego, finally falling asleep before we took off from Tel Aviv and having to be awakened after we'd landed in Frankfurt.) The license was only about $2 million or so in total, I think, but that was the relatively tiny amount Qualcomm needed to avoid having a quarterly loss, thereby impacting their stock price and their ability to raise the funds that they would need to make CDMA a reality. Which they proceeded to do.
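The orthogonality trick behind the Walsh functions described earlier can be sketched in a few lines. This is a toy illustration only, using the conventional ±1 representation of the codes rather than 0/1, and ignoring noise, power control, synchronization and everything else that makes real CDMA hard:

```python
def walsh_codes(order):
    # Sylvester construction: 2^order mutually orthogonal codes of length 2^order.
    codes = [[1]]
    for _ in range(order):
        codes = ([row + row for row in codes] +
                 [row + [-x for x in row] for row in codes])
    return codes

def spread(bits, code):
    # Each +1/-1 data bit is replaced by the bit times the whole code sequence.
    return [b * c for b in bits for c in code]

def despread(signal, code):
    # Correlating with the right code recovers the bits; wrong codes sum to zero.
    n = len(code)
    return [1 if sum(s * c for s, c in zip(signal[i:i + n], code)) > 0 else -1
            for i in range(0, len(signal), n)]

# Two users transmit simultaneously on the same channel:
a, b = walsh_codes(2)[1], walsh_codes(2)[2]
channel = [x + y for x, y in zip(spread([1, -1], a), spread([-1, -1], b))]
print(despread(channel, a), despread(channel, b))  # each stream comes back intact
```

Because every pair of distinct codes correlates to exactly zero, the second user's signal vanishes when you despread with the first user's code, which is the "everyone transmits at once" property the essay describes.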
pharmaceuticals. Drugs cost a lot less to make than they sell for, but if you artificially (by fiat or excessive discounting) reduce the prices too much, then current drugs are cheap but no new ones will be forthcoming. Unlike with drugs that don't get developed, if there are no workable tools for the next process node, then we will all know what we are missing; it is not just a profit opportunity foregone, it is a disaster.

The next problem with EDA is that you can't get the job done with tools from only one vendor. So if you use SaaS to deliver all your EDA tools, you will repeatedly need to move the design from one vendor to another. But these files are gigabytes in size and not so easily moved. So it seems to me that if SaaS is going to work, it has to be through some sort of intermediary who has all (or most) tools available, not just the tools from a single vendor. If you use a Cadence flow but you use PrimeTime (Synopsys) for timing signoff and Calibre (Mentor) for physical verification, then this doesn't seem workable unless all are available without copying the entire design around.

Another problem is that SaaS doesn't work well for highly interactive software. Neither Photoshop nor layout editors seem like good candidates for SaaS, since the latency kills the user experience versus a traditional local application. Yes, I know Adobe has a version of Photoshop available through SaaS, but go to any graphics house and see if anyone uses it.

There are some genuine advantages in SaaS. One is that software update is more painless since it is handled at the server end. You don't normally notice when Google tweaks its search algorithm. But designers are rightly very wary of updating software during designs: better the bugs you know than some new ones. So again, EDA seems to be a bit different, at least historically.

The early part of the design process and FPGA design are a better place for SaaS, perhaps.
The files are smaller even if they need to be moved, and the market is more elastic (not everyone is already using the best productivity tools). But this part of the market already suffers from difficulty in extracting value from the market, and SaaS risks reducing the price without a corresponding
And don’t forget: almost everyone has more than the averagenumber of legs.
Chapter 9. Investment and Venture Capital

EDA startups can't get from founding the company to break-even without investment. If you and your cofounders are all rich, then you can put in the money yourself. But most people require investment from outside: initially perhaps the 3 Fs (friends, family and fools) and, later, venture capital.

This chapter looks at venture capital for EDA, and also some of the submarkets of EDA that may be attractive for investment.
the current freeze in EDA investment, it is over for the time being and maybe forever.

One piece of advice I remember seeing, I forget where, is never to do a job that has significant non-monetary compensation for doing it. Too many people will want to do it for those other reasons. Everyone wants to open a restaurant, write a book, and be an actor.

The company where my son works in San Francisco advertised for a graphic designer on Craigslist. They took the ad down after over 200 people had applied for the job. They took the ad down after…four hours. Too many people want to be graphic designers because they think it is cool, or arty, rather than because it is a profitable business to which they are especially well suited.

The person sitting next to me on a flight to Chicago once told me that he was in the concrete business. He had a dozen concrete plants in towns you've never heard of in unfashionable parts of the Midwest. The economics were simple. A town can support one concrete plant but not two. Consequently, the owner of a concrete plant has a sort of monopoly. Sure, a contractor can buy concrete from another plant, but that is one town over, perhaps an additional 50 miles round trip for the concrete truck, a cost that makes it non-competitive. His plants returned over 30% of their capital every year. Concrete is far more profitable than EDA, partly because it is so boring.

If that guy was our Dad and we inherited the business, I'm sure we could all run it. But we don't even consider businesses like that because technology is more exciting. EDA is not badly paid by any means, but considering just how hard it is and how much training and knowledge is required, it is not that well paid either.

I've read (but not verified) that one very well-paid group of consultants are people who do Cobol programming. Everyone wants to program next-generation web applications using AJAX and Python, not some crusty programming language designed in the 1950s.
How much further from the trendy cutting edge can you get?
to avoid having to absorb fab variances when the fab is not full, and to gain the capability to sell more than capacity when you have a strong order book.

In the web space, you no longer need to build your own high-capacity server farm. Amazon, Google and others will sell you server time and disk space on a purely variable-cost basis. If your website becomes a big hit, then scaling should be much more straightforward.

In some ways you can look at Amazon S3 or TSMC as companies that are in the business of making the up-front investment in fixed-cost assets and then charging you a variable cost to use them. Lots of other companies do the same. It doesn't cost an airline anything (well, not much) extra to fly an extra passenger; it is basically in the job of taking airplanes (fixed cost) and working out good business models to sell trips (variable cost). Cell-phone companies largely have a network of base stations (fixed cost) and work out how to charge each customer for using them (variable cost). It's not always obvious what the best model is for making the cost variable: do you charge data per megabyte, or unlimited data for a month? How does the money get split when you are roaming on other people's networks? Is data the same price as the digitized data underlying a voice call?

When supply chains disaggregate, usually one thing that happens is that non-core areas, especially ones involving fixed costs such as equipment or full-time employees, are divested. New companies spring up to specialize in providing that non-core activity as their core competence. Ross Perot made his fortune at EDS taking companies' IT departments off their hands and created a big specialist company to provide those services. Semiconductor companies got rid of their EDA groups and an EDA industry came into existence (Cadence, Synopsys, Mentor etc). Semiconductor companies got rid of some of their fabs and a foundry industry came into existence (TSMC, UMC, Chartered etc).
Semiconductor companies got rid of their technology development (TD) groups and rely on the foundry industry for that too. One interesting area of debate right now is whether design is next, and how much of design. Nokia already moved its
Technology of SOX

Sarbanes-Oxley, often abbreviated to SOX, is a set of accounting rules that were introduced by Congress in response to the accounting scandals of Enron, Worldcom and their like during the dotcom boom. It is a mixture of different regulations, some concerned with how companies are audited, some concerned with the liability a CEO and CFO have for irregularities in their companies, and so on. Many provisions are completely uncontroversial.

But the biggest problem, especially for startups, comes about from sections 302 and 404. Section 302 says that companies must have internal financial controls, and that the management of the company must have evaluated them in the previous 90 days. Section 404 says that management and the auditors must report on the adequacy and effectiveness of the internal controls.

In practice, this means that auditors must repeatedly go over every minute piece of data, such as every cell in a spreadsheet, every line on every invoice, before they can sign off. For a small company, the audit fees for doing this are a minimum of $3 million per year. For larger companies, the amount grows, of course, but slowly, so that it is much less burdensome for a large established company (where it might be 0.06% of revenue) than for a small one.

Only public companies are required to comply with SOX, so you could argue that it doesn't matter that much for a small venture-funded startup. At one level, that is true. But it has also meant that a company has to be much larger to go public.
FPGA software

Why isn't there a large thriving FPGA software market? After all, something like 95% of semiconductor designs are FPGAs, so there should be scope for somebody to be successful in that market. If the big EDA companies have the wrong cost structure, then a new entrant with a lower cost structure, maybe.

In the early 1980s, if you wanted to get a design done then you got tools from your IC vendor. But gradually, the EDA market came into being as a separate market, driven on the customer side by the fact that third-party tools were independent of
quarters' P&Ls to reflect the fact that all that R&D that was done should really have been set against revenue back then. When we were allowed to account for acquisitions through pooling of assets, it was closer to this but still got stuck with the goodwill, which really also should be partially set against prior quarters.

Anyway, Wall Street loves this sort of deal, whatever the price, since it is seen as a write-off (purchase price) leaving a leaner, cleaner company to make more profit going forward. It doesn't care about prior quarters anyway.

By contrast, if bigCo instead spent $1 million per quarter for the previous couple of years, which is much less than the $100 million it acquired startupCo for, then Wall Street would have penalized it by lowering its stock price due to the lower profitability. Since the investment doesn't show on the balance sheet, it is a pure loss with no visible increase in anything good.

Of course, it is hard for anyone, especially financial types on Wall Street, to know if the investment is going to turn out to be Design Compiler (good) or Synergy (not good), if it is Silicon Ensemble (good) or Route66 (not good). But the same could be said about any other investment: is that expensive factory for iPods (good) or Segways (bad)?

When a company goes public, it sells some shares for cash, so ends up with lots of cash in the bank. But it then finds that it is hard to spend that cash except on acquisitions. If it invests it in R&D, then the net income will be lower than before and so the share price will decline from the IPO price due to the reduced profitability. If it uses it to acquire companies, then prior to the acquisition, its profit is high (no investment) so its stock price is high. After the acquisition, its profit is high (new product to sell with largely sunk costs). At the acquisition, Wall Street doesn't care because it is a one-time event and Wall Street never cares about one-time events.
Even if, like emergency spending in Congress, they happen every year.

I think it is bad when accounting rules, in effect, force a company to make tradeoffs that are obviously wrong. It is obviously better to develop a tool for $10 million than buy a company that builds the same tool for $100 million. Yet Wall Street prefers it, so
Royalties

Venture capitalists love royalties. They love royalties because they think that they might get an unexpected upside, since they are hoping that a customer, in effect, signs up for a royalty and sells far more of their product than they expected and thus has to pay much more royalty than expected. Since a normal successful EDA business is predictable (license fees, boring) it doesn't have a lot of unlimited upside.

My experience is that you need to be careful how to structure royalty deals to have any hope of that happening. At VLSI, I remember we once had a deal to build a chip for one of the Japanese game companies if we could do it really fast (we were good at that sort of thing). It needed lots of IP, so we just signed everyone's deal with no license fees for as long as possible, but which all had ridiculous royalty rates. We probably had a total of about 25% royalty on the part, more than our margin. But we reasoned as follows: "One in three, the project gets canceled and never goes into production (it did); one in three it goes into production, but we never ship enough volume to reach the point we pay more in royalties than we would in license fees; one in three it is a big success and we tell everyone the royalty they are going to get, and if they don't accept, we design them out."

IP is more mature now, so the royalty rates and contracts are more realistic. Back then the lawyers could consume the whole design cycle negotiating terms and there wasn't enough time to wait. Everyone thought their non-differentiated IP was worth a big royalty of a few percent, even though a big SoC (even then) might have dozens of IP blocks that size. So perhaps the problem has gone away. If you were on a short time to market, you simply
Term sheets

What is a term sheet? If you raise money from a venture capitalist (or an experienced angel), then the most important conditions for the investment will be summarized in a term sheet. It sounds like this should be a simple document of a single sheet of paper, but in fact, these days, it is dozens of pages of legalese that is a good way towards the final legal documents. In fact, it is so complex that typically the really, really important stuff will indeed be summarized in a few bullet points in a different piece of paper (or an email).
The antiportfolio

You have to be pretty brave to be a venture capitalist and keep an "anti-portfolio" page on your website. This lists the deals that you were offered but turned down. Bessemer Ventures is the only VC I know that does this. They've had some great exits over the years (such as Skype, Hotjobs, Parametric Technology or, going back, Ungermann-Bass). But they also turned down FedEx (7
clearly the company stood or fell based on how good the e-beam technology turned out to be. By then, I'd got smart enough to know that you don't want to be in an "expense" department in a semiconductor company. It turned out the e-beam technology didn't work that well and the company failed. I think Cadence picked over the bones of the software division.

I was never offered a single digit badge number job at Google or anything like that. But it is always hard to tell which jobs are going to be with companies that turn out to be wildly successful. I asked a friend of mine who worked for me briefly as my finance guy before going on to be the CFO of Ebay and lead the most successful IPO of all time what was the most important criterion for success: "Luck."
Entrepreneurs ages

Entrepreneurs are all twenty-somethings straight out of college these days, aren't they? Not so fast; it turns out that this is an illusion. It's probably true in some areas, such as social networking, where the young are the target audience too (at least initially).
CEO pay

If you are an investor, what do you think the best predictors for success for a startup are? If you could pick only one metric, which one would you use?
they are that much more numerous) unless the company managed to bootstrap without any significant investment.

Thiel has a company, younoodle, that (among other things) attempts to predict the value a startup might achieve 3 years from founding. It is optimized for Internet companies that have not yet received funding, so it may not work very well for semiconductor or EDA startups. And guess one of the factors that it takes into account when assessing how successful the company will be: CEO pay.
Epilogue

I've been in EDA for nearly 30 years, more, if you count the work I did before I left academia. I've been in engineering and marketing, run small companies and large divisions. Along the way, I've learned a lot about the industry and the ecosystem that it inhabits. I've tried to communicate as much of that as possible, initially in my blog and now in this book. I don't claim originality for a lot of what I say: it is either received wisdom or stuff I learned from others.

But it is stuff that I wish had all been in one place when I first entered the industry, especially as I moved up and started to have some decision making responsibility. I hope you have as much fun reading it as I did writing it.
Trampolines decorator
---------------------
``fn.recur.tco`` is a workaround for dealing with TCO without heavy stack utilization. Let's start with a simple example of recursive factorial calculation:
.. code-block:: python

    def fact(n):
        if n == 0: return 1
        return n * fact(n-1)
This variant works, but it's really ugly. Why? It uses memory too heavily, because it recursively stores all previous values in order to calculate the final result. If you execute this function with a big ``n`` (more than ``sys.getrecursionlimit()``), CPython will fail with
.. code-block:: python

    >>> import sys
    >>> fact(sys.getrecursionlimit() * 2)
    ... many many lines of stacktrace ...
    RuntimeError: maximum recursion depth exceeded
Which is good, since it protects you from terrible mistakes in your code.

How can we optimize this solution? The answer is simple: let's transform the function to use a tail call:
.. code-block:: python

    from fn import recur

    @recur.tco
    def fact(n, acc=1):
        if n == 0: return False, acc
        return True, (n-1, acc*n)
``@recur.tco`` is a decorator that executes your function in a ``while`` loop and checks the output:
- ``(False, result)`` means that we are finished
- ``(True, args, kwargs)`` means that we need to call the function again with other arguments
- ``(func, args, kwargs)`` to switch the function to be executed inside the while loop
The last variant is really useful when you need to switch the callable inside the evaluation loop. A good example of such a situation is recursively detecting whether a given number is odd or even:
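The code example that originally followed here has been lost from this copy. Below is a sketch of what such an example looks like: the mutually recursive ``even``/``odd`` pair is the classic illustration, and the ``tco`` class is a minimal stand-in written just for this sketch (an assumption, not the real ``fn.recur.tco`` implementation) so the snippet runs on its own:

```python
class tco:
    """Minimal stand-in for fn.recur.tco -- a sketch, not the library itself."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args):
        action = self
        while True:
            result = action.func(*args)
            head = result[0]
            if head is True:        # call the same function again with new args
                args = result[1]
            elif head is False:     # finished: unwrap and return the result
                return result[1]
            else:                   # head is a decorated function: switch to it
                action, args = head, result[1]

@tco
def even(x):
    if x == 0: return False, True
    return odd, (x - 1,)

@tco
def odd(x):
    if x == 0: return False, False
    return even, (x - 1,)

print(even(100000))  # True, with no RecursionError
```

Because ``even`` and ``odd`` are plain objects wrapping the underlying functions, the evaluation loop can hop between them without ever growing the call stack.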
Itertools recipes
-----------------
``fn.uniform`` provides you)
``fn.iters`` provides high-level recipes to work with iterators. Most of them are taken from the Python itertools documentation.
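As an illustration of the kind of recipe involved, here is the classic ``pairwise`` recipe from the Python itertools documentation (shown in plain stdlib form, independent of ``fn``):

```python
from itertools import tee

def pairwise(iterable):
    # s -> (s0, s1), (s1, s2), (s2, s3), ...
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

print(list(pairwise([1, 2, 3, 4])))  # [(1, 2), (2, 3), (3, 4)]
```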
It also gives you a more readable, in many cases, "pipe" notation to deal with function composition:
.. code-block:: python

    from fn.op import apply, flip
    from operator import add, sub, mul

    assert apply(add, [1, 2]) == 3
    assert flip(sub)(20, 10) == -10
    assert list(map(apply, [add, mul], [(1, 2), (10, 20)])) == [3, 200]
.. code-block:: python

    from operator import methodcaller
    from fn.monad import optionable

    class Request(dict):
        @optionable
        def parameter(self, name):
            return self.get(name, None)

    r = Request(testing="Fixed", empty=" ")
    fixed = (r.parameter("testing")
              .map(methodcaller("strip"))
              .filter(len)
              .map(methodcaller("upper"))
              .get_or(""))
``fn.monad.Option.or_call`` is a good method for trying several variants to end a computation. E.g., you have a ``Request`` class with optional attributes ``type``, ``mimetype``, and ``url``, and you need to evaluate the "request type" using at least one attribute:
.. code-block:: python

    from fn.monad import Option

    request = dict(url="face.png", mimetype="PNG")
    tp = (Option
          .from_value(request.get("type", None))  # check "type" key first
          .or_call(from_mimetype, request)        # or... check "mimetype" key
          .or_call(from_extension, request)       # or... get "url" and check extension
          .get_or("application/undefined"))
Installation
------------
To install ``fn.py``, simply::

    pip install fn
History
=======
- Author: Alexey Kachayev
- License:
Copyright 2013 Alexey Kachayev
- DOAP record: fn-0.4.3.xml | https://pypi.python.org/pypi/fn/0.4.3 | CC-MAIN-2015-35 | refinedweb | 475 | 51.55 |
Type Safe State Machines in TypeScript
State machines are an important and often-used implement in software development. Let's take a look at how to create one using TypeScript.
The first thing we need is a description of the states. I'm going to use basic types like strings, numbers, and symbols because it greatly simplifies the rest of the implementation and we don't lose any generality:
type Primitives = string | number | symbol;
Next, we need a description of the transition function/map:
type EventTypeMapping<I extends Primitives> = { [K in I]: unknown };

type TransitionMap<I extends Primitives, S, E extends EventTypeMapping<I>> = {
  [K in I]: (event: E[I], currentState: I, extraState: S) => I
};
That definition looks a bit elaborate, but in plain English it says: for every state we have a function that takes an event (E[I]), the current state (I), some extra state (S), and gives us back the next state. The events are indexed by the possible states the machine can be in because not every event is valid in every state. There is no way to actually enforce this constraint at compile time, so our transition functions must deal with all event types, which is what E[I] is about.
Now we can construct the machine. We'll use a class to model the machine with the above ingredients:
export class Machine<I extends Primitives, S, E extends EventTypeMapping<I>> {
  constructor(
    readonly state: S,
    readonly initialState: I,
    readonly transitions: TransitionMap<I, S, E>,
    public currentState: I = initialState
  ) { }
}
The above machine doesn't do much because we are not leveraging the transition functions so let's remedy that.
// ...
step(event: E[I]): [I, I] {
  const currentState = this.currentState;
  const newState = this.transitions[currentState](
    event, currentState, this.state);
  this.currentState = newState;
  return [currentState, newState];
}
// ...
That's it. We just implemented an abstract state machine. Abstract state machines aren't that useful so let's implement a concrete one that models an elevator in a 3 story building.
The elevator will start on the 1st floor and then go all the way to the 3rd floor. After reaching the 3rd floor it will go all the way down to the 1st floor and then repeat this cycle. Like I said, it's very simple.
The first thing we need is the description of the states. The most relevant part of the state is the floor the elevator is on so that's what we'll use for the state description.
type Floor = 1 | 2 | 3;
This elevator doesn't take any inputs so the event mapping for each state is also very simple.
type AllowedEvents = { 1: void; 2: void; 3: void };
We need one more bit of information. The elevator needs to know which direction it's going.
type ExtraState = { direction: 'up' | 'down' };
The transition functions are a runtime concern, so we're going to fill them in as we instantiate our abstract machine.
const elevator = new Machine<Floor, ExtraState, AllowedEvents>(
  { direction: 'up' },
  1,
  {
    1: (e, f, s) => { return (s.direction = 'up', 2); },      // Can only go up
    2: (e, f, s) => { return s.direction === 'up' ? 3 : 1 },  // Can go up or down
    3: (e, f, s) => { return (s.direction = 'down', 2); }     // Can only go down
  }
);
Let's also step through the transitions a few times to verify that it's working as we expect.
console.log(`*starting*`);
for (let i = 0; i < 12; i++) {
  const [previousFloor, nextFloor] = elevator.step(void(0));
  console.log(`Elevator going ${elevator.state.direction}: ${previousFloor} -> ${nextFloor}`);
  if (elevator.currentState === 1) {
    console.log(`---Cycle complete---`);
  }
}
console.log(`*done*`);
*starting*
Elevator going up: 1 -> 2
Elevator going up: 2 -> 3
Elevator going down: 3 -> 2
Elevator going down: 2 -> 1
---Cycle complete---
Elevator going up: 1 -> 2
Elevator going up: 2 -> 3
Elevator going down: 3 -> 2
Elevator going down: 2 -> 1
---Cycle complete---
Elevator going up: 1 -> 2
Elevator going up: 2 -> 3
Elevator going down: 3 -> 2
Elevator going down: 2 -> 1
---Cycle complete---
*done*
Looks correct to me. Making the elevator less simple is left as an exercise for the reader. See if you can model people requesting to be taken to a specific floor as a starting point.
All the code along with the above example lives at GitHub: davidk01/state-machine.
These Drupal 6 imagecache files are included here as part of our Drupal source code examples project. (Please see that project page, including our promise to donate back to the Drupal open source community.)
This content is from the Drupal 6 imagecache project README file:

Getting Started:

1. Upload and enable both the ImageCache and ImageCache UI modules.
2. Go to Administer -> Site Building -> ImageCache. Click on the local task tab labeled "Add New Preset" to build a new set of image manipulation actions.
3. Enter a descriptive name of your choice (e.g. 'product_thumbnail') into the "Preset Namespace" box and click "Create New Preset".
4. Add actions to your preset that tell ImageCache how to manipulate the original image when it is rendered for display. Available actions include crop, scale, desaturate (grey scale), resize, and rotate. Multiple actions may be added to a preset.
5. Each action is configured in its own form, and the actions may be reordered from the preset's configuration form. If you need to make any changes to the order of actions in a preset, remember to click "Update Preset" when you're finished.

Viewing Manipulated Images:

Your modified image can be viewed by visiting a URL in this format:

For example, if your preset is named 'product_thumbnail' and your image is named 'green-widget.jpg', you could view your modified image at:...

NOTE: Each role that wishes to view the images generated by a particular preset must be given permission on the admin/user/permissions page.

ImageCache also defines a theme function that you can use in your modules and themes to automatically display a manipulated image. For example, to use the theme function in a .tpl.php file, add the following line where you would like the image to appear:

<?php print theme('imagecache', 'preset_namespace', $image_filepath, $alt, $title, $attributes); ?>

Change 'preset_namespace' to the name of your imagecache preset and make sure that $image_filepath or some other variable contains the actual filepath to the image you would like to display.
$alt, $title and $attributes are optional parameters that specify ALT/TITLE text for the image element in the HTML or other attributes as specified in the $attributes array.

Using ImageCache with Contributed Modules:

ImageCache presets can be put to use in various other modules. For example, when using CCK with the Imagefield module, you can use the "Display fields" local task tab to choose a preset to apply to images in that field. Similarly, you can specify a preset when displaying images attached to nodes using Imagefield in a View through the Views UI.

For more information, refer to. (Images, page names, and form field names may refer to previous versions of ImageCache, but the concepts are the same.)
These are direct links to the Drupal 6 imagecache project source code files included in this project: | http://alvinalexander.com/drupal-code-examples/drupal-6-imagecache-module.shtml | CC-MAIN-2020-40 | refinedweb | 473 | 55.24 |
#include <wx/filesys.h>
Classes derived from wxFileSystemHandler are used to access virtual file systems.
Its public interface consists of two methods: wxFileSystemHandler::CanOpen and wxFileSystemHandler::OpenFile.
It provides additional protected methods to simplify the process of opening the file: GetProtocol(), GetLeftLocation(), GetRightLocation(), GetAnchor(), GetMimeTypeFromExt().
Please have a look at overview (see wxFileSystem) if you don't know how locations are constructed.
Also consult the list of available handlers.
Note that the handlers are shared by all instances of wxFileSystem.
wxPerl Note: In wxPerl, you need to derive your file system handler class from
Wx::PlFileSystemHandler.
Constructor.
Returns true if the handler is able to open this file.
This function doesn't check whether the file exists or not, it only checks if it knows the protocol. Example:

bool MyHandler::CanOpen(const wxString& location)
{
    return (GetProtocol(location) == "http");
}
Must be overridden in derived handlers.
Works like wxFindFirstFile().
Returns the name of the first filename (within filesystem's current path) that matches wildcard. flags may be one of wxFILE (only files), wxDIR (only directories) or 0 (both).
This method is only called if CanOpen() returns true.
Returns next filename that matches parameters passed to wxFileSystem::FindFirst.
This method is only called if CanOpen() returns true and FindFirst() returned a non-empty string.
Returns the anchor if present in the location.
See wxFSFile::GetAnchor for details.
Example:

GetAnchor("index.htm#chapter2") == "chapter2"
Returns the left location string extracted from location.
Example:

GetLeftLocation("file:myzipfile.zip#zip:index.htm") == "file:myzipfile.zip"
Returns the MIME type based on extension of location.
(While wxFSFile::GetMimeType() returns real MIME type - either extension-based or queried from HTTP.)
Example:

GetMimeTypeFromExt("index.htm") == "text/html"
Returns the protocol string extracted from location.
Example:

GetProtocol("file:myzipfile.zip#zip:index.htm") == "zip"
Returns the right location string extracted from location.
Example:

GetRightLocation("file:myzipfile.zip#zip:index.htm") == "index.htm"
Dj can go ahead and take a look at the form we’ll be building; it’s nothing fancy, but it’s enough to cover most of the bases. Make sure to try leaving fields blank, filling in an incorrect total in the second field, etc. And try it without JavaScript enabled — it’ll submit just like any “normal” HTML form.
Note: if you’re already pretty handy with both Django and JavaScript, you might just want to skim this article and look at the sample code; while I’d like to think anybody can learn a thing or two from this, my target audience here is people who know Django but don’t necessarily know much JavaScript, and so I’m going to spend a little time covering basics and general best practices as I go.
Now, let’s dive in.
First things first: think about what you’re going to do
The single most important question to ask when you’re thinking about AJAX is whether you should be using it at all; whole books could be written about when to use it and when not to use it. In general, I think AJAX is at its best for two main types of tasks:
- Fetching data that needs to be regularly refreshed, without the need for manual reloads or (worse)
metarefreshing.
- Submitting forms which don’t need to send you to a new URL when they’re done processing (for example: posting a comment, or editing part of a page’s content in-place).
The key theme there, you’ll notice, is that in both cases it makes sense for the browser to remain at the same URL during and after the AJAX effect; if you need to boil this down to a single general rule of thumb, that’s the one I’d recommend.
Now, think about it some more
The second step, equally important, is to plan out the way your effect would work without the AJAX, because that’s what you should write first. In this case, it means we’ll write a view and a template, and make sure that they work on their own, before adding any JavaScript at all. Jeremy Keith, who’s a man worth listening to, calls this technique “Hijax”, and I love that term. So let’s write our view to work normally first, and add the necessary AJAX support as we go; as it turns out, that’s going to be really easy.
A view to a form
The view is pretty simple; we want to verify that you’ve filled in both fields, and that you’ve figured out the correct answer to a little math problem. And regardless of whether you filled things in correctly or not, we want to preserve the things you entered across submissions.
Note: If I really wanted to be pedantic in this view, I’d do this using a custom manipulator and get all the validation for free (remember that manipulators don’t have to create or update instances of models, but can be used for any form you want validation on; the login view in
django.contrib.auth, for example, uses a manipulator this way). But this is just a quick and dirty view to support the AJAX example and manipulators are going to be changing soon anyway, so I’ll skip out on that for now.
So our view (let’s call it
ajax_example) might look something like this:
def ajax_example(request): if not request.POST: return render_to_response('weblog/ajax_example.html', {})'}) return render_to_response('weblog/ajax_example.html', response_dict)
Here’s how it breaks down:
- If the request wasn’t a
POST, we can go straight to the template with an empty context.
- If it was a
POST, we grab the
nameand
totalsubmitted, grabbing a default value of
Falseif they weren’t submitted.
- Go ahead and stick them in a dictionary for later use in creating a context.
- Make sure
totalis something that converts to an
int; if it doesn’t, set it to
False(which will coerce to zero).
- Do the actual checking: we make sure
nameand
totalactually had values, and if they don’t, create error messages saying they’re required. And if
totalwas submitted but wasn’t 10, add an error saying it’s incorrect. If everything’s OK, add a
successvariable to the dictionary we’ll be using for context.
- Wrap that up in a
render_to_response.
The template is pretty easy, too; you can just view source on the example to see what it looks like. The only thing you can’t see in there is that the
name and
total fields have
{{ name }} and
{{ total }}, respectively, inside their
value attributes, so if the form’s already been submitted they’ll be pre-filled.
Also, you might notice that the error messages look suspiciously like the ones you’d get in the Django admin; I’ve cribbed a little from the admin application’s stylesheet and icons to get that effect. Check the block of CSS at the top of the template to see how it’s done.
Adapting the view for AJAX
We’re going to need something on the backend which will respond to the AJAX submission, and from a logical point of view it makes sense to have it be the same view — why repeat the logic when the only change will be in the output?
Now, there’s no standard convention thus far in the Django world for distinguishing between “normal” and “AJAX’ submissions to a view, but in a few things I’ve worked on I’ve gotten into the habit of using an extra parameter in the URL called
xhr (mnemonic for “XMLHttpRequest”). So where the form’s “normal” submission URL is
/examples/ajax/1/, the AJAX submission URL will be
/examples/ajax/1/?xhr. In addition to being easy to check for from the view, this has the advantage of not requiring any changes to your site’s URL configuration.
And once we know whether we’re reponding to an AJAX submission or not, the response is extremely easy to generate; a Django template context is just a Python dictionary, and a Python dictionary translates almost perfectly into JSON, which is one of the simplest possible formats for an AJAX response. And best of all, Django includes, as part of its serialization framework, the
simplejson library, which provides an easy way to convert between Python objects and JSON.
So let’s take a look at the updated view:
from django.utils import simplejson def ajax_example(request): if not request.POST: return render_to_response('weblog/ajax_example.html', {}) xhr = request.GET.has_key('xhr')'}) if xhr: return HttpResponse(simplejson.dumps(response_dict), mimetype='application/javascript') return render_to_response('weblog/ajax_example.html', response_dict)
The only thing that’s changed here, really, is that we test for the
xhr parameter in the URL and, if it’s present, we return an
HttpResponse whose content is the JSON translation of the dictionary we would have used for template context. We use
application/javascript for the response’s
Content-Type header because the default —
text/html — can open up security holes.
And yes, it’s that easy to support AJAX on the server side.
In part 2, which will show up tomorrow or Wednesday, we’ll walk through writing the JavaScript side of things; go ahead and take a look at it if you’re curious, and hopefully any questions you might have about it will soon be answered. | https://www.b-list.org/weblog/2006/jul/31/django-tips-simple-ajax-example-part-1/ | CC-MAIN-2019-04 | refinedweb | 1,249 | 64.54 |
USAGE:
import com.greensock.TweenLite; import com.greensock.plugins.*; TweenPlugin.activate([FrameBackwardPlugin]); //activation is permanent in the SWF, so this line only needs to be run once. TweenLite.to(mc, 1, {frameBackward:15});
Note: When tweening the frames of a MovieClip, any audio that is embedded on the MovieClip's timeline (as "stream") will not be played. Doing so would be impossible because the tween might speed up or slow down the MovieClip to any degree.
Copyright 2008-2013, GreenSock. All rights reserved. This work is subject to the terms in or for Club GreenSock members, the software agreement that was issued with the membership. | http://www.greensock.com/asdocs/com/greensock/plugins/FrameBackwardPlugin.html | CC-MAIN-2022-40 | refinedweb | 105 | 59.5 |
Hello, I am new to C++ and programming in general and am having a problem with a word unscrambler I'm trying to code. I have two txt files, words.txt and wordlist.txt that contain the scrambled words and unscrambled words respectively. As a test, my scrambled word is tac and my wordlist contains: dog bark cat. It finds the right word, but it has trouble unscrambling it.
My code is:

Code:
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

int main()
{
    fstream list("wordlist.txt");
    fstream words("words.txt");
    string sWords, sList;

    getline(words, sWords);
    while(!list.eof())
    {
        getline(list, sList);
        if(sWords.length() == sList.length())
        {
            for(int i = 0; i < sWords.length(); i++){
                for(int k = 0; k < sList.length(); k++){
                    if(sWords[i] == sList[k]){
                        sWords[i] = sList[k];
                    }
                }
            }
        }
        cout << sWords << endl;
        getline(words, sWords);
    }
    cin.get();
    return 0;
}

My output is:

Code:
tac
tac
tac
tac
... loops for a while >_>

I've been trying for a long time and am wondering if someone could point me in the right direction. Thanks
A basic Question
Hello everyone,
after years of QWidget only creation I'm finally dabbling in QML.
My goal is to create a "Button" item that changes its displayed image depending on 2 things.
If the mouse cursor is hovering over it and if a state is set or not.
In QWidgets I would use a Stylesheet for that, but that doesn't seem to be a thing for QML so I set different sources:
Item {
    id: root

    signal activated(int id)

    property int m_ID: 0
    property bool m_connected: false

    property string imgDefault: ""
    property string imgConnected: ""
    property string imgHoverD: ""
    property string imgHoverC: ""

    function setConnected(connected){
        m_connected = connected
        if(m_connected)
            mainImg.source = imgConnected
        else
            mainImg.source = imgDefault
    }

    Image {
        id: mainImg
        source: imgDefault
        anchors.fill: parent
    }

    MouseArea{
        anchors.fill: parent
        hoverEnabled: true
        onClicked: root.activated(m_ID)
        onEntered: {
            if(!m_connected){
                mainImg.source = imgHoverD
            }else{
                mainImg.source = imgHoverC
            }
        }
        onExited: {
            if(!m_connected){
                mainImg.source = imgDefault
            }else{
                mainImg.source = imgConnected
            }
        }
    }
}
Is this the right way to do it?
It feels wrong :(
It's OK, although the general "philosophy" behind QML is for it to be declarative: you define what you want, and Qt does it for you. Your implementation is imperative: you state exactly what you want.
Here is an alternative, more declarative solution:
Item {
    id: root
    property bool isConnected: false

    Image {
        id: mainImg
        source: {
            if (isConnected) {
                if (mouseArea.containsMouse)
                    return imgHoverC
                else
                    return imgConnected
            } else {
                if (mouseArea.containsMouse)
                    return imgHoverD
                else
                    return imgDefault
            }
        }
        anchors.fill: parent
    }

    MouseArea {
        id: mouseArea
        anchors.fill: parent
        hoverEnabled: true
    }
}
With that (you can remove other functions you created), the Image element will automatically respond to any changes in both mouse area and isConnected property.
Hi
Cool. How does it know what code to run when property bool isConnected changes?
the source: {} is aware it uses isConnected inside and hence reacts?
@mrjj said in A basic Question:
the source: {} is aware it uses isConnected inside and hence reacts?
Is that a question to me?
Yes, it is aware. That's how QML engine works, it builds up an "understanding" of which property update should trigger which bindings to be recalculated. In this case, "source" will be recalculated each time root.isConnected is changed, and each time mouseArea.containsMouse changes.
Ah, I just noticed the OP has property bool m_connected: false, I should have used that instead of adding my isConnected. Anyway, that's a small change to make.
I remade that item(class?) about 4 times already, each time it has less code in it and becomes faster.
Your example works splendidly!
I technically don't even need the setConnected function.
Will take a while to get my head around this different style of QML ...
Thanks again, time to dig back in!
Hey, no problem, I'm happy to explain :-)
@J.Hilk said in A basic Question:
I remade that item(class?) about 4 times already, each time it has less code in it and becomes faster.
Your example works splendidly!
Great, good to hear.
I technically don't even need the setConnected function.
Yes, it should not be necessary. If you need to modify the value of m_connected, even from other file where your button is added, it will be enough to modify it via dot syntax. The change signal is emitted automatically. So, assuming your button is saved in MyButton.qml file, you can do this:
// some other QML file, for example main.qml
MyButton {
    id: myButton
    m_connected: true
}
Thanks again, time to dig back in!
Happy coding! :-)
@sierdzio
Super
Have a tiny little one extra
It knows to recalc source when MouseArea changes simply because its inside its scope?
I have the same issue as J.Hilk, trying to apply widget logic to QML and it's really not. :))
@mrjj said in A basic Question:
It knows to recalc source when MouseArea changes simply because its inside its scope?
Now, how scopes work in QML is a bit complicated, I'm sure you'll encounter lots of WTF? moments :-)
Yes, in this case the mouse area is in scope (the Image can access its properties by calling it by ID, in my example the id is mouseArea). But in general, all QML engine needs is to get the onPropertyChanged signal - it does not matter where it is coming from, it will simply register that signal as "hey, Property changed its value, so I need to update the value here, too". It can be some global context property, QML singleton, other QML component, or even some C++ QObject that was exposed/ connected (via context property, or Connections element for example) and is visible to Image component.
Some things to be aware of here:
- the binding will be recalculated each time some (relevant) property changes. This can sometimes mean a lot of updates per second, for example if you bind to mouse.x (one tends to move the mouse quite a lot :-))
- thus, it is important not to overdo it (for example, if you create a Q_PROPERTY in C++, remember not to emit the changed() signal when the property value has not changed: if (newValue == oldValue) return;. Qt Creator automatically generates good code for properties, thankfully)
- if you (at some point) assign a value to property in JavaScript, the binding is broken. It won't update anymore. Here's a short example:
Item { id: obj1; height: obj2.height * 2 }
Item { id: obj2 }

MouseArea {
    onClicked: obj1.height = obj2.height * 3 // Boom!
    // The binding is broken when you click the mouse area.
    // Why? You tell obj1 that the height should be set
    // to a new value, right here right now. To QML, it is
    // the same as if you set it to obj1.height = 150.
    // Constant value
}
In the example, if you want to change the binding to obj1.height: obj2.height * 3 and keep it updating when obj2.height changes, you can use the Binding element.
Oh yes lots of those moments :)
Aha, so if u set to a constant value it wont auto update.
What if multiple objects are using the same binding?
Is it then disabled for all or only for that mouse area or is it globally?
@mrjj said in A basic Question:
What if multiple objects are using the same binding?
Each binding is used by single object. They are declared on the "receiving end", so to speak. Example:
Item { height: someObj.height + 15 }
Item { height: someObj.height + 15 }
Item { height: someObj.height + someObj.height }
These are 3 separate bindings. If you overwrite the height value in first Item with some constant, remaining 2 will still work and update automatically.
@sierdzio
Super. Then its all clear.
Also the global nature of it was escaping me.
like you can use
MyButton {
id: myButton
m_connected: true
}
with out any extern/include/add to scope extras.
Thank you.
@mrjj said in A basic Question:
with out any extern/include/add to scope extras.
Yes, although there are some rules here. Only top-level properties (defined in root element of any given QML file) are visible outside of the component. Also, no IDs are accessible outside of current QML file (with a few tiny exceptions). So:
/// Some other qml file
MyButton {
    m_connected: true // Works fine
    mouseArea.hoverEnabled: false // Error. The ID 'mouseArea' is not visible outside of MyButton.qml file,
                                  // and additionally hoverEnabled is not a top-level property
}
@sierdzio
oh
so only first level of scope ?
Item {
can_be_seen
Item2 {
all here is private?
}
}
well maybe its good IDs are not global visible or one could make some crazy spaghetti code very easy.
@mrjj said in A basic Question:
@sierdzio
oh
so only first level of scope ?
Yes, only first level, unless I am mistaken ;-) Writing from memory now. And this applies to using the component somewhere else (in a different QML file). Within single file, there are no such strict visibility restrictions.
well maybe its good IDs are not global visible or one could make some crazy spaghetti code very easy.
Yea, it can be a bit annoying in the beginning, but enforces some rather good practices in the long run.
@sierdzio said in A basic Question:
well maybe its good IDs are not global visible or one could make some crazy spaghetti code very easy.
Yea, it can be a bit annoying in the beginning, but enforces some rather good practices in the long run.
I agree. Otherwise there's no private/public distinction in QML (and you can bypass even this visibility restriction at runtime if you really want to) but I think it's reasonable to hide those inside IDs because otherwise it would encourage a messy programming style with no real components. Now we at least have a possibility to have real "implementation details", some kind of data hiding. So sometimes it feels annoying but in the long run it's better.
About the original problem, here's another possible solution. Not as nice and tidy as @sierdzio's, but in some cases it might be clearer not to use nested if-elses; also, if you have to change several properties based on the same conditions you would otherwise have to duplicate those conditions. Here you can just add another property to PropertyChanges.
(changed image to rect to save some work...)
import QtQuick 2.6
import QtQuick.Controls 2.2
import QtQuick.Layouts 1.1

ApplicationWindow {
    visible: true
    width: 640
    height: 480

    ColumnLayout {
        id: columnLayout
        anchors.fill: parent

        Button { onClicked: root.isConnected = !root.isConnected }

        Item {
            id: root
            property bool isConnected: false
            Layout.fillHeight: true
            Layout.fillWidth: true

            Rectangle {
                id: mainImg
                anchors.fill: parent
                states: [
                    State {
                        name: "conn_mouse"
                        when: root.isConnected && mouseArea.containsMouse
                        PropertyChanges { target: mainImg; color: "red" }
                    },
                    State {
                        name: "conn_no_mouse"
                        when: root.isConnected && !mouseArea.containsMouse
                        PropertyChanges { target: mainImg; color: "green" }
                    },
                    State {
                        name: "noconn_mouse"
                        when: !root.isConnected && mouseArea.containsMouse
                        PropertyChanges { target: mainImg; color: "blue" }
                    },
                    State {
                        name: "noconn_nomouse"
                        when: !root.isConnected && !mouseArea.containsMouse
                        PropertyChanges { target: mainImg; color: "yellow" }
                    }
                ]
            }

            MouseArea {
                id: mouseArea
                hoverEnabled: true
                anchors.fill: parent
                onClicked: console.log("clicked")
            }
        }
    }
}
hi
yes it might be annoying at first, like the UI of widgets being private, but it saves you from pain down the road.
states
Oh that is a nice class. So that would be better if the m_connected state were more complex
or if we wanted to change more than the source property on click etc.
Thank you for sharing.
Just one stylistic note... Quick Controls 2 standard library qml code uses this extensively so it may be at least good to know even if you don't want to use it. It's alternative syntax for nested if-else. Modifying my own code, just set the rectangle color (or in sierzio's code the image source):
color: root.isConnected ? (mouseArea.containsMouse ? "red" : "green") : (mouseArea.containsMouse ? "blue" : "yellow")
This is from the Material style's Button.qml:
color: !control.enabled ? control.Material.hintTextColor :
       control.flat && control.highlighted ? control.Material.accentColor :
       control.highlighted ? control.Material.primaryHighlightedTextColor :
       control.Material.foreground
Heh, actually the first version of my snipped used the question mark notation, but I changed it to if-else because I thought it would be more readable.
It is definitely a good approach, and for simple cases I would recommend it - QML engine can optimize the question mark operator more heavily than if-else.
hi
oh my gosh, is that like a c++ ternary operator that can be nested ?
But its not super readable unless really short.
It's the same, a c++ ternary operator can be nested.
@GrecKo
Yep, i realized that after asking but I think i never saw one in c++
like
!m_seedsfilter ? good=true : m_seedsfilter==1 ? good=newClusters(Sp) : good=newSeed(Sp);
(ugly as hell)
Off-topic but it would be
good = !m_seedsfilter? true : m_seedsfilter == 1 ? newClusters(Sp) : newSeed(Sp);, it's the same notation in Js and in c++
Thanks
but was just live sample from
But back to topic a bit.
Do you know how much of JS that is supported in QML ?
Can i include .js stuff ?
@mrjj said in A basic Question:
Do you know how much of JS that is supported in QML ?
Can i include .js stuff ?
I think V4 engine implements full ECMA 5.1 specs, so you can run any JavaScript there, unless it uses newer features.
@sierdzio said in A basic Question:
ECMA 5.1 specs
so that is pretty old ?
5.1 Edition / June 2011
So most from
might not work as 6 years in Web tech is a decade ?
It is old, indeed. But a lot of projects like node.js, charts.js etc. seem to be working (or used to work 1-2 years back).
There is a ticket for upgrading the engine, but it lays dormant since years
Ok sounds pretty good. even if older.
It is odd that its not been updated since lots of activities on QML.
Thank you for all the info :)
@mrjj said in A basic Question:
It is odd that its not been updated since lots of activities on QML.
There was a discussion about it on the mailing list once. If I recall it correctly, the priority for Qt devs working on QML was to keep the engine fast, and make it work 100% reliable in common QML use cases (and the most common uses are: small bindings/ assignments and short functions) - so they did not feel pressure to implement newer features. | https://forum.qt.io/topic/83602/a-basic-question | CC-MAIN-2018-39 | refinedweb | 2,222 | 58.48 |
README
@jonahsnider/util
A collection of simple, optimized utility functions that help you spend more time implementing real features instead of writing the same snippets over and over.
Written in TypeScript with strong typesafety in mind (more on that below).
Works in Node.js, mostly works in browsers.
If you're considering using the library I recommend taking a glance at the docs to see if anything seems helpful to you.
 ashamed
yarn add @jonahsnider/util
# or
npm install @jonahsnider/util
then
import {shuffle} from '@jonahsnider/util';
// or
import * as util from '@jonahsnider/util';

const {shuffle} = require('@jonahsnider/util');
// or
const util = require('@jonahsnider/util');
Why you should use this library
There's 3 main benefits this library offers:
Readability
Because JavaScript lacks a proper standard library, you will find yourself writing the same snippets again and again. Let's look at sorting an array in ascending order (low to high) as an example:
// Sort ascending
array.sort((a, b) => a - b);
As an experienced dev you've probably seen this snippet in some form hundreds of times before. If you're a beginner you might not even be able to tell if this is an ascending or descending sort without the comment.
The alternative:
import {Sort} from '@jonahsnider/util';

array.sort(Sort.ascending);
If you were skimming through a file and saw this you can immediately understand what this code does.
This library works perfectly with existing idiomatic JavaScript and doesn't force you to change the way you write code.
(also - fun fact: the first snippet doesn't work with `bigint`s, the second snippet does)
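The bigint caveat is easy to reproduce in plain JavaScript: `Array.prototype.sort` coerces the comparator's return value to a number, and subtracting two bigints yields a bigint, which throws on that coercion. The hand-rolled `ascending` comparator below illustrates the idea only; it is not the library's actual `Sort.ascending` implementation.

```javascript
// The classic numeric comparator works for numbers...
console.log([3, 1, 2].sort((a, b) => a - b)); // [1, 2, 3]

// ...but throws for bigints, because sort() coerces the comparator's
// result to a number and BigInt refuses that coercion.
try {
  [3n, 1n, 2n].sort((a, b) => a - b);
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// A comparator that returns plain numbers handles both types:
const ascending = (a, b) => (a < b ? -1 : a > b ? 1 : 0);
console.log([3n, 1n, 2n].sort(ascending)); // [1n, 2n, 3n]
```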
Safety
Writing your own snippets doesn't just slow you down, it can introduce bugs.
Every function is tested with 100% coverage, ensuring bug-free code.
Features
This library isn't just 1-liners you could copy-paste yourself.
Want to do a binary search on an array? We've got you covered.
Combine a bunch of regular expressions into one? No problem.
Need a deck of cards? Only one import away.
TypeScript
In addition to all the useful functions this library provides, a major effort has been made to ensure the best possible experience for TypeScript users.
- Functions accept many types of arguments, either as a generic `T` or a union of related types like `number | bigint` (mostly useful in the math functions)
- `Iterable`s and `ArrayLike`s are used instead of `Array`s whenever possible; broader types ensure compatibility with your projects and let you avoid ugly type assertions
- When an array is needed, it's always `readonly T[]` unless mutation is required

There's also a few types exported that can be handy in certain situations (ex. `NonEmptyArray` or `Nullish`).

My personal favorite is the `TypedEventEmitter`, which lets you ensure typesafety in event listeners.
[SOLVED] #include does not behave as I thought
Good Evening.
First steps in Linux C ++ and I'm trying to convert to C ++ an old program written in Visual Basic (version for Visual Studio 10).
I use QtCreator and Qt, but I think the problem is not with Qt but with some detail of C++ in general that I do not know.
If I write the first part of the main like this:
#include <QtWidgets> QMainWindow *X_mainWindow; QMenuBar *X_menuBar; QWidget *X_centralWidget; int main(int argc, char *argv[])
and then there is a .h file in which I declared the same variables as "extern", the program works correctly.
Whereas if I write those same three variables in a file "dichiarative_unatantum.h" and include it in the main like this:
#include <QtWidgets> #include "dichiarative_unatantum.h" int main(int argc, char *argv[])
in compilation gives me errors like "multiple definition of X_mainWindow".
Why is this happening? Shouldn't the #include directive simply simulate the insertion of the three global variables at that particular point?
Give me a hand to understand? Thank you.
Hi and welcome to devnet,
is the file
dichiarative_unatantum.hincluded only once in your program?
Can you show both that file and the one where the variables are declared as extern??
TIP: try to avoid to use global/extern variables; they make the code hard to mantain and debug.
First of all sorry for my english: I translate from Italian with google translator.
In the version that does not give error do:
// main.cpp #include "w000_dichiarative.h" #include "w001_main.h" #include "w010_partita1.h" #include <QtWidgets> // #include "w000_dichiarative_unatantum.h" invalidated QMainWindow *X_mainWindow; QMenuBar *X_menuBar; QWidget *X_centralWidget; int main(int argc, char *argv[]) {
The first of "w000_dichiarative.h" is so:
// W000_dichiarative.h #ifndef W000_DICHIARATIVE_H #define W000_DICHIARATIVE_H #include <QtWidgets> extern QMainWindow *X_mainWindow; extern QMenuBar *X_menuBar; extern QWidget *X_centralWidget;
So it works properly.
But if I substitute the main way:
#include "w000_dichiarative.h" #include "w001_main.h" #include "w010_partita1.h" #include <QtWidgets> #include "w000_dichiarative_unatantum.h" // QMainWindow *X_mainWindow; invalidated // QMenuBar *X_menuBar; invalidated // QWidget *X_centralWidget; invalidated int main(int argc, char *argv[]) {
and the file "w000_dichiarative_unatantum.h" is like this:
#ifndef W_DICHIARATIVEUNATANTUM_H #define W_DICHIARATIVEUNATANTUM_H #include <QtWidgets> QMainWindow *X_mainWindow; QMenuBar *X_menuBar; QWidget *X_centralWidget;
compilation gives me the error mentioned before. Why?
Hi,
no problem for the English (I'm Italian too and you can also use the Italia forum to write in your native language).
Can you post the code? is that header file included only once?
Ok Thanks.
This morning I had asked for help in the forum Linux Mint-Italian, but I had no answer. 2 days ago I wrote two questions on the forum Qt-Italian but nobody deigned to answer me. This is why I wrote in the American forum: you are more prepared and kind.
The file unatantum.h is included only in the main and only in the version that gives the error. In the version that works it is not included anywhere, not even in the .pro
Oh nooo, I had left a #include "... a tantum.h" in another file. I deleted it and now it works. But shouldn't the initial #ifndef have avoided the duplicate definitions? In any case, thank you, thank you very much.
Hi,
I mean this forum.
BTW, the include guard
#ifndef XXXX
#define XXXX
...
#endif

avoids including the same header twice in one source file, but doesn't prevent you from including the same header in two different source files.
Log message:
Bump all packages for perl-5.18, that
a) refer 'perl' in their Makefile, or
b) have a directory name of p5-*, or
c) have any dependency on any p5-* package
Like last time, where this caused no complaints.
Log message:
update to 1.77.1
too many changes to list here, see the bundled changelog
Log message:
Drop superfluous PKG_DESTDIR_SUPPORT, "user-destdir" is default these days.
Log message:
Bump all packages that use perl, or depend on a p5-* package, or
are called p5-*.
I hope that's all of them.
Log message:
Add a buildlink3.mk.
Log message:
Split Makefile for dbtoepub
Log message:
epub XSLMOD requires ruby, so we no longer install it.
Log message:
Changes 1.76.1:
* Added eu.xml and gl.xml to SOURCES.
* Fixed bug when context was lost due to usage of xsl:key
Changes 1.76.0:
* Apply patch to support named destination in fop1.xsl.
* Remove the namespace mistakingly added with the last upload. | http://pkgsrc.se/textproc/docbook-xsl | CC-MAIN-2014-10 | refinedweb | 166 | 78.14 |
This is the mail archive of the binutils@sourceware.org mailing list for the binutils project.
Hi Alan,

Thanks for taking the time to reply.

While there is a small part of me that would like to tilt against this particular windmill, the lack of any specific point of non-compliance means that I have no firm ground to stand on. In the end, it would be about differences in interpretation of an ambiguous text based on historical precedents, which is an almost textbook recipe for a religious war.

I have no desire to cast the first stone, so I'm going to let this sleeping dog lie (and also stop mixing metaphors) :)

  Craig

Sent from my iPhone

On 16/05/2011, at 10:15 AM, Alan Modra <amodra@gmail.com> wrote:

> On Sun, May 15, 2011 at 08:14:19PM +1000, Craig Southeren wrote:
>> At the heart of the issue is the timing of initialising statics at
>> the global/namespace level.
>
> You won't get much traction on this issue here on the binutils list.
> We did have a ld bug that affected you but that has now been fixed.
> Further discussion should go to one of the gcc lists. If you can get
> agreement that functions declared with __attribute__ ((constructor))
> ought to be treated exactly as standard C++ namespace scope
> constructors regarding initialisation order, then it would be good to
> have your testcase added to the g++ testsuite. That should ensure
> both g++ and ld do not regress.
>
> FWIW, I think your testcase is quite reasonable. The main reason I
> wanted the testcase removed from the ld testsuite was because I found
> the testcase failed using commonly available versions of g++, and
> therefore a C++ testcase wasn't the best way to test ld behaviour.
>
> --
> Alan Modra
> Australia Development Lab, IBM
Namespaces are the backbone of file access and service discovery in Fuchsia.
Definition
A namespace is a composite hierarchy of files, directories, sockets, services, devices, and other named objects provided to a component by its environment.
Let's unpack that a little bit.
Objects are named: The namespace contains objects which can be enumerated and accessed by name, much like listing a directory or opening a file.
Composite hierarchy: The namespace is a tree of objects that has been composed together from objects provided by many different sources.
Namespaces in Action
You have probably already spent some time exploring a Fuchsia namespace; they are everywhere. If you type `ls /` at a command-line shell prompt you will see a list of some of the objects that are accessible from the shell's namespace.
Unlike other operating systems, Fuchsia does not have a "root filesystem". As described earlier, namespaces are defined per-component rather than globally or per-process.
This has some interesting implications:
- There is no global "root" namespace.
- There is no concept of "running in a chroot-ed environment" because every component effectively has its own private "root".
- Components receive namespaces tailored to their specific needs.
- Object paths may not be meaningful across namespace boundaries.
- A process may have access to several distinct namespaces at once.
- The mechanisms used to control access to files can also be used to control access to services and other named objects on a per-component basis.
Objects
The items within a namespace are called objects. They come in various flavors, including:
- Files: objects that contain binary data
- Directories: objects that contain other objects
- Sockets: objects that establish connections when opened, like named pipes
- Services: objects that provide FIDL services when opened
- Devices: objects that provide access to hardware resources
Accessing Objects

To access a sub-object, a client sends an `Open()` request to a container it already holds; the request includes an object relative path expression which identifies the desired sub-object. This is much like opening files in a directory.
Notice that you can only access objects that are reachable from the ones you already have access to. There is no ambient authority.
We will now define how object names and paths are constructed.
Object Names:
- Minimum length of 1 byte.
- Maximum length of 255 bytes.
- Does not contain NULs (zero-valued bytes).
- Does not contain `/`.
- Does not equal `.` or `..`.
- Always compared using byte-for-byte equality (implies case-sensitive).
Object names are valid arguments to a container's `Open()` method. See FIDL Protocols.
Object Relative Path Expressions:
- Minimum length of 1 byte.
- Maximum length of 4095 bytes.
- Does not begin or end with `/`.
- All segments are valid object names.
- Always compared using byte-for-byte equality (implies case-sensitive).
Object relative path expressions are valid arguments to a container's `Open()` method. See FIDL Protocols.
Client Interpreted Path Expressions
A client interpreted path expression is a generalization of object relative path expressions, but includes optional features that may be emulated by client code to enhance compatibility with programs that expect a rooted file-like interface.
Technically these features are beyond the scope of the Fuchsia namespace protocol itself but they are often used so we describe them here.
- A client may designate one of its namespaces to function as its "root". This namespace is denoted `/`.
- A client may construct paths relative to its designated root namespace by prepending a single `/`.
- A client may construct paths that traverse upwards from containers using `..` path segments by folding segments together (assuming the container's path is known) through a process known as client-side "canonicalization".
- These features may be combined together. See FIDL Protocols.
For example, `fdio` implements client-side interpretation of `..` paths in file manipulation APIs such as `open()`, `stat()`, `unlink()`, etc.
Namespace Transfer
When a component is instantiated in an environment (e.g. its process is started), it receives a table of entries that bind paths to the objects which make up its namespace.
Namespace Conventions
This section describes the conventional layout of namespaces for typical components running on Fuchsia.
The precise contents and organization of a component's namespace vary depending on the component's role and on the environment that constructed it.
Typical Objects
There are some typical objects that a component namespace might contain:
- Read-only executables and assets from the component's package.
- Private local persistent storage.
- Private temporary storage.
- Services offered to the component by the system, the component framework, or by the client that started it.
- Device nodes (for drivers and privileged components).
- Configuration information.
Typical Directory Structure
Namespace Participants
Here is some more information about a few abstractions that interact with and support the Fuchsia namespace protocol.
Filesystems
Filesystems make files available in namespaces.
A filesystem is simply a component that publishes file-like objects into someone else's namespace.
Services
Services live in namespaces.
A service is a well-known object that provides an implementation of a FIDL protocol when opened.
Components

Components consume and extend namespaces.
A component is an executable program object that has been instantiated within some environment and given a namespace.
A component participates in the Fuchsia namespace in two ways:
It can use objects from the namespace, and it can publish objects through the namespace for other components to use.
Environments

Environments construct namespaces.
An environment is a container of components. Each environment is responsible for constructing the namespace for its components.
The environment decides what objects a component may access and how the component's request for services by name will be bound to specific implementations.
Configuration
Components may have different kinds of configuration data exposed to them depending on the features listed in their component manifest, which are exposed as files in the `/config` namespace entry. These are defined by the feature set of the component.
The ESP32 Sketch Data Upload plugin for the Arduino IDE allows you to upload files to the ESP32 memory area reserved for the file system (FS) managed using the SPIFFS file system. For the moment LittleFS is not yet officially supported by Espressif on the ESP32 platform.
The size of the flash memory varies depending on the ESP32 module on board the development board. Recent modules generally have a 4MB flash memory of which 1MB, 2MB or 3MB can be allocated to the file system (File System – FS).
Using LittleFS on ESP32
The LittleFS system is not yet officially supported on ESP32. If your project necessarily requires LittleFS support, there are however several projects you can turn to.
Warning, this project can only be used on Windows because it requires an external executable mklittlefs.exe
Only downside: it dates from 2017 and no longer seems to receive regular updates.
For those who develop directly with the API of the ESP-IDF SDK, you can test this project.
As usual, you will need to install a plugin on the Arduino IDE to upload files in LittleFS format. It is here.
How to organize the files of an ESP32 project with SPIFFS?
Here is an example of the file tree of an ESP32 project whose HTML interface code is separate from the Arduino code. In general, the files of a WEB server are stored in a folder named www.
www/
├── index.html
├── style.css
└── code.js
code.ino
Apart from the Arduino code files (.ino, .cpp, .h files), all the other files must be moved to a data folder. The Arduino project tree becomes this:

data/
├── index.html
├── style.css
└── code.js
code.ino
for example
data/
└── subfolder/
    └── index.html

becomes

/data/subfolder/index.html
You may quickly run into problems if you exceed 31 useful characters. Therefore, it is better to limit the depth of the tree … or not to create one.
Install ESP32 Sketch Data Upload tools for Arduino IDE
1 Before you begin, you need to install the ESP32 SDK from Espressif on the Arduino IDE. Everything is explained in detail in this article.
2 Then go here on GitHub to get the latest release (update) of the ESP32fs plugin for the Arduino IDE. Click on the link to the plugin zip archive to start the download.
3 Open the Arduino IDE working folder. Typically, it is in the My Documents folder on Windows and Documents on macOS or Linux. To know the path to the Arduino folder, you can open the preferences of the Arduino IDE
4 Create a new folder called tools
5 Unzip the archive directly into the tools folder. Do not change the tree
6 Reload the Arduino IDE. After restarting you should have a new option in the Tools menu called ESP32 Sketch Data Upload.
Everything is ready.
Choose the size allocated to the filesystem (partition scheme)
It is possible to allocate a certain amount of the flash memory of the ESP32 to the file system (FS) like on the ESP8266. By default, the framework allocates portions of memory according to a table called Partition Table (or Partition Scheme on the Arduino IDE).
Espressif has set some default schemas (on this GitHub page), but any board maker can change it.
This is the default Partition Table (source file). As you can see, the SPIFFS area is at the end so that it can take up the most of the available space.
The size that can be allocated depends on the size of the flash memory (4MB or 16MB) and on each manufacturer.
Select your development board from the list then open the Partition Scheme menu.
For a LoLin D32 Pro, 4 schemes are possible
- advisedadvised
- 2MB of Flash memory2MB of Flash memory
I advise you to leave the default scheme especially if you want to set up a wireless update mechanism (OTA) in your project.
If this organization does not suit your development, you can switch to PlatformIO which allows you to finely define the Partition Table but using a csv file. More information here.
A test program with SPIFFS
On ESP32, the FS.h library has been renamed SPIFFS.h. The source code of the library is available on GitHub.
Create a new sketch, paste the code below and save
#include "SPIFFS.h" void setup() { Serial.begin(115200); // Launch SPIFFS file system if(!SPIFFS.begin()){ Serial.println("An Error has occurred while mounting SPIFFS"); return; } // Open test file in read only mode File file = SPIFFS.open("/test.txt", "r"); if(!file){ // File not found | le fichier de test n'existe pas Serial.println("Failed to open test file"); return; } // Display file content on serial port Serial.println(); Serial.println("Read test.txt file content:"); while(file.available()){ Serial.write(file.read()); } file.close(); } void loop() { }
What does this code do?
We declare the SPIFFS.h library which allows access to the memory area using the SPIFFS file system.
#include "SPIFFS.h"
We start the SPIFFS file system. The error is reported in the event of a problem. Here, the program stops in the event of an error.
if(!SPIFFS.begin()){ Serial.println("An Error has occurred while mounting SPIFFS"); return; }
We open the test.txt file with the open(filename, option) method in write only by specifying the option “r” (for read only). Use the “w” option to be able to write (and write) to a file.
File file = SPIFFS.open("/test.txt", "r");
We send the contents of the text file to the serial port
while(file.available()){ Serial.write(file.read()); }
And we close the file
file.close();
Add files to the data folder
There is a shortcut allowing you to directly open the Arduino project folder from the Sketch menu -> Show sketch folder
Create a text file with any text editor and paste a “Hello World” or any text of your choice.
Save the file with the name test.txt.
Upload files to the ESP32 memory area
To upload the files saved in the data folder, simply launch the tool from the Tools -> ESP32 Sketch Data Upload menu
The operation only takes a few seconds
[SPIFFS] data : /Users/diyprojects/Documents/Arduino/diyprojects/ESP32/SPIFFS/ESP32_SPIFFS_DEMO/data [SPIFFS] start : 2691072 [SPIFFS] size : 1468 [SPIFFS] page : 256 [SPIFFS] block : 4096 /test.txt [SPIFFS] upload : /var/folders/x_/w_k_y_ys531cxjfyvqpk1bwc0000gn/T/arduino_build_298507/ESP32_SPIFFS_DEMO.spiffs.bin [SPIFFS] address: 2691072 [SPIFFS] port : /dev/cu.usbserial-1420 [SPIFFS] speed : 115200 [SPIFFS] mode : dio [SPIFFS] freq : 80m4:e6:2e:88:xx:xx Uploading stub... Running stub... Stub running... Configuring flash size... Auto-detected Flash size: 4MB Compressed 1503232 bytes to 2848... Writing at 0x00291000... (100 %) Wrote 1503232 bytes (2848 compressed) at 0x00291000 in 0.3 seconds (effective 42700.6 kbit/s)... Hash of data verified. Leaving... Hard resetting via RTS pin...
Open the serial monitor and RESET the ESP32 module. This is what you should see
It is quite possible to access the SPIFFS file system on ESP32 through an FTP client. This is especially useful when developing projects with an HTML interface and you do not necessarily want to go through the Arduino IDE every time you want to upload the files.
Read, write, add data to a file by programming
In most cases, we will need to access and manipulate (write, add data, rename, delete …) files directly by programming with Arduino code. This is possible with the SPIFFS.h library presented in this article
Common issues
Here is a list of some common errors.
SPIFFS Upload failed!
If you get such a message, just close the serial monitor window.
esptool.py v2.6 Serial port /dev/cu.usbserial-1420.usbserial-1420: [Errno 16] Resource busy: '/dev/cu.usbserial-1420' Failed to execute script esptool SPIFFS Upload failed!
A fatal error occurred: Timed out waiting for packet content
If you get such a message6:e6:2e:78:0b:ae Uploading stub... Running stub... Stub running... Changing baud rate to 921600 Changed. Configuring flash size... A fatal error occurred: Timed out waiting for packet content
You just need to decrease the serial port speed to upload. Lower the speed to 115200 baud, then go back up to find out the maximum speed supported by the development board.
Updates
3/09/2020 First
Followed your instructions, but when I go to upload the data I get this error – SPIFFS Error: serial port not defined!
Where do I define the serial port? | https://diyprojects.io/esp32-sketch-data-upload-arduino-ide-upload-spiffs-files-flash-memory/ | CC-MAIN-2021-49 | refinedweb | 1,392 | 65.83 |
Hi i am trying to create a game that uses 2 scripts one that creats the game instance and another that hold the varbles of monsters and the player as well as the game my problem is i am trying to creat "rooms" for each area with in the game and i wanted each "room" to be its own class but i am haveing troble switching from one "room" aka class to another this is the test of thing i have so far
import random from random import randint class Player(object): ye def __init__ (self, name = " ", max_hp = 400, max_att = 200, max_def = 100,pl_lv = 0, pl_xp = 0,): self.name = name self.player_hp = max_hp self.player_att = max_att self.player_def = max_def self.player_lv = pl_lv self.player_xp = pl_xp self.big_hit = self.player_att * 5.0 def Player_lvl(self, xp_gain): self.xp_gain = xp_gain self.player_xp += xp_gain if self.player_xp == 500: self.lv += 1 self.max_hp += 10 * self.lv self.max_att *= 2.0 self.max_def *= 1.5 class Monster(object): def __init__ (self, mon_name, max_hp, max_att, max_def, max_lv,): self.mon_max_hp = max_hp self.mon_att = max_att self.mon_def = max_def self.mon_lv = max_lv self.mon_name = mon_name self.mon_big_hit = self.mon_att * 4.0 def Monster_lv (self): if self.mon_lv == 1: self.mon_hp *= 1.5 self.mon_att *= 2.0 self.mon_def *= 1.3 elif self.mon_lv == 2: self.mon_hp *= 2.0 self.mon_att *= 2.3 self.mon_def *= 1.5 elif self.mon_lv == 3: self.mon_hp *= 2.5 self.mon_att *= 2.5 self.mon_def *= 1.7 class Battle(object): def __init__(self,player,monster): self.player = player self.monster = monster self.attack_turn = True def Attacking(self): self.monster.mon_lv = self.player.player_lv while True: if self.monster.mon_hp <= 0: print " Congrats you have killed %s and gaine 100 xp" % self.monster.mon_hp self.player.Player_xp(100) if self.player.player_hp <= 0: print "You have been killed by %s" % self.monster.mon_name return "dead" if randint(1,5) == 3 or randint(0,5) == 5 and self.attack_turn: self.monster.mon_hp -= self.player.big_hit - self.monster.mon_def print "You hit %s in its weak spot for %f and has %f HP left" %(self.monster.mon_name,self.player.big_hit, self.monster.mon_hp) self.attack_turn = False else: self.monster.mon_hp -= self.player.player_att - self.monster.mon_def print " YOu hit %s for %f it has %f HP left" %(self.monster.mon_name, self.player.player_att, self.monster.mon_hp) if randint(1,5) == 3 or randint(0,5) ==5 and not 
self.attack_turn: self.player.player_hp -= self.monster.big_hit - self.player.player_def print " You have been hit by %s for %f and have %f hp left" %(self.monster.mon_name, self.monster.mon_big_hit, self.player.player_hp) else: self.player.player_hp -= self.monster.mon_att - self.player.player_def print " You have been hit by %s for %f and have %f hp left" %(self.monster.mon_name, self.monster.mon_big_hit, self.player.player_hp) class Game(object): def __init__(self, player, monster,start): self.player = player self.monster = monster self.start = start def play (self): next = self.start while True: start = getattr(self, next) next = start() class Start_room(object): print "*" print "Welcome to the world %s their will be tought times ahead for you on this enventful aventure" % self.player.name start_area = raw_input("\nTheir are 4 places you can start from \n\t1.City\n\t2.Mountain\n\t3.Ocaen\n\t4.Sky_Dungon\n Type in the number of the area you will go to") if start_area == "1": return "City" elif start_area == "2": return "Mountain" elif start_area == "3": return "Ocaen" elif start_area == "4": return "Sky_Dungon" else: "plese enter a a vaild number" def City(object): def __init__(self, player, monster): self.player = player self.monster = monster def Start(self): print "This is a test"
This is the Run script
from mygame import * player = Player(raw_input("Plese enter a name for your Char")) monster = Monster("Dragon",420, 150, 100, 0) monster2 = Monster("Ealgel", 500,120, 80, 0) game = Game(player, monster, "Start_room") game.play() | https://www.daniweb.com/programming/software-development/threads/439229/help-with-classes | CC-MAIN-2017-34 | refinedweb | 648 | 53.37 |
I'm using a teensy 3.6 to try and do some searching of twitter using the Temboo website and library.
Unfortunately I've fallen at the first fence.
The library is designed for the Arduino Yun but anyway always was up for a challenge.
When I include the library and compile I get the error unknown type name 'bool' from one of the files (TembooWebSocketRequestHandles.h). Looks like a fundamental problem somewhere and I'm not sure how the Arduino gets 'bool' defined but obviously it's not picking it up here. Has anyone had the same problem or can suggest what to change/include?
Fairly simple code:
#include <Temboo.h>
void setup()
{
}
void loop()
{
} | https://forum.pjrc.com/threads/58667-Temboo-library-error-Unknown-type-name-bool?s=84ecf9aa8e822633be7f3698455d2e43&p=224123 | CC-MAIN-2020-10 | refinedweb | 116 | 67.76 |
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
>I think you're SOL on this one. Yeah, pretty much what I've decided as well. > >> Is there a way to say, hey, for this file only use these rules? Or is >> there a way to break apart the libstdc++ and libsupc++ symbol versioning >> (which are now done at the same time as libsupc++ is a convenience >> library) so that in the libsupc++ symbol-versioning file all the >> namespace std:: symbols are not exported, and internal only? > >The convenience library is just a .a, isn't it? Urgh... > >Since versioning is done on a per-output file, not per-input, it might >be possible to insert an intermediate step in our build process, where >multiple .so's are created and thus can use multiple version scripts. >Then we take all the .so's and relink them into a single .so. (Dunno whether >that would preserve versions on the symbols or not.) That seems like what I wanted to explore, but I couldn't figure out how to express this in the current Makefile machinery. I've done what I wanted to do with a little symbol file intaglio. In doing that I found the __cxa_dyn_string exports. Ugh. thanks though. -benjamin | http://gcc.gnu.org/ml/libstdc++/2002-12/msg00270.html | crawl-001 | refinedweb | 218 | 75.4 |
Hi guys, I'm having some trouble with my app, I'm using Away3D and I just got a brand new error, even though the class has this:
import flash.display.Loader;
private var _loader : Loader;
It says that the class has not been imported....
Any help will be appreciated.
Regards!
copy and paste the error message after enabling "permit debugging". indicate the code in the line number mentioned in the error message.
I'm working with Flash Builder as a AS Project, the problem is inside a class of Awa3D which works perfectly if I just create a simple new project. The error that I get is a 1046: Type not found or was not compilie time constant: Loader
private var _loader : Loader;
I do know what that error means, but I imported the class which makes it really weird and if I change that line of code for this:
private var _loader : flash.display.Loader;
It removes the error symbol from there but not from the entire class, even though there are no more error marks on the class.
Thanks for your answer!
you're welcome.
I'm still having problems with that, it doesn't want to work,
copy and paste the error message after enabling "permit debugging". indicate the code in the line number mentioned in the error message.
Moving this thread to the Flash Builder forums. | https://forums.adobe.com/message/4582579?tstart=0 | CC-MAIN-2017-47 | refinedweb | 231 | 69.72 |
On Mon, 2006-11-20 at 22:02 -0600, Eric Sandeen wrote:
> Eric Sandeen wrote:
>
> > Eric Sandeen wrote:
> >
> >> ugh. it's broken on x86 too, so it's not just the alignment/padding,
> >>
> >> although that should be fixed for cross-arch mounts.
> >>
> >> -Eric
> >>
> >>
> > here's a testcase to corrupt it FWIW.
> >
> >
> Ok, with expert collaboration from Russell, Barry, Tim,
> Nathan, David, et al, how about this:
>
> For btree dirs, we need a different calculation for the space
> used in di_u, to set the minimum threshold for the fork offset...
>
> This fixes my testcase, but as Tim points out -now- we need to compact
> the btree ptrs, if we return (and use) an offset < current forkoff...
>
> whee....
>
> -Eric
>
It turns out this only fixes one of the problems it is still quite easy
to corrupt indoes with attr2.
The following patch is a short term fix that address the problem of
forkoff
moving without re-factoring the root inode btree root block.
Once the inode has be flipped to BTREE for the data space the forkoff is
fixed
to the that size, currently due to the way attr1 worked (fixed size
forkoff) the code is not handling the size to the root btree node due to
size changes in the attr portion of the inode.
The optimal solution is to adjust the data portion of the inode root
btree block down if space exists.
One easy fix that was resulting all attr add being pushed out of line is
added
the header size to the initial split of the inode, at least the first
attr add
should go inline now. Which should be a win the big attr user right now
SElinux.
Including the 2 test script that have been used.
--
Russell Cattelan <cattelan@xxxxxxxxxxx>
attr2_patch
Description: Text Data
r2.sh
Description: application/shellscript
e2
Description: application/shellscript
signature.asc
Description: This is a digitally signed message part | http://oss.sgi.com/archives/xfs/2006-11/msg00329.html | CC-MAIN-2016-40 | refinedweb | 317 | 67.89 |
Need help inserting something in Qt tree view with python
Hi
I want to get a Tree View running with PySide for that i used Qt Designer to get the window.
To compile my GUI code into Python code i used the pyuic compiler and i got a form.py
which is a module containing my dialog as a Python class and looks like this:
@class Ui_SleuthGUI(object):
def setupUi(self, SleuthGUI):
SleuthGUI.setObjectName("SleuthGUI")
...
self.tvDirTree = QtGui.QTreeView(self.centralwidget)
self.tvDirTree.setGeometry(QtCore.QRect(20, 120, 311, 451))
self.tvDirTree.setObjectName("tvDirTree")
...@
Now i didn't know how to write the python code to insert something in my treeview.
I tried plenty of things wich i found on the internet but nothing worked.
PyQt4 and PySide are installed.
@from PySide import QtGui, QtCore
from sleuthGUI_ui import Ui_SleuthGUI
self.ui.tvDirTree.itemDelegateForColumn(1)
items = []
for i in range(10):
items.append(QtGui.QTreeWidgetItem(None, QStringList(QString('item: %1').arg(i))))
self.ui.tvDirTree.insertTopLevelItems(None, items)
self.ui.tvDirTree.insertAction(None, items)
@
Another Problem is that PySide does not know QStringList or QString so i tried it with PyQt4.
But i could not insert anything.
Could anybody help me please? | https://forum.qt.io/topic/11477/need-help-inserting-something-in-qt-tree-view-with-python | CC-MAIN-2017-34 | refinedweb | 201 | 59.7 |
This is the current design document for HAMMER2. It lists every feature I intend to implement for HAMMER2. Everything except the freemap and cluster protocols (which are both big ticket items) has been completely speced out. There are many additional features versus the original document, including hardlinks.

HAMMER2 is all I am working on this year so I expect to make good progress, but it will probably still be July before we have anything usable, and well into 2013 before the whole mess is implemented and even later before the clustering is 100% stable. However, I expect to be able to stabilize all non-cluster related features in fairly short order.

Even though HAMMER2 has a lot more features than HAMMER1 the actual design is simpler than HAMMER1, with virtually no edge cases to worry about (I spent 12+ months working edge cases out in HAMMER1's B-Tree, for example... that won't be an issue for HAMMER2 development).

The work is being done in the 'hammer2' branch off the main dragonfly repo in appropriate subdirs. Right now it is just vsrinivas and I, but hopefully enough will get fleshed out in a few months that other people can help too.

Ok, here's what I have got.

			    HAMMER2 DESIGN DOCUMENT

				Matthew Dillon
				 08-Feb-2012
			     dillon@backplane.com

* These features have been speced in the media structures.

* Implementation work has begun.

* A working filesystem with some features implemented is expected by July 2012.

* A fully functional filesystem with most (but not all) features is expected by the end of 2012.

* All elements of the filesystem have been designed except for the freemap (which isn't needed for initial work). 8MB per 2GB of filesystem storage has been reserved for the freemap. The design of the freemap is expected to be completely speced by mid-year.

* This is my only project this year. I'm not going to be doing any major kernel bug hunting this year.

			       Feature List
When mounting a HAMMER2 filesystem you specify a device path and a directory name in the super-root.

* HAMMER1 had PFS's. HAMMER2 does not. Instead, in HAMMER2 any directory in the tree can be configured as a PFS, causing all elements recursively underneath that directory to become a part of that PFS.

* Writable snapshots. Any subdirectory tree can be snapshotted. Snapshots show up in the super-root. It is possible to snapshot a subdirectory and then later snapshot a parent of that subdirectory... really there are no limitations here.

* Directory sub-hierarchy based quotas and space and inode usage tracking. Any directory sub-tree, whether at a mount point or not, tracks aggregate inode use and data space use. This is stored in the directory inode all the way up the chain.

* Incremental queueless mirroring / mirroring-streams. Because HAMMER2 is block-oriented and copy-on-write, each blockref tracks both direct modifications to the referenced data via (modify_tid) and indirect modifications to the referenced data or any sub-tree via (mirror_tid). This makes it possible to do an incremental scan of meta-data that covers only changes made since the mirror_tid recorded in a prior run. This feature is also intended to be used to locate recently allocated blocks and thus be able to fix up the freemap after a crash.

  HAMMER2 mirroring works a bit differently than HAMMER1 mirroring in that HAMMER2 does not keep track of 'deleted' records. Instead, any recursion by the mirroring code which finds that (modify_tid) has been updated must also send the direct block table or indirect block table state it winds up recursing through, so the target can check similar key ranges and locate elements to be deleted. This can be avoided if the mirroring stream is mostly caught up, in that very recent deletions will be cached in memory and can be queried, allowing shorter record deletions to be passed in the stream instead.
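To make the incremental scan concrete, here is a toy sketch in C of how a mirroring pass can prune whole sub-trees by comparing each block's mirror_tid against the transaction id recorded by the previous pass. The structures are invented for illustration and are not HAMMER2's actual media format.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of an incremental mirroring scan (not HAMMER2's real
 * structures).  Each block records modify_tid (this block changed) and
 * mirror_tid (the highest tid of any change at or under this block),
 * so a pass can prune any sub-tree whose mirror_tid predates the
 * previous pass.
 */
struct tnode {
	uint64_t modify_tid;		/* direct modification of this block */
	uint64_t mirror_tid;		/* max tid of any change in sub-tree */
	int nchildren;
	struct tnode *children[4];
};

static int
mirror_scan(const struct tnode *np, uint64_t last_tid, int *visited)
{
	int sent = 0;

	++*visited;
	if (np->mirror_tid <= last_tid)	/* nothing newer below here */
		return 0;
	if (np->modify_tid > last_tid)	/* this block itself changed */
		++sent;
	for (int i = 0; i < np->nchildren; ++i)
		sent += mirror_scan(np->children[i], last_tid, visited);
	return sent;
}
```

The same pruning recursion is what makes the post-crash freemap fixup cheap: only sub-trees whose mirror_tid postdates the last freemap sync need to be visited.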
* Will support multiple compression algorithms configured on a subdirectory tree basis and on a file basis. Up to 64K block compression will be used. Only compression ratios near powers of 2 that are at least 2:1 (e.g. 2:1, 4:1, 8:1, etc) will work in this scheme because physical block allocations in HAMMER2 are always power-of-2. Compression algorithm #0 will mean no compression and no zero-checking. Compression algorithm #1 will mean zero-checking but no other compression. Real compression will be supported starting with algorithm 2.

* Zero detection on write (writing all-zeros), which requires the data buffer to be scanned, will be supported as compression algorithm #1. This allows the writing of 0's to create holes and will be the default compression algorithm for HAMMER2.

* Copies support for redundancy. The media blockref structure would have become too bloated, but I found a clean way to do copies using the blockset structure (which is a set of 8 fully associative blockref's). The design is such that the filesystem should be able to function at full speed even if disks are pulled or inserted, as long as at least one good copy is present. A background task will be needed to resynchronize missing copies (or remove excessive copies in the case where the copies value is reduced on a live filesystem).

* Intended to be clusterable, with a multi-master protocol under design but not expected to be fully operational until mid-2013. The media format for HAMMER1 was less conducive to logical clustering than I had hoped, so I was never able to get that aspect of my personal goals working with HAMMER1. HAMMER2 effectively solves the issues that cropped up with HAMMER1 (mainly that HAMMER1's B-Tree did not reflect the logical file/directory hierarchy, making cache coherency very difficult).

* Hardlinks will be supported. All other standard features will be supported too, of course. Hardlinks in this sort of filesystem require significant work.
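The power-of-2 allocation constraint is easy to illustrate. The sketch below (illustrative only; the function names are invented, not HAMMER2 API) rounds a compressed result up to the next power of 2 and falls back to raw storage unless at least 2:1 is achieved:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustration of the power-of-2 compression constraint.  Physical
 * allocations are always powers of 2, so storing a compressed block
 * only pays off when the rounded-up size is at most half the logical
 * size (2:1, 4:1, 8:1, ...).
 */
static size_t
round_up_pow2(size_t n)
{
	size_t p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Physical bytes to allocate for one logical block. */
static size_t
physical_alloc_size(size_t logical_size, size_t compressed_size)
{
	size_t phys = round_up_pow2(compressed_size);

	if (phys <= logical_size / 2)
		return phys;		/* at least 2:1 -- store compressed */
	return logical_size;		/* not worth it -- store raw */
}
```

A 64K block that compresses to 40000 bytes still rounds up to 64K, so the compression buys nothing and the raw data is stored instead.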
* The media blockref structure is now large enough to support up to a 192-bit check value, which would typically be a cryptographic hash of some sort. Multiple check value algorithms will be supported, with the default being a simple 32-bit iSCSI CRC.

* Fully verified deduplication will be supported and automatic (and necessary in many respects).

* Non-verified de-duplication will be supported as a configurable option on a file or subdirectory tree basis. Non-verified deduplication would use the largest available check code (192 bits) and not bother to verify that data matches during the dedup pass, which is necessary on extremely large filesystems with a great deal of deduplicable data (as otherwise a large chunk of the media would have to be read to implement the dedup). This feature is intended only for those files where occasional corruption is ok, such as in a large data store of farmed web content.

			      GENERAL DESIGN

The copy-on-write nature of the filesystem implies that any modification whatsoever will have to eventually synchronize new disk blocks all the way to the super-root of the filesystem and the volume header itself. This forms the basis for crash recovery. All disk writes are to new blocks except for the volume header, thus allowing all writes to run concurrently except for the volume header update at the end.

Clearly this method requires intermediate modifications to the chain to be cached so multiple modifications can be aggregated prior to being synchronized. One advantage, however, is that the cache can be flushed at any time WITHOUT having to allocate yet another new block when further modifications are made, as long as the volume header has not yet been flushed. This means that buffer cache overhead is very well bounded and can handle filesystem operations of any complexity even on boxes with very small amounts of physical memory.
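A minimal model of the copy-on-write update chain might look like the following. The structures are invented for illustration (the real chain code is far more involved): modifying a leaf reallocates every block up to the root once per flush cycle, after which further modifications aggregate for free until the volume header is written.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of copy-on-write propagation (invented structures).  A
 * modification reallocates the leaf and every parent up to the root,
 * but only once per flush cycle: once an element is already marked
 * modified, further changes aggregate into the same new blocks.
 */
#define DEPTH 4

static uint64_t next_blockno = 100;	/* toy block allocator */

struct chain {
	uint64_t blockno;
	int modified;			/* new block not yet flushed */
};

/* path[0] is the root, path[level] is the element being modified */
static void
chain_modify(struct chain *path, int level)
{
	for (int i = level; i >= 0; --i) {
		if (path[i].modified)	/* already reallocated this cycle */
			break;
		path[i].blockno = next_blockno++;
		path[i].modified = 1;
	}
}
```

The early break in the loop is the aggregation property described above: the block allocation cost is paid once per flush cycle, not once per modification.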
I intend to implement a shortcut to make fsync()'s run fast, and that is to allow deep updates to blockrefs to shortcut to auxiliary space in the volume header to satisfy the fsync requirement. The related blockref is then recorded when the filesystem is mounted after a crash and the update chain is reconstituted when a matching blockref is encountered again during normal operation of the filesystem. Basically this means that no real work needs to be done at mount-time even after a crash.

Directories are hashed, and another major design element is that directory entries ARE INODES. They are one and the same. In addition to directory entries being inodes, the data for very small files (512 bytes or smaller) can be directly embedded in the inode (overloaded onto the same space that the direct blockref array uses). This should result in very high performance.

Inode numbers are not spatially referenced, which complicates NFS servers but doesn't complicate anything else. The inode number is stored in the inode itself, an absolutely necessary feature in order to support the hugely flexible snapshots that we want to have in HAMMER2.

				 HARDLINKS

Hardlinks are a particularly sticky problem for HAMMER2 due to the lack of a spatial reference to the inode number. We do not want to have to have an index of inode numbers for any basic HAMMER2 feature if we can help it.

Hardlinks are handled by placing the inode for a multiply-hardlinked file in the closest common parent directory. If "a/x" and "a/y" are hardlinked, the inode for the hardlinked file will be placed in directory "a", e.g. "a/3239944", but it will be invisible and will be in an out-of-band namespace. The directory entries "a/x" and "a/y" will be given the same inode number but will in fact just be placemarks that cause HAMMER2 to recurse upwards through the directory tree to find the invisible inode number.
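The upward recursion can be sketched in a few lines. The structures here are invented for illustration and are not the real media format; the real lookup operates on hashed directory keys, but the shape of the walk is the same:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Toy sketch of hardlink resolution (invented structures).  A
 * placemark directory entry carries only the target inode number; the
 * real inode lives invisibly in the closest common parent directory,
 * so resolution just recurses upward until it is found.
 */
struct tdir {
	struct tdir *parent;
	uint64_t hidden_inum;		/* 0 = no hidden hardlink inode here */
};

static struct tdir *
hardlink_resolve(struct tdir *dp, uint64_t inum)
{
	while (dp != NULL) {
		if (dp->hidden_inum == inum)
			return dp;	/* found the invisible inode */
		dp = dp->parent;
	}
	return NULL;
}
```

Because the hidden inode is at the closest common parent, any placemark beneath that parent resolves in a handful of steps without a filesystem-wide inode index.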
Because directories are hashed and a different namespace (hash key range) is used for hardlinked inodes, standard directory scans are able to trivially skip this invisible namespace and inode-specific lookups can restrict their lookup to within this space.

The nature of snapshotting makes handling the link-count 2->1 and 1->2 cases trivial. Basically the inode media structure is copied as needed to break up or re-form the standard directory entry/inode. There are no backpointers in HAMMER2 and no reference counts on the blocks (see FREEMAP NOTES below), so it is an utterly trivial operation.

			       FREEMAP NOTES

In order to implement fast snapshots (and writable snapshots for that matter), HAMMER2 does NOT ref-count allocations. The freemap, which is still under design, just won't do that. All the freemap does is keep track of 100% free blocks.

This not only trivializes all the snapshot features, it also trivializes hardlink handling and solves the problem of keeping the freemap synchronized in the event of a crash. Now all we have to do after a crash is make sure blocks allocated before the freemap was flushed are properly marked as allocated in the allocmap. This is a trivial exercise using the same algorithm the mirror streaming code uses (which is very similar to HAMMER1)... an incremental meta-data scan that covers only the blocks that might have been allocated between the last allocation map sync and now.

Thus the freemap does not have to be synchronized during a fsync().

The complexity is in figuring out what can be freed... that is, when one can mark blocks in the freemap as being free. HAMMER2 implements this as a background task which essentially must scan available meta-data to determine which blocks are not being referenced.
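As a rough model of that background task (purely illustrative; the real scan walks the block tables incrementally rather than over a flat array), a pass collects every block still referenced by meta-data and then marks everything else 100% free:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Rough model of the background free-block pass (illustrative only).
 * Because blocks are not ref-counted, a block can be marked free only
 * when a meta-data scan finds no remaining reference to it.  A plain
 * bitmap stands in for the real freemap here.
 */
#define NBLOCKS 64

static void
freemap_sweep(const uint64_t *refs, size_t nrefs, uint8_t *freemap)
{
	uint8_t referenced[NBLOCKS] = { 0 };

	/* phase 1: scan meta-data, remember every referenced block */
	for (size_t i = 0; i < nrefs; ++i)
		referenced[refs[i]] = 1;

	/* phase 2: anything unreferenced is 100% free */
	for (size_t b = 0; b < NBLOCKS; ++b)
		freemap[b] = !referenced[b];
}
```

Note that a block referenced from several places (a snapshot and the live tree, say) needs no count; one surviving reference is enough to keep it allocated.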
Part of the ongoing design work is finding ways to reduce the scope of this meta-data scan so the entire filesystem's meta-data does not need to be scanned (though in tests with HAMMER1, even full meta-data scans have turned out to be fairly low cost). In other words, it is an area that we can continue to improve on as the filesystem matures. Not only that, but we can completely change the freemap algorithms without creating incompatibilities (at worst simply having to require that an R+W mount do a full meta-data scan when upgrading or downgrading the freemap algorithm).

				CLUSTERING

Clustering, as always, is the most difficult bit but we have some advantages with HAMMER2 that we did not have with HAMMER1. First, HAMMER2's media structures generally follow the kernel's filesystem hierarchy. Second, HAMMER2's writable snapshots make it possible to implement several forms of multi-master clustering.

The general mechanics for most of the multi-master clustering implementations will be as follows:

    (a) Use the copies mechanism to specify all elements of the cluster, both local and remote (networked).

    (b) The core synchronization state operates just as it does for copies, simply requiring a fully-flushed ack from the remote in order to mark the blocks as having been fully synchronized. The mirror_tid may be used to locate these blocks, allowing the synchronization state to be updated on the fly at a much later time without requiring the state to be maintained in-memory (also for crash recovery resynchronization purposes).

    (c) Data/meta-data can be retrieved from those copies which are marked as being synchronized, with priority given to the local storage relative to any given physical machine. This means that, e.g., even in a master-slave orientation the slave may be able to satisfy a request from a program when the slave happens to be the local storage.
    (d) Transaction id synchronization between all elements of the cluster, typically through masking (assigning a cluster number using the low 3 bits of the transaction id).

    (e) General access (synchronized or otherwise) may require cache coherency mechanisms to run over the network. Implementing cache coherency is a major complexity issue.

    (f) General access (synchronized or otherwise) may require quorum agreement, using the synchronization flags in the blockrefs to determine whether agreement has been reached. Implementing quorum voting is a major complexity issue.

There are lots of ways to implement multi-master environments using the above core features but the implementation is going to be fairly complex even with HAMMER2's feature set.

Keep in mind that modifications propagate all the way to the super-root and volume header, so in any clustered arrangement the use of (modify_tid) and (mirror_tid) is critical in determining the synchronization state of portion(s) of the filesystem. Specifically, since any modification propagates to the root, the (mirror_tid) in higher level directories is going to be in a constant state of flux. This state of flux DOES NOT invalidate the cache state for these higher levels of directories. Instead, the (modify_tid) is used on a node-by-node basis to determine cache state at any given level, and (mirror_tid) is used to determine whether any recursively underlying state is desynchronized.

* Simple semi-synchronized multi-master environment.

  In this environment all nodes are considered masters and modifications can be made on any of them, and then propagate to the others asynchronously via HAMMER2 mirror streams. One difference here is that the kernel can activate these userland-managed streams automatically when the copies configuration is used to specify the cluster.

  The only type of conflict which isn't readily resolvable by comparing the (modify_tid) is when file data is updated.
  In this case user intervention might be required but, theoretically, it should be possible to automate most merges using a multi-way patch and, if not, to choose one copy and create backup copies of the others to allow the user or sysop to resolve the conflict later.

* Simple fully synchronized fail-over environment.

  In this environment there is one designated master and the remaining nodes are slaves. If the master fails all remaining nodes agree on a new master, possibly with the requirement that a quorum be achieved (if you don't want to allow the cluster to split). If network splits are allowed, each sub-cluster operates in this mode but recombining the clusters reverts to the first algorithm. If not allowed, whoever no longer has a quorum will be forced to stall.

  In this environment the current designated master is responsible for managing locks for modifying operations. The designated master will proactively tell the other nodes to mark the blocks related to the modifying operation as no longer being synchronized, while any local data at the node that acquired the lock (master or slave) remains marked as being synchronized. The node that successfully gets the lock then issues the modifying operation to both its local copy and to the master, marking the master as being desynchronized until the master acknowledges receipt.

  In this environment any node can access data from local storage if the designated master copy is marked synchronized AND its (modify_tid) matches the slave copy's (modify_tid).

  However, if a slave disconnects from the master and then reconnects, the slave will have lost the master's desynchronization stream and must mark its root blockref for the master copy HAMMER2_BREF_DESYNCHLD as well as clear the SYNC1/SYNC2 bits. Setting DESYNCCHLD forces on-demand recursive reverification that the master and slave are (or are not) in sync, in order to reestablish on the slave the synchronization state of the master.
That might be a bit confusing, but the whole point here is to allow read accesses to the filesystem to be satisfied by any node in a multi-master cluster, not just by the current designated master.

* Fully cache coherent and synchronized multi-master environment.

In this environment a quorum is required to perform any modifying action. All nodes are masters (there is no 'designated' master) and all nodes connect to all other nodes in a cross-bar. The quorum is specified by the copies setup in the root volume configuration. A quorum of nodes in the cluster must agree on the copies configuration. If they do not, the cluster cannot proceed to mount. Any other nodes in the cluster that are not in the quorum and disagree with the configuration will inherit the copies configuration from the quorum.

Any modifying action will initiate a lock request locally to all nodes in the cluster. The modifying action is allowed to proceed the instant a quorum of nodes respond in the affirmative (even if some have not yet responded or are down). The modifying action is considered complete once the two-phase commit protocol succeeds. The modifying action typically creates and commits a temporary snapshot on at least a quorum of masters as phase 1 and then ties the snapshot back into the main mount as phase 2.

These locks are cache-coherency locks and may be passively maintained in order to aggregate multiple operations under the same lock, and thus under the same transaction from the point of view of the rest of the quorum. A lock request which interferes with a passively maintained lock will force the two-phase commit protocol to complete and then transfer ownership to the requesting entity, thus avoiding having to deal with deadlock protocols at this point in the state machine. Since any node can initiate concurrent lock requests to many other nodes, it is possible to deadlock.
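HAMMER2's actual lock protocol is considerably more involved; the helper below is only a toy illustration (node count, grant layout, and all names are invented, not from the design document) of the majority-vote check used to decide whether a lock request has achieved quorum:

```c
#include <stdbool.h>

#define NNODES 5
#define QUORUM (NNODES / 2 + 1)    /* simple majority: 3 of 5 */

/* grants[i] holds the id of the requester that node i granted its vote
 * to (each node votes for the first lock request it happened to see). */
static bool
has_quorum(const int grants[NNODES], int requester)
{
    int votes = 0;
    for (int i = 0; i < NNODES; ++i) {
        if (grants[i] == requester)
            ++votes;
    }
    return votes >= QUORUM;
}
```

With two contenders, one side always reaches the majority; with three contenders a 2/2/1 vote split leaves every requester short of quorum, which is exactly the deadlock case discussed next.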
When two nodes initiate conflicting lock requests to the cluster, the one achieving the quorum basically wins and the other is forced to retry (go back one paragraph). In this situation no deadlock will occur. If three or more nodes initiate conflicting lock requests to the cluster, a deadlock can occur whereby none of the nodes achieves a quorum. In this case every node will know which of the other nodes was granted the lock(s). Deadlock resolution then proceeds simultaneously on the contending nodes (since they have the same information), whereby the lock holders on the losing end of the algorithm transfer their locks to one of the other nodes. The lock state and knowledge of the lock state is updated in real time on all nodes until a quorum is achieved.

* Fully cache coherent and synchronized multi-master environment with passive read locking.

This is a more complex form of clustering than the previous form. Take the previous form and add the ability to passively hold SHARED locks in addition to the EXCLUSIVE locks the previous form is able to hold. The advantage of being able to passively hold a shared lock on a sub-tree (locks can be held on single nodes or entire sub-trees) is that it is then possible for all nodes to validate a node (modify_tid) or an entire sub-tree (mirror_tid) with a very short network transaction and then satisfy a large number of requests from local storage.

* Fully cache coherent and synchronized multi-master environment with passive read locking and slave-only nodes.

This is the MOST complex form of clustering we intend to support. In a multi-master environment requiring a quorum of masters to operate, we implement all of the above plus ALSO allow additional nodes to be added to the cluster as slave-only nodes.
The difference between a slave-only node and setting up a manual mirror-stream from the cluster to a read-only snapshot on another HAMMER2 filesystem is that the slave-only node will be fully cache coherent with either the cluster proper (if connected to a quorum of masters), or with one or more other nodes in the cluster (if not connected to a quorum of masters), EVEN if the slave itself is not completely caught up.

So if the slave-only cluster node is connected to the rest of the cluster over a slow connection, you basically get a combination of local disk speeds for any data that is locally in sync and network-limited speeds for any data that is not locally in sync. Slave-only cluster nodes run a standard mirror-stream in the background to pull in the data as quickly as possible.

This is in contrast to a manual mirror-stream to a read-only snapshot (basically a simple slave), which has no ability to bypass the local storage to handle out-of-date requests (and in fact has no ability to detect that the local storage is out-of-date anyway).

-Matt
I'm trying to set it up so that my game will allow the user to turn off the sounds and allow the music already playing from a background app like iTunes to keep playing. However, even if I don't initialize the Sound Engine, the background audio is turned off when my game starts.
How can I prevent a CocosSharp app from taking the audio focus?
Hey Steven
That is not possible right now, but what platforms specifically are you looking to target for this?
I'm looking for it on iOS and Android. However, I did figure it out for the iOS side at least. At the start of the AppDelegate in the iOS-specific project I put the following lines, and then I have a flag that doesn't start my internal background music. The sound effects play over the iTunes music.
These classes are found in the AudioToolbox namespace.
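Steven's actual lines aren't quoted in the thread, but a sketch of the approach he describes, using the classic Xamarin.iOS AudioToolbox AudioSession API to opt into the Ambient category (which mixes with other apps' audio instead of silencing it), might look like this; the flag name is invented:

```csharp
// iOS-specific AppDelegate: a sketch only, not the poster's exact code.
using AudioToolbox;
using Foundation;
using UIKit;

[Register("AppDelegate")]
public partial class AppDelegate : UIApplicationDelegate
{
    // Hypothetical flag the game checks before starting its own music.
    public static bool SuppressInternalMusic { get; private set; }

    public override bool FinishedLaunching(UIApplication app, NSDictionary options)
    {
        // Request the Ambient category so iTunes/Music keeps playing;
        // the game's sound effects mix over it instead of taking focus.
        AudioSession.Initialize();
        AudioSession.Category = AudioSessionCategory.AmbientSound;
        SuppressInternalMusic = true;

        // ... normal CocosSharp startup goes here ...
        return true;
    }
}
```

Newer Xamarin.iOS code would more likely use AVAudioSession from AVFoundation, but the AudioToolbox route matches what the thread mentions.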
Hey Steven
Did you figure out how to get this working on Android?
When the application's audio session is active, the background music is paused; when the application's audio session ends, the background music should resume from where it left off. Is this possible in iOS with Xamarin?
A triangle is a closed figure with three sides. An equilateral triangle has all sides equal. The area and perimeter of an equilateral triangle can be found using the formulas below:
Area of equilateral triangle = (√3 / 4) × a²

Perimeter of equilateral triangle = 3 × a
To find the area of an equilateral triangle, the program uses the square-root function; the squaring can be done either with the power function or by plain multiplication. The math library (math.h) provides both sqrt() and pow(), so it can be used to do the calculation in the program.
The code below calculates the area and perimeter of an equilateral triangle:
#include <stdio.h>
#include <math.h>

int main() {
    int side = 5, perimeter;
    float area;

    perimeter = 3 * side;
    area = (sqrt(3) / 4) * (side * side);

    printf("perimeter is %d\n", perimeter);
    printf("area is %f", area);
    return 0;
}
perimeter is 15
area is 10.825317
A recent NDepend v2.10.2 feature is the analysis of Silverlight applications. Silverlight recently went from beta to 2.0 Release Candidate 0. In this blog post I will first compare Silverlight 2.0 RC0 and .NET Framework v3.5 SP1 assemblies. I will then compare Silverlight 2.0 RC0 and Silverlight 2.0 beta assemblies. To do so, I'll use the Build Comparison feature of NDepend.
Silverlight 2.0 RC0 vs. .NET Framework v3.5 SP1
Let's notice first that only the following assemblies can be compared: mscorlib, System, System.Core, System.Net, System.Runtime.Serialization, System.ServiceModel, System.ServiceModel.Web, System.Xml. While the .NET Framework has numerous other assemblies not supported by Silverlight, Silverlight comes with only 2 assemblies not present in the .NET Framework: System.Windows.Browser and System.Windows. Another detail: the assembly System.dll has been renamed (I think by mistake) system.dll in Silverlight?!
SELECT TYPES WHERE IsPublic AND WasAdded
Silverlight has 44 new public types. 41 of them are in the assembly System.Net, and the public class System.Xml.XmlXapResolver and the 2 enumerations System.Xml.DtdProcessing and System.Xml.NamespaceHandling are in the assembly System.Xml. Interestingly enough, it is easy to see that these System.Net types are in the assembly System of the regular .NET Framework. So the decision has been made to move these types.
SELECT METHODS WHERE WasRemoved
By visualizing the .NET Framework methods removed in Silverlight with the NDepend metric/treemap view, it is really obvious that Silverlight is a mini-mini-.NET Framework (methods removed are in blue). For the concerned assemblies, 8,129 of 9,989 types have been removed. It is even more impressive in terms of methods: 72,515 of 90,574 methods have been removed. In terms of IL instructions, 1,607,974 of 2,046,294 IL instructions have been removed, around 78.5%! It is interesting to list the 100 namespaces that have been completely discarded in Silverlight.
SELECT NAMESPACES WHERE WasRemoved ORDER BY NbTypes DESC, NbILInstructions DESC
Concerning dependencies, Microsoft did the work of removing many dependencies to .NET Framework assemblies that are not part of the Silverlight release. This can be shown with the following graph, where .NET Framework 3.5 SP1 assemblies in yellow are considered by NDepend as tier assemblies, because they are indeed not part of the list of assemblies chosen.
Concerning Silverlight internal dependencies, the following Dependency Matrix tells us that, fortunately, the Silverlight assemblies are layered, which is not the case for the regular .NET Framework. For example, in the regular .NET Framework, mscorlib.dll and System.dll are mutually dependent. Notice that we can infer from the following matrix that there are no cycles between assemblies of Silverlight, because the matrix is triangularized. Moreover, cells with a red tick represent dependencies between assemblies that have been changed (basically all of them). Cells with a red tick and a plus/minus represent dependencies between assemblies that have been created/removed especially for Silverlight.
Here is the same set of Silverlight internal dependencies represented with a graph. Here, we infer from the graph that there are no cycles between assemblies of Silverlight, because the graph is perfectly layered from top to bottom (i.e., there are no arrows that go from bottom to top).
Silverlight 2.0 RC0 vs. Silverlight 2.0 beta
The delta between Silverlight beta and RC0 is significant. 73 public types have been added (a lot of controls).
SELECT TYPES WHERE IsPublic AND WasRemoved
11 public types have been removed.
By looking at the metric view for methods added or refactored, it is pretty clear that a lot of work has been done on controls in System.Windows (methods added or refactored are in blue):
SELECT METHODS WHERE CodeWasChanged OR WasAdded
Analyzing your Silverlight application with NDepend
To make Silverlight assemblies analysis possible, we needed to tweak a bit how NDepend resolves .NET Framework assemblies. Indeed, NDepend analyzes not only the code of your application, but also the code used by your application, what we call tier code. And the .NET Framework code (mscorlib, System.Core…) is by design tier code for all .NET applications, since all .NET assemblies reference mscorlib.dll (except mscorlib.dll itself).
As shown below, we added the possibility to choose the .NET Framework targeted. There is no magic behind this, just the modification of the list of folders that contain the .NET Framework assemblies.
As you can see on the screenshot above, NDepend references Silverlight beta assemblies version 2.0.30523.8. Silverlight RC0 has the version number 2.0.30923.0, so you need to update the version number as shown below. Of course, the next version of NDepend will be updated.
I believe that Silverlight's System.Windows assembly contains various types located in the .NET Framework assemblies: WindowsBase, PresentationCore and PresentationFramework. Perhaps some comparison can be made between these?
Yes, you are right Rob, but so far NDepend can't see when a type is moved from one namespace or assembly to another. It considers that a type is deleted and then added. But by looking at the names of the added types, you are right indeed.
Lingering Misconceptions on CSS Preprocessors
I recently received this email from a reader who is just getting started as a front end developer and wanted to get into CSS preprocessing. It has a few common misconceptions in it that I hear quite often. So, blog post.
I'm still not keen on LESS. The fact that it requires adding more JavaScript to my pages puts me off. Sass seems a lot more user friendly. After reading your article I'm even more convinced. The only issue I have is the installation. To install Sass I needed to install Ruby, and to install Ruby I needed to install git, and to install git I needed to install the osxkeychain helper... For someone who is very new to using the command line, this was an awful way to get something so useful installed.
Let's start with this one:
I'm still not keen on LESS. The fact that it requires adding more JavaScript to my pages puts me off.
LESS is written in JavaScript. It does allow you to
<link> up
.less files and load the LESS compiler in a
<script> and it will process them "on the fly." But you should never use it that way, except in really specific testing situations. You should use LESS to preprocess CSS and
<link> up the
.css files it creates.
The "pre" part of preprocessing you can think of as "before you send these files to the live website."
When the speed of LESS vs. Sass is considered, it's just in how fast they can preprocess the files and turn them into CSS. It's just an authoring convenience to have it be fast. It doesn't affect your live site.
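To make the "pre" point concrete, here is a small made-up LESS snippet (the variable, selectors, and file names are invented for illustration) and the plain CSS it compiles to before anything is shipped to the browser:

```less
// site.less: compiled ahead of time, e.g. with `lessc site.less site.css`;
// the browser only ever receives the generated CSS below.
@brand: #2a9d8f;                 // variable, resolved at compile time

.button {
  color: @brand;
  .icon { margin-right: 4px; }   // nesting compiles to `.button .icon`
}

/* compiled site.css:
.button { color: #2a9d8f; }
.button .icon { margin-right: 4px; }
*/
```

The live site links only the generated .css file, so no preprocessor code ever runs in a visitor's browser.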
And then this one:
To install Sass I needed to install Ruby, and to install Ruby I needed to install git, and to install git I needed to install the oskeychain helper...
Oi! That stuff can make my head spin too. Fortunately you don't have to go through all that. Download CodeKit, have it watch your project folder, done. If you're a Windows user, try the LiveReload alpha. I'm not saying you should avoid the command line, but I am saying an app for preprocessing makes things way easier and better.
If you are on Windows, it is much easier than what it sounds like you were attempting. There is a Ruby installer and a couple of command prompts, or you can use Scout for Sass.
Here is the basic install/use of SASS.
And here is Scout
I already had ruby installed when I got Scout so you might have to do the same but I am unsure, the install could include it.
Oh and the reason I didn’t suggest LiveReload is because it didn’t work for me at all, though I haven’t tried it in a while.
I can’t agree more with Brad: getting started with Sass through the command line is really not that hard. I thought it would be, but it’s not.
I’m running both Mac and Windows environment, so I didn’t even take the time to try CodeKit since I knew it wasn’t an option for me on Windows.
On Windows, you only need to download and install the Ruby Installer (), which gives you some kind of advanced terminal able to run Ruby (sorry if I'm popularising).
On Mac, you don’t even need to install anything since the Terminal is already able to run Ruby.
Once you get there, it’s not harder. The 2 following commands install both Sass and Compass, nothing else.
* gem install sass
* gem install compass
Then, you have to initialize your project. You have to find a folder. My advice is to manually create a folder where you want, then reach it from the command line.
There are 3 commands to know here:
* “dir” (Windows) / “ls” (Mac) to look for folders; basically it shows you every folders (and files) there are where you stand in your path
* “cd folderName” to open a folder
* “cd ..” to close a folder
Once you've found your folder, the only thing left to do is to initialize your project. The second line makes Compass watch for changes in your .scss files to compile them into .css as soon as there is a change.
* compass init
* compass watch
If you want to concatenate stylesheets, you might want to go with:
* compass watch –output-style compressed
This is my way of doing it at least. If someone feels like I said some mistake, please be sure to correct me. ;)
Would just like to add another windows app for SASS. (although not free)
I personally use Fire.app and just found it easier to use and has project templates.
@Hugo, the fact that your SASS setup comment takes 7-8 paragraphs to do is a testament to the fact that it’s too complicated to setup.
This is one thing that bothers me as a developer is the tools we use are usually lacking in ease of use and user interface is an afterthought (if the tool even has one)
I’ve used Scout before and that’s exactly what we need more of in the community.
I can’t recommend Scout enough if you are on Windows. It makes the whole process cake. Also their team is helpful.
On Windows 8 Scout wouldn’t work so I reached out to them on Twitter and they got back to me quickly and pointed out that Java had to be installed first.
I have actually been meaning to make a quick writeup of getting started with Scout because the interface, while simple, does not offer guidance, so your first project setup can be just clicking and hoping to find out what happens.
Brad, I'm using LiveReload on a Windows 7/64-bit machine. I use Notepad++ for a text editor, and watch the results in a browser. I haven't gotten into Sass yet, so thanks for the links above. From the Scout video, it looks like LiveReload is included, and I could keep using Notepad++.
Just wanted to point out that the current beta version of LiveReload works fine on a Windows 7 machine. I use two monitors. One for the text editor, one for a browser to watch what I’m doing. All I have to do is Ctrl+s in the text editor, and the browser displays what I’ve written.
Very simple, very fast, has saved me tons of time and makes coding HTML/CSS/JavaScript much faster. It’s also a setup that is completely language independent. For instance, a contact form that includes PHP/MySQL works fine without changing the workflow.
Sass just seems to add a lot of extra steps to that workflow, but I’ll learn it just because there are lots of jobs out there that include it as a prerequisite. I enjoy coding in native languages, but if Sass really does save me time and adds flexibility I’ll use it. The variables and objects parts of Sass leave me unconvinced since PHP/CSS gives me the same capabilities.
@Neal: it takes 7-8 paragraphs because I explain with details. :P
Scout seems nice. I’ll definitely have to have a closer look at it. :)
If you use Visual Studio, you can download the workbench
If you must use visual studio I second workbench plugin @Trang linked too.
Anyone not required to use a visual studio, highly recommend start using sublime text 2.
Fair or not, there is a stigma associated with those with a Microsoft background doing front end work.
And nice work on clearing up the LESS thing Chris. I think that confuses a lot of people.
You can use Less with PHP too.
Yep, this is what I have used in all my projects for the past year. It works a treat, and means you can do work live on the server, if that’s your thing (remembering to change back to the static .css file once the site is live, to reduce server load).
Totally my way. Compiling less via php and there you go, totally convenient.
Doing that for a good year now: awesome!
Using a LESS or SASS-based framework such as Twitter Bootstrap helps tremendously in understanding the true power of preprocessors. It is a lot easier to see preprocessors working in action verus trying to grok everything right away.
This is true.
I knew about preprocessors but never really understood them until I started working with Bootstrap and Inuit which use LESS and SASS. After fiddling around with those pre built systems I realized how powerful preprocessors were (As well as code kit :)).
My default answer to the issue: SASS on Windows is quite easy, and does not require Ruby installation or any other command line stuff. See:
Thanks, Mark!
I would recommend setting up Node and NPM and then installing the needed preprocessors via that. There are LiveReload servers and such available as well. For instance you can set up grunt (a build system) to run a LR server and compile the CSS for you. On browser side you need to set up the plugin once but after that’s done it’s just great!
TLDR: Learn to use Node, NPM and Grunt (or equivalent). It pays off.
you’ve gone through all the work so try out yeoman.io
Hello all,
just to add that you can use also zuss if you use java.
So your answer to someone who doesn’t want to install a lot of software he is not happy with as he doesn’t know what it does is that it is not a problem as all he needs to do is to switch to another editor or install some software that constantly watches and changes a folder on his hard drive?
That reminds me of people bashing Flash because it needs to be installed, while writing browser-specific HTML5 solutions that require people to install a different browser (in a lot of cases in a beta version).
I really think the original email has some very good points, and we are right now relying on a flaky set of tools to give us a more convenient way of developing. Nobody questions the usefulness of preprocessors for the initiated, but it is quite an overhead in our development tool stack that needs documenting. This is counterproductive to getting browsers to do these things out of the box for us and standards to add the things developers want. John Allsopp really brought this point home in his Full Frontal talk, 30:40 onwards.
The web became such a success as it was dead simple to start creating something. The more tools we add the more complex we make it. If those tools are not reliable and keep changing we concentrate on the wrong thing – we should make it easier for people to use the web, not jump through hoops to write the least amount of code. Case in point is JavaScript – we abstracted all the annoyances away into libraries and now we are faced with a new mobile environment and we realise these libraries are an overhead that is too slow for that. Abstraction is in a lot of cases a short term win that hurts us later down the line.
I think it is very important to question our tools and concentrate on improving the platform instead.
Regarding your comments on JavaScript libraries, I don’t believe that example is an apples-to-apples comparison with CSS preprocessing. I do agree that choosing a library like jQuery over a speedier JS solution potentially forces a slower, less enjoyable experience on your end users. The users have no say in the matter, and indeed we should consider where those decisions are leading us. Point taken.
However, CSS preprocessors (if used correctly as Chris describes above) have no negative effect on the end user whatsoever. In fact, the minification features present in preprocessors like SASS/LESS should actually improve the end user’s experience.
In regards to making web design less accessible, I view CSS preprocessing as one of the many optional tools for use by web designers looking to make their job easier. Creating a simple website straight out of 1995 is just as easy (if not easier) today as it was back then. The barrier to entry on the web is not bigger; the breadth of possibilities is simply larger. The community has acknowledged that reality and started developing their own tools to address new challenges.
You wouldn’t discourage a young web developer from switching from Notepad to a more advanced text editor just because it requires installing a new application. So why discourage CSS preprocessing just because there’s a small setup process, which apps like CodeKit mitigate?
Sure, native support for SASS-like features in every major browser would be great. But do you really think the browser developers would consider incorporating those features if several thousands of web developers weren’t already proving their use case on a daily basis?
Chris, I agree with you, I am glad I am not alone in thinking we require a lot to do so little.
I have yet to see using preprocessors as adding efficiency to my process. I have inherited over-nested SASS and LESS files causing output selector confusion. I have also inherited projects with broken workflows using these tools that can be tricky to repair or require re-architecting in order to make a minor change causing project overruns. To add insult to injury, I find setup and mapping troubleshooting tend to use up all of the time I save with mixins and variables. The biggest reason being that in CSS, I just use find and replace in my editor and the task generally only takes a couple of seconds anyway.
Now that it's 1+ year later, I wonder, are you still coding CSS3 or have you switched to a preprocessor? Has anyone done a cost-benefit analysis of using preprocessors? When you look at the examples on their sites they are (expectedly) rigged in their favor. I want to believe that developers have a level head about this and aren't just going with "gut feeling" or not questioning the marketing on the preprocessor's site, but it seems like the industry has accepted preprocessors without due rigor.
On the Apple side, you don’t have to install Ruby, it’s already installed. It’s not up to date, but for Sass and Compass, it’s fine. “Gem install sass”, “gem install compass”, depending on your setup, maybe toss in a sudo.
But for anyone just getting started on front-end development, I would make sure that they understand a few things. One is that CSS preprocessors are the future. Personally I prefer Sass (not Scss) over Less, but that’s because I don’t like the extra markup. The other thing they should understand is that a front-end developer has to be learning ALL THE TIME (which is what makes sites like this invaluable). If you don’t love learning new things, I’d recommend another industry.
As far as installing Git, any front-end developer worth their salt should be learning that as well. Homebrew can make that a one command install.
Personally, I prefer Compass and LiveReload over Codekit. Codekit is fine too, I bought the license, but if you take your front-end development even further and start using tools like MiddlemanApp, you’ll find that LiveReload is already supported.
Excellent post. I really enjoy working with Preprocessors myself. Clarification was needed and very well put. I use LiveReload through the Mac App store simply because that is what I bought. I wish I had known about CodeKit though.
Three reasons why I prefer LESS over SASS:
1. LESS is independent of the server. If I develop a WebApp, I want it to be possible to serve it from any server, regardless of the back-end env. In addition, I often don’t even have access to the server – just to the static directory – so I can’t install and configure SASS. Most of my apps are developed against an API (I don’t have any control on server code).
2. When testing, I don’t want to compile my CSS files after each change I make, and using LESS enables that – I just insert it as a postprocessor into the index.html, and my .less files are served and compiled in runtime. Lag is usually up to one second – not that annoying.
3. When I want to create a build for my app, I can use LESS from within the Node.js script I have for “compiling” projects, and so for stable releases, I have only 1 minified CSS file, avoiding the lessjs processing time.
Sass is independent from the server too.
And with something like CodeKit or LiveReload, you can get live changes as well.
You can also concatenate Sass files and keep it all in the same file for production.
When using SASS you set it to “watch” a file and it compiles everything automatically into one (optionally minified) file.
I don't much like these pseudo-languages created to compile to another language. I use this tool (created by me) that uses standard CSS code.
I don’t know about SASS (I believe it can do some pretty programmatical stuff with loops etc) but LESS doesn’t have to be treated like another language per se — it’s more of an improved CSS syntax than a whole new language. Stuff like nested rules and variables make perfect sense to even a novice CSS developer (or should!) and the improvement in productivity and speed are huge (at least, they were for me).
As someone mentioned Fire.app here, I want to include Compass.app which is basically Fire.app but without all the HAML/ERB stuff.
It is just for SASS and Compass and I really like it. Not free but 10$ is a reasonable price I think. The app is cross-platform as well so you can use it on OS X, Linux and Windows. Still not as great as CodeKit but for non-Mac users a nice alternative.
I use LESS, and auto-compile with the handy-dandy SimpLESS tool. You just point your LESS file to your CSS file and it will auto-compile every time you save afterwards. If you happen to have incorrect syntax, it won’t compile, but will show a subtle alert on your screen telling you what line the error is on. Version 1.4 was released back in July ’12 with a (sometimes) broken installer, but until they get 1.5 out you can just launch via the .exe!
I found that the LiveReload for Windows Alpha doesn’t allow for enough configuration for the compilers. I use WinLess, a Windows GUI for less.js:. Or, if you want a full editor on Windows:. Before those I used SimpLESS:
+1 for Crunch!
For Windows, better use WinLess – imho it has developed well since the last versions and is worth a look. Now you can also set the output files directly and watch whole directories.
My basic tools for making front-end are: Sublime text 2, Scout app and CSS Reloader for Firefox.
I use Scout app on Windows to get the minimized css file. Which is used in the production. Not a big thing to setup for the projects, but the scss save time, and mixins are awesome. I can make sprites on my own in an easy, fast way :)
Sometimes I love being in the Windows/ASP.NET/Visual Studio world. For LESS, all you need to do is install a NuGet package, add a couple of lines of code to a bundle, and .NET will bundle/minify/cache your LESS on the fly and serve it to the end user as one big .css file. And in debug mode, it will just convert single files to .css without the combining/minification (so you can easily find which file the style is coming from).
then on the page
and you're done. Same for JavaScript files as well.
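The commenter's actual snippets didn't survive, but a sketch of the pattern described, using the System.Web.Optimization bundling API with a hypothetical LessTransform (whatever IBundleTransform the chosen NuGet package provides), might look like:

```csharp
// App_Start/BundleConfig.cs: a sketch only; `LessTransform` is an assumed
// name standing in for the transform supplied by the LESS NuGet package.
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        var less = new Bundle("~/bundles/site-css")
            .Include("~/Content/site.less");
        less.Transforms.Add(new LessTransform()); // compile .less to .css
        less.Transforms.Add(new CssMinify());     // minify in release mode
        bundles.Add(less);
    }
}
```

On the page, `@Styles.Render("~/bundles/site-css")` then emits either one minified stylesheet or the individual files, depending on debug mode.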
LESS is really not a problem to install on Linux and use from the command line. It’s just a few lines:
sudo apt-get install python-software-properties python g++ make
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs npm
sudo npm install -g less
Of course, it first requires you to install python and then node.js with npm. But putting these commands in a tutorial makes installation easy for someone who is new to the command line.
Compiling can then be as easy as typing lessc in the command line.
I think all the new features that preprocessors add to make development easier make them a no-brainer. They’re especially easy for the uninitiated, since you can use it as just CSS without the preprocessor features anyway. You can ease yourself into using LESS/SASS features, and as you ease yourself into them, you’ll find each feature makes something you were already doing more convenient.
It really doesn’t add too much complexity. Unless you want to build from source every time, there’s documentation and tutorials to make this stuff very accessible to the absolute beginner. You just need to learn a little bit more with regards to setup, or you might not even have to do that and just refer to the documentation if you ever need to re-install the preprocessor tools. We can improve on CSS right now, with full browser compatibility, for very little additional effort.
I would advice against using sudo to install nodejs and npm.
GhostTX has a very nice blogpost on installing Nodejs, NVM and NPM on Ubuntu 12.04, and though it is slightly dated, it has the advantage of keeping everything in the user’s home directory.
Well said, Sir Chris.
The mixin example given by LESS is exactly the reason you shouldn’t use it…. they are assigning styles to specific id’s and only using them once…terrible!
I have seen LESS files with THOUSANDS of lines of css because people use it badly.
css is designed to be object oriented, resuable and namespaced. If you find yourself writing lots of css with many duplicates then you aren’t doing it properly. please research OOCSS and namespacing and learn your craft properly, rather than using a shortcut which create more junk than Dreamweaver!
For those using Sublime text 2, this page is very handy for first installing ruby / sass as well as a build engine for a way to complile .scss files. All free and easy to follow so no excuses.
Windows folk using Visual Studio, there’s another great free extension web workbench, it has less/sass and handles the ruby / sass setup behind the scenes.
Im forced to work in a Visual Studio environment and found the latter is extremely helpful
I always despair a little when I read things like this. Any self-respecting web designer/developer should take the time to learn basic command line administration techniques. It really isn’t as hard as you think it is and you’ll become a much more efficient user of your computer for it.
Caveat: I do most of my development using Debian GNU/Linux & derivatives thereof, so my setup is probably a little different from that of all you Mac users, but the fact is that Mac OS X and GNU/Linux are both part of the Unix family of operating systems.
Should any self respecting auto maker learn to make buggy whips just to be safe?
It is 2013, instead of thousands of tutorials on basic command line usage, how about just writing a GUI?
I don’t think it’s that simple though. The beauty of css , html and javascript is that they are interpreted rather than compiled. Compiled or preprocessed languages are inevitably more complex ( with lockins to specific dev tools, linkers , naming conventions , platforms … ). So by transforming css into a compiled language, it becomes more powerful but substantially more complex — i think this is at the heart of the concern about preprocessors. This concern isn’t by saying its ok if you develop on a mac and use codekit, in my opinion.
I’ll also +1 Visual Studio & Web workbench – makes it really easy on Windows. The Web Essentials extension is supposed to support sass soon too.
I was hoping someone would have stumbled across my solution for Windows:
It’s basically a standalone wrapper for a precompiled Ruby dll and a SASS compiler. The blog post explains a little more. I’ll admit though that it is kinda slow as A sublime text build system.
I am using Compass.App for my Sass development. It works on Mac, Linux and Windows :-) I has many of the same features of CodeKit for the Mac, but works on all platforms. It’s only $10
I just today got sass working just by typing:
sudo gem install sass
sudo gem install listen
sudo gem install –version ‘~> 0.9.1′ rb-fsevent
sass –watch style/scss:style/css
you don’t even need to install Ruby!
i use simpless for less, and scout for sass. easy.
If you develop in chrome, this extension works as LiveReload. LiveReload didn’t work for me at all, so don’t bother checking the latest version. LivePage is pretty good.
For the rest, Scout or FireApp are brilliant.
First i used LESS, because it seemed more actively developed than SASS. Then i found Compass, which only works with SASS. I never wrote a single line of “regular” CSS since then. Using Compass and SASS is absolutely a must these days. It just simply saves you a ton of work when it comes to responsiveness, or cross-browser issues, CSS3 transformations, etc.
Thanks everyone, this post has helped me get up and running on the whole preprocessor thing. Im on Windows and What worked for me was:
Liveloader extension for firefox
and scout at
sorry I couldn’t figure out how to make it linkable, the markup didnt work for me
anyway, thanks everyone for the sharing. Now I need a good tutorial on how to code scss so I can learn mixins and such.
this comment is just so I can subscribe to the comments, which I forgot. No way to edit last comment.
Man!! you’re a Randy Jackson Look-Alike!!
The best place to look for SASS guides is. The homepage has 4 super simple examples on variables, mixins, nesting and extends. You can also look at Chris’s screencast to get a walk-through:
All CSS stuff aside – Compass/SASS combo wins for sprites alone! It’s insanely easy :)
I agree with Chris. Somehow we are creating tools, which actually going to make our life miserable sometime sooner or later. Any Frontend Engineer first of all need to understand the basic technologies, which browsers and platforms understand. I won’t prefer some JavaScript library to generate CSS file, might be that build up interdependency on different parts of basic Web tools.
This problem would be more severe when we want to have unified web application for desktop and tablets. One side best practices for mobile development is suggesting to avoid the JS libraries and coming up with as tiny as possible JS libs. If we want to add one JS which would give you better CSS, then its not suggested.
Personally I see CSS evolving into Sass or Less and future browsers will have a standard processor built in..
So I don’t see these technologies as “quick fixes” which will cause problems further down the line….
They are simply a step in the evolution of web development….. Remember 256 web safe colours :-)
Everything which is helpful for you, could be complex for others, but always should be independent. It doesn’t matter how many packages and dependencies needs used code, if it possible to move and re-use code. You can, but shouldn’t use prepocessors if don’t know how work selectors, priority or inheritance. Start with basics, then learn advanced technics.
I put off learning the whole css preprocessor thing for far too long. I’ve installed on both my Mac and Windows 7 based PC. Sure the PC install took a little bit more work to get Ruby up and running but once you actually start using SASS and maybe drop in some other cool little bits and pieces like Compass or Susy, you sit back and wonder what you’ve been piffling around at hand coding every line of CSS for over the years.
To broaden the discussion: What’s the sense of CSS preprocessors built with Ruby (SASS) or Node.js (LESS) if your webapp doesn’t use Rails or Node.js? If you webapp is written in .NET or .PHP why wouldn’t you just use those languages to generate a CSS file? You could use whatever variables, loops,… you need in .NET or PHP syntax that you’re already familiar with.
If you are using only basic features, I think Bart is correct that you could just as well write your CSS as a PHP or ASP template. But make sure that you only run this script once each time you modify your template, then link the generated CSS file on your pages! Serving up dynamically generated CSS files will either conflict with or bypass browser caching mechanisms for CSS files, and potentially waste CPU cycles on your server. Aside from this condition, the only real advantage I see to using a preprocessor (if restricted to basic features) is that the preprocessor’s syntax will look cleaner than a templated CSS.
PHP Example:
SCSS Example:
Preprocessors also include more sophisticated techniques, such as nested selectors. A real-world case could resemble the following:
Of course, it’s possible to accomplish the same thing using pure CSS, but it will be more difficult to read and write, especially if you have more nesting levels and longer class names. To implement the above using a PHP template would require parsing the template, and converting it into pure CSS. By the time you have done that, you are well on your way to implementing your own preprocessor, but one that lacks the numerous advanced features offered by existing preprocessors. | http://css-tricks.com/lingering-misconceptions-on-css-preprocessors/ | CC-MAIN-2015-32 | refinedweb | 5,117 | 71.95 |
------------ INTRODUCTION ------------ JSPL.pm is a bridge between Mozilla's SpiderMonkey JavaScript engine and the Perl engine. JSPL allows you to export perl functions, classes and even entire perl namespaces to javascript, then compile and execute javascript code and call javascript functions. You can pass any variable or value between both interpreters and JSPL does automatic reflexion between perl and javascript datatypes. You can start using all this by writing JavaScript code and running it with the included "jspl" shell: #!/usr/bin/jspl // This JavaScript code uses perl's features in a transparent way say('Hello World!'); say('Are you ' + Sys.Env['USER'] + '?'); if(Sys.Argv.length) say('My argv: ' + Sys.Argv.toString()); Or execute JavaScript code from perl: use JSPL; my $ctx = JSPL->stock_context; $ctx->eval(q| for (i = 99; i > 0; i--) { say(i + " bottle(s) of beer on the wall, " + i + " bottle(s) of beer"); say("Take 1 down, pass it around, "); if (i > 1) { say((i - 1) + " bottle(s) of beer on the wall."); } else { say("No more bottles of beer on the wall!"); } } |); Even use installed CPAN modules directly from JavaScript: #!/usr/bin/jspl require('Gtk2', 'Gtk2'); install('Gtk2.Window', 'Gtk2::Window'); install('Gtk2.Button', 'Gtk2::Button'); Gtk2.init(); var window = new Gtk2.Window('toplevel'); var button = new Gtk2.Button('Quit'); button.signal_connect('clicked', function() { Gtk2.main_quit() }); window.add(button); window.show_all(); Gtk2.main(); say('Thats all folks!'); ------------ INSTALLATION ------------ Prerequisites ------------- To compile and install JSPL, make sure you have SpiderMonkey's headers and libraries installed. See <> Currently this module support SpiderMonkey versions 1.7.0 to 1.8.5 Note that for SpiderMonkey after version 1.8.0, you require a C++ compiler. 
If you have build your own SpiderMonkey from sources but not installed it, set the environment variable 'JS_SRC' to the path of your build directory, normally one below SM's 'js/src' directory, and skip to "Building" below. But be aware that using JS_SRC imply that you want to use a SM _static_ build. Otherwise the simplest way to get SM's headers an libraries is to install a recent copy of the XULRunner SDK (aka Gecko SDK) or a packaged SpiderMonkey for your distribution. * Linux Most Linux distributions provide the XULRunner SDK: in Fedora it is provided in the package 'xulrunner-devel', in Debian in 'xulrunner-dev', Ubuntu distributes it in parts, you need 'libmozjs-dev' or 'libmoz185-dev'. Some linux distributions ship a 'js-devel' package that can be used too. All those should include a pkg-config's file that Makefile.PL will use to get the required compilation parameters, but different distributions use different names. An easy way to known if you has one of them installed is to execute: $ pkg-config --list-all | grep js Makefile.PL will automatically search for some known pkg-config's files, but you can select which one to use. * Windows Grab a copy of XULRunner SDK from <>, unzip it and include its "bin" directory in front of the PATH environment variable. For example, if you unzip it at e:\xulrunner-sdk, you should setup your path with: C:\> set PATH=e:\xulrunner-sdk\bin:%PATH% That way Makefile.PL can find all the required files. We use VS 6.0+ for all testing. * MacOS Untested, some hacking may be required. * Other Unixes All should work as long as Makefile.PL finds a 'pkg-config' file for your installed SM. Building -------- To build and install this module, do the following: > perl Makefile.PL > make > make test > make install In linux, you can pass the name of the pkg-config module that you want to use as a single argument to Makefile.PL, for example: > perl Makefile mozilla-js2 In Windows, substitute "make" with "nmake". 
------- SUPPORT ------- Please submit any questions, bug reports, comments, feature requests, etc., to Salvador Ortiz <sortiz@cpan.org>. I'm also subscribed to the perl-javascript@perl.org mailing list that is a proper place to discuss this module. --------- COPYRIGHT --------- Copyright (c) 2009 - 2012, Salvador Ortiz <sortiz@cpan.org>. All rights reserved. Some code adapted from JavaScript module Copyright (c) 2001 - 2008, Claes Jakobsson <claesjac@cpan.org>. This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See | https://metacpan.org/release/SORTIZ/JSPL-1.07/source/README | CC-MAIN-2021-25 | refinedweb | 708 | 59.09 |
are interested in the mobile development space, you must’ve heard by now: Xamarin.Forms is evolving into .NET Multi-platform App User Interface (MAUI).
In this tutorial, I will tell you all about the ins and outs of this change and what it might mean for you.
Don’t worry, nothing is going away!
Everything will just get faster, better and simpler for you – the developer.
For those who are not familiar with what Xamarin and Xamarin.Forms is all about, let me quickly refresh your memory.
Before Xamarin was Xamarin, it had a different name and was owned by several different companies, but that is not relevant to this story.
In 2011, Xamarin, in its current form, was founded by Miguel de Icaza and Nat Friedman. With Xamarin, they built a solution with which you can develop cross-platform applications on iOS, Android and Windows, based on .NET and C#. Nowadays you can even run it on macOS, Tizen, Linux and more!!
Since developing with Xamarin was all based on the same language, you could share your code across all supported platforms, and thus reuse quite a bit.
The last piece that wasn’t reusable was the user interface (UI) of each platform.
In 2014, Xamarin.Forms was released as a solution to overcome that problem. With Forms, Xamarin now introduced an abstraction layer above the different platforms’ UI concepts. By the means of C# or XAML, you were now able to declare a Button, and Xamarin.Forms would then know how to render that button on iOS, and that same button on Android as well.
With this in place, you would be able to reuse up to 99% of your code across all platforms.
In 2016 Xamarin was acquired by Microsoft. Together with this acquisition, most of the Xamarin code became open-source and free for anyone to use under the MIT license.
If you want to learn more about the technical side of Xamarin, please have a look at the documentation here:
As already mentioned, Xamarin and Forms are free and open source today!
This means a lot of people are happily using it to build their apps – both for personal development of apps as well as to create Line of business (LOB) enterprise apps. Over the years, new tooling was introduced: Visual Studio for Mac allows you to develop cross-platform solutions on Mac hardware for Xamarin apps, and also for ASP.NET Core or Azure Functions solutions.
And of course, all the Xamarin SDKs got updated with all the latest features all the way up to iOS 14 and Android 11 which have just been announced at the time of writing.
Xamarin.Forms is no different: it has seen a lot of development over the years. New features are introduced with every new version.
Not just new features; even new controls are now “in the box”. While earlier, Forms would only render the abstraction to the native counterpart; they have now introduced some controls that are composed from other UI elements.
Effectively that means Forms now has several custom controls. Currently those are: CheckBox, RadioButton and Expander for instance.
If we go a little deeper into how Xamarin.Forms works, we quickly find something called renderers.
Each VisualElement, which is basically each element that has a visual representation (so pages and controls mostly), has a renderer. For instance, if we look at the Button again, Button is the abstract Xamarin.Forms component which will be translated into a UIButton for iOS and an Android.Button on Android.
To do this translation, Forms uses a renderer. In this case, the ButtonRenderer. Inside of that renderer, two things happen basically:
1. Whenever a new Button (or other control) is created, all the properties are mapped to their native controls’ counterparts. i.e.: the text on a Button is mapped to the right property on the targeted platform so it shows up the right way
2. Whenever a property changes on the Button (or other control) the native control is updated as well.
The renderer controls the lifecycle of that control. You might decide that you need things to look or act a bit different, or that maybe a platform-specific feature is not implemented in Forms. For those scenarios, you can create a custom renderer. A custom renderer allows you to inherit from the default renderer and make changes to how the control is rendered on a specific platform.
If you want to learn more about renderers and custom renderers, this Docs page is a good starting point:
In May 2020, the Build 2020 conference was held. Because of the current situation around the world, this was the first time this event was completely virtual, just like a lot of events. Amongst a lot of other great announcements, there was also the news that Xamarin.Forms will evolve into something called .NET MAUI. If you want to (re)watch the announcement from Build, you can see the session here on Channel 9:.
Editorial Note: Here’s a quick recap of Build 2020 for Developers.
Notice how they are using the word evolve.
This means a couple of things.
First and most importantly: nothing will be taken away from you. Everything that is in Forms today, will be available in .NET MAUI.
Second: while everything will still be available for you, things will definitely change. The team has taken all the learnings over the past few years from Forms and will incorporate that into .NET MAUI.
There will be some breaking changes. Everything that is marked deprecated today or until .NET MAUI is released, will be removed. Also, and probably most importantly, the architecture of the renderers will change, and the namespace will change.
In .NET MAUI, the renderers that are available right now will evolve to so-called slim renderers. The renderers will be reengineered and built from the ground up to be more performant. Again, this will be done in a way so that they should be useable in your existing projects without too much hassle.
The benefit you will get is faster apps out of the box.
You might wonder what will happen to your custom renderers? Well those should just keep working. There will probably be exceptions where it will cause some issues, but the goal here, again, is to keep everything as compatible as possible.
If you are wondering about some of the details that are shaping up as we speak, please have a look at the official Slim Renderers spec on GitHub:
Microsoft is showing its dedication to Xamarin.Forms.
With .NET MAUI, Forms is taken into the .NET ecosystem as a first-class citizen. The new namespace will be System.Maui. By the way, Xamarin.Essentials, the other popular library, will take the same route and you can find that in the System.Devices namespace.
As you can imagine, this is quite the change and even a breaking change. The team has every intention of providing you with a transition path or tool that will make the switch from Forms to .NET MAUI, as pain free as possible.
If you have worked with Xamarin.Forms today, you know that you will typically have at least three projects: the shared library where you want all your code to be so it can be reused, an iOS project and an Android project. For each other platform that you want to run on, you will have to add a bootstrap project in your solution.
While this is technically not a feature of .NET MAUI, .NET MAUI is the perfect candidate for this. In the future, you will be able to run all the apps from a single project.
Figure 2: Screenshots of how the single project structure might look like
With the single project structure, you will be able to handle resources like images and fonts from a single place instead of per platform. Platform-specific metadata like in the info.plist file will still be available. Writing platform-specific code will happen the same way as you would write multi-targeting libraries today.
See the bottom-right most screenshot in Figure 2.
Another thing that has been announced is that .NET MAUI will be supported in Visual Studio Code (VS Code). This has been a long-standing wish from a lot of developers, and it will finally happen. Additionally, everything will be available in the command-line tooling as well, so you can also spin up your projects and builds from there if you wish.
Xamarin.Forms, and other Microsoft products for that matter, have mostly been designed to work with the Model-View-ViewModel (MVVM) pattern.
With .NET MAUI, this will change.
While MVVM will still be supported (again, nothing is taken away), because of the new renderer architecture, other patterns can be implemented now.
For instance, the popular Model View Update (MVU) pattern will now also be implemented. If you are curious what that looks like, have a look at the code below.
readonly State count = 0;
[Body]
View body() => new StackLayout
{
new Label("Welcome to .NET MAUI!"),
new Button(
() => $"You clicked {count} times.",
() => count.Value ++)
)
};
This can even open the door to completely drawn controls with SkiaSharp for instance. This is not in any plans right now, but it’s certainly a possibility, even if it comes from the community.
After all the good news, you’re probably excited to get started, right now!
Unfortunately, it will be a while before the evolution is complete. The first preview is expected together with the .NET 6 preview which should happen in Q4 2020. The first release of .NET MAUI will happen a year after that, again with the release of the .NET 6 final; November 2021. For a more detailed roadmap, have a look at the wiki on the repository:.
However, you can already be involved today. All the new plans, features, enhancements and everything will be out in the open. You can head over to the repository right now and let the team know what is important to you.
There are already lively discussions happening about all kinds of exciting new things. Also, the code is there too, so you can follow progress and even start contributing to be amongst the first contributors of this new product.
You can find the repository here:
After the release of .NET MAUI, Forms will be supported for another year. That means; it will still get bugfixes and support until November 2022. That should give you enough time to transition to .NET MAUI with your apps.
There is also a big community supporting Xamarin and Forms, so this will also give library authors all the time they need to adapt to this new major version.
As you might have already gotten from all the new names and namespaces, the brand Xamarin is bound to disappear. Also, the iOS and Android SDKs will be renamed to .NET for iOS and .NET for Android.
I think this was always expected from the beginning when Microsoft took over. It’s just that these transitions take time.
Of course, this is very sad, the monkeys, logo and all that belongs to the Xamarin name will be history. I think it’s for the best and this means that the Xamarin framework has grown up to be a technology that is here to stay – Backed by Microsoft, incorporated into .NET, your one-stop solution for everything cross-platform.
I’m very excited to see what the future will bring, and I hope you are too!
This article was technically reviewed by Damir Arh and editorially! | https://www.dotnetcurry.com/xamarin/dotnet-maui-multi-platform-app-ui | CC-MAIN-2022-05 | refinedweb | 1,937 | 65.83 |
Although not exactly fair, while reading the C++ Primer book
to refresh my C++11 knowledge during Christmas break (OK, after kids went asleep), I came across the example given for the new C++11 features (lambdas), strings and STL usage in chapter 16, which does nothing more than counting words in a text file. This is a rather classical example and I happen to have it as a part of the Python course I give to my colleagues. Of course I couldn’t hold myself from comparing C++11 and Python as C++11 actually tries to get higher level and closer to languages like Python. Here is what I’ve got.
What it does
It reads words from a text file (hard-coded file name, basic error handling provided) and counts the number of occurrences of each word in the file ignoring upper/lower case. For brevity a word is defined anything surrounded by whitespaces, so “end” and “end.” happen to be different words. So be it.
Quick and dirty summary
C++11 does the job in whopping 54 effective lines of code (ELOC) or in 42 not taking into account lines with single curly braces. The readability could be better and simple constructs seem to require quite some time to get right.
Python does it in just 17 ELOC. The code is straight and clean.
Details
Here is how it looks like
C++11 code
The following code was compiled under GNU c++ 4.6.3-1ubuntu5 using the following command
# c++ word_count.cpp -o word_count -std=c++0x
// Counts words in a text file in C++ (using C++11) #include <iostream> #include <iomanip> // for 'setw' #include <fstream> #include <vector> #include <set> #include <map> #include <iterator> #include <algorithm> // for 'transform' #include <cstdlib> using namespace std; // Required as the default tolower accepts an integer. char to_lower2(char ch) { return tolower(ch); } string & to_lower(string & st); int main() { ifstream file_in; const char * file_name = "word_count.txt"; file_in.open(file_name); if (file_in.is_open() == false) { cerr << "Cannot open file '" << file_name << "'. Aborting.\n"; exit(EXIT_FAILURE); } // Read words into vector. vector<string> words; string item; file_in >> item; while (file_in) { words.push_back(item); file_in >> item; } cout << "Words: \n"; for_each(words.begin(), words.end(), [](const string & word) {cout << word << " "; } ); file_in.close(); // Put all words in set lowercase set<string> words_set; transform(words.begin(), words.end(), insert_iterator<set > (words_set, words_set.begin()), to_lower); // Perform actual counting map<string, int> words_map; set<string>::iterator si; for (si = words_set.begin(); si != words_set.end(); si++ ) { words_map[*si] = count(words.begin(), words.end(), *si); } // Report results cout << "\n\nOccurences:\n"; for (si = words_set.begin(); si != words_set.end(); si++ ) { cout << setw(16) << left << *si << ": " << words_map[*si] << endl; } return 0; } string & to_lower(string & st) { transform(st.begin(), st.end(), st.begin(), to_lower2); return st; }
Python code
The following code was run using Python 2.7.3.
#!/usr/bin/env python # Counts words in a text file in Python. words = [] words_count = {} file_name = 'word_count.txt' try: with open(file_name) as file_in: # Get the words. for line in file_in: words += line.lower().split() print "Words:" for word in words: print "%s" % word, # Count the words. for word in words: words_count[word] = words_count.get(word, 0) + 1 print "\n\nOccurences:" for word, count in words_count.iteritems(): print "%-16s: %s" % (word, count) except IOError: print "Failed to open file '%s'. Aborting." % file_name
Some (biased) conclusions
Although not a fair comparison, still C++ still feels clumsy in simple string/file manipulations. It improves by providing proper containers and lambda’s support (and hey, regular expressions are also in the box!), but still using it feels rather unhandy and fails in readability against something like Python. There is too much chatty boilerplate code that needs to be written for simple manipulations. | http://blog.bidiuk.com/2014/01/cpp11-vs-python/ | CC-MAIN-2021-31 | refinedweb | 620 | 66.44 |
- NAME
- DESCRIPTION
- Usage
- Function Definitions
- C Configuration Options
- C-Perl Bindings
- The Inline Stack Macros
- Writing C Subroutines
- Examples
- SEE ALSO
- BUGS AND DEFICIENCIES
- AUTHOR
NAME
Inline::C - Write Perl Subroutines in C
DESCRIPTION
Inline::C is a module that allows you to write Perl subroutines in C. Since version 0.30 the Inline module supports multiple programming languages and each language has its own support module. This document describes how to use Inline with the C programming language. It also goes a bit into Perl C internals.
If you want to start working with programming examples right away, check out Inline::C-Cookbook. For more information on Inline in general, see Inline.
Usage
You never actually use Inline::C directly. It is just a support module for using Inline.pm with C. So the usage is always:
use Inline C => ...;
or
bind Inline C => ...;
Function Definitions
The Inline grammar for C recognizes certain function definitions (or signatures) in your C code. If a signature is recognized by Inline, then it will be available in Perl-space. That is, Inline will generate the "glue" necessary to call that function as if it were a Perl subroutine. If the signature is not recognized, Inline will simply ignore it, with no complaints. It will not be available from Perl-space, although it will be available from C-space.
Inline looks for ANSI/prototype style function definitions. They must be of the form:
return-type function-name ( type-name-pairs ) { ... }
The most common types are: int, long, double, char*, and SV*. But you can use any type for which Inline can find a typemap. Inline uses the typemap file distributed with Perl as the default. You can specify more typemaps with the TYPEMAPS configuration option.

A return type of void may also be used. The following are examples of valid function definitions:
    int Foo(double num, char* str) {
    void Foo(double num, char* str) {
    SV* Foo() {
    void Foo(SV*, ...) {
    long Foo(int i, int j, ...) {
The following definitions would not be recognized:
    Foo(int i) {              # no return type
    int Foo(float f) {        # no (default) typemap for float
    unsigned int Foo(int i) { # 'unsigned int' not recognized
    int Foo(num, str)
    double num;
    char* str; {
    void Foo(void) {          # void only valid for return type
Notice that Inline only looks for function definitions, not function prototypes. Definitions are the syntax directly preceding a function body. Also, Inline does not scan external files, like headers. Only the code passed to Inline is used to create bindings, although other libraries can be linked in and called from C-space.
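For instance, a recognized definition can be called from Perl as if it were a native subroutine. Here is a minimal sketch (it assumes the Inline module and a C compiler are installed; the function name add is made up for illustration):

    use Inline C => <<'END_C';
    int add(int x, int y) {
        return x + y;
    }
    END_C

    print add(2, 3), "\n";    # prints 5

Because add() matches the return-type function-name ( type-name-pairs ) form, Inline generates the glue and binds it automatically; a function with an unrecognized signature would simply be ignored by the binding step.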
C Configuration Options
For information on how to specify Inline configuration options, see Inline. This section describes each of the configuration options available for C. Most of the options correspond either to MakeMaker or XS options of the same name. See ExtUtils::MakeMaker and perlxs.
AUTO_INCLUDE
Specifies extra statements to be automatically included. They will be added onto the defaults. A newline char will be automatically added.
use Inline C => Config => AUTO_INCLUDE => '#include "yourheader.h"';
AUTOWRAP

Enables a mode in which bare function declarations (prototypes without bodies) are automatically wrapped so they can be called from Perl. This is useful for binding existing library functions without writing wrapper code yourself.
BOOT
Specifies C code to be executed in the XS BOOT section. Corresponds to the XS BOOT: parameter.
CC
Specify which compiler to use.
CCFLAGS
Specify extra compiler flags.
FILTERS

Specifies one or more filters to pass your code through before it is parsed. Filters are defined in the Inline::Filters module.
INC
Specifies an include path to use. Corresponds to the MakeMaker parameter.
use Inline C => Config => INC => '-I/inc/path';
LD
Specify which linker to use.
LDDLFLAGS

Specify which linker flags to use.

LIBS

Specifies external libraries that should be linked into your code. Corresponds to the MakeMaker parameter.

use Inline C => Config => LIBS => '-lyourlib';

or

use Inline C => Config => LIBS => '-L/your/path -lyourlib';
MAKE
Specify the name of the 'make' utility to use.
MYEXTLIB
Specifies a user compiled object that should be linked in. Corresponds to the MakeMaker parameter.
use.
PREFIX C => Config => PREFIX => 'ZLIB_';
TYPEMAPS
Specifies extra typemap files to use. These types will modify the behaviour of the C parsing. Corresponds to the MakeMaker parameter.
use C => Config => TYPEMAPS => '/your/path/typemap';
C-Perl Bindings
This section describes how the
Perl variables get mapped to
C variables and back again.
First, you need to know how
Perl passes arguments back and forth to subroutines. Basically it uses a stack (also known as the Stack). When a sub is called, all of the parenthesized arguments get expanded into a list of scalars and pushed onto the Stack. The subroutine then pops all of its parameters off of the Stack. When the sub is done, it pushes all of its return values back onto the Stack.
The Stack is an array of scalars known internally as
SV's. The Stack is actually an array of pointers to SV or
SV*; therefore every element of the Stack is natively a
SV*. For FMTYEWTK about this, read.
A return type of
void has a special meaning to Inline. It means that you plan to push the values back onto the Stack yourself. This is what you need to do to return a list of values. If you really don't want to return anything (the traditional meaning of
void) then simply don't push anything back.
If ellipsis or
... is used at the end of an argument list, it means that any number of
SV*s may follow. Again you will need to pop the values off of the
Stack yourself.
See "Examples" below.
The Inline Stack Macros.
- Inline_Stack_Vars
You'll need to use this one, if you want to use the others. It sets up a few local variables:
sp,
items,
axmacros.
- Inline_Stack_Items
Returns the number of arguments passed in on the Stack.
- Inline_Stack_Item(i)
Refers to a particular
SV*in the Stack, where
iis an index number starting from zero. Can be used to get or set the value.
- Inline_Stack_Reset
Use this before pushing anything back onto the Stack. It resets the internal Stack pointer to the beginning of the Stack.
- Inline_Stack_Push(sv)
Push a return value back onto the Stack. The value must be of type
SV*.
- Inline_Stack_Done
After you have pushed all of your return values, you must call this macro.
- Inline_Stack_Return(n)
Return
nitems on the Stack.
- Inline_Stack_Void.
Writing C Subroutines
The definitions of your C functions will fall into one of the following four categories. For each category there are special considerations.
int Foo(int arg1, char* arg2, SV* arg3) {
This is the simplest case. You have a non
voidreturn type and a fixed length argument list. You don't need to worry about much. All the conversions will happen automatically.
void Foo(int arg1, char* arg2, SV* arg3) {
In this category you have a
voidreturn_Doneto mark the end of the return stack.
If you really want to return nothing, then don't use the
Inline_Stack_macros. If you must use them, then set use
Inline_Stack_Voidreturn type and an unfixed number of arguments. Just combine the techniques from Categories 3 and 4.
Examples
Here are a few examples. Each one is a complete program that you can try running yourself. For many more examples see Inline::C-Cookbook.
Example #1 - Greetings
This example will take one string argument (a name) and print a greeting. The function is called with a string and with a number. In the second case the number is forced to a string.
Notice that you do not need to
#include <stdio.h>. The
perl.h header file which gets included by default, automatically loads the standard C header files for you.
use Inline C; greet('Ingy'); greet(42); __END__ __C__ void greet(char* name) { printf("Hello %s!\n", name); }
Example #2 - and Salutations
This is similar to the last example except that the name is passed in as a
SV* (pointer to Scalar Value) rather than a string (
char*). That means we need to convert the
SV to a string ourselves. This is accomplished using the
SvPVX function which is part of the
Perl internal API. See)); }
Example #3 - Fixing the problem
We can fix the problem in Example #2 by using the
SvPV function instead. This function will stringify the
SV if it does not contain a string.
SvPV returns the length of the string as it's second parameter. Since we don't care about the length, we can just put
PL_na there, which is a special variable designed for that purpose.
use Inline C; greet('Ingy'); greet(42); __END__ __C__ void greet(SV* sv_name) { printf("Hello %s!\n", SvPV(sv_name, PL_na)); }
SEE ALSO
BUGS AND DEFICIENCIES
If you use C function names that happen to be used internally by Perl, you will get a load error at run time. There is currently no functionality to prevent this or to warn you. For now, a list of Perl's internal symbols is packaged in the Inline module distribution under the filename
'symbols.perl'. Avoid using these in your code.
AUTHOR
Brian Ingerson <INGY@cpan.org>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See | https://metacpan.org/release/INGY/Inline-0.43/view/C/C.pod | CC-MAIN-2021-39 | refinedweb | 1,466 | 67.55 |
django-filer is a great tool for reusing uploaded content across Django sites. It's an easy choice for new projects, but what about existing projects? Painless steps for migrating existing image and file uploads to django-filer.
When you want to add image support to your Django app, like allowing
content editors to upload images, how do you do that? Probably by using
Django’s built in
ImageField model field, which will store the file on
the file system (or other storage backend). But what about when you want
to be able to reuse images? Using a plain
ImageField means you have to
upload images for each new use or perhaps create your own
Image model
as a related entity… neither option tends to work out well for the
people actually using the site.
For Django projects, the best solution we’ve found so far to this problem is django-filer. Filer is an application that combines convenient model fields for developers with a familiar file tree like structure for users. Combined with standard image fields like captions and alt text, as well as extra thumbnailing support, filer stands out as a superior way to manage image content.
This is all well and good when you’re starting a new project, but it’s never too late to make the transition to django-filer. Here we’ll walk through the process of rescuing your image content and surfacing it for your users.
Installation is a perfunctory first step. You’ll need to download django-filer (preferably by adding it to your project’s pip requirements file) along with easy-thumbnails, which filer uses to show, well, thumbnails of your images.
pip install easy_thumbnails django-filer
Next add it to your
INSTALLED_APPS tuple:
INSTALLED_APPS = ( 'easy_thumbnails', 'filer', 'breeds', )
‘breeds’ listed above is an app which tracks dog breeds in our imaginary project.
Filer maintains its own database tables for tracking files and folders, so you’ll need to migrate the database changes it introduces.
python manage.py migrate
At this point you should be ready to start uploading media using the filer backend. However to make any significant use of the filing system you’ll need to add filer’s fields to your models.
Here’s a snippet from the basic breed model used to show different breeds of dog:
class Breed(models.Model): name = models.CharField(max_length=140) image = models.ImageField(upload_to="breeds") description = models.TextField(blank=True)
Using the Django admin to add and edit breeds, each form will have a standard file input like this:
What we want is the ability to upload a new image or select an existing image.
Given that we have a bunch of content already present, the first thing we need to do is add the new filer field in addition to the existing fields.
from filer.fields.image import FilerImageField class Breed(models.Model): name = models.CharField(max_length=140) image = models.ImageField(upload_to="breeds") description = models.TextField(blank=True) img = FilerImageField(null=True)
Pretty easy! You’ll notice two important things right off the bat. The first being that we have to use a new name for this field since two fields can’t share the same name, and two that the new field is nullable.
Regardless of what you want your final data model to look like, the column has to be nullable in our first step in order to simply add the database column. After we’ve added all the content in you can go ahead and remove this ‘feature’ if you want.
The big step then is the data migration. We need to move all of the data from the old image fields to the new image fields. The nice thing is that we don’t need to move the files themselves! That’s a misconception I’ve heard voiced before, but in reality all we need to do is ensure we capture the references to these files and then delete the old references
For current versions of Django that looks like this:
python manage.py makemigrations --empty breeds
Using South with an older version of Django, the command looks like this:
python manage.py datamigration breeds migrate_to_filer_images
That will create a data migration file named
migrate_to_filer_images
for the app
breeds.
In our data migration we’re going to cycle through all of the existing
Breed instances and either find or create new
FilerImage instances
for each image path.
def create_filer_images(apps, schema_editor): from filer.models import Image Breed = apps.get_model('breeds', 'Breed') for breed in Breed.objects.all(): img, created = Image.objects.get_or_create(file=breed.image.file, defaults={ 'name': breed.name, 'description': breed.description, }) breed.img = img breed.save() class Migration(migrations.Migration): dependencies = [ ('filer', '0002_auto_20150606_2003'), ('breeds', '0002_breed_img'), ] operations = [ migrations.RunPython(create_filer_images), ]
And using South for older versions of Django:
class Migration(DataMigration): def forwards(self, orm): from filer.models import Image from breeds.models import Breed for breed in Breed.objects.all(): img, created = Image.objects.get_or_create(file=breed.image.file, defaults={ 'name': breed.name, 'description': breed.description, }) breed.img = img breed.save() def backwards(self, orm): pass
The first thing to notice is the no good, very bad, terrible thing here,
directlyly importing the models into the migration file. This is
exactly what the default migration template tells you not to do!
There are good reasons for not doing this, generally, however here
following the guidelines doesn’t work. Filer uses multitable inheritence
to subclass
File in the
Image model, so South’s internal schema (and
likewise the subsequence Django machinery)
doesn’t see a relationship between our table and the
Image table. So
instead we import the models with the implicit understanding that we’ll
squash these migrations later (its terrible anyhow to find long since removed
apps in your migrations).
The next thing to notice is that we’re using the
get_or_create method
here to avoid creating duplicates. We shouldn’t find any, but this is
an excellent way to avoid problems with edge cases. We can populate some
of the initial data from our model directly and change it later as
desired.
The
ImageField on our model is really a foreign key so we need to
create our
Image instance and then assign it to the individual breed.
We have filer images now so we’re ready to start using them.
A simple URL reference like this:
<img src="{{ MEDIA_URL }}{{ breed.image.url }}" />
Now references the
image attribute using the
ImageField as a foreign
key:
<img src="{{ MEDIA_URL }}{{ breed.img.image.url }}" />
If you happen to be using easy-thumbnails you’re simply change the field name provided to the template tag, from this:
<img src="{% thumbnail breed.image 400x200 %}" />
To this:
<img src="{% thumbnail breed.img 400x200 %}" />
If for some reason it turns out that changing your templates is too much of a hassle, keep reading for a few alternatives.
Similarly with templates you’ll need to update any forms and views. This is usually pretty straightforward, with the exception of any custom validation or data handling.
As with templates there’s an alternative way of getting around this at
least for simple cases. Any code in your forms or views that references
the image field as an image field will need to be updated to ensure
comptability with the foreign key presented by the
ImageField.
The last step is swapping out the old field. The primary way of doing this is to make the old field nullable and ensure it’s no longer required. You can take care of this in your forms, and if you’re using the Django admin’s default ModelForm you’ll need to ensure this field is allowed to be blank.
class Breed(models.Model): name = models.CharField(max_length=140) image = models.ImageField(upload_to="breeds", null=True, blank=True) img = FilerImageField(null=False) description = models.TextField(blank=True)
The follow up here would be to remove the old field altogether. This, however, is a post-deployment step. You should only do this once you’re ready to squash or remove your migrations, since the way we’ve implemented the data migration here depends on the presence of specific fields on the model. Simplest way to do this? Just remove the content from the data migration so that it does nothing and imports none of your models.
This kind of data migration is a one-shot migration to deal with legacy content. Once you’ve executed it you don’t need it anymore. You won’t be running the migration again in your production environment, only in fresh environments like test or development machines, in which case there is no legacy content. So if you do decide to get rid of the old field and/or rename the new field, clean up that data migration first.
I referenced a couple of work arounds with regard to changing the field
name in the rest of your code, i.e. templates, forms, views, etc. Both
options require that you’ve gone ahead and removed the original field.
The first is to add a model property with the name of the old field.
This should return a file instance just like the
models.ImageField
would.
class Breed(models.Model): name = models.CharField(max_length=140) img = FilerImageField(null=False) description = models.TextField(blank=True) @property def image(self): return self.img.file
If, say, what you’re primarily worried about is templates and you happen to be using easy-thumbnails then there’s an alternate solution: rename the new field to that of the old field. You’ll need to specify the database column name to avoid having to do yet another migration, a rather pointless one by this time.
class Breed(models.Model): name = models.CharField(max_length=140) image = FilerImageField(null=False, db_column="img_id") description = models.TextField(blank=True)
The key to everything here is ensuring that you have the required sequence of database migrations.
Learn from more articles like this how to make the most out of your existing Django site. | https://wellfire.co/learn/migrating-to-django-filer/ | CC-MAIN-2019-04 | refinedweb | 1,672 | 56.76 |
The objective of this post is to explain how to connect the ESP8266 to a Flask Webserver and send a HTTP GET Request.
The Python code
For this example, we will use Flask to deploy a simple webserver that will listen to HTTP GET requests on a certain URL and output a simple text message to the client. The code will be very similar to the one of this previous post, where we explain the basics of Flask.__)
Then, we will declare the route where our web server will be listening to incoming requests. We will use the /helloesp URL. The handler function for this route will just return a simple hello message, as seen bellow.
@app.route('/helloesp') def helloHandler(): return 'Hello ESP8266, from Flask'
Finally, we run our application with the run method.
app.run(host='0.0.0.0', port= 8090)
So, as indicated by the arguments of the run method, our server will be listening on port 8090 and on the machine default IP address. You can check here in more detail the meaning of the 0.0.0.0 IP address. But, to sum up, in the context of servers, 0.0.0.0 means all IPv4 addresses on the local machine [1], which is what we want so the server becomes available on our local network.
The full code for the server can be seen bellow.
from flask import Flask app = Flask(__name__) @app.route('/helloesp') def helloHandler(): return 'Hello ESP8266, from Flask' app.run(host='0.0.0.0', port= 8090)
I recommend to do a quick test without the ESP8266 to check for problems in the server code. This way is much easier to debug than running the whole code and trying to figure out in which component the problems are.
So, we can run the code, for example, from IDLE, the Python IDE.
Since we want to confirm that our server is available in the network, we need to discover its IP on the network. In windows, we can do it from the command line using the ipconfig command. On Linux, we can use the ifconfig command.
After discovering our IP address on the network, we can test the server using, for example, a web browser or a tool like Postman, which allows us to do HTTP Requests very easily.
Ideally, if you have another computer on your network, you can send a HTTP request from there, to the previously discovered IP address. If not, you can test it from the same machine, although it will only allow to confirm that the server code is running correctly, not that it is reachable from other devices on the network.
As seen in figure 1, I tested it from Postman. So, I sent a GET request on the following URL: IP:port/route, where the /route is equal to the /helloesp we defined early. In postman, we don’t need to put the http://.
Figure 1 – Testing the server code with Postman.
If you don’t wan’t to use Postman, simply open a web browser and put in the address bar, as seen in figure 2.
Figure 2 – Testing the server code from a web browser.
Important:. So, our use case only works if the ESP8266 is connected to the same router of the computer that is running the Flask server.
The ESP8266 code
The ESP8266 code will also be based on a previous post, which you can check for more details on the functions used.
First, we need do include some libraries that will allow us to connect to a WiFi Network and to send HTTP requests.
#include <esp8266wifi.h> #include <esp8266httpclient.h>
Now, in the setup function, we will start a serial connection, so we can print the output of the request to our server. We will also connect to the WiFi network.
void setup () { Serial.begin(115200); WiFi.begin("YourNetwork", "YouNetworkPassword"); while (WiFi.status() != WL_CONNECTED) { delay(1000); Serial.println("Connecting.."); } Serial.println("Connected to WiFi Network"); }
Our code to make the request will be now specified in the main loop function. First, we declare an object of class HTTPClient, which we will simply call http. This class provides the methods to create and send the HTTP request.
After that, we call the begin method on the http object and pass the URL that we want to connect to and make the GET request. The URL will be the one specified in the previous section. Note that in this example code I used the IP of my machine where the Flask server is running. You should change it to the one you discovered with the ipconfig or ifconfig command.
HTTPClient http; //Declare an object of class HTTPClient http.begin(""); //Specify request destination
Then, we send the request by calling the GET method on the http object. This method will return the status of the operation. If the value is greater than 0, then it’s a standard HTTP code. If the value is less than 0, then it’s a client error, related with the connection. All available error codes for this method are listed here.
So, if the code is greater than 0, we can get and print the response payload, by calling the getString method on the http object. If not, we print an error message.
int httpCode = http.GET(); //Send the request if (httpCode > 0) { //Check the returning code String payload = http.getString(); //Get the request response payload Serial.println(payload); //Print the response payload }else Serial.println("An error ocurred");
Finally, we call the end method. This is very important to close the TCP connection and thus free the resources.
http.end(); //Close connection
The full code can be seen bellow. Note that we introduced a delay of 10 seconds between each GET request.
#include <ESP8266WiFi.h> #include <ESP8266HTTPClient.h> void setup () { Serial.begin(115200); WiFi.begin("YourNetwork", "YouNetworkPassword"); while (WiFi.status() != WL_CONNECTED) { delay(1000); Serial.println("Connecting.."); } Serial.println("Connected to WiFi Network"); } }else Serial.println("An error ocurred"); http.end(); //Close connection } delay(10000); //Send a request every 10 seconds }
Testing everything
To test the full system, just upload the code to the ESP8266 using the Arduino IDE, with the Flask server running. You should get something similar to figure 3.
Figure 3 – Output of the ESP8266 application.
We can also test if our ESP8266 application works well if an error occurs on the Flask server. To do so, after some successful requests, just shut down the server, wait for some error messages, and then re run the server. The ESP8266 application should handle it fine, as shown in figure 4.
Figure 4 – Response of the ESP8266 application when the server is down.
Related Posts
References
[1]
Technical details
- Python version: v3.4.2
- Flask library: v0.10.1
- ESP8266 libraries: v2.2.0
Nice writeup!
LikeLiked by 1 person
Thanks 🙂
THANK YOU SO MUCH!!!! IT HELPED ME ALOT!!
LikeLiked by 1 person
You’re welcome, I’m glad the content was useful 🙂 | https://techtutorialsx.com/2016/12/26/8948/ | CC-MAIN-2017-26 | refinedweb | 1,170 | 66.33 |
Since there is no official thread for increasing the system partition size (not data partition size as there is one already), I've decided to make a thread.
Disclaimer:
#include <std_disclaimer.h>
/*
*.
*/
Make a backup!! Why am I saying this again? Because there is always a few careless people who do this process and complain how they lost all of their data.
If you try to use the modified system partition with stock rom, your rom will not boot! You must revert to stock system partition sizes first then install stock rom!
What you need:
- A Samsung Galaxy S2 i9100 16GB International variant. You must have this model otherwise you will brick your device! Other models can use the lanchon repit tool.
- A kernel in .tar format. I highly recommend gustavo kernel as it doesn't suffer the formatting problem that dorimanx (using default recovery) or apolo kernel (TWRP) does. Pre-made files Update: DorimanX kernel supports isorec recoveries in the newest builds. Flashing a isorec recovery will remove this formatting limit and will be usable in all partition sizes.
- System pit files (Provided in the attachments)
- A PC (preferably Windows, if using OS X or linux, use JOdin instead!)
- A archiving/compression utility such as 7-zip or winrar. I recommend 7-zip because it is for free and it is open source.
- ODIN (to flash the kernel of course, again use JOdin for linux and OS X). Also, I have made ODIN installation easier with fewer options!
Benefits
Why do people increase the partition sizes? Here are some reasons why:
- Can allocate more storage space for more apps (removes the need for moving apps to sd card)
- Allows people to upgrade to android lollipop or marshmallow easier (if you don't want to go through any messages saying that there isn't enough space)
- Since it will delete everything, this will restore all lost space taken up by apps and other programs
- Gapps (Google Apps) will be much easier to install (you can choose larger gapps file if you want, only if you increase system partition). For CM12.1 and CM13, re-partitioning has become a necessity before flashing current gapps packages.
- Can fix soft bricks. However hard bricks can not be solved this way!
- Can fix problems with partition errors
Please note that this guide requires a PC and a working USB jack. If you don't meet these requirements, use lanchon's tool instead!
CM13 Nightlies have been released. Please note that the link to the premade .tar kernels are compatible up to Android Lollipop. You will still be able to repartition your device using Lollipop kernel! Since you will be reflashing or restoring CM13 during the process, marshmallow kernels are not required!
Instructions
- Make a recovery backup (CWM or TWRP) onto your external sd card or PC (VERY IMPORTANT!!! All backups in internal sd card will be deleted so move them!)
- Download the system pit files (attached in OP)
- Extract the pit zip file
- Open ODIN (Attached in the OP, JOdin for linux and OS X)
- Connect your phone to your PC via USB while in download mode (make sure your phone is detected on ODIN, it should say COM:{number})
- Click on the PIT button and locate and select your desired pit file
- Click on PDA or AP and select your kernel .tar file (look at this post if you don't have one: Pre-made kernel tar Link)
- Make sure that re-partition has been ticked then click start. If your phone successfully flashes, move onto the next step. If not, try changing usb ports, changing ODIN version, changing pit files or seeing if your device has a corrupt nand (or broken). If your memory chip is corrupt or broken, you won't be able to flash with ODIN (and you will need to buy a new motherboard). If you are still unsure what to do, look at the screenshot in the attachments.
- Disconnect the USB cable. Take out your phone battery, then re-insert it.
- Press and hold button combinations (home button + power button + volume up) to boot into recovery. You only need to hold for about 5 seconds.
- If you have CWM recovery, go to mounts and storage then select format /sd card0 or /internal sd card. Select default and your internal sd card should successfully format. If not, try ext4 format. If you have TWRP recovery, you will need to go to wipe>advanced and select format emmc or /sd card0, then swipe to confirm . If you can successfully format your phone here, skip steps 11, 12, 13, 14. Note: DorimanX kernel doesn't support ext4 formatted sd cards. You will need to format your sd cards to vfat or fat32 for DorimanX kernel to detect.
----IF FORMATTING FAILS IN RECOVERY----
- Connect your phone back to your PC.
- If formatting sd card in recovery fails, connect your phone to your PC. (in attachments). Also, if your phone doesn't get detected and you've done everything in this step, try changing usb ports, computers and even usb cables.
- Click format
- You can leave all of the settings to default, I personally like to change the allocation size to 4KB (4KB is good if you want to make the most out of your phone's storage while having enough speed)
- Click format.
----AFTER SUCCESSFULLY FORMATTING----
- Go back to your phone. You can choose to restore from your backup or install a new rom. If you choose to restore from a backup, skip all of the remaining steps. (Note: DorimanX kernel recoveries can not format /data partition more than 2gb, use an alternative first when restoring a backup then flash back if needed)
- In mounts and storage, choose format /data, /cache and /system (Note: DorimanX kernel recoveries can not format /data partition more than 2gb, use an alternative first then flash back if needed)
- Now you can flash a ROM as you normally would
Huge thanks to ElGamal for providing the modified pit files and metalgearhathaway for providing the stock pit file and the.gangster for providing the 1.5GB system pit files.
I have included pit files that will resize your system partition to 1GB or 1.5GB (depending on your choice).
Data partition sizes range from 3GB up to 6GB. The rest is for your internal data storage.
Everything has been nicely labeled so please pay attention to which pit file you select!
I don't recommend using the 32MB Preload pits because they can cause problems with some ROMs.
If you get bootloops, flash a pit with a smaller data partition size!
Too difficult for you? Try Lanchon's flashable repartition zip | https://forum.xda-developers.com/galaxy-s2/development-derivatives/mod-increase-partition-size-t3011162 | CC-MAIN-2018-34 | refinedweb | 1,113 | 72.46 |
Can you please tell me where to look?
I can't seem to find any working code, just some incomplete online articles :\
Can you please tell me where to look?
I can't seem to find any working code, just some incomplete online articles :\
I'd like to know if there's a custom control or something else that allows me to replicate the Facebook's lockscreen template selector UI without having to redefine manually the border's style and...
Let me clarify something before we're going on.
I'm building an app to customize the lockscreen, and that's why I need to export the image.
The app is really complex. I think that mine is a quite...
@theothernt: Nice point, I've changed it with a simple Image control.
@yan_: I've just tried your code, but it just renders the Grid.
I don't know what's your role here, if you're like...
<telerikSlideView:PanAndZoomImage Source="{Binding ImageUri}"
Stretch="Fill"
x:Name="BackgroundImage"
...
That's the first thing that I've tried, the result is a plain black picture
Background agent.
Now I'm doing it in the application just to see if everything works, but the goal is to have it working in a Background Agent
Hi there.
I don't know if mine is an usuale request or not, but I need your help to solve this thing.
I've got a component which shows data taken from a data model.
What I want to do is to...
This is a little part of my code:
phCam.GetPreviewBufferY(input.Buffer);
// NOKIA IMAGING SDK
var esf = await...
This is really a cool idea!
Do you have any idea on how to use the Imaging SDK to improve the picture?
Tried it now but it doesn't work.
Nothing moves :\
I need something that helps me tracking faces from a camera live stream.
I've tried with Microsoft's Face SDK and it works quite fine (even if it doesn't always work for no apparent reason).
...
I need to use a database in my WP8 app and I came across some tutorials about the Local Database.
Microsoft's tutorial requires me to build my classes by hand and this is quite hard as I've got...
The first is the root exception, the second one is the InnerException of the first, and the third is the InnerException of the first.
I'm dragging the control by hand, so namespace and stuff like...
Here are some more infos:
XamlParseException: [Line: 0 Position: 0]
> StackTrace :
> InnerException: TypeInitializationException: L'inizializzatore di tipo di 'Microsoft.Advertising.AdManager'...
As in the title, I can't get the AdControl to work in my project.
It loads fine in an empty project, but in the one that I'm using it throws a
TypeInitializationException that has a ...
I've tried now with this minimal page, still no success.
<phone:PhoneApplicationPage
x:Class="MyNamespace.MyClas"
xmlns=""...
Got it, I'll take a look at Conductors.
Thank you again for your help :)
Well, my custom controls have to get data from somewhere.
Actually, everything is done in the xaml.cs file of each control, but I want to separate each control's logic and it's view, that's why I'm...
Can you please give me some samples?
The objects that I'm working with are custom components, and this makes it harder to understand your reply.
I mean, if it was just about persons, students...
Hi there, I'm new in WP development and I'm trying to understand how the MVVM pattern works.
I also need to understand how it does work with inheritance and custom controls as I want to develop an... | http://developer.nokia.com/community/discussion/search.php?s=6145bc111efd9c86aaa1ccc6d0679cd3&searchid=2061742 | CC-MAIN-2014-15 | refinedweb | 623 | 73.88 |
import "google.golang.org/api/googleapi"
Package googleapi contains the common code shared by all Google API libraries.
const ( // Version defines the gax version being used. This is typically sent // in an HTTP header to services. Version = "0.5" // UserAgent is the header string used to identify this package. UserAgent = "google-api-go-client/" + Version // DefaultUploadChunkSize is the default chunk size to use for resumable // uploads if not specified by the user. DefaultUploadChunkSize = 8 * 1024 * 1024 // MinUploadChunkSize is the minimum chunk size that can be used for // resumable uploads. All user-specified chunk sizes must be multiple of // this value. MinUploadChunkSize = 256 * 1024 )
var WithDataWrapper = MarshalStyle(true)
WithDataWrapper marshals JSON with a {"data": ...} wrapper.
var WithoutDataWrapper = MarshalStyle(false)
WithoutDataWrapper marshals JSON without a {"data": ...} wrapper.
Bool is a helper routine that allocates a new bool value to store v and returns a pointer to it.
CheckMediaResponse returns an error (of type *Error) if the response status code is not 2xx. Unlike CheckResponse it does not assume the body is a JSON error document. It is the caller's responsibility to close res.Body.
CheckResponse returns an error (of type *Error) if the response status code is not 2xx.
CloseBody is used to close res.Body. Prior to calling Close, it also tries to Read a small amount to see an EOF. Not seeing an EOF can prevent HTTP Transports from reusing connections.
CombineFields combines fields into a single string.
ConvertVariant uses the JSON encoder/decoder to fill in the struct 'dst' with the fields found in variant 'v'. This is used to support "variant" APIs that can return one of a number of different types. It reports whether the conversion was successful.
Expand subsitutes any {encoded} strings in the URL passed in using the map supplied.
This calls SetOpaque to avoid encoding of the parameters in the URL path.
Float64 is a helper routine that allocates a new float64 value to store v and returns a pointer to it.
Int32 is a helper routine that allocates a new int32 value to store v and returns a pointer to it.
Int64 is a helper routine that allocates a new int64 value to store v and returns a pointer to it.
IsNotModified reports whether err is the result of the server replying with http.StatusNotModified. Such error values are sometimes returned by "Do" methods on calls when If-None-Match is used.
ResolveRelative resolves relatives such as "" and "topics/myproject/mytopic" into a single string, such as "". It strips all parent references (e.g. ../..) as well as anything after the host (e.g. /bar/gaz gets stripped out of foo.com/bar/gaz)..
VariantType returns the type name of the given variant. If the map doesn't contain the named key or the value is not a []interface{}, "" is returned. This is used to support "variant" APIs that can return one of a number of different types.(traceToken string) CallOption
Trace returns a CallOption that enables diagnostic tracing for a call. traceToken is an ID supplied by Google support.
func UserIP(ip string) CallOption
UserIP returns a CallOption that will set the "userIp" parameter of a call. This should be the IP address of the originating request.
ContentTyper is an interface for Readers which know (or would like to override) their Content-Type. If a media body doesn't implement ContentTyper, the type is sniffed from the content using http.DetectContentType. }
Error contains an error response from the server.
type ErrorItem struct { // Reason is the typed error code. For example: "some_example". Reason string `json:"reason"` // Message is the human-readable description of the error. Message string `json:"message"` }
ErrorItem is a detailed error code & message from the Google API frontend.:
Float64s is a slice of float64s that marshal as quoted strings in JSON.
Int32s is a slice of int32s that marshal as quoted strings in JSON.
Int64s is a slice of int64s that marshal as quoted strings in JSON.
MarshalStyle defines whether to marshal JSON with a {"data": ...} wrapper.
func (wrap MarshalStyle) JSONReader(v interface{}) (io.Reader, error)
MediaOption defines the interface for setting media options.(ctype string) MediaOption
ContentType returns a MediaOption which sets the Content-Type header for media uploads. If ctype is empty, the Content-Type header will be omitted.
MediaOptions stores options for customizing media upload. It is not used by developers directly.
func ProcessMediaOptions(opts []MediaOption) *MediaOptions
ProcessMediaOptions stores options from opts in a MediaOptions. It is not used by developers directly..
RawMessage is a raw encoded JSON value. It is identical to json.RawMessage, except it does not suffer from.
func (m RawMessage) MarshalJSON() ([]byte, error)
MarshalJSON returns m.
func (m *RawMessage) UnmarshalJSON(data []byte) error
UnmarshalJSON sets *m to a copy of data.
type ServerResponse struct { // HTTPStatusCode is the server's response status code. When using a // resource method's Do call, this will always be in the 2xx range. HTTPStatusCode int // Header contains the response header fields from the server. Header http.Header }
ServerResponse is embedded in each Do response and provides the HTTP status code and header sent by the server.
A SizeReaderAt is a ReaderAt with a Size method. An io.SectionReader implements SizeReaderAt.
Uint32s is a slice of uint32s that marshal as quoted strings in JSON.
Uint64s is a slice of uint64s that marshal as quoted strings in JSON.
Package googleapi imports 11 packages (graph) and is imported by 1619 packages. Updated 2019-09-14. Refresh now. Tools for package owners. | https://godoc.org/google.golang.org/api/googleapi | CC-MAIN-2019-39 | refinedweb | 910 | 59.5 |
On Sun, 8 Sep 2002, H. J. Lu wrote:> I have a very strange problem with ACPI and i386 kernel. I built an> i386 kernel with ACPI for RedHat installation since my new P4 machines> needs ACPI to get IRQ. It works fine on my ASUS P4B533-E MB with Intel> 845E chipset. However, on my Sony VAIO GRX560 which is a P4 1.6GHz> with Intel 845 chipset, the machine will reboot as soon as the kernel> starts to run. I tracked it down to CONFIG_X86_INVLPG. If I enable> it, kernel will be fine. Has anyone else seen this?Yes, I sent Marcelo the patch below on 27th Aug, it's in 2.4.20-pre5.I sent Linus a similar patch (copied to LKML) for the 2.5 tlbflush.h,but he didn't care for its "cpu_has_pge" test, nor did he put in its#define cpu_has_invlpg (boot_cpu_data.x86 > 3)replacement: I'll resend.CONFIG_M386 kernel running on PPro+ processor with X86_FEATURE_PGE mayset _PAGE_GLOBAL bit: then __flush_tlb_one must use invlpg instruction.The need for this was shown by a recent HyperThreading discussion.Marc Dietrich reported (LKML 22 Aug) that CONFIG_M386 CONFIG_SMP kernelon P4 Xeon did not support HT: his dmesg showed acpi_tables_init failedfrom bad table data due to unflushed TLB: he confirms patch fixes it.No tears would be shed if CONFIG_M386 could not support HT, but bad TLBis trouble. Same CONFIG_M386 bug affects CONFIG_HIGHMEM's kmap_atomic,and potentially dmi_scan (now using set_fixmap via bt_ioremap). Thoughit's true that none of these uses really needs _PAGE_GLOBAL bit itself.Patch below to 2.4.20-pre4 or 2.4.20-pre4-ac2: please apply.I'll mail Linus and Dave separately with the 2.5 version.Hugh--- 2.4.20-pre4/include/asm-i386/pgtable.h Thu Aug 22 20:59:51 2002+++ linux/include/asm-i386/pgtable.h Fri Aug 23 00:11:39 2002@@ -82,11 +82,19 @@ } while (0) #endif -#ifndef CONFIG_X86_INVLPG-#define __flush_tlb_one(addr) __flush_tlb()+_pge) \+ _ | http://lkml.org/lkml/2002/9/8/124 | CC-MAIN-2014-15 | refinedweb | 332 | 77.23 |
16 February 2011 06:35 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
Sales in the fourth quarter grew 8% year on year to Swfr1.7bn, while earnings before interest and taxes (EBIT) swung to gain of Swfr31m, versus a loss of Swfr23m in the same period of 2009, the specialty chemicals maker said in a statement.
Most of the firm’s business units experienced solid underlying demand for their products and services, with its industrial & consumer specialties and oil & mining services segments outperforming the rest of the group, it said.
At the regional level, the highest growth rates were in Europe and
For the full year of 2010, the company swung to a net profit of Swfr191m, compared to a loss of Swfr194m in 2009, with sales up 13% year on year at Swfr7.12bn, Clariant said.
“The double-digit sales growth… was the result of the robust global economic growth supported by restocking activities in parts of the portfolio in the first half of the year,” the firm said.
“All regions reported double-digit sales growth,” it added.
In its outlook, Clariant said it expected global economic growth to continue but at a slower pace than in 2010.
Exchange rates of the most important currencies were expected to remain volatile, it said.
Growth would mainly come from the emerging markets in the Asia Pacific and
The firm expects raw material costs to increase in the high single-digit range, with commodity prices to rise again in 2011 after remaining momentarily stable in the second half of 2010.
“Clariant expects 2011 sales growth… in the low single-digit range,” the company added.
($1 = Swfr0 | http://www.icis.com/Articles/2011/02/16/9435708/swiss-clariant-swings-to-a-net-profit-of-swfr47m-in-q4-on-demand.html | CC-MAIN-2015-06 | refinedweb | 274 | 57.61 |
Java.lang.StringBuffer.delete() Method
Description
The java.lang.StringBuffer.delete() method removes the characters in a substring of this sequence. The substring begins at the specified start and extends to the character at index end - 1 or to the end of the sequence if no such character exists. If start is equal to end, no changes are made.
Declaration
Following is the declaration for java.lang.StringBuffer.delete() method
public StringBuffer delete(int start, int end)
Parameters
start -- This is the beginning index, inclusive.
end -- This is the ending index, exclusive.
Return Value
This method returns this object.
Exception
StringIndexOutOfBoundsException -- if start is negative, greater than length(), or greater than end.
Example
The following example shows the usage of java.lang.StringBuffer.delete() method.
package com.tutorialspoint; import java.lang.*; public class StringBufferDemo { public static void main(String[] args) { StringBuffer buff = new StringBuffer("Java lang package"); System.out.println("buffer = " + buff); // deleting characters from index 4 to index 9 buff.delete(4, 9); System.out.println("After deletion = " + buff); } }
Let us compile and run the above program, this will produce the following result:
buffer = Java lang package After deletion = Java package | http://www.tutorialspoint.com/java/lang/stringbuffer_delete.htm | CC-MAIN-2015-14 | refinedweb | 192 | 51.75 |
Hello,
after days of searching, I couldn't find my problem anywhere else, so i hope you can help me with it.
I am trying to import a DLL like this:
Read.h:
dll.h:dll.h:Code:#define _WIN32_DCOM #define USING_WRAPPER_CLASS #include "dll.h" class ReadData {... }
Read.cpp:Read.cpp:Code:#if 1 #ifdef USING_WRAPPER_CLASS #import "Read.dll" no_namespace named_guids #else #import "Read.dll" no_namespace, raw_interfaces_only,\ raw_native_types, named_guids #endif #else #include "Read.tlh" #endif void CheckHRESULT(HRESULT hr, char* str = NULL) { if( SUCCEEDED(hr) ){ if( str )printf("%s\n\r",str); else printf("Success!\n\r"); } else printf("Failed...(0x%08X)\n\r",hr); }
So far it works fine, but when I am going to include the Read.h in some other *.cpp file, I get the Linker-Error mentioned above.So far it works fine, but when I am going to include the Read.h in some other *.cpp file, I get the Linker-Error mentioned above.Code:#include "ReadData.h" ReadData::ReadData(void) {... }
I hope I explained the problem enough, I really dont know how to solve it. | http://cboard.cprogramming.com/cplusplus-programming/133694-lnk1005-symbol-already-defined-dll-import.html | CC-MAIN-2015-18 | refinedweb | 181 | 62.54 |
My PHP applications are generally using classes for namespacing. The methods within these classes are defined as static.
Now that PHP has introduced Traits, I'm trying to wrap my head around when to use them. I saw some examples of using traits, but I'm thinking this could just as easily be implemented through a static class method.
A quite thorough example using a logger was listed here: traits in php – any real world examples/best practices?
But why use a Trait, if you could also use a static Logger::log()? The only thing I can think of just now, is easy access to $this.
Another example I am facing right now, is a user-exists function. Trait it, or static method it?
Can anyone shed some light on this?
After reading the comments on the question, my take on the answer is this:
Traits allow the extending of a class without it being part of the class hierarchy. There's no need for something like
class Book extends Loggable, as Book itself is not a Loggable, we just want the Loggable functionality. The functionality within the Loggable could be stuffed in a trait, therefore being able to use the Loggable methods within Book as though you were extending from it.
The advantage of using traits above the use of static methods within classes (or namespaced functions) is that the trait has access to the full class scope, also private members.
The downside of using static functions instead of traits, is tight coupling (dependencies) between the classes, which hurts reusability and can hurt unit testing (for instance when using mock services). Dependencies should be injected at runtime, which indeed increases the effort of instantiating a class/method, but allow better flexibility over the full app. This was a new insight for me. | https://codedump.io/share/XCgPXzicSBzK/1/php-when-to-use-traits-and-when-to-use-static-methods | CC-MAIN-2018-05 | refinedweb | 303 | 62.27 |
Fixed aspect ratio for a widget
Hello,
In one of my projects, I had as a business requirement (flawed or not, this is out of scope) to have a fixed aspect ratio on my top level (and unique) widget.
Basically, user should select 4:3 or 16:9 then widget keeps aspect ratio.
I dug into QWidget documentation and code but I didn't manage to do this nor did I find any good solution.
I tried catching the resize event to hack ratio but it linked to some unsatisfactory results, I tried the setPreferedHeightForWidth, but it's well named, it works only in one way...
I finally lost faith and got rid of that requirement but now that this forum exist, does any one know a clean, good way of doing it ?
Take a look at Qxt and in particular the "QxtLetterBoxWidget":
- tobias.hunger Moderators last edited by
Does "heightForWidth()": do what you need?
@tobias
No it does not : it fulfills only a partial use case. Depending on the actual resize (shrinking only horizontally / only vertically - growing only vertically / only horizontally), the result can produce a wrong ratio.
Sorry, for not having linked the exact name, setPreferedHeightForWidth was what my mind remembered of height for width....
I'll try to make an example.
- tobias.hunger Moderators last edited by
You will of course also need to set height for width in the size polizy, true. I have never tried it, not being the UI kind of guy, but I am pretty sure that should work.
Easy enough, just try to compile that
@
#include <QWidget>
#include <QApplication>
#include <QSizePolicy>
class MyWidget:public QWidget { public: MyWidget():QWidget(){}; ~MyWidget(){}; virtual int heightForWidth ( int w ) const { return w*9/16;}; }; int main (int argv, char** argc) { QApplication a(argv,argc); MyWidget w; QSizePolicy qsp(QSizePolicy::Preferred,QSizePolicy::Preferred); qsp.setHeightForWidth(true); w.setSizePolicy(qsp); w.show(); a.exec(); }
@
and resize the windows : it doesn't care about aspect ratio at all... | https://forum.qt.io/topic/5210/fixed-aspect-ratio-for-a-widget | CC-MAIN-2019-43 | refinedweb | 328 | 50.87 |
Hi, I am trying to learn c++ from a book called c++ primer. I am using vs .net 7.1
This will sound like a silly question but has c++ changed heaps in this compiler?
When i start a new win32 console app this is what it looks like
#include "stdafx.h"
int _tmain(int argc, _TCHAR* argv[])
{
return 0;
}
_tmain?? whats that?
Any way, i tried putting this line above return
cout<<"hello world";
and i get this error
error C2065: 'cout' : undeclared identifier
Is my book way out of date or are there only minor changes or is it something else?
Thanks heaps for any info | http://cboard.cprogramming.com/cplusplus-programming/53191-need-help-compiling-hello-world.html | CC-MAIN-2015-48 | refinedweb | 108 | 90.19 |
Enjoys Reading, writes amazing blogs, and can talk about Machine Learning for hours.
What is Required ?
- Python, Numpy, Pandas
- Kaggle titanic dataset :
Goal
The machine learning model is supposed to predict who survived during the titanic shipwreck.
Here I will show you how to apply preprocessing techniques on the Titanic dataset.
Why do we need Preprocessing ?
For machine learning algorithms to work, it is necessary to convert the raw data into a clean data set and')
Lets take a look at the data format below
>>> df.info() <class 'pandas.core.frame.DataFrame'> Int64Index: 891 entries, 0 to 890 Data columns (total 12 columns): PassengerId 891 non-null int64 Survived 891 non-null int64 Pclass 891 non-null int64 Name 891 non-null object Sex 891 non-null object Age 714 non-null float64 SibSp 891 non-null int64 Parch 891 non-null int64 Ticket 891 non-null object Fare 891 non-null float64 Cabin 204 non-null object Embarked 889 non-null object.
Dropping Columns which are not useful
Lets try to drop some of the columns which many not contribute much to our machine learning model such as Name, Ticket, Cabin etc.
cols = ['Name', 'Ticket', 'Cabin'] df = df.drop(cols, axis=1)
We dropped 3 columns:
>>>df.info()
Dropping rows having missing values
Next if we want we can drop all rows in the data that has missing values (NaN). You can do it like
df = df.dropna()
>>>df.info()
Problem with dropping rows having missing values
After dropping rows with missing values we find that the dataset is reduced to 712 rows from 891, which means we are wasting data. Machine learning models need data for training to perform well. So we preserve the data and make use of it as much as we can. We will see it later.
Creating Dummy Variables=<span class="s1">1</span>)
We have 8 columns transformed to columns. 1,2,3 represents passenger class.
>>>df.info() PassengerId 891 non-null int64 Survived 891 non-null int64 Age 714 S 891 non-null float64
Taking Care of Missing Data
All is good, except age which has lots of missing values. Lets compute a median or interpolate() all the ages and fill those missing age values. Pandas has a interpolate() function that will replace all the missing NaNs to interpolated values.
df['Age'] = df['Age'].interpolate()
Now lets observe the data columns. Notice age which is interpolated now with imputed new values.
>>>df.info() Data columns (total 14 columns): PassengerId 891 non-null int64 Survived 891 non-null int64 Age 891
Converting the dataframe to numpy)
Dividing data set into training set and test set
Now that we are ready with X and y, lets split the dataset for 70% Training and 30% test set using scikit model_selection
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
And That's about it folks.
You have learned how to preprocess data in the titanic dataset. So go on, try it for yourself and start making your own predictions. | https://hackernoon.com/implementation-of-data-preprocessing-on-titanic-dataset-9j1n927ky | CC-MAIN-2022-40 | refinedweb | 512 | 63.59 |
Patent application title: REPRESENTING AND MANIPULATING RDF DATA IN A RELATIONAL DATABASE MANAGEMENT SYSTEM
Inventors:
Souripriya Das (Nashua, NH, US)
Eugene Inseok Chong (Concord, MA, US)
Zhe Wu (Westford, MA, US)
Melliyal Annamalai (Nashua, NH, US)
Jagannathan Srinivasan (Nashua, NH, US)
Assignees:
ORACLE INTERNATIONAL CORPORATION
IPC8 Class: AG06F700FI
USPC Class:
707693
Class name:
Publication date: 2012-04-05
Patent application number: 20120084271
Abstract:
Techniques for generating hash values for instances of distinct data
values. In the techniques, each distinct data value is mapped to hash
value generation information which describes how to generate a unique
hash value for instances of the distinct data value. The hash value
generation information for a distinct data value is then used to generate
the hash value for an instance of the distinct data value. The hash value
generation information may indicate whether a collision has occurred in
generating the hash values for instances of the distinct data values and
if so, how the collision is to be resolved. The techniques are employed
to normalize RDF triples by generating the UIDs employed in the
normalization from the triples' lexical values.
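The scheme summarized in the abstract can be suggested with a small Python sketch. The class name, the use of SHA-256, and the integer-salt collision strategy are illustrative assumptions made here for exposition only; the patent's actual form of hash value generation information is given in the detailed specification.

```python
# Illustrative sketch of hash-based UID generation with recorded
# collision-resolution information. Each distinct lexical value maps to
# "generation information" (here, an integer salt) that says how to
# regenerate its unique hash value deterministically.
import hashlib

class UidGenerator:
    def __init__(self):
        self.gen_info = {}   # lexical value -> salt recorded for it
        self.uid_owner = {}  # uid -> lexical value that owns it

    def _hash(self, value, salt):
        data = (value + "\x00" + str(salt)).encode("utf-8")
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    def uid_for(self, value):
        # Every instance of an already-seen value reuses its recorded salt,
        # so all instances of a distinct value get the same UID.
        if value in self.gen_info:
            return self._hash(value, self.gen_info[value])
        salt = 0
        uid = self._hash(value, salt)
        while uid in self.uid_owner:   # collision with a different value:
            salt += 1                  # record how the collision is resolved
            uid = self._hash(value, salt)
        self.gen_info[value] = salt
        self.uid_owner[uid] = value
        return uid

gen = UidGenerator()
u1 = gen.uid_for("http://example.org/John")   # hypothetical lexical value
u2 = gen.uid_for("http://example.org/John")   # same value yields same UID
u3 = gen.uid_for("http://example.org/Mary")
```

Because the salt is stored with the value, the mapping from lexical value to UID stays stable across loads while still resolving any collision that arises.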
Claims:
1-15. (canceled)
16. A computer implemented method employed in a relational database management system of compressing a data value which includes at least one internal delimiter, the method comprising: parsing the data value to locate the at least one internal delimiter; using the delimiter to divide the data value into a prefix and a suffix; placing the prefix and the suffix into separate fields of an entry in an object in the relational database management system, the object being specified in the relational database management system as employing compression for the separate fields including the prefix.
17. The computer implemented method set forth in claim 16, wherein: the object comprises an index for a table; and the separate fields are specified as keys for the index.
18. The computer implemented method set forth in claim 16, wherein: the object comprises a table; and the separate fields are specified as columns of the table.
19. The computer implemented method set forth in claim 16, wherein: the data value comprises a URI; and the parser uses a rightmost "/" delimiter or a # delimiter of the URI to divide the URI into the prefix and the suffix.
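The parsing step recited in claims 16-19 can be illustrated with a short sketch. Preferring a "#" delimiter over the rightmost "/" when both are present is an assumption made here for illustration; the claims require only that one of the two internal delimiters be used.

```python
# Split a URI into a (prefix, suffix) pair at its rightmost internal
# delimiter, so that the highly repetitive prefix can be stored in a
# separately compressed column or index key.
def split_uri(uri):
    pos = uri.rfind("#")          # assumed preference for "#" if present
    if pos == -1:
        pos = uri.rfind("/")
    if pos == -1:
        return "", uri            # no internal delimiter: empty prefix
    return uri[: pos + 1], uri[pos + 1:]

prefix, suffix = split_uri("http://example.org/ns#Person")  # hypothetical URI
```

Because many URIs in a model share the same prefix, placing the prefix in its own field lets the RDBMS's ordinary column or index-key compression absorb the repetition.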
Description:
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The subject matter of this patent application is related to the subject matter of US published patent application 2006/0235823, Eugene Inseok Chong, "integrating RDF data into a relational database system", filed 18 Apr. 2005 and to the subject matter of U.S. Ser. No. 12/188,267, Zhe Wu, Database-based inference engine for RDFS/OWL constructs, filed on even date with the present patent application.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0002] Not applicable.
REFERENCE TO A SEQUENCE LISTING
[0003] Not applicable.
BACKGROUND OF THE INVENTION
Field of the Invention
[0004] The techniques disclosed herein relate to representing and manipulating RDF data in a large RDBMS. Particular techniques include efficient bulk loading of RDF data, using hash functions to generate the identifiers for the lexical values of the RDF data in the RDBMS, and techniques for compressing lexical values that are URIs.
Representing Information Using RDF
[0006] There are two standard vocabularies defined on RDF: RDF Schema (RDFS) and the Web Ontology Language (OWL). These vocabularies introduce RDF terms that have special semantics in those vocabularies. For simplicity, in the rest of the document, our use of the term RDF will also implicitly include RDFS and OWL. For more information and for a specification of RDF, see:

[0007] RDF Vocabulary Description Language 1.0: RDF Schema, published by W3C.

[0008] OWL Web Ontology Language Overview, published by W3C.

[0009] Frank Manola and Eric Miller, RDF Primer, published by W3C, September 2004.
[0010] The RDF Vocabulary Description Language 1.0: RDF Schema, OWL Web Ontology Language Overview, and RDF Primer are hereby incorporated by reference into the present patent application.
Representation of Facts as RDF Triples
[0011] FIG. 1 and FIG. 2 provide an overview of RDF. Facts in RDF are represented by RDF triples. Each RDF triple represents a fact and is made up of three parts, a subject, a predicate (sometimes termed a property), and an object.
[0012] The following are examples of URIs: [0013] [0014] [0015] [0016] [0017]
[0018] A URI is a standardized format for representing resources on the Internet, as described in RFC 2396: Uniform Resource Identifiers (URI): Generic Syntax. RFC 2396 is hereby incorporated by reference into the present patent application. In the triples, the lexical values for the object parts may be literal values. In RDF, literal values are strings of characters, and can be either plain literals (such as "Immune Disorder") or typed literals (such as "2.4"^^xsd:decimal). The interpretations given to the lexical values in the members of the triple are determined by the application that is consuming it. For a complete description of RDF, see Frank Manola and Eric Miller, RDF Primer, published by W3C, September 2004. The RDF Primer is hereby incorporated by reference into the present patent application.
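The three-part structure just described can be sketched in Python, representing each triple as a (subject, predicate, object) tuple. Abbreviated names stand in for full URIs, and the triples shown are the ones asserted about John in FIG. 1.

```python
# A triple is an ordered (subject, predicate, object) assertion.
# Subjects and predicates are URIs; objects are URIs or literals.
triples = [
    ("John", "rdf:type", "Ph.D.student"),   # John is a Ph.D. student
    ("John", "age", "24"),                  # the object here is a typed literal
    ("John", "ReviewerOf", "ICDE 2005"),    # the object here is a resource
]

def objects_of(triples, subject, predicate):
    """Return every object asserted for (subject, predicate)."""
    return [o for s, p, o in triples if s == subject and p == predicate]
```

A lookup such as `objects_of(triples, "John", "age")` returns all objects related to the subject by the given predicate, which is the basic access pattern later generalized by RDF patterns.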
Representing the RDF Triple as a Graph
[0019] RDF triples may be represented as a graph as shown at 109 in FIG. 1. The subject is represented by a node 103, the object by another node 107, and the predicate by arrow 104 connecting the subject node to the object node. A subject may of course be related to more than one object, as shown with regard to age 103. An example triple is shown at 117. In the following general discussion of RDF, lexical values that are URIs will be replaced by the names of the entities the URIs represent.
[0020] Note that for clarity in the various figures and descriptions, URIs such as :Reviewer and :Person are shown in a simplified format in which default namespaces are omitted: thus :Reviewer is shown simply as Reviewer, with the initial colon designating the default namespace omitted.
Making RDF Models Using RDF Graphs
[0021] An RDF representation of a set of facts is termed in the following an RDF model. A simple RDF model Reviewers is shown at 101 in FIG. 1. The model has two parts: RDF data 113 and RDF schema 111. RDF schema 111 is made up of RDF triples that provide the definitions needed to interpret the triples of RDF data 113. Schema triples define classes of entities and predicates that relate classes of entities. A definition for the predicate age is shown at 112. As shown there, a predicate definition consists of two RDF triples for which the predicate is the subject. One of the triples, which has the built-in rdfs:domain predicate, indicates what kind of entities must be subjects for the predicate. Here, it is entities belonging to the class Person. The other triple, which has the built-in rdfs:range predicate, indicates what kinds of entities must be objects of the predicate; here, it is values of the numeric type xsd:decimal. Schema 111 uses the rdfs:subClassOf predicate 110 to define a number of subclasses of entities belonging to the class Person. Also defined are Conference and University classes of entities, together with predicates that relate these entities to each other. Thus, an entity of class Person may be a chairperson of a conference and an entity of class Reviewer may be a reviewer for a conference. Also belonging to Schema 111 but not shown there is the built-in RDF predicate rdf:type. This predicate defines the subject of a triple that includes the rdf:type predicate as an instance of the class indicated by the object. As will be explained in more detail, RDF rules determine logical relationships between classes. For example, a built-in RDF rule states that the rdfs:subClassOf relationship is transitive: if A is a subclass of B and B a subclass of C, then A is a subclass of C. Thus, the class Faculty is a subclass of Person.
[0022] The data triples to which schema 111 applies are shown at 113; they have the general pattern <individual entity>, <predicate>, <object characterizing the individual entity>. Thus, triple 115 indicates that ICDE 2005 is an entity characterized as belonging to the class Conference and triple 117 indicates that John is characterized by having the age 24. Thus, RDF data 113 contains the following triples about John: [0023] John has an age of 24; [0024] John belongs to the subclass Ph.D.student; [0025] John is a ReviewerOf ICDE 2005.
[0026] An RDF model is a set of assertions. Hence, as a set, it should not contain duplicate assertions, that is, all <subject, predicate, object> data triples should be unique, and not be repeated within a model. However, two distinct RDF models may contain some data triples that are the same in the two models. The requirement that data triples not be duplicated or repeated in an RDF model is referred to as the set property.
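The set property has a direct computational analogue: if an RDF model is held as a set of triples, asserting a duplicate leaves the model unchanged. A minimal sketch follows (Tom's age is an illustrative value, not taken from the figures).

```python
# The set property: adding an assertion that is already present
# leaves the model unchanged.
model = set()

def assert_triple(model, s, p, o):
    model.add((s, p, o))   # set semantics silently absorb duplicates

assert_triple(model, "John", "age", "24")
assert_triple(model, "John", "age", "24")   # duplicate of the first assertion
assert_triple(model, "Tom", "age", "26")    # hypothetical second fact
```

After the three assertions the model holds exactly two triples, because the second assertion repeats the first and is absorbed rather than stored again.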
Inferencing in RDF Models
[0027] As is well known, an inferencing operation in RDF derives additional triples by applying RDF rules to the existing triples. These rules specify one or more triple patterns to be matched. If the patterns of the rule are matched, the output is a number of new triples. The rules used for inferencing may either be built into RDF or particular to a model. In the latter case, the rules are specified with the model. The built-in RDF rule that the rdfs:subClassOf predicate is transitive is an example of how an RDF rule can be used to infer new triples.
[0028] In FIG. 1, none of these triples states that John is a Person; however, the fact that he is a Person and a Reviewer is inferred from the fact that he is stated to be a Ph.D.student, which is defined in schema 111 as a subclass of both Person and Reviewer. Because the rdfs:subClassOf predicate is transitive (by virtue of the built-in rule to that effect), the fact that John is a PhD Student means that he is a potential subject of the Age and ReviewerOf properties.
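The subclass-transitivity inference described above can be sketched as a naive fixed-point computation over rdfs:subClassOf assertions. The exact chain of intermediate classes in FIG. 1 is not reproduced here; the chain below (Ph.D.student under Student under Person) is illustrative only, as is the simple nested-loop closure algorithm.

```python
# Illustrative chain: Ph.D.student subClassOf Student subClassOf Person.
subclass = {
    ("Ph.D.student", "Student"),
    ("Student", "Person"),
}
data = {("John", "rdf:type", "Ph.D.student")}

def infer(subclass, data):
    # 1. Transitive closure of rdfs:subClassOf: from (A, B) and (B, C)
    #    derive (A, C), repeating until nothing new is added.
    closed = set(subclass)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for b2, c in list(closed):
                if b == b2 and (a, c) not in closed:
                    closed.add((a, c))
                    changed = True
    # 2. rdf:type propagation: an instance of a class is an instance
    #    of every superclass of that class.
    inferred = set(data)
    for s, p, o in list(data):
        if p == "rdf:type":
            for a, c in closed:
                if a == o:
                    inferred.add((s, "rdf:type", c))
    return inferred

facts = infer(subclass, data)
```

Starting from the single stated triple, the closure derives that John is also a Student and a Person, mirroring the inference drawn in the text.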
Using RDF Patterns to Query RDF Models
[0029] RDF models are queried by applying to an RDF model a set of RDF triples in which one or more subjects and objects are replaced by variables. Such an RDF triple is termed an RDF pattern. As is well known, an RDF query (such as may be done using an RDF query language such as SPARQL) applies this set of query triples to the RDF model and returns the subgraphs that satisfy the query as a result. For a description of SPARQL, see SPARQL Query Language for RDF, W3C Working Draft, 12 Oct. 2004.
[0030] For purposes of the present discussion RDF models are best represented as lists of RDF triples instead of graphs. FIG. 2 shows a table of triples 201 that lists triples making up schema 111 and a table of triples 203 that lists triples making up RDF data 113. At the bottom of FIG. 2 is an RDF pattern 205. There are many different ways of expressing RDF patterns; what follows is a typical example. When RDF pattern 205 is applied to RDF model 101, it will return a subgraph of RDF model 101 that includes all of the reviewers of conference papers who are PhD students. The pattern is made up of one or more patterns 207 for RDF triples followed by an optional filter that further restricts the RDF triples identified by the pattern. The identifiers beginning with the character "?" are variables that represent values in the triples belonging to the subgraph specified by the RDF pattern. Thus, the first pattern 207(1) specifies every Reviewer for every Conference indicated in the RDF data 203; the second pattern 207(2) specifies every Reviewer who belongs to the subclass Ph.D.student, and the third pattern 207(3) specifies every Person for which an age is specified. The result of the application of these three patterns to RDF data 203 is the intersection of the sets of persons specified by each of the patterns, that is, the intersection of the set of reviewers and the set of PhD Students of any age. The intersection is John, Tom, Gary, and Bob, who are indicated by the triples in data 203 as being both PhD students and reviewers.
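The application of a pattern such as 205 can be sketched as a small conjunctive matcher over triples. Only a fragment of table 203 is reproduced below, and the ages other than John's 24 are illustrative placeholders; the matcher itself is a naive sketch, not the query machinery of the invention.

```python
# Minimal conjunctive triple-pattern matcher. Terms beginning with "?"
# are variables; constants must match the data exactly.
data = {
    ("John", "rdf:type", "Ph.D.student"),
    ("Tom",  "rdf:type", "Ph.D.student"),
    ("John", "ReviewerOf", "ICDE 2005"),
    ("Tom",  "ReviewerOf", "ICDE 2005"),
    ("Mary", "ReviewerOf", "ICDE 2005"),   # a reviewer who is not a Ph.D. student
    ("John", "age", "24"),
    ("Tom",  "age", "26"),                 # illustrative age
}

def is_var(term):
    return term.startswith("?")

def match_one(pattern, triple, binding):
    """Extend binding so pattern matches triple, or return None."""
    b = dict(binding)
    for p_term, d_term in zip(pattern, triple):
        if is_var(p_term):
            if p_term in b and b[p_term] != d_term:
                return None
            b[p_term] = d_term
        elif p_term != d_term:
            return None
    return b

def query(patterns, data):
    """Return all variable bindings satisfying every pattern (a join)."""
    bindings = [{}]
    for pat in patterns:
        bindings = [b2 for b in bindings
                       for t in data
                       if (b2 := match_one(pat, t, b)) is not None]
    return bindings

results = query([("?r", "ReviewerOf", "?c"),
                 ("?r", "rdf:type", "Ph.D.student"),
                 ("?r", "age", "?a")], data)
reviewers = sorted({b["?r"] for b in results})
```

As in the text, the result is the intersection of the sets matched by each pattern: Mary matches the first pattern but not the second, so only the Ph.D.-student reviewers survive.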
Implementations of Systems for Querying RDF Models
[0031] A number of query languages have been developed for querying RDF models. Among them are: [0032] RDQL, see RDQL--A Query Language for RDF, W3C Member Submission 9 Jan. 2004,; [0033] RDFQL, see RDFQL, Database Command Reference,- ence/db/default.rsp; [0034] RQL, see G. Karvounarakis, S. Alexaki, V. Christophides, D. Plexousakis, M. Scholl. RQL: A Declarative Query Language for RDF WWW2002, May 7-11, 2002, Honolulu, Hi., USA. [0035] SPARQL, see SPARQL Query Language for RDF, W3C Working Draft, 12 Oct. 2004,. [0036] SquishQL, see RDF Primer. W3C Recommendation, 10 Feb. 2004,.
[0037] The query languages described in the above references are declarative query languages with quite a few similarities to SQL, which is the query language used in standard relational database management systems. Indeed, systems using these query languages are typically implemented on top of relational database systems. However, because these systems are not standard relational database systems, they cannot take advantage of the decades of engineering that have been invested and continue to be invested in the standard relational database systems. Examples of the fruits of this engineering that are available in standard relational database systems are automatic optimization, powerful indexing mechanisms, facilities for the creation and automatic maintenance of materialized views and of indexes, and the automatic use of available materialized views and indexes by the optimizer.
[0038] US Published Patent Application 2006/0235823 A1 describes how an RDF querying system may be integrated into an RDBMS; for convenience, this will be referred to as the 2006/0235823 reference. An additional relevant prior art reference is the Oracle Database release 10G; for convenience, this will be referred to as the Oracle 10G reference (see Oracle Database Documentation Library).
Overview of an RDBMS into which RDF has been Integrated
[0039] The systems of the 2006/0235823 and Oracle 10G prior art references, and the system of this invention, are implemented in an RDBMS. FIG. 4 is a functional block diagram of a relational database management system 401 into which RDF has been integrated. RDBMS systems are characterized by the fact that the information they contain is organized into tables having rows and named columns. A row of data establishes a relationship between the items of data in the row and the SQL query language uses the relationships thus established to locate information in the tables. RDBMS system 401 may be any RDBMS in which RDF queries have been integrated into the SQL used in the RDBMS. In RDBMS 401, a built-in table function has been used to integrate the RDF queries into the SQL.
[0040] The main components of RDBMS system 401 are a processor 421, memory 403, which contains data and programs accessible to the processor, and persistent storage 423, which contains the information organized by system 401. Processor 421 further can provide information to and receive information from display and input devices 422, can provide information to and receive information from networks 424, and can provide information to and receive information from file system 426. Processor 421 creates RDBMS system 401 as the processor 421 executes programs in memory 403 using data contained in memory. The programs typically include an operating system 407, which manages the resources used by RDBMS 401, relational database program 409, which interprets the SQL language, and application programs 411, which provide queries to RDB program 409. Data used by these programs includes operating system data 419, used by the operating system, RDBMS data 417, used by RDB program 409, and application program data 415, used by application programs 411.
[0041] The information that RDB program 409 maintains in persistent storage 423 is stored as objects that RDBMS system 401 is able to manipulate. Among the objects are fields, rows, and columns in the tables, the tables themselves, indexes to the tables, and functions written in the SQL language. The objects fall into two broad classes: user-defined objects 441, which are defined by users of the RDBMS, and system-defined objects 425, which are defined by the system. RDBMS 401 maintains definitions of all of the objects in the database system in data dictionary 427, which is part of DB system objects 425. For the present discussion, the most important definitions in data dictionary 427 are table definitions 429, which include definitions 431 of RDF tables 443, table function definitions 433, which define table functions including RDF_MATCH table function 435, which permits use of RDF patterns to query RDF models in RDBMS 401, and SQL function definitions 437, which includes RDF_GENMODEL function 439, which takes RDF triples and makes them into RDF tables 443.
[0042] The tables of interest in user objects 441 are RDF tables 443, which are tables in RDBMS 401 that are typically made from character-string representations of RDF models and their triples. The character-string representations are typically contained in files. Tables 443 fall into three groups: RDF triple tables 445, which represent the triples making up an RDF model 101, RDF rule tables 449, which contain the rule bases belonging to RDF information 313, and RDF optimization objects 447, which are tables and other objects which are used to speed up queries on the RDF models represented by RDF triple tables 445 and the RDF rules in rules tables 449. All of these tables and objects will be explained in more detail below.
Representations of RDF Triples
[0043] The 2006/0235823 reference discloses a normalized representation for RDF triples. The tables used to represent RDF triples are shown in detail in FIG. 6. There are two main tables: IdTriples 601, which is a list of models and their RDF triples, as represented by internal identifiers for lexical values of the triple, and UriMap 613, which maps each distinct lexical value to a distinct internal identifier and thus permits conversions between the URIs and literals and the internal identifiers. The internal identifiers are typically integers or other values having datatypes native to the database management system.
[0044] The relationship just described between the lexical values and the unique internal identifiers (termed in the following UIDs), in which each distinct value in one set is associated with a distinct value in another set, is referred to as a mapping between the first set and the second set. The mapping between the lexical values and the UIDs is one-to-one: for each distinct lexical value, there is one particular UID, and for each UID, there is one distinct lexical value. Further, the mapping is bi-directional: for any given UID, it is always possible to know what the corresponding lexical value is, and vice versa. These properties of the mapping allow the UID to be used to "stand in" for the lexical value; this is an important technique used in RDBMS systems.
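The one-to-one, bi-directional property of such a mapping can be sketched with two dictionaries kept in step; the class name and the sequential UID assignment below are illustrative only (sequential generation is one prior-art approach, discussed further below).

```python
# Sketch: a one-to-one, bi-directional mapping between lexical values
# and UIDs, maintained as a pair of dictionaries.
class BiMap:
    def __init__(self):
        self.val_to_uid = {}  # lexical value -> UID
        self.uid_to_val = {}  # UID -> lexical value
        self.next_uid = 1

    def uid_for(self, lexval):
        """Return the UID for lexval, assigning a new one on first sight."""
        if lexval not in self.val_to_uid:
            uid = self.next_uid
            self.next_uid += 1
            self.val_to_uid[lexval] = uid
            self.uid_to_val[uid] = lexval
        return self.val_to_uid[lexval]

m = BiMap()
assert m.uid_for("John") == m.uid_for("John")     # same value, same UID
assert m.uid_to_val[m.uid_for("Mary")] == "Mary"  # mapping is invertible
```

Because the mapping can always be inverted, the UID can stand in for the lexical value anywhere in the system without loss of information.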
[0045] The process of mapping lexical values to UIDs is termed normalization, and a table like the IdTriples table 601 containing UIDs for the lexical values of RDF triples, with a second table like the UriMap table 613 mapping the UID values to the lexical values, is a normalized representation of a set of RDF models. Each distinct lexical value in the RDF triples belonging to the set of models in IdTriples table 601 must have a UID mapped to it in RDBMS 401. A table such as IdTriples table 601, in which the lexical values in the triples are represented by their UIDs, is said to contain normalized forms of the RDF triples. Advantages of normalization include the following: [0046] 1. URIs of RDF data tend to be large and are usually repeated many times in the data of an RDF model. Storing them as they are (typically as Strings) would be wasteful in storage, thereby making the table and dependent indices unnecessarily large and hence resulting in lower performance. Integer UID values generally require substantially less storage space than do strings: use of integer UID values instead of the original strings in the IdTriples table thus saves substantially on storage space. The reduction in storage space for the table and dependent indices further leads to performance improvements, such as by allowing more of the table and dependent indices to fit into available main memory for processing. [0047] 2. String comparisons are further much less efficient than integer comparisons. For this reason, operations such as tests for equality (sameness) of one triple to another, or queries to locate triples that have a particular value in them, execute more quickly if they are performed using the UIDs that represent the lexical values in the triples rather than the lexical values themselves.
[0048] In the prior art, the UIDs used for normalization are typically generated by the RDBMS. Generally, the RDBMS produces sequential values for the UIDs: 1, 2, 3, 4, etc., and maps these sequential values to the distinct lexical values. Because each value in the sequence of generated values is different from the others, a UID represents each distinct lexical value. One limitation of such a scheme is that UID values must be generated serially, and must be generated by a single sequence generator, which precludes the possibility of generating UID values concurrently or on multiple systems for improved performance. The mechanism for getting the UID value for a particular lexical value is to store each pair of lexical value and UID value in a table as each UID value is generated and related to the lexical value, and then to look up the lexical value in the table when the UID is needed. One limitation of this technique is the time required to look up the UIDs, especially when the resulting table becomes large.
RDBMS JOIN Operations and their Use with Mapping Tables:
[0049] An important functionality in RDBMS systems is the JOIN operation. The JOIN operation is used as an optimization and programming convenience, to combine two tables into a temporary or simulated table, when the two tables both contain common columns that refer to the same values. JOIN operations are frequently used to combine a mapping table, such as the URIMap table 613, with a table that has been created to take advantage of the mapping, such as the IdTriples table 601. The common columns in this case are the InternalId column 615 of the URIMap table 613, and the SubjectId 605, PropertyId 607, and ObjectId 609 columns of the IdTriples table 601. A JOIN operation performs the necessary lookup operations to combine the two tables. For example, a JOIN operation could be performed on the URIMap 613 mapping table, for each of the three columns in the IdTriples table 601, to produce a temporary or virtual table that appeared to have the full strings for Subject, Object, and Predicate, rather than the UID values of the IdTriples table 601.
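The JOIN just described can be sketched concretely. The following uses an in-memory SQLite database purely for illustration; the table and column names follow the URIMap/IdTriples discussion above, and the data is the single example triple.

```python
import sqlite3

# Sketch: using a JOIN to recover the lexical triples from a normalized
# IdTriples table and a UriMap-style mapping table. One join per UID
# column performs the lookups described above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE UriMap (InternalId INTEGER PRIMARY KEY, RDFVal TEXT);
    CREATE TABLE IdTriples (SubjectId INTEGER, PropertyId INTEGER,
                            ObjectId INTEGER);
    INSERT INTO UriMap VALUES (100, 'John'), (300, 'managerOf'),
                              (200, 'Mary');
    INSERT INTO IdTriples VALUES (100, 300, 200);
""")
row = con.execute("""
    SELECT s.RDFVal, p.RDFVal, o.RDFVal
    FROM IdTriples t
    JOIN UriMap s ON s.InternalId = t.SubjectId
    JOIN UriMap p ON p.InternalId = t.PropertyId
    JOIN UriMap o ON o.InternalId = t.ObjectId
""").fetchone()
print(row)  # ('John', 'managerOf', 'Mary')
```

The query produces a virtual table that appears to hold the full strings for subject, predicate, and object, while the permanent IdTriples table continues to store only the compact UID values.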
[0050] JOIN operations in an RDBMS operation simplify the design and programming of many applications, and generally result in less storage space being used, because it is not necessary to repeat data in several different permanent tables. JOIN operations are also often more efficient than creating an actual table like the temporary or virtual table of the JOIN. JOIN operations are also a convenient way to establish and exploit relationships among several tables.
[0051] For further information about JOIN operations, see [0052] Oracle® Database SQL Language Reference, 11g Release 1 (11.1), Joins, download.oracle.com/docs/cd/B28359--01/server.111/b28286/queries006.- htm [0053] Join(SQL), en.wikipedia.org/wiki/Join_(SQL)
Using Hashing to Generate UIDs
[0054] Some systems for storing RDF data in an RDBMS use the technique of assigning a UID that is mathematically derived from the input data value alone.
[0055] The most common form of this technique is to derive the UID values for normalization mathematically using a hashing function, also referred to simply as a hash function. For the purposes of this presentation, a hash function is a function or operation that [0056] takes a value as an input and generates another value as an output, [0057] always produces an output value for every valid input value, and [0058] for a given input value, always generates the same output value, and thus maps its input values to its output values. Such a mapping operation with a hash function is also referred to as hashing.
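The properties just listed can be illustrated with a small sketch. Deriving a 64-bit integer UID by truncating an MD5 digest is one possible construction, chosen here only for illustration; the function name is an assumption.

```python
import hashlib

# Sketch: a deterministic hash function mapping a lexical value to a
# 64-bit integer UID, illustrating the properties listed above.
def hash_uid(lexval):
    digest = hashlib.md5(lexval.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")  # truncate to 64 bits

# An output is produced for every input, and a given input always
# yields the same output.
assert hash_uid("John") == hash_uid("John")
assert isinstance(hash_uid("Mary"), int)
```

Note that nothing in these properties guarantees that two different inputs produce different outputs, which is the collision problem taken up below.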
[0059] Hash functions as described here are widely used to speed up table lookup in data processing systems. The data value--the input to the hashing function--is referred to as the hashed value, and the output of the function for a particular hashed value is referred to as the hash value. Many well-known hash functions produce output values that are integers, or are a set of bits of a particular length such as 128 bits, or a set of bytes of a particular length such as two bytes or characters.
[0060] However, hash functions do not always generate unique values: a case in which two different values hashed by the hash function result in the same hash value is known as a hash collision. The technique of computing a different hash value for one of the hashed values in a collision, by re-doing the hash with modified input data or an equivalent operation, so that the resulting hash values no longer collide, is generally referred to as hash collision resolution.
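The re-hash-with-modified-input technique can be sketched as follows. The collision-resolution convention used here (appending "#" and an attempt counter to the input) is an assumption for illustration, as are the function names.

```python
import hashlib

# Sketch: resolving a hash collision by re-hashing a colliding value
# with a modification counter appended until the hash no longer collides.
def hash64(value, attempt=0):
    data = value + ("#%d" % attempt if attempt else "")
    return int.from_bytes(hashlib.md5(data.encode()).digest()[:8], "big")

def assign_uid(value, assigned):
    """assigned maps uid -> value; retry when the uid is taken by a
    different value (a collision)."""
    attempt = 0
    while True:
        uid = hash64(value, attempt)
        if uid not in assigned or assigned[uid] == value:
            assigned[uid] = value
            return uid, attempt
        attempt += 1  # collision with a different value: re-hash

assigned = {}
uid_a, tries_a = assign_uid("John", assigned)
uid_b, tries_b = assign_uid("Mary", assigned)
assert uid_a != uid_b
```

With a 64-bit hash, collisions are rare, so almost all values are assigned on attempt 0; the loop exists to guarantee a distinct UID for every distinct value.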
[0061] Prior art systems that use hashing functions to generate UIDs for normalization from lexical values in RDF triples are unable to resolve hash collisions. When a collision occurs, these prior art systems do one or more of the following: [0062] Reject the data that resulted in a hash collision: [0063] in this case, the system cannot handle all input data. [0064] Require that the input data be modified so that no collision occurs: [0065] in this case, the system is no longer a system that answers queries or does processing about the actual input data. [0066] Disregard the probability that hash collisions may occur: [0067] in this case, the system fails to operate correctly when a hash collision does occur.
[0068] None of these alternatives is acceptable in a production system for manipulating RDF models. An example of a prior art system which uses hashing for normalization but does not resolve collisions is 3Store: (see "3 store: Efficient Bulk RDF Storage", 1st International Workshop on Practical and Scalable Semantic Systems, Oct. 3, 2003, Sanibel Island, Fla., km.aifb.uni-karlsruhe.de/ws/psss03/proceedings/harris-et-al.pdf).
Details of IdTriples Table 601
[0069] Continuing in detail with IdTriples table 601, this table has a row 611 for every RDF triple in the RDF models that have been loaded into RDBMS. The table has four columns: [0070] ModelID 603, [0071] which contains the internal identifier of the model to which the RDF triple belongs; [0072] SubjectID 605, [0073] which contains the UID for the RDF triple's subject; [0074] PropertyID 607, [0075] which contains the UID for the RDF triple's predicate; and [0076] ObjectID 609, [0077] which contains the UID of the RDF triple's object.
[0078] As shown in FIG. 6, IdTriples table 601 shows the rows for the first four data triples of data triples 203. It would of course contain a row for every schema triple in table 201 and every data triple in table 203.
[0079] The IdTriples table is partitioned in the RDBMS on Model Id with each partition holding a separate RDF graph or model. This maintains locality of each model within the table. The rows for the model Reviewers are illustrated at 631. Further, the rows for a separate model Farmers are illustrated at 633.
[0080] In a typical RDBMS, when a table is a partitioned table, the different partitions of a table may be indexed, modified, and updated separately from each other. An operation of particular interest in the system of the Oracle 10G reference is the EXCHANGE PARTITION operation, which allows an entire partition of a table to be updated in a "zero cost" operation--that is, an operation in the RDBMS which does not involve moving or copying significant amounts of data. The operation changes the internal definition of the table so that a particular partition of the table now refers to a separate part of the RDBMS storage that is already prepared with appropriate data. Depending on the implementation in the particular DBMS, the different partitions of the table may be stored in different groups of blocks on disk, in separate files, in separate directories of a filesystem, or on physically separate filesystems or data servers. Techniques for supporting partitioned tables within an RDBMS are well known in the art.
[0081] As an example, a possible partitioning of a database table involving ZIP codes would be to partition the data into two separate sections, one named ZIPEAST for rows for ZIP codes less than 50000, and another ZIPWEST for rows for ZIP codes greater than or equal to 50000.
[0082] Partitioning the IdTriples table 601 in the RDBMS brings advantages such as the following: [0083] A given RDF model may be updated, have its index rebuilt or disabled, or modified in other ways without affecting the data of other models. [0084] Indices can be defined for a table but categorized as local, and thus maintained separately for each partition, resulting in more efficient performance and smaller indices in each partition. Further, compression features of the RDBMS allow the model column to be substantially compressed and use less storage space. [0085] Inserting a row in one RDF model does not affect or involve the storage or indices of other models. [0086] It is easier to set and enforce access control on a per-model basis.
[0087] A uniqueness constraint in the RDBMS is defined on the (SubjectID, PropertyID, ObjectID, ModelId) columns in the IdTriples table to ensure that no duplicate triples can be inserted into a model in error. Because the table is partitioned on Model Id, this constraint is enforced by an index categorized as local, which results in separate index storage for each partition. The separate index storage on the model/partition ReviewersId 631 is shown at 635, applying to the three columns SubjectID 605, PropertyId 607, and ObjectId 609. The separate index storage on the model/partition FarmersId 633 is shown at 637, and applies to the same columns, but within the FarmersId model/partition only.
[0088] Uniqueness constraints are defined on a table in an RDBMS such as Oracle by the CREATE UNIQUE INDEX operation. For further information on indices and constraints, see [0089] Oracle® Database SQL Reference 10g Release 1 (10.1), download.oracle.com/docs/cd/B14117--01/server.101/b10759.pdf.
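The effect of such a uniqueness constraint can be sketched with an in-memory SQLite database, used here purely for illustration (SQLite does not support partitioning, so the sketch shows only the duplicate rejection):

```python
import sqlite3

# Sketch: a CREATE UNIQUE INDEX over the triple columns rejecting a
# duplicate triple within a model, analogous to the constraint above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE IdTriples
               (ModelId INTEGER, SubjectId INTEGER,
                PropertyId INTEGER, ObjectId INTEGER)""")
con.execute("""CREATE UNIQUE INDEX triple_uniq ON IdTriples
               (ModelId, SubjectId, PropertyId, ObjectId)""")
con.execute("INSERT INTO IdTriples VALUES (1, 100, 300, 200)")
try:
    # inserting the same triple into the same model violates the index
    con.execute("INSERT INTO IdTriples VALUES (1, 100, 300, 200)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```

The same triple inserted into a different model (a different ModelId) would succeed, since the model identifier participates in the uniqueness constraint.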
[0090] UriMap table 613 has a single row 619 for every UID that appears in IdTriples table 601. There are four columns that are of interest in the present context: [0091] InternalID 615, which contains the UID; [0092] RDFVal 617, which specifies the lexical value corresponding to the UID; [0093] a flag that indicates for an RDFVal 617 whether it is a literal value and whether the value is in the canonical form; and [0094] the type of RDFVal 617.
[0095] Uniqueness constraints in the RDBMS are defined on the InternalID 615 and RDFVal 617 columns respectively in the UriMap table, to ensure that all InternalID values and also all RDFVal values are distinct. The uniqueness index and constraint on InternalID 615 is shown at 641. The uniqueness index and constraint on RDFVal 617 is shown at 642.
[0096] The canonical form for a literal value is a standard form for writing the value. For example, the numeric value 24 may be written as 024, 24.00, 2.4×10^1, and so on. Depending on the application, any of these may be used as the canonical form, or a different form may be used as the canonical form. Canonicalization is the technique of translating different formats for the same information value to the standard form. In the 2006/0235823 reference, the form used for the value when the first entry is made for the value in UriMap 613 is treated as the canonical value. There is further an index, idx_num 627, which indexes a given numerical value to a row in UriMap 613 that contains the canonical representation.
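Canonicalization of numeric literals can be sketched as follows. The choice of canonical form here (the shortest decimal representation of the parsed floating-point value) is an assumption for illustration; as noted above, any consistent standard form may be chosen.

```python
# Sketch: translating different lexical formats of the same numeric
# value to a single canonical form.
def canonical_numeric(lexval):
    # parse the lexical form, then re-serialize it in one standard way
    return repr(float(lexval))

# 024, 24.00, and 2.4x10^1 all canonicalize to the same string
assert canonical_numeric("024") == canonical_numeric("24.00")
assert canonical_numeric("2.4e1") == canonical_numeric("24")
```

Once all variant spellings map to one canonical string, that string can be assigned a single UID, so equality tests on the UIDs correctly treat 024 and 24.00 as the same value.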
Ancillary Application Tables
[0097] The Oracle 10G reference also describes ancillary application tables. These ancillary application tables are per-model: a particular such application table only contains information relevant to a particular RDF model.
[0098] An application may involve additional information about RDF triples from a particular model that are not part of the RDF model. Depending on the application, this additional information may be included in the input data for the RDF model, or it may be input separately. For example, there may be information associated with the RDF triple giving the provenance of that triple, such as which organization or individual input that particular triple into the dataset. FIG. 8 illustrates an exemplary application table 801.
[0099] The application table 801 contains three initial columns: column ID shown at 812 holds an internal sequentially-generated ID value for each row of the application table 801, and virtual column SDO_RDF_TRIPLE_S at 813 holds a data object which contains five values. The first four of these values, model_id, a, b, and c, consist of the ModelId identifier and the SubjectId, PropertyId, and ObjectId UID values corresponding to the model, subject, predicate, and original object strings of the RDF triple. The additional link_id value is an identifier for the row in the model's partition in the IdTriples table 601 that holds the normalized form of the RDF triple; this normalized triple contains the UID of the canonical form of the original object of the triple, and not the UID of the original object string. Together, the model_id and link_id values could be used for the functionality of a foreign key from the application table into the IdTriples table.
[0100] Columns 814 source_db and further columns 815, etc. contain the additional information associated with that original RDF triple. The row at 811 shows example values for a triple (model-id, a,b,c, link_id) showing that this particular RDF triple in this particular model came from a source identified as SourceDB23.
Fidelity
[0101] An additional requirement of RDF databases is that the implementation not only translate input values to a canonical form, but also maintain fidelity, or data integrity, to the original form of the values. Fidelity is the ability to reproduce the original lexical values or data exactly as they appeared in the original data. Fidelity can be required for a number of reasons in different applications: for example, it may be necessary to verify the information in the RDF triples by comparing it with the original data exactly, or it may be necessary to produce the original value in order to export data back to the original source.
Limitations of Prior Systems in which RDF is Integrated into an RDBMS
[0102] Experience with prior-art systems such as system 401 of FIG. 4 has shown that improvements are needed in dealing with collisions when UIDs are produced by hashing, in bulk loading of RDF data into the RDBMS, and the compression of URIs.
Generation and Use of UIDs
[0103] Real-world RDF datasets tend to be quite large. For example, the UniProt RDF model is a well-known RDF representation of the data from the Universal Protein Resource (UniProt) database about biological proteins and related annotation data (see UniProt Database,˜ejain/rdf). This model currently (2008) contains about 207 million triples, referencing some 33 million or more lexical values, constituting approximately 12 Gigabytes of data in the character string format used to distribute the model. Systems for manipulating and querying large real-world RDF datasets need to be able to operate on datasets of a billion (1,000,000,000) triples and more.
[0104] FIG. 5 gives a summary of prior art normalization for RDF Triples (subject, predicate, object). Normalized triples are stored in two tables, a LexValues (lexval, id) table 521 and an IdTriples (subj-id, pred-id, obj-id) table 501.
[0105] For the purposes of this presentation, IdTriples table 501 is equivalent to IdTriples table 601, and the LexValues table 521 is equivalent to UriMap table 613.
[0106] The LexValues table 521 has two columns, lexval 533 for the lexical value, and id 538 for the normalized UID to which that lexical value has been mapped. The row at 531 shows that the lexical value string "John" will be represented by the UID value 100. The IdTriples table 501 has three columns subj-id 505, pred-id 507, and obj-id 509 for the normalized UIDs for the subject, predicate, and object parts respectively of the RDF triples. The row at 511 shows a normalized triple representing the RDF triple ("John", "managerOf", "Mary") with the three UID values 100, 300, and 200 respectively.
[0107] The conversion of a set of RDF triples to an IdTriples table such as table 501 requires that first, the LexValues table be constructed to establish the relationship of each distinct lexical value lexval 533 to a distinct id value 538. Then, the RDF triples are processed to translate each of the three lexical strings in each triple to the corresponding UID value. This involves three separate lookups of values in the LexValues table (once for each string in the triple), times the number of triples to be processed.
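The conversion just described can be sketched as follows; the LexValues mapping is shown as an in-memory dictionary for illustration, though in the prior art it is a database table consulted by indexed lookups or JOINs.

```python
# Sketch: normalizing a batch of RDF triples against a LexValues-style
# mapping, showing the three per-triple lookups described above.
lex_values = {"John": 100, "Mary": 200, "managerOf": 300}

def normalize(triples, lexmap):
    id_triples = []
    for s, p, o in triples:
        # one lookup for each of subject, predicate, and object
        id_triples.append((lexmap[s], lexmap[p], lexmap[o]))
    return id_triples

print(normalize([("John", "managerOf", "Mary")], lex_values))
# [(100, 300, 200)]
```

At UniProt scale the dictionary becomes a 33-million-row table and the loop runs over 207 million triples, which is why the per-lookup cost dominates and the approach does not scale.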
[0108] There are two factors that make this unwieldy and slow for large RDF models. In the example of the UniProt RDF model, each look-up is into a table with over 33 million entries: performing this look-up is a substantial burden. RDBMS systems provide special indexing means for speeding up look-up operations into tables, but with very large tables this is still a time-consuming operation. Further, there will be about 621 million such look-ups. RDBMS systems provide special capabilities that are useful in doing the type of look-ups used in this normalization process, such as doing multiple RDBMS JOINs of an unnormalized RDF triples table with three copies of the LexValues table. However, at the size of real-world RDF models, such as the single UniProt model, even the use of joins results in very slow processing. Thus, the prior art does not scale to the sizes required for very large real-world RDF models. For a system that is intended to support multiple RDF models, it is even more the case that the prior art does not scale.
[0109] A further prior art technique for dealing with this problem is the use of HASH JOIN operations. However, HASH JOIN operations only perform well when the join table fits completely into available main memory. Given the immense size of real-world RDF models, this means that RDBMS systems with the amount of main memory required for satisfactory performance with real-world RDF data systems will be both rare and expensive. Thus, this prior art also does not scale for very large real-world RDF models.
Bulk Loading
[0110] Bulk loading is a well-understood functionality that is provided by database management systems for loading large amounts or batches of data into RDBMS tables from external files. Support for bulk loading of DBMS data is included in almost all commercial DBMS systems.
[0111] Bulk loading consists generally of dropping indices on the tables to which data will be added, and importing the additional data directly into the relevant tables with minimal processing on the data, followed by re-indexing the data. Bulk loading as just described does not, however, work well for bulk loading of RDF data, as it does not deal with the need to transform the RDF triples by normalizing lexical values, compressing URIs, and generating canonical forms for literal values.
[0112] In the preferred embodiment, RDF data to be bulk-loaded is contained in files. In these files, the RDF data may be represented in a number of standard formats. One of these is the N-Triple format. FIG. 19 shows examples of the N-Triple format for RDF data. In this format, each element of the triple is enclosed in angle brackets and the elements have the order subject, predicate, and object. For further information on the N-Triple format, see N-Triples.
[0113] In N-Triple format, URIs and typed literals may employ delimiters within the value string: a delimiter is a character or specific sequence of characters that appear between two parts of the string, and thus delimit, or divide, the string into a first part and a second part. Unless specified otherwise, a delimiter can appear anywhere in the string: a delimiter which is at start of a string, for example, would "divide" the string into a second part, which is the rest of the string, and a first part, which would be nothing, also called an empty string. An internal delimiter is a delimiter which is not the first character or characters in the string, and also not the last character or characters of the string. The rearmost or final internal delimiter in a string would be the last such delimiter in the string, except for a delimiter which was at the end of the string. Another term for the first part of a string divided into two parts is the prefix, similarly a term for the second part of divided string is a suffix. [0114] 1904 shows a single triple representing that a Female is a subclass of the type Person. Each element of this triple is enclosed in angle brackets and the elements have the order subject 1942, predicate 1943, and object 1944. The object part of the triple is an example of an object value that is a URI, as shown at 1941. [0115] 1903 shows a triple representing that Tom was born at 8:10:56 P.M. on Dec. 10, 2004 (Greenwich Mean Time). The triple consists of the subject 1952, predicate 1953, and object 1954. The object part of the triple, shown at 1931 and 1932, is an example of a literal value in the typed literal format (see W3C RDF/XML Syntax Specification (Revised),): the value part of the typed literal string is at 1931, an internal delimiter consisting of two carets is at 1932, and the part of the string which states its type, including the strict syntax of the value part, is at 1933. 
[0116] 1902 shows a short excerpt of another RDF file format based on XML: this example is excerpted from the UniProt database (see). The example 1902 states information about the location of a gene related to an organelle (a structure inside a cell) known as a chromatophore. [0117] 1901 shows three further examples of possible URI values, such as might be used in a triple in N-Triple format.
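The definitions of internal and final delimiters given above can be sketched concretely. The choice of '/' and '#' as the delimiter characters is an assumption for illustration (these are common in URIs), as is the example URI.

```python
# Sketch: splitting a string at its rearmost internal delimiter into a
# prefix and a suffix, using the definitions given above.
def split_at_final_delimiter(s, delimiters="/#"):
    # scan backwards, skipping the last character so a trailing
    # delimiter does not count as internal
    for i in range(len(s) - 2, 0, -1):
        if s[i] in delimiters:
            return s[:i + 1], s[i + 1:]
    return "", s  # no internal delimiter: the prefix is the empty string

prefix, suffix = split_at_final_delimiter("http://example.org/uniprot#Person")
print(prefix, suffix)  # http://example.org/uniprot# Person
```

Splitting URIs this way separates a long, heavily repeated prefix from a short suffix, which is the basis of URI compression schemes discussed elsewhere in this application.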
[0118] The need to make normalized and canonicalized forms of the RDF triples being loaded in bulk makes the problem of bulk-load for RDF data challenging. The challenges include: [0119] All input triples must be transformed and checked for conformance to the syntax and semantics of the standardized RDF format and data models. [0120] All lexical values must be normalized by mapping them to UIDs. This requires that the bulk load process further determine which lexical values being loaded already have been mapped to UIDs. [0121] Different representations in literal values of the same value in the input data must be translated reliably to a standardized and predictable canonical form, and further a UID must be assigned to the canonical form. This canonicalization requirement however is in tension with the need to preserve the original literal value, as required by the property of fidelity. [0122] Duplicate triples within a model must be eliminated, in order to ensure the set property of the RDF data for the model. The set property requires that no triple exists more than once in the model. [0123] There are further needs for reducing the storage required to hold the RDF data, as the datasets are quite large, and increased storage requirements result both in greater expense, and in reduced query performance.
OBJECTS OF THE INVENTION
[0124] It is an object of the present patent application to provide improved techniques for using hash values as UIDs for instances of distinct data values.
[0125] It is an object of the present patent application to provide improved techniques for the bulk loading of RDF databases into an internal representation of the RDF databases in an RDBMS.
[0126] It is an object of the present patent application to provide improved techniques for the compression and storage of URIs in internal representations of RDF databases in an RDBMS.
BRIEF SUMMARY OF THE INVENTION
[0127] The object of providing improved techniques for the use of hash values as UIDs for instances of distinct data values is attained by a method of generating hash values for such instances. In the method, each distinct data value is mapped to hash value generation information which describes how to generate a unique hash value for instances of the distinct data value. The method comprises the step performed for an instance of a distinct data value of generating the hash value for the instance according to the hash value generation information to which the instance's distinct data value has been mapped.
[0128] The object of providing improved techniques for the bulk loading of an RDF database into an internal representation of RDF databases in an RDBMS is attained by a general method of making normalized representations of a batch of instances of data values such as RDF lexical values in the RDBMS. The method makes an entry for each distinct data value belonging to the instances of the distinct data values in the batch in a first mapping table in the relational database system. The entry contains the distinct data value and a normalized representation that is generated by hashing the distinct data value according to either a default hashing method or to a collision resolution hashing method, and a hash method indication that indicates the method used to hash the distinct data value. The method further generates a second mapping table by querying the first mapping table. The second mapping table includes entries for distinct data values whose hash method indications indicate that the distinct data values' normalized representations were made according to the collision resolution method. The method hashes each instance of data in the batch, doing so according to the default method unless the instance's distinct data value has an entry in the second mapping table.
[0129] The object of providing improved techniques for the compression and storage of URIs in representations of RDF databases in RDBMS systems is attained by a general method that may be employed with any data value that includes at least one internal delimiter. The steps of the method are parsing the data value to locate the delimiter, using the delimiter to divide the data value into a prefix and a suffix, and placing the prefix and the suffix into separate fields of an entry in an object in the relational database management system, the object being specified in the relational database management system as employing compression for the separate field containing the prefix.
[0130] Other objects and advantages will be apparent to those skilled in the arts to which the invention pertains upon perusal of the following Detailed Description and drawings, wherein:
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0131] FIG. 1 is an exemplary illustration of RDF triples shown as a graph.
[0132] FIG. 2 is an exemplary illustration of RDF triples making up an RDF schema and RDF data according to that schema.
[0133] FIG. 3 illustrates bulk loading done concurrently.
[0134] FIG. 4 is a block diagram of an RDBMS into which an RDF database has been integrated.
[0135] FIG. 5 is an exemplary summary illustration of normalization of RDF triples of the prior art.
[0136] FIG. 6 shows the normalization of RDF triples in the prior art 2006/0235823 reference.
[0137] FIG. 7 shows RDBMS tables used globally for storing RDF triples in the preferred embodiment.
[0138] FIG. 8 is an exemplary illustration of ancillary application tables to store data about RDF triples that is not part of the RDF model.
[0139] FIG. 9 shows an API used in the preferred embodiment for creating tables used for bulk loading.
[0140] FIG. 10 shows RDBMS tables used locally for storing and processing data during bulk loading.
[0141] FIG. 11 shows a flowchart for creation, collision-detection, and collision-resolution of hash-based UIDs in the preferred embodiment when hash-based UIDs are mapped one-at-a-time to lexical values.
[0142] FIG. 12 shows the processing steps for bulk loading using UIDs that are not hash-based UIDs.
[0143] FIG. 13 shows the processing steps for bulk loading using UIDs that are hash-based UIDs.
[0144] FIG. 14 shows a flowchart for creation, collision-detection, and collision-resolution of hash-based UIDs in the preferred embodiment when hash-based UIDs are mapped to lexical values during bulk loading.
[0145] FIG. 15 shows examples of RDF triples that have the same predicate and object parts.
[0146] FIG. 16 shows a pseudo-code representation of the code to parse URI values into a prefix and a suffix.
[0147] FIG. 17 shows details of the processing to collect information about resolved collisions for the AllCollExt table.
[0148] FIG. 18 shows a flowchart for canonicalizing lexical values.
[0149] FIG. 19 shows examples of RDF data in a standard format that is part of an RDF dataset to be bulk loaded.
[0150] FIG. 20 shows details of the processing of old collisions.
[0151] FIG. 21 shows details of the processing for new collisions that are local/global.
[0152] FIG. 22 shows details of the processing for new collisions that are local only.
DETAILED DESCRIPTION OF THE INVENTION
[0153] A presently-preferred embodiment of the techniques disclosed in the following Detailed Description is employed in a production system for implementing real-world RDF applications in an RDBMS like RDBMS 401.
RDBMS Tables for Storing RDF Models
[0154] The tables used in the RDBMS for storing RDF data in a preferred embodiment are shown in FIG. 7. For clarity, a brief overview is given here. The tables are subsequently described in more detail.
Overview of Tables for Storing RDF Models
[0155] There are two global tables used for storing RDF triples. In addition, there are four local tables used as working tables during bulk loading of RDF triples, referenced in FIG. 10.
Global Tables:
[0156] LexValues: [0157] The entries in LexValues table 721 hold data for mapping lexical values to UIDs.
This is done with two columns lexval and id. If the lexical value is a literal value that has a canonical form but is not in the canonical form, the entry also maps the literal value to its canonical form and then maps the canonical form to a UID and stores the UID in a column canon-id. In addition, the entry holds, in two columns lexval-ext and canon-ext, the additional input information needed for the hash function if the id value and/or the canon-id value must be rehashed. A special feature is that the lexval column is a virtual column defined as a concatenation of two columns: the lexval-prefix column and the lexval-suffix column. [0159] Note that for clarity, in the rest of this Detailed Description the name lexval for the virtual column, and the names lexval-prefix and lexval-suffix for the two columns thus described, may be used interchangeably except where they must be distinguished. [0160] IdTriples: [0161] IdTriples table 701 holds the normalized representation for the RDF triples.
[0162] There are three columns holding the UID values: subj-id, pred-id, and obj-id. Further, the column canon-obj-id holds the UID for the canonical form of the literal value if the object value is a typed literal. In addition, there is a column model-id that identifies which RDF model this triple is in: the table is partitioned with separate partitions for each model-id value.
Local Tables:
[0163] StagingTable: [0164] StagingTable 1001 is a working table to hold all RDF triples being bulk-loaded. [0165] There are three columns subj, pred, and obj for the lexical values of the subject, predicate, and object of each RDF triple being bulk loaded. Reading all the RDF triples first into this working table allows the system to operate on the data using powerful features of the RDBMS. [0166] BatchLexValues: [0167] BatchLexValues table 1021 holds the mapping of lexical values to UIDs, before they are merged into the LexValues table. [0168] The lexval and id columns hold the lexical values and their corresponding UIDs. In addition, if the lexical value is a literal that is different from its canonical form, then the canon-lexval and canon-id columns hold the canonical form and the UID for the canonical form of the lexical value, respectively. All processing for collisions, collision resolutions, and canonicalization is done before the BatchLexValues data are merged into the LexValues table. Each row also holds, in two columns lexval-ext and canon-ext, the additional input information required for the hash function if the id value and/or the canon-id value must be rehashed. [0169] BatchIdTriples: [0170] BatchIdTriples table 1051 holds the normalized representation of the RDF triples from StagingTable 1001 before they are merged into the IdTriples table. The BatchIdTriples table has the same structure as the IdTriples table, but is not partitioned: it holds triples only for a single model. [0171] One column holds the model-id identifier value for the triples being bulk-loaded. [0172] There are four columns holding the UID values: subj-id, pred-id, canon-obj-id and obj-id. [0173] AllCollExt: [0174] AllCollExt table 1061 holds a list of every lexical value in LexValues table 721 whose UID was rehashed, or for which the UID of the canonical value was rehashed.
[0175] There are three columns: lexval for the lexical value, plus the columns collision-ext and canon-collision-ext for the extension values to be combined with the lexical or the canonical form of the lexical value respectively to produce the UID for the lexical value or its canonical form.
LexValues Table
[0176] FIG. 7 shows the LexValues table at 721. Each entry of this table relates one of the distinct lexical values in the set of RDF models represented by the IdTriples table 701 to the UID that represents the lexical value in the IdTriples table. In the preferred embodiment of the IdTriples table, the UIDs are produced by hashing the lexical values, and as will be explained in detail in the following, each entry contains not only the UID, but also a description of how the lexical value was hashed to produce the UID.
[0177] 741 shows a representative row that contains a lexical value that is a URI, namely <> in column lexval 735, and the corresponding UID value 100 in column id 737. Note that the lexval column at 735 is a virtual column computed by concatenating the lexval-prefix column 733, holding for example <, and the lexval-suffix column 734, holding for example John>. Breaking the lexical values that are URIs into a prefix and a suffix exploits special properties of RDF URI format, and allows the preferred embodiment to use table and index compression features of the RDBMS, as is explained below. A similar such representative row is shown at 743.
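The splitting of a URI lexical value into the lexval-prefix and lexval-suffix columns can be sketched as follows. The choice of '/' and '#' as the delimiter characters, and the rule of splitting at the rearmost internal delimiter, are assumptions for illustration; the actual parsing rules are those of FIG. 16.

```python
def split_uri(uri: str):
    """Divide a URI lexical value into (prefix, suffix) at its rearmost
    internal delimiter, assumed here to be '/' or '#'.

    Concatenating prefix and suffix reconstructs the original value,
    mirroring the virtual lexval column.
    """
    # Scan backward, starting before the final character so that a
    # trailing delimiter is not treated as internal; stop before index 0
    # so that a leading delimiter is likewise excluded.
    for i in range(len(uri) - 2, 0, -1):
        if uri[i] in "/#":
            return uri[: i + 1], uri[i + 1:]
    # No internal delimiter: empty prefix, whole value as suffix.
    return "", uri
```

Because many URIs in a dataset share the same long prefix, storing the prefix in its own column lets the RDBMS's compression features collapse the repeated values, as explained below.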
[0178] An RDBMS can enforce a uniqueness constraint using a unique index on a column or a set of columns together in a table. A unique index is an internal structure in the RDBMS, used to enforce, or guarantee, that the table contains no two values in that column, or no two sets of values in that set of columns, that are the same--that is, all the values in that column are unique, or distinct from each other. Further information on uniqueness constraints and indices can be found in: [0179] Oracle® Database Concepts 10g Release 1 (10.1), download.oracle.com/docs/cd/B14117--01/server.101/b10743/schema.htm.
[0180] Uniqueness constraint indices are defined for the LexValues table 721 on the combined (lexval-prefix, lexval-suffix) pair of columns as shown at 754, and on the id column as shown at 755. These RDBMS uniqueness constraints ensure that all lexval 735 values are distinct, and that all id 737 values are distinct.
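The effect of these uniqueness constraints can be demonstrated with a minimal sketch, here using an in-memory SQLite table rather than the Oracle RDBMS of the preferred embodiment; the column set is abbreviated for illustration.

```python
import sqlite3

# Create a reduced LexValues table with the two uniqueness constraints
# described above: one on the (lexval-prefix, lexval-suffix) pair, one
# on the id column.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE LexValues (
        lexval_prefix TEXT,
        lexval_suffix TEXT,
        id INTEGER,
        CONSTRAINT uq_lexval UNIQUE (lexval_prefix, lexval_suffix),
        CONSTRAINT uq_id UNIQUE (id)
    )
""")
conn.execute("INSERT INTO LexValues VALUES ('<http://example.org/', 'John>', 100)")
try:
    # A second row with the same (prefix, suffix) pair violates the
    # uniqueness constraint and is rejected by the database.
    conn.execute("INSERT INTO LexValues VALUES ('<http://example.org/', 'John>', 101)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The constraint is enforced by the database engine itself, so duplicate lexval or id values cannot be inserted in error by any code path.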
[0181] The system of the preferred embodiment determines and stores canonicalized forms for literal lexical values that are not in canonical form. In the case where the literal value in StagingTable has a non-canonical form, there are rows in LexValues for both the non-canonical form of the literal value and the canonical form of the literal value. In the LexValues row for the non-canonical form, the field canon-id 739 is set to the value of id in the LexValues row where the canonical value is the value of the lexval field 735. For example, row 742 is the row for the original lexical value [0182] "024" <>.
[0183] In row 742 the id value of 400 for row 744--the row for the canonical form of the lexical value--is stored in the column canon-id 739. In rows other than those for lexval values that are non-canonical forms of canonicalized literal values, canon-id is set to NULL.
[0184] The description of how the entry's lexical value was hashed is contained in a preferred embodiment in the column lexval-ext 736. If the column has the value NULL, the default hash function was applied to the lexical value and no collision resulted. If lexval-ext's value is non-NULL, the field contains the value that was combined with the entry's literal value and then hashed to produce a non-colliding hash value. In a preferred embodiment, the value in lexval-ext is the colliding hash value. This permits repeated collisions: on each collision, lexval-ext is set to the colliding value. The process continues until a non-colliding hash value is generated. Column canon-ext 740 describes how the canonical form of the literal value is hashed in the LexValues row for the canonical form.
[0185] Variations on the above technique may be employed with other techniques for rehashing or producing a non-colliding value. For example, one way of doing rehashing would be to rehash the lexical value with a different hash function. In that case, a field in the entry could contain an indicator value for the hash function used to generate the value in id 737.
IdTriples Table
[0186] FIG. 7 shows the IdTriples table 701 of the preferred embodiment. The rows of this table contain normalized representations of all the RDF triples in a set of RDF models.
[0187] 711 shows a representative row with the UIDs for the triple [0188] (<>, <>, "024" <>), which states the (subject, predicate, object) relationship that John is 24 years old. In this example, the canonical form for the literal value [0189] "024" <> is [0190] "24" <>.
[0191] The column model-id at 703 contains a unique identifier for the RDF model to which the triple represented by the row belongs. Columns subj-id at 705, pred-id at 707, and obj-id at 709 contain the UIDs 100, 300 and 200 respectively: these are the normalized UIDs for the lexical values in the triple represented by the row. Column canon-obj-id 708 holds the UID for the canonicalized literal value from column canon-id 739, and column obj-id 709 holds the id value for the original literal value from column id at 737. In the preferred embodiment, these UIDs are produced by hashing the triple's lexical values, as will be set forth below.
[0192] Storing the UID for the canonical form of the object value is done to support the requirement for value equivalence. Storing the UID for the original object value is done to support the requirement to maintain fidelity.
[0193] Like its equivalent in FIG. 6, the IdTriples table of FIG. 7 is partitioned in the RDBMS on model-id with each partition holding a separate graph or model. Special use of the partitioning is made during bulk loading, as is described below.
[0194] A uniqueness constraint 714 in the RDBMS is defined on the combined (pred-id, canon-obj-id, subj-id, model-id) columns in the IdTriples table to ensure that no duplicate triples can be inserted into a model/partition in error. Because the table is partitioned on model-id, this constraint is enforced by a separate index on the same list of columns for each partition.
Using Hashing to Generate UIDs
[0195] In the preferred embodiment, UIDs are generated by hashing lexical values. Collisions are fully resolved, so that there is a distinct UID value corresponding to each distinct lexical value. Special care is taken for rare colliding values.
[0196] UIDs created by hashing depend only on the value being hashed and the hash function. A given value hashed with a given hash function always produces the same hash value. It is this property that makes it possible to hash the lexical values to produce the UIDs for the IdTriples table. Deriving the UID for a given lexical value is a calculation, and does not require a look-up operation into a table of lexical values and their associated UIDs. This leads to several advantages for UIDs made by hashing, over UIDs that are not produced mathematically by calculation. These include: [0197] Scalability to large datasets: [0198] In the prior art, the conversion of the lexical values in the StagingTable table to the UIDs that represent the lexical values in the IdTriples table has been done by means of multiple joins between the StagingTable table and the LexValues table. For large real-world RDF datasets, both the LexValues mapping table and the StagingTable table become very large, on the order of a billion (1,000,000,000) records, and hence the need for multiple joins results in significant degradation of performance. As just set forth, if the UIDs for the lexical values are produced by hashing, there is no need for the joins. [0199] System-independent UID generation: [0200] If the hash function and value used to generate a UID for a lexical value are known, the UID can always be regenerated, regardless of the system in which the UID is generated. They further have no dependence on the order in which values are encountered. These properties permit generation of UIDs in bulk, concurrently, or in a distributed fashion. It also renders hash-based UIDs transportable between systems. The occurrence of collisions has limited the ability to exploit the advantages provided by hash-based schemes.
[0201] However, techniques disclosed herein overcome these limitations of hash-based schemes and make it possible to obtain the advantages.
Details of Hashing in a Preferred Embodiment
Selection of Hash Function to Minimize Collisions
[0202] It is desirable to use a hash function that results in very few collisions. The selection of the hash function is a matter of design choice, and may be made based on knowledge of the application, which hashing functions are available as built-in functions of the underlying RDBMS, the characteristics of the application dataset, or other considerations. The hash function used in the presently-preferred embodiment is Lookup, described in B. Jenkins, "A hash function for hash table lookup" Dr. Dobb's Journal, September 1997. Factors relevant to the selection of a good hash algorithm are: hash size, computational cost and collision rate. It is generally desirable to use a hash algorithm that has an optimal balance of being efficient, producing a small size hash value, and being close to collision free.
[0203] Other well-known hash functions include the following: [0204] SHA1: D. Eastlake and P. Jones. "US Secure Hash Algorithm 1 (SHA1)", IETF RFC 3174, September 2001, [0205] MD5: R. Rivest. "The MD5 message-digest algorithm", IETF RFC 1321, April 1992. [0206] MD4: R. Rivest, "The MD4 Message-Digest Algorithm", IETF RFC 1320, April 1992.
[0207] MD5 and SHA1 are almost collision free but produce long hash values of sizes of 128 bits and 160 bits respectively. Note that because both the MD5 and SHA1 functions cover their value space uniformly, it is feasible in program code to truncate their hash values to 48 or 64 bits, for better storage and query execution speed. If the hash values are truncated, collisions become more likely. The Lookup hash function on the other hand directly produces shorter hash values that are both storage and query execution friendly.
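The truncation idea mentioned above can be sketched as follows: because the MD5 and SHA1 digests cover their value space uniformly, keeping only the low-order bits still yields well-distributed hash values. This sketch uses SHA1 from Python's hashlib; the preferred embodiment instead uses the 63-bit Lookup function built into the RDBMS kernel.

```python
import hashlib

def truncated_sha1_uid(lexval: str, bits: int = 64) -> int:
    """Truncate a SHA1 digest to the low `bits` bits for use as a compact UID.

    Truncation trades the near-collision-freedom of the full 160-bit
    digest for better storage and query execution speed; collisions
    become more likely and must be resolved as described below.
    """
    digest = hashlib.sha1(lexval.encode("utf-8")).digest()
    # Interpret the 20-byte digest as a big integer, then mask off all
    # but the low-order `bits` bits.
    return int.from_bytes(digest, "big") & ((1 << bits) - 1)
```

The result is deterministic: the same lexical value always yields the same truncated UID, which is the property that allows UIDs to be computed rather than looked up.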
[0208] For the preferred embodiment, the 63-bit version of Lookup (built into the kernel of the Oracle RDBMS as a native function) is used: the choice was based on considerations of the speed and the hash quality as determined by experiment, and on hash id size.
Details of Hash Collision Resolution
[0209] The following principles are followed in generating hash-based UIDs in the preferred embodiment: [0210] There must be no collisions between hash-based UIDs contained in the LexValues table. [0211] When an entry for a new lexical value is to be added to the LexValues table and a collision results, the collision is resolved by rehashing the newly-added lexical value. Collisions may not be resolved by rehashing lexical values that are already in the LexValues table.
[0212] A result of these two principles is that any hash collisions must be resolved before a UID is added to the LexValues table.
[0213] For clarity, the techniques employed to detect and resolve hash collisions in an RDBMS are first described for a single lexical value for which an entry is being added to the LexValues table. Subsequently, techniques will be described for adding entries for many new lexical values to the LexValues table in a single operation.
[0214] Hashing when a single entry is being added to LexValues table 721
[0215] The steps of adding a single new entry to LexValues table 721 are shown in the flowchart of FIG. 11.
[0216] 1111 shows the start of the process for adding a new value to the LexValues table. For clarity, the new lexical value and the corresponding UID value are referred to as NR.lexval and NR.id respectively: NR is an abbreviation for "new record", as the result of this process may be that a new record is added to the LexValues table.
[0217] 1112 shows the first step of checking whether the new lexical value is already in the LexValues table. This test can be performed quickly by an SQL query. If the value is already in the LexValues table, then it already has been assigned a UID value in the LexValues table, and thus no new record should be added to the LexValues table, as shown at 1113, and the process is complete.
[0218] If the NR.lexval value is not in the LexValues table, then a hash value NR.id for the UID is calculated as shown at 1120 by executing the hash function with the NR.lexval value as the input to the hash function.
[0219] Before the new record of NR.lexval and NR.id can be added to the LexValues table, it is necessary to check for a hash collision, and to resolve any hash collision. These steps start at the section noted at 1121.
[0220] 1122 shows the test for checking whether the NR.id value is already in use in any entry in the LexValues table. This test is performed quickly by an SQL query. If the NR.id value is already present in the id column of any row in the LexValues table, then a new hash value must be obtained by rehashing to resolve the collision, as described at 1123. As shown by loop 1142, a rehashing may result in another collision, which then requires another rehashing. Given the rarity of collisions, more than a few iterations of loop 1142 indicates some kind of malfunction.
[0221] The test at 1124 checks whether too many iterations of loop 1142 have occurred. If so, the process of adding an entry to LexValues table 721 terminates (1124, 1127, 1128). In the preferred embodiment, only S iterations of loop 1142 are permitted.
[0222] If the result of the step at 1122 is that the NR.id value does not result in a collision with a UID value already in use in the LexValues table, then the new record is added or inserted into the LexValues table as shown at 1131. 1132 illustrates that this is done quickly with an SQL insert operation. In the new record, id is set to the UID resulting from the hash and lexval-ext is set to NULL.
[0223] The steps at 1125 and 1126 show the rehash calculation of a new hash value in a preferred embodiment. At 1125 the current NR.lexval and NR.id are combined, and at 1126 a hash value is calculated on this combined string using the same hash function. Well-chosen hash functions will produce a different hash value from this different input value to the function. After step 1126, the processing continues back to step 1121, which is the processing to determine whether the NR.id value would result in a collision.
[0224] In the preferred embodiment, the lexical value NR.lexval and the previous hash value NR.id are combined by converting the NR.id value to a standardized string representation, and concatenating it to the end of the NR.lexval string. For example, the lexval string "John" for NR.lexval, concatenated with a hash value 24 for NR.id would be combined to produce the string "John24": other methods of combining the lexical value with the hash value may be employed as a matter of design choice. Other methods of re-hashing may be employed as a matter of design choice. One example is the use of different hash functions for rehashing.
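The collision-detection and rehashing loop of FIG. 11 can be sketched as follows. Here `existing_ids` stands in for the id column of the LexValues table, and `hash_fn` and `max_rehash` are illustrative parameters; the actual checks are performed with SQL queries against the table.

```python
def assign_uid(lexval, existing_ids, hash_fn, max_rehash=5):
    """Generate a non-colliding UID using the rehashing scheme above:
    on a collision, concatenate the colliding hash value (as a string)
    to the lexical value and hash the combined string again.

    Returns (uid, lexval_ext), where lexval_ext is None when the default
    hash did not collide, mirroring a NULL lexval-ext column.
    """
    uid = hash_fn(lexval)
    ext = None  # NULL: default hashing, no collision
    for _ in range(max_rehash):
        if uid not in existing_ids:
            existing_ids.add(uid)
            return uid, ext
        # Collision: record the colliding value and rehash the
        # combination of the lexical value and that value.
        ext = uid
        uid = hash_fn(lexval + str(uid))
    # Given the rarity of collisions, exhausting the loop indicates
    # some kind of malfunction.
    raise RuntimeError("too many rehash iterations")
```

With a well-chosen hash function the combined string hashes to a different value, so the loop almost always terminates on the first or second iteration.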
[0225] Once the record has been added to the LexValues table, the process is complete, as shown at 1141.
Bulk Loading of RDF Data in a Preferred Embodiment
[0226] The challenge of bulk loading RDF data is the many transformations involved in getting from the lexical values of the RDF triples that are being loaded to the normalized representation. In the preferred embodiment, the need to resolve any collisions resulting from the generation of the hash-based UIDs in ways that do not affect the currently-existing LexValues and IdTriples tables is particularly challenging.
[0227] In the preferred embodiment, the necessary transformations, including collision resolution, are done in a set of what are termed in the following local tables. When the transformations are finished, entries from the local tables are merged into the LexValues table and the IdTriples table. The local tables are shown in FIG. 10.
Details of the Local Tables
[0228] In the preferred embodiment, the bulk loading process supports loading for one model at a time. The model name is a parameter to the APIs for the bulk loading process, and the model-id used in partitioning the IdTriples table is generated by the RDBMS. However, multiple instances of the bulk loading process can execute for loading into a single model or into different models concurrently, as noted previously.
StagingTable Table
[0229] 1011 in FIG. 10 shows the StagingTable table. This table has three columns, and holds the lexical values for the subjects, predicates, and objects of the RDF triples for the model currently being bulk-loaded. The subj column at 1012 holds the subject lexical value, the pred column at 1013 holds the predicate lexical value, and the obj column 1014 holds the object lexical value. 1011 shows a representative row with the lexical values for the example triple (<>, <>, "024" <>).
BatchLexValues Table
[0230] The BatchLexValues table is used to hold new lexical values from the model being loaded and to map the new lexical values to UIDs. Where the UIDs are hash-based, the mapping process involves dealing with collisions between UIDs for new values within BatchLexValues and collisions between UIDs for new values and UIDs in the LexValues table. All collisions are resolved before the new lexical values in the BatchLexValues table are added to the LexValues table.
[0231] 1041 shows a representative row holding a distinct lexical value <> and the corresponding UID value 100. The lexval column 1032 holds the lexical value. The id column shown at 1034 holds the corresponding UID value. If the lexical value is a literal value, and the literal value is different from the canonical version of the literal value, the canonical version of the lexical value is stored in the column canon-lexval at 1035, and the column canon-id at 1037 holds the UID value from the id column 1034 of the entry in BatchLexValues for the canonical version of the value. Row 1043 shows such a representative row. The literal value "024" <> in the lexval column of row 1043 has associated UID value 200 in the id column 1034, and the UID value 400 of the canonical form "24" <> of the literal value is stored in the canon-id column. Further, the canonical value is stored in its own row in the BatchLexValues table, as shown at row 1044.
[0232] The columns lexval-ext 1033 and canon-lexval-ext 1036 are used to store the information needed to generate hash values for the entry in the manner described in the discussion of the LexValues table.
BatchIdTriples Table
[0233] The BatchIdTriples table, shown at 1051 in FIG. 10, is used to hold new RDF triples from the StagingTable as they are normalized, before the new triples are added to the IdTriples table. The table's entries include an identifier corresponding to the model for which the RDF triples are being loaded, and UIDs corresponding to the subject, predicate, and object lexical values of the RDF triples.
[0234] 1057 shows an example for the UID values for the triple (<>, <>, "24" <>). The model-id column 1052 holds the identifier for the model, subj-id column 1053 holds the UID value for the subject of the triple, the pred-id column 1054 holds the UID value for the predicate, the obj-id column 1055 holds the UID value for the object of the triple, and the canon-obj-id column 1056 holds the UID value for the canonical form of the object lexical value.
AllCollExt Table
[0235] The AllCollExt table contains an entry for each lexical value in the LexValues table whose UID or whose canonical value's UID was generated by rehashing. The entry contains the information needed to again generate the UID.
[0236] FIG. 10 shows the AllCollExt table 1061. Column lexval at 1071 holds the lexical value. If mapping the lexical value to a hash-based UID involved a collision, the UID that was combined with the lexical value to resolve the collision is contained in collision-ext field 1072. If the lexical value is a literal value which is not in canonical form but has a canonical form and the mapping of the UID for the canonical form's entry involved a collision, the collision-ext value for the canonical form's entry which was combined with the canonical form of the literal value is contained in canon-collision-ext field 1073. A representative row is shown at 1075: the literal value "Joan" had a collision for the UID of the literal value itself, as shown by the non-NULL value 500 in the field collision-ext 1072.
[0237] Because hash collisions are rare, AllCollExt is always small and often empty or non-existent. It is used as follows: [0238] If AllCollExt is empty or does not exist, [0239] hashing the lexical values in the StagingTable produced no collisions, and the UIDs needed for the IdTriples table can be made by simply hashing the lexical values in the StagingTable. [0240] If AllCollExt does exist, [0241] all that is necessary to make a UID from a given lexical value in the StagingTable is to check whether there is an entry in AllCollExt for the lexical value. [0242] If there is no such entry, [0243] then there was no collision involving that lexical value and no collision involving its canonical form: the UID values are computed by the default method of simply hashing, as just described. [0244] If there is an entry, then: [0245] If the collision-ext entry is NULL, [0246] then there was no collision involving the original lexical value, and its UID is computed by simply hashing, as just described. [0247] If the canon-collision-ext entry is NULL, [0248] then there was no collision involving the canonical value, and its UID is computed by simply hashing, as just described. [0249] If the collision-ext entry is non-NULL, [0250] then there was a collision involving the original lexical value: the collision-ext value is to be combined with the original lexical value and the combined value simply hashed, to produce the UID for the lexical value. [0251] If the canon-collision-ext entry is non-NULL, [0252] then there was a collision involving the canonical form of the lexical value: the canon-collision-ext value is to be combined with the canonical value and the combined value hashed, to produce the UID for the canonical form of the lexical value.
[0253] The AllCollExt table 1061 thus contains collision hash value generation information that indicates how the hash value is to be generated for those lexical values for which hash value generation resulted in a collision: in the preferred embodiment, the collision hash value generation information is the extension that must be combined with the lexical value to resolve the collision. AllCollExt thus maps lexical values whose hashing resulted in a collision to the collision hash value generation information needed to resolve the collision. Because there are only two kinds of UIDs, namely those produced from lexical values using the default hashing method and those produced using the collision hash value generation information, AllCollExt table 1061 in fact indicates for all lexical values how the UID for the lexical value is to be generated. As set forth above, if there is no AllCollExt table 1061, or if there is no entry for the lexical value in the table, the UID is generated using the default method: otherwise, it is generated using the collision hash value generation information for the lexical value in AllCollExt.
[0254] As with the entries in LexValues, many different techniques can be used in AllCollExt to indicate how the hash-based UID corresponding to the lexical value and the hash-based MD corresponding to the canonical form of the value are to be generated.
[0255] It should be pointed out here that the techniques embodied in the AllCollExt table are very general and can be used in any situation where it is necessary to regenerate hash-based UID values that include hash values resulting from collision resolutions.
Overview of Bulk Loading and the Use of Local Tables
[0256] The following is an overview of bulk loading and the use of the local tables. The bulk loading process is described in greater detail later. [0257] 1) All the triples to be bulk-loaded are loaded into the StagingTable table. [0258] 2) All the lexical values used in the StagingTable table are collected in the BatchLexValues table, and assigned initial UIDs by simply hashing. [0259] a. For literal values, the canonical form of the value is computed. If the canonical form of a value is different from the original value, an entry for the canonical form of the value is also made in BatchLexValues. [0260] b. For blank nodes--nodes for which the scope of the identifier associated with a node is only the current RDF model--an augmented string is generated consisting of the identifier of the model, plus special characters, so that it can be distinguished from blank nodes in other RDF models. For example, the blank node label _:xyz, when inserted into an RDF model with model-id 5, would be augmented to make it _:m5mxyz. This makes the resulting triples distinguishable from any use of the same blank node labels in a different RDF model. [0261] 3) Bulk operations using the RDBMS check for collisions en masse, and resolve all of them in the BatchLexValues table. [0262] 4) With all collisions resolved, the new lexical values and UIDs in the BatchLexValues table are merged into the LexValues table. [0263] Values that are URIs are parsed, and each such value split across two columns in the LexValues table, for compression. [0264] 5) Information indicating how to generate the UIDs that resulted from collision resolution is put into the AllCollExt table. [0265] 6) The BatchIdTriples table is filled in by substituting the correct UIDs for each lexical value in the StagingTable. The UIDs for the ID triples are computed by hashing the lexical values from the staging table.
If the lexical value does not have an entry in AllCollExt, the default hashing method is used to compute the UID. If the lexical value does have an entry in AllCollExt, the UID is computed as specified in AllCollExt. [0266] 7) Any duplicated rows are removed from the BatchIdTriples table. [0267] 8) The BatchIdTriples table is merged into the IdTriples table in a bulk operation.
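The blank-node augmentation of step 2b can be sketched as follows; the helper name is hypothetical, and only the _:m&lt;model-id&gt;m&lt;label&gt; scheme comes from the text.

```python
def augment_blank_node(label: str, model_id: int) -> str:
    """Rewrite a blank-node label so it is unique to one RDF model,
    following the _:m<model-id>m<label> scheme described in the text."""
    assert label.startswith("_:"), "blank node labels begin with _:"
    return "_:m%dm%s" % (model_id, label[2:])

# The example from the text: _:xyz in model 5 becomes _:m5mxyz.
assert augment_blank_node("_:xyz", 5) == "_:m5mxyz"
```

Because the model identifier is embedded in the stored label, two models that both use _:xyz produce distinct lexical values and therefore distinct UIDs.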
[0268] The bulk loading techniques described in the following ensure that RDF triples are loaded in normalized form into the IdTriples and LexValues tables. The techniques may be employed with normalized tables that use UIDs produced by hashing the lexical values in combination with hash collision detection and resolution, and also with normalized tables using UIDs produced in other ways. In the latter case, there are no collisions between UIDs, and the AllCollExt table 1061 shown in FIG. 10 is not needed, the collision extension columns lexval-ext 736 and canon-ext 740 are not needed and may be eliminated from the LexValues table 721 shown in FIG. 7, and similarly the collision extension columns lexval-ext 1033 and canon-lexval-ext 1036 are not needed and thus may be eliminated from the BatchLexValues table 1021 shown in FIG. 10.
Details of Bulk Loading
[0269] For clarity, bulk loading is first described as it is done when UIDs that are not produced by hashing are used for normalization. Next, bulk loading is described with the inclusion of hash-based UIDs.
Bulk Loading without Unique Hash UIDs
[0270] FIG. 12 illustrates the techniques for bulk loading without generating UIDs by hashing. FIG. 12 will now be described in detail. In FIG. 12, the local tables are shown at 1251, and the global tables are shown at 1252. [0271] 1) Create local tables: [0272] As an initial set-up step for bulk loading, the local tables are created or initialized for the bulk loading process. In the preferred embodiment, the StagingTable table 1001, BatchLexValues table 1021, and BatchIdTriples table 1051 as described for FIG. 10 are created by means of an SQL statement or API such as that shown in FIG. 9 at 901: FIG. 9 is further described below. Each of these tables has no rows at the start of the bulk load process. [0273] 2) Load data into StagingTable table: [0274] 1201 in FIG. 12, labeled "Step 1", shows that the RDF data is first loaded from an external file 1221 into a StagingTable local table 1001 in the RDBMS. This is accomplished by using the bulk-loading facilities of the RDBMS. Optionally, as part of this operation, the data from the external file is checked by a parsing operation to confirm that the input data is in correct RDF triple format, and that all RDF terms used in the input data are valid. Any erroneous rows are reported. The user may then correct the data that is in error, and resubmit the data for bulk load. If the input data is known already to be in the correct form without error, the parsing operation may be skipped, thus speeding up the overall bulk loading process. [0275] FIG. 10 shows the StagingTable table at 1001. The table consists of three columns, as previously described, for the subject, predicate, and object parts of the input data triples. [0276] 3) Process StagingTable, collect lexical values in BatchLexValues, assign UIDs: [0277] Subsequently as shown at 1202, labeled "Step 2", the distinct lexical values used in the RDF triples stored in the StagingTable table 1001 are inserted into a BatchLexValues local table 1021.
[0278] a) A UID value is assigned to normalize each lexical value. The UID value is stored along with the lexical value in the same row of the BatchLexValues table 1021, in column id 1034. [0279] b) FIG. 18 shows the steps of computing canonical values: FIG. 18 is described below. If the lexical value and the canonical form of the lexical value are not identical, the canonical value is also assigned a UID and stored in the BatchLexValues table 1021 in the same fashion as other lexical values: further, the canonical form and the UID for the canonical form are stored in the canon-lexval column 1035 and the canon-id column 1037, respectively, in the row for the original lexical value. [0280] Details for the processing of literal values that are not in canonical form are described further below. [0281] 4) Merge BatchLexValues table with LexValues table: [0282] As shown at 1203, labeled "Step 3", the accumulated rows from the BatchLexValues table 1021 are then merged into the LexValues global table 721. The merging is done by means of an SQL MERGE statement. The SQL MERGE operation only adds rows whose lexval value is not already in the LexValues table; rows for lexical values already present in the LexValues table are not added again. [0283] 5) Create normalized BatchIdTriples table: [0284] As shown at 1205, labeled "Step 5", the StagingTable is then processed to generate normalized RDF triples, in which each RDF triple from the StagingTable is converted to a normalized form by replacing each lexical value with the corresponding UID value from the BatchLexValues table, and inserting the resulting id-based triple into the BatchIdTriples local table 1051. [0285] 6) Remove duplicate rows from BatchIdTriples table: [0286] As shown at 1206, labeled "Step 6", any duplicate rows in the BatchIdTriples table 1051 are removed by an SQL operation.
[0287] 7) Merge BatchIdTriples table with IdTriples table: [0288] Subsequently as shown at 1207, labeled "Step 7", the data in the BatchIdTriples table 1051 are either inserted or appended to the IdTriples table 701. [0289] a. If the relevant model in the IdTriples table is empty, the data is inserted very efficiently by: [0290] i. Building a new index on the BatchIdTriples table from bottom up. [0291] ii. Performing a zero-cost insert of the data and the index into the partition of the IdTriples table. The zero-cost insertion is done by an SQL operation such as an EXCHANGE PARTITION operation. [0292] b. If the relevant model is not empty, the data is appended by an SQL operation such as: [0293] i. a direct-path MERGE operation, which does either an UPDATE or an INSERT if a given row is already present in the IdTriples table.
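The merge of step [0282] can be illustrated with SQLite, which lacks a MERGE statement but can express the same "add only rows not already present" semantics with an INSERT ... SELECT. The reduced two-column table layout and the sample values are assumptions for illustration only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Reduced-column stand-ins for the LexValues and BatchLexValues tables.
cur.execute("CREATE TABLE LexValues (lexval TEXT PRIMARY KEY, id INTEGER)")
cur.execute("CREATE TABLE BatchLexValues (lexval TEXT, id INTEGER)")
cur.execute("INSERT INTO LexValues VALUES ('Joan', 1)")
cur.executemany("INSERT INTO BatchLexValues VALUES (?, ?)",
                [("Joan", 99), ("Mary", 2)])
# MERGE-like step: add only lexical values not already in LexValues,
# so the existing mapping for 'Joan' is left untouched.
cur.execute("""INSERT INTO LexValues
               SELECT lexval, id FROM BatchLexValues b
               WHERE NOT EXISTS (SELECT 1 FROM LexValues l
                                 WHERE l.lexval = b.lexval)""")
rows = sorted(cur.execute("SELECT lexval, id FROM LexValues"))
assert rows == [("Joan", 1), ("Mary", 2)]
```

As the text notes, filtering inside the merge is cheaper than first deleting duplicated rows from the batch table, since the filter is evaluated once per row in a single bulk pass.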
[0294] Note that FIG. 12 does not have a step labeled "Step 4".
Bulk Loading with Hash-Based UIDs.
[0295] This description refers to FIG. 13. In FIG. 13, the local tables are shown at 1351, and the global tables are shown at 1352.
[0296] The bulk loading process is as follows: [0297] 1) Load triples into the StagingTable table: [0298] 1301 in FIG. 13, labeled "Step 1", shows that the RDF triples are first loaded from an external file 1221, into a StagingTable local table 1001 in the RDBMS. This is done using the bulk-loading facility of the RDBMS. [0299] 2) Collect lexical values in BatchLexValues table, assign UIDs: [0300] Subsequently as shown at 1302, labeled "Step 2", the distinct lexical values used in the RDF triples stored in the StagingTable table 1001 are inserted into a BatchLexValues local table 1021. For each lexical value, an initial UID value is calculated by a hash function. The lexical value and the corresponding UID are stored in a row of the BatchLexValues table 1021 in the lexval 1032 and id 1034 columns. [0301] If the lexical value is a typed literal, plain literal with language tag, or other type of lexical value to be normalized, the canonical form for the lexical value is computed, as illustrated in FIG. 18. If it is not identical to the original lexical value, then the canonical value and an initial hash-based UID are also added to the BatchLexValues table. Also, the canonical value and the UID for the canonical value are stored in the row for the original lexical value in the canon-lexval 1035 and canon-id 1037 columns, respectively. [0302] Details for the processing of literal values that are not in canonical form are described further below. [0303] 3) Detect and resolve all hash collisions in the BatchLexValues table. [0304] As shown at 1113, labeled "Step 3a", a bulk operation detects all collisions by any UID value in the BatchLexValues table with any other UIDs in the BatchLexValues or LexValues tables. All collisions are then resolved in the BatchLexValues table: the collision detection and collision resolution process is described further below. 
For collisions that were already resolved in the LexValues table (Old collisions), the row in the BatchLexValues table is dropped, so that the existing mapping in the LexValues table continues to be used. New collisions are resolved by rehashing. The hash generation information for the rehashing is stored in the BatchLexValues table, in the lexval-ext column 1033 for a rehashing of the lexical value, and the canon-lexval-ext column 1036 for a rehashing of the canonical value. [0305] Further details of rehashing are given below. [0306] 4) Merge BatchLexValues table into the LexValues table: [0307] At this point, all the new lexical values and their UIDs are in the BatchLexValues table, and there are no unresolved collisions. As shown at 1303, labeled "Step 3b", the accumulated rows from the BatchLexValues table 1021 are then merged into the LexValues global table 721. [0308] In the merge, the rows for lexical values in the BatchLexValues table that are already in the LexValues table are not added to the LexValues table. This aspect of a MERGE operation in the RDBMS is faster and more efficient than first deleting the duplicated rows from the BatchLexValues table, as there may be a great many such rows. [0309] In the merge operation, values in the lexval column 1032 of the BatchLexValues table that are URIs are also parsed into a prefix, or first part, and suffix, or last part. The two parts are stored in separate columns lexval-prefix 733 and lexval-suffix 734 of the LexValues table. This allows for compression of the lexval virtual column 735 and associated indices in the LexValues table. Further details of this are given below. [0310] 5) Collect information about resolved collisions into the AllCollExt table. [0311] Generally, there will have been very few collisions to resolve, in many cases none.
As shown at 1304, labeled "Step 4", a query is done on the LexValues table 721 to collect all rows that were given a rehashed UID: these are the rows that have a non-NULL value set for the lexval-ext 736 or canon-ext 740 columns. If there are any such rows, the AllCollExt table 1061 is created. From each of these rows, the lexical value and the two extension values are entered into a new row in the AllCollExt table. This is explained further below. [0312] 6) Create normalized BatchIdTriples table: [0313] As shown at 1305, labeled "Step 5", the triples from the StagingTable 1001 are then converted to a normalized form by replacing each lexical value in the triple, and the canonical form computed for the object lexical value, with the hash-based UIDs to which they have each been mapped. [0314] The UID is generated by recomputing the hash value. Where the mapped hash-based UID was generated without collision, the UID is generated by simply again hashing the lexical value. Where the UID to which the lexical value or the canonical form of the lexical value has been mapped was generated with a collision, there is an entry for the lexical value in AllCollExt. The extension value or canonical extension value in the entry is combined with the lexical value or the canonical form of the lexical value respectively, and the combined value is rehashed to produce the UID. [0315] Finally in this step, a triple record for the BatchIdTriples table 1051 is created with the UIDs for the subject, predicate, object, and canonical-object in the subj-id 1053, pred-id 1054, obj-id 1055, and canon-obj-id 1056 columns respectively, and the identifier for the model in the model-id column, and the triple record is added to the BatchIdTriples local table 1051. [0316] 7) Remove duplicate rows from BatchIdTriples table: [0317] As shown at 1306, labeled "Step 6", any duplicate rows in the BatchIdTriples table 1051 are removed. This is shown in more detail below.
[0318] 8) Merge BatchIdTriples table with IdTriples Table: [0319] Subsequently as shown at 1307, labeled "Step 7", the data in the BatchIdTriples table 1051 are either inserted into the IdTriples global table 701, or appended to the IdTriples table 701. [0320] a. If the relevant model in the IdTriples table is empty, the data is inserted very efficiently by [0321] i. first building the index or indices bottom-up, and then [0322] ii. performing a zero-cost insert+index build SQL operation, such as by an EXCHANGE PARTITION operation. [0323] b. If the relevant model is not empty, the data is appended by an SQL operation that includes removal of any duplicated rows, or in other words rows in the BatchIdTriples table that are already in the IdTriples table. This is explained in more detail below.
Hash Collision Detection and Resolution During Bulk Loading
[0324] In the techniques of the bulk-loading process, collisions are detected and resolved collectively on the large "batch" of values that are being bulk-loaded, rather than singly. Among other benefits, the technique achieves improved performance by implementing transformations in the RDBMS using SQL code: for example, the optimizer of the RDBMS selects an optimal execution plan based on relative row counts and access paths for the tables involved.
Processing of Old and New Collisions
[0325] A collision set is the set of all those lexical values that hash to a given hash value, where there are two or more distinct lexical values in the set. There may be more than one collision set in a batch of values, such as a collision set of two or more distinct lexical values that all hash to the value 96, and another collision set of two or more distinct lexical values that hash to the value 105.
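The definition above can be made concrete with a small sketch that groups (lexical value, hash) pairs into collision sets; the helper name and the sample hash values 96 and 105 (taken from the text's example) are for illustration only.

```python
from collections import defaultdict

def collision_sets(pairs):
    """Group (lexval, hash) pairs by hash value; a collision set is any
    hash value to which two or more distinct lexical values map."""
    by_hash = defaultdict(set)
    for lexval, h in pairs:
        by_hash[h].add(lexval)
    return {h: vals for h, vals in by_hash.items() if len(vals) > 1}

# Two collision sets, as in the text: one at hash 96, one at hash 105.
batch = [("a", 96), ("b", 96), ("c", 105), ("d", 105), ("e", 7)]
assert collision_sets(batch) == {96: {"a", "b"}, 105: {"c", "d"}}
```

A hash value with only one lexical value (such as 7 above) forms no collision set and needs no resolution.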
[0326] There are two types of collisions to be resolved during batch loading: Old collisions, and New collisions. New collisions may be local or local/global. [0327] 1. Old Collisions: collisions that have already been resolved in the LexValues table prior to bulk-loading new data. These are indicated in the LexValues table by an entry with a non-NULL collision-ext UID. The presence of the non-NULL collision-ext UID indicates that the entry's lexical value was already rehashed. [0328] 2. New Collisions: collisions that occur due to the arrival of a new value in the BatchLexValues table. New collisions are either local or local/global. [0329] a. In a local collision, which may also be called a local-only collision, the collisions are only among new lexical values being loaded, and thus all the values in the collision set are in the BatchLexValues table. [0330] b. In a local/global collision, one of the values in the collision set is in the data already in the LexValues table: since any new collisions are always resolved before a record is added to the LexValues table, there will never be more than one value in the LexValues table which belongs to a particular collision set.
[0331] Presence or absence of collisions can be determined very efficiently by an SQL "group by" or "count" bulk operation, which is very fast. Further, as collisions are rare, the further steps for hash resolution will usually be skipped. The steps are shown in the flowchart of FIG. 14. Details of certain steps are shown in a pseudo-code form of SQL in additional figures. For further information on SQL, see [0332] Oracle® Database SQL Reference 10g Release 1 (10.1), download.oracle.com/docs/cd/B14117--01/server.101/b10759.pdf.
[0333] The processing starts at 1412 in FIG. 14, and completes at 1460.
[0334] The steps below for Old/New collisions and rehashing require that the content of the LexValues table does not change during hash resolution. In the preferred embodiment, a locking protocol is used to prevent concurrent updates to the LexValues table until the merge of the BatchLexValues table into the LexValues table is complete. Note that storing canonical values in their own rows of the LexValues and BatchLexValues tables simplifies collision-handling code, since it is no longer necessary to do collision-handling for the canonical values separately. Once a lexical value in the BatchLexValues table has been rehashed to a UID that is not involved in any collision, that rehashed UID is used to update the id value, and the corresponding collision-ext to update the canon-id and canon-collision-ext of any entry that has that lexical value as its canonical value.
Old Collisions
[0335] Old collisions are collisions that were detected and resolved already, and have already been assigned a rehashed UID in the LexValues table.
[0336] 1426 shows the processing for Old collisions. First at 1414, a check is done for whether there are any Old collisions to be processed. If not, the further steps for Old collisions are skipped, and thus involve no overhead. Processing then continues to the steps for New collisions, as shown at 1416.
[0337] If there are Old collisions 1418, the next step at 1420 is to get a list of all the Old collisions in a working table Old_Collisions from the LexValues table. The next step is to delete the entries in the BatchLexValues table that match the entries in the Old_Collisions table, as shown at 1422.
[0338] 1422 is the step to delete from the BatchLexValues table, all rows that reference a lexical value that is already used in a mapping in the Old_Collisions table. These lexical values already have a mapping for that lexical value in the LexValues table, and the UID that is mapped to the lexical value should not be changed. The next step is to update the BatchLexValues, as shown at 1424.
[0339] 1424 shows the step to update the canon-id and canon-collision-ext columns in the BatchLexValues table for any row whose canon-lexval matches a lexical value in the Old_Collisions table, with the UID and extension in the matching row of the Old_Collisions table, so that canon-lexval, canon-id, and canon-collision-ext in the BatchLexValues table have the same mappings as in the LexValues table for any canonical value that is already in the LexValues table. This is done by scanning the Old_Collisions table for records referencing the same canonical value.
[0340] Next, the processing continues to the steps for New collisions.
Pseudo-Code Details of Old Collision Processing
[0341] FIG. 20 shows a pseudo-code representation of the SQL for the processing for Old collisions. FIG. 20 is described below.
Determining Whether there are any Old Collisions
[0342] 2001 in FIG. 20 shows the check for whether there are any Old collisions, for step 1414. The count (*) operation at 2011 returns the total number of rows in the LexValues table, for which the lexval-ext field is not NULL 2013. In other words, this is the count of rows for which the lexval-ext field is set to a value: if this count is zero, then there are no Old collisions. This check is done in a single query. RDBMS systems are particularly efficient at queries that do counts and check simple filter conditions such as whether fields are or are not NULL.
[0343] Equivalent operations for determining whether or not there are any Old collisions may also be used as a matter of design choice. For example, a running summary table can be maintained during all LOAD and INSERT operations that tracks whether any records with the lexval-ext field set to a value were added to the LexValues table: this summary table could then be queried, rather than querying the LexValues table as described at 2001.
Processing for Old Collisions
[0344] 2003 shows a pseudo-code representation of the SQL for step 1420. A working table Old_Collisions is created 2031 with the rows in LexValues for which the lexval-ext field is not NULL 2033. The Old_Collisions table will have three columns val, vid, and ext for the lexval, id, and lexval-ext columns respectively in the rows from the LexValues table, as shown in the pseudo-code at 2032.
[0345] Continuing, 2005 shows a pseudo-code representation of the SQL for step 1422. All the rows for Old collisions are deleted 2051 from the BatchLexValues table, where the lexval column in the row--as shown at 2052--matches any val entry in the Old_Collisions table 2053.
[0346] 2007 shows the pseudo-code processing for step 1424. At 2007, any canonical values in the BatchLexValues table that are also Old collisions, are updated to have the same rehashed UID used to resolve the particular Old collision in the LexValues table. [0347] 2071 shows the pseudo-code for doing a MERGE with UPDATE on the BatchLexValues table as x, and the Old_Collisions table as y. [0348] 2072 shows the pseudo-code for selecting the rows in the BatchLexValues table where the canonical value canon-lexval, matches the lexical value val in a row in the Old_Collisions table. [0349] 2073 shows the pseudo-code for the UPDATE operation on the canon-id and canon-lexval-ext columns in the BatchLexValues row, to be the vid and ext values from the matching row from the Old_Collisions table.
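Steps 1422 and 1424 can be sketched against SQLite stand-ins for the tables. The column set is reduced, and the sample values (extension 500, rehashed UID 777, and a non-canonical "  Joan " literal whose canonical form is "Joan") are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE BatchLexValues
               (lexval TEXT, id INTEGER, canon_lexval TEXT,
                canon_id INTEGER, canon_lexval_ext INTEGER)""")
cur.execute("CREATE TABLE Old_Collisions (val TEXT, vid INTEGER, ext INTEGER)")
# 'Joan' was already rehashed in LexValues: extension 500, rehashed UID 777.
cur.execute("INSERT INTO Old_Collisions VALUES ('Joan', 777, 500)")
cur.executemany("INSERT INTO BatchLexValues VALUES (?,?,?,?,?)",
                [("Joan", 5, None, None, None),     # step 1422 drops this row
                 ("  Joan ", 6, "Joan", 5, None)])  # step 1424 fixes this row
# Step 1422: drop batch rows whose lexical value already has an Old mapping.
cur.execute("""DELETE FROM BatchLexValues
               WHERE lexval IN (SELECT val FROM Old_Collisions)""")
# Step 1424: align the canon columns with the existing LexValues mapping.
cur.execute("""UPDATE BatchLexValues
               SET canon_id = (SELECT vid FROM Old_Collisions o
                               WHERE o.val = canon_lexval),
                   canon_lexval_ext = (SELECT ext FROM Old_Collisions o
                                       WHERE o.val = canon_lexval)
               WHERE canon_lexval IN (SELECT val FROM Old_Collisions)""")
rows = list(cur.execute(
    "SELECT lexval, canon_id, canon_lexval_ext FROM BatchLexValues"))
assert rows == [("  Joan ", 777, 500)]
```

After these two statements, every surviving batch row agrees with the global LexValues mappings for any lexical or canonical value that was already resolved.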
New Collisions
[0350] New collisions are collisions resulting from the hashing of lexical values in the BatchLexValues table, which were not previously resolved. New collisions will be resolved by rehashing all but one of the lexical values belonging to a collision set, so that no collisions remain. 1448 shows the processing steps for New collisions.
[0351] First at 1430, there is a test for whether there are any new collisions. If there are not, processing continues at 1432 to the processing steps for collecting a list of any rehashed entries in the LexValues table.
[0352] If there are New collisions 1434, processing continues to the step shown at 1436.
[0353] At 1436, we collect a list of all the UID values from the LexValues and BatchLexValues table that are involved in any of the new collisions. The list, stored in a working table New_Coll_IDs, also has an indicator in each entry about the size of the collision set--that is, how many records contained that UID value--and an indicator for each such UID whether all the lexical values hashing to that UID are from the BatchLexValues table, or one of those lexical values is from the LexValues table.
[0354] At 1438, the next step is to get a list of all the colliding records from the BatchLexValues table. A working table New_Collisions is set up to hold data from the records in the BatchLexValues table that contain a UID value that is also found in the New_Coll_IDs table. Processing then continues to 1440.
[0355] Steps 1440 and 1442 determine which New collision records will be rehashed to resolve the collisions. One value in each collision set will not be rehashed. At 1440, a query is done to determine the collision sets for local collisions in the New_Collisions table. Then, for each of the collision sets, one of the records in the set is picked and deleted from the New_Collisions table. It is the remaining records in the collision set that will be rehashed to resolve the collisions for that set. Processing continues to 1442.
[0356] At 1442, a query is done to determine the collision sets for local/global collisions in the New_Collisions table. In each such set, if an entry matches the lexical value from the LexValues table that was involved in that local/global collision set, then that entry is removed from the New_Collisions table, because that lexical value must not be rehashed. The remaining records in the collision set will be rehashed to resolve the collisions for that set. Processing continues to 1444.
Pseudo-Code Details of New Collision Processing
[0357] Pseudo-code for the details of processing for New collisions is shown in FIG. 21 and FIG. 22.
Determining Whether there are any New Collisions
[0358] The test shown at 1430 for whether there are any new collisions is done in the preferred embodiment in two steps: [0359] A test whether there are any New collisions that form a local-only collision set. [0360] A test whether there are any New collisions that form a local/global collision set.
[0361] 2201 in FIG. 22 shows the test for whether there are any New collisions that are local only. [0362] The count (distinct (id)) operation 2211 determines how many distinct id values are in the BatchLexValues table--in other words, how many values not counting duplicates. [0363] The count (id) operation 2212 returns how many id values are in the table, including duplicates. [0364] If these two counts returned by the SELECT operation at 2213 are equal, this indicates that there are no local-only collisions. [0365] If the two counts are not equal, then there are local-only collisions, and the steps for processing local-only collisions must be executed.
[0366] 2101 in FIG. 21 shows the test for whether there are any New collisions that are local/global. [0367] The FROM clause at 2112 combines rows from the BatchLexValues and LexValues tables. [0368] The WHERE clause at 2113 restricts the combination of rows to cases where there are rows with the same UID value id in the BatchLexValues and LexValues tables, but the lexical values are not the same. These are thus New collisions, in which an entry in BatchLexValues collides with an entry in the global LexValues table. [0369] The count (*) operation 2111 determines how many rows are found by the WHERE clause. [0370] If the SELECT operation 2114 returns a count of zero, then there are no local/global collisions. [0371] If the SELECT operation 2114 returns a non-zero count, then there are local/global collisions, and the steps for processing local/global collisions must be executed.
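Both presence tests, the count comparison of 2201 and the join of 2101, can be illustrated in SQLite; the sample lexical values and UIDs are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE BatchLexValues (lexval TEXT, id INTEGER)")
cur.execute("CREATE TABLE LexValues (lexval TEXT, id INTEGER)")
cur.executemany("INSERT INTO BatchLexValues VALUES (?,?)",
                [("a", 96), ("b", 96), ("c", 7)])   # a/b: local-only collision
cur.execute("INSERT INTO LexValues VALUES ('z', 7)") # c vs z: local/global
# Local-only test (2201): equal counts would mean no local-only collisions.
total, distinct = cur.execute(
    "SELECT count(id), count(DISTINCT id) FROM BatchLexValues").fetchone()
assert total != distinct          # unequal: local-only collisions exist
# Local/global test (2101): same UID, different lexical value.
(n,) = cur.execute("""SELECT count(*) FROM BatchLexValues b, LexValues l
                      WHERE b.id = l.id AND b.lexval <> l.lexval""").fetchone()
assert n > 0                      # non-zero: local/global collisions exist
```

Both tests are single-query count operations of the kind an RDBMS optimizer executes very cheaply, which is why the collision-handling steps can be skipped at almost no cost in the common no-collision case.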
Getting a List of all New Collisions
[0372] The pseudo-code details for step 1436 are shown at 2203 in FIG. 22.
[0373] 2203 shows the pseudo-code for gathering a list of all UIDs involved in new collisions: in other words, a list of all UIDs that have multiple values hashing to them. These are collected into a working table New_Coll_IDs. [0374] A working table New_Coll_IDs is created at 2231, to hold the list of all UIDs involved in New collisions. [0375] 2237 shows the SELECT statement specifying that each row of the table will have three columns, vid, min_src, and val_cnt: vid will be the particular UID for a collision set, min_src will be an indicator for whether it is a LOCAL only or LOCAL/GLOBAL collision, and val_cnt will be the number of records in the collision set.
[0376] First, values are collected from the relevant rows in the BatchLexValues and LexValues tables: [0377] 2232 collects the values from the BatchLexValues table. The values from each row in BatchLexValues are the id value in a working column vid, the lexval lexical value in a working column val, and an identifier `LOCAL` in a working column src, indicating that this working row came from the BatchLexValues table. [0378] 2233 collects the values from the LexValues table. The values from each row in LexValues are the id value in a working column vid, the lexval lexical value in a working column val, and an identifier `GLOBAL` in a working column src, indicating that this working row came from the LexValues table. [0379] The `GLOBAL` identifier shown at 2238 also includes the RDBMS's internal row identifier for the row from the LexValues table. This row identifier is used in a later step of processing. [0380] The UNION ALL operation at 2235 combines both sets of working rows into one working table. [0381] The GROUP BY operation at 2236 creates a working row for each distinct UID value vid, representing the group of rows in the working table resulting from the UNION ALL operation at 2235, each of which contains that same distinct UID as the value vid. The HAVING count (distinct val)>1 clause at 2236 selects only the working rows for which there are multiple different lexical values with that UID. These are the working rows for UIDs that are involved in collisions that have not been resolved yet. [0382] Finally, the SELECT clause at 2237 fills in the New_Coll_IDs table. The New_Coll_IDs table now has rows, each row with the column vid of a UID that has collisions, the min_src indicator LOCAL or GLOBAL that indicates whether the collision set for this UID was local-only or local/global, and a count val_cnt of how many lexical values from the combined BatchLexValues and LexValues tables hashed to that UID value.
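The UNION ALL / GROUP BY / HAVING construction of New_Coll_IDs can be sketched in SQLite. The sample data is invented, and the real pseudo-code also appends the RDBMS's internal row identifier to the GLOBAL marker, which is omitted here. Note that min(src) yields GLOBAL whenever a GLOBAL row is present in the set, since 'GLOBAL' sorts before 'LOCAL'.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE BatchLexValues (lexval TEXT, id INTEGER)")
cur.execute("CREATE TABLE LexValues (lexval TEXT, id INTEGER)")
cur.executemany("INSERT INTO BatchLexValues VALUES (?,?)",
                [("a", 96), ("b", 96), ("c", 7)])
cur.execute("INSERT INTO LexValues VALUES ('z', 7)")
# Build New_Coll_IDs: one row per UID with more than one distinct value.
cur.execute("""CREATE TABLE New_Coll_IDs AS
               SELECT vid, min(src) AS min_src,
                      count(DISTINCT val) AS val_cnt
               FROM (SELECT id AS vid, lexval AS val, 'LOCAL' AS src
                       FROM BatchLexValues
                     UNION ALL
                     SELECT id, lexval, 'GLOBAL' FROM LexValues)
               GROUP BY vid
               HAVING count(DISTINCT val) > 1""")
rows = sorted(cur.execute("SELECT vid, min_src, val_cnt FROM New_Coll_IDs"))
# UID 7 is a local/global set; UID 96 is local-only.
assert rows == [(7, "GLOBAL", 2), (96, "LOCAL", 2)]
```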
[0383] The pseudo-code for step 1438 is shown at 2205 in FIG. 22.
[0384] 2205 shows the pseudo-code for the first step, creating at 2251 a list of New collisions in a working table New_Collisions, with one row for each pair of UID and lexical value in the BatchLexValues table involved in a new collision. [0385] 2252 indicates that information will be collected from a combination of the New_Coll_IDs and BatchLexValues tables. [0386] The WHERE clause at 2253 shows that the information will be combined for the rows from the two tables where the UID vid in the New_Coll_IDs table row is the same as the UID id in the BatchLexValues table row. [0387] As shown at 2253, the columns in the working table New_Collisions are: [0388] the UID vid for the collision pair, taken from the New_Coll_IDs table. [0389] the indicator min_src for whether it was a LOCAL local-only or GLOBAL local/global collision, taken from the New_Coll_IDs table. [0390] the lexical value val, taken from the BatchLexValues table. [0391] the lexval-ext value from the BatchLexValues row for that lexical value. [0392] the internal row identifier rid that the RDBMS used for that row in the BatchLexValues table.
Determining which Entries Will be Rehashed
[0393] In each collision set, one value will be left unchanged, and all other colliding values will be rehashed to resolve the collisions.
[0394] The pseudo-code for step 1440 is shown at 2207 in FIG. 22.
[0395] The pseudo-code for step 1442 is shown at 2107 in FIG. 21.
[0396] When resolving a collision set for New collisions which are local-only, the UID for one of the lexical values in the collision set will be left as it is, and all the other lexical values in the collision set will be rehashed to get new hash UIDs. This is accomplished by deleting the row for one of the lexical values in the collision set--in the presently-preferred embodiment, which one is deleted is chosen arbitrarily to be the one with the lowest-valued internal row id assigned by the RDBMS. This is shown at 2207 in FIG. 22. [0397] The GROUP BY clause at 2271 divides up the New_Collisions table by groups of rows with the same UID vid--that is, the rows for each collision set--where the collisions are LOCAL only. [0398] The SELECT clause at 2272 returns, for each such group of rows, the minimum of the internal row identifiers for the group of rows. [0399] The WHERE clause at 2272 applies a filter to select only those rows in the New_Collisions table that match a row identifier returned by the SELECT clause at 2272. [0400] The DELETE operation at 2273 deletes all those rows from the New_Collisions table selected out of each group of rows, one row per collision set or group.
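Steps 1438 and 1440 above can be sketched together in SQLite through Python. As before, this is only an illustration with invented table contents; the actual pseudo-code at 2205 and 2207 is in FIG. 22.

```python
import sqlite3

# Sketch of steps 1438 (2205) and 1440 (2207): build New_Collisions by
# joining New_Coll_IDs with BatchLexValues, then delete the lowest-rid row
# of each LOCAL collision set so that one value keeps its UID and only the
# remaining rows are rehashed.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE New_Coll_IDs   (vid INTEGER, min_src TEXT, val_cnt INTEGER);
CREATE TABLE BatchLexValues (id INTEGER, lexval TEXT, lexval_ext TEXT);
INSERT INTO New_Coll_IDs VALUES (400, 'LOCAL', 2);
INSERT INTO BatchLexValues VALUES (400, 'e', NULL), (400, 'f', NULL), (300, 'c', NULL);
""")
# Step 1438: one New_Collisions row per colliding (UID, lexical value) pair.
con.execute("""
CREATE TABLE New_Collisions AS
SELECT n.vid, n.min_src, b.lexval AS val, b.lexval_ext, b.rowid AS rid
FROM New_Coll_IDs n JOIN BatchLexValues b ON n.vid = b.id
""")
# Step 1440: delete (i.e. keep unchanged) the lowest-rid row of each
# LOCAL collision set; whatever remains will be rehashed.
con.execute("""
DELETE FROM New_Collisions
WHERE rid IN (SELECT MIN(rid) FROM New_Collisions
              WHERE min_src = 'LOCAL' GROUP BY vid)
""")
remaining = con.execute("SELECT vid, val FROM New_Collisions").fetchall()
```

Here 'e' (the lowest internal row id for UID 400) keeps its UID, and only 'f' remains listed for rehashing.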
[0401] The remaining rows for LOCAL collisions will be rehashed, thus resolving these local collisions.
[0402] When resolving a collision set for New collisions that are local/global, the UID for the one lexical value in the collision set that came from the LexValues table is left as-is. Matching entries in the BatchLexValues table are also left as-is, because they represent the same mapping. All the other lexical values are rehashed to obtain new UIDs. In the case that the lexical value in the collision set that came from the LexValues table is also present in the BatchLexValues table, then it is also present in the New_Collisions table, and thus must also be deleted from the New_Collisions table so that it is not rehashed. This is achieved by checking for relevant rows with the GLOBAL indicator from the New_Collisions table, and removing any that are found, before we rehash the rows in the New_Collisions table. 2107 shows the pseudo-code for removing such rows, if present, from the New_Collisions table. [0403] The WHERE clause at 2171 selects only rows from the New_Collisions table with the GLOBAL min_src indicator. For simplicity, these will be referred to here as global row entries. [0404] Note that the GLOBAL indicator here at 2174 was set previously in the SELECT statement at 2238. [0405] The WHERE clause and substr expression at 2172 determine the rowid value from the global row of the New_Collisions table. [0406] The SELECT clause at 2172 selects the lexval value from the LexValues table, for the row in the LexValues table that has the internal row identifier matching the one obtained from the global row entry of the New_Collisions table. [0407] The AND condition at 2172 states that the lexical values in the rows of the New_Collisions and LexValues tables must also be the same. [0408] The DELETE operation at 2173 deletes all these selected rows from the New_Collisions table.
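The removal described at 2107 can be sketched as follows. In this illustration the LexValues row identifier is kept in a separate column of New_Collisions rather than packed into the src string and extracted with substr as at 2172, and all table contents are invented.

```python
import sqlite3

# Sketch of step 1442 (2107): in a local/global collision set, the lexical
# value already in LexValues keeps its UID; if that same value also appears
# in New_Collisions, delete it there so it is not rehashed.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE LexValues (id INTEGER, lexval TEXT);
INSERT INTO LexValues VALUES (200, 'x');
CREATE TABLE New_Collisions (vid INTEGER, min_src TEXT, lex_rid INTEGER, val TEXT);
INSERT INTO New_Collisions VALUES
  (200, 'GLOBAL', 1, 'x'),  -- same value as the LexValues row: delete, keep its UID
  (200, 'GLOBAL', 1, 'y');  -- different value: stays, and will be rehashed
""")
# Delete the global row entry whose lexical value matches the LexValues
# row identified by the stored internal row identifier.
con.execute("""
DELETE FROM New_Collisions
WHERE min_src = 'GLOBAL'
  AND val = (SELECT lexval FROM LexValues WHERE rowid = lex_rid)
""")
left = con.execute("SELECT vid, val FROM New_Collisions").fetchall()
```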
Rehashing and Merging of Lexical Value Mappings
[0409] At 1444, the rehashing is done to resolve the collisions listed in the New_Collisions table--these entries are all the collision cases that need to be rehashed to resolve collisions. Rehashing is done as described earlier.
[0410] Rehashing is done iteratively on UIDs listed in the final New_Collisions table. Only UIDs in the rows in the BatchLexValues table are ever rehashed. Because the number of collisions is generally very few in the preferred embodiment, the rehashing process takes very little execution time.
[0411] At 1446, the records in the BatchLexValues table are merged into the LexValues table, without adding any duplicated records. Merging of tables is a basic operation of an RDBMS, and is done as described previously.
[0412] Processing continues with the steps to create the AllCollExt table, as shown in 1456.
Collecting Collision/Resolution Information into the AllCollExt Table
[0413] As noted earlier, the AllCollExt table holds the information needed to generate the hash-based UIDs for all of the lexical values in LexValues whose UIDs were rehashed. The processing to create this table is only done if, in fact, there are collisions, and only after the previous steps for Old and New collisions are completed and BatchLexValues has been merged with LexValues.
[0414] The steps for this are shown in 1456. First, as shown at 1450, the AllCollExt table is created and populated with data by querying the LexValues table for all the entries that indicate that a UID was rehashed to resolve a collision.
[0415] However, the LexValues table contains records for all the models stored in the system, and the bulk-loading process is only loading data for one model. The records for blank nodes for other models are not relevant to hash collision resolutions for the model being loaded. At 1452, this is addressed by removing from the AllCollExt table any blank node records that are not for the model being loaded.
[0416] Finally in FIG. 14, 1454 shows the step of removing the augmentation added to the string for blank nodes, so that the AllCollExt table can be used more easily for its intended purpose.
[0417] FIG. 17 shows the pseudo-code for the steps in 1456 for collecting a list of all resolved collisions in the AllCollExt table. 1703 shows the pseudo-code for 1450, creating the AllCollExt table and filling it with the relevant data. [0418] The CREATE operation at 1731 creates the AllCollExt table with three columns: lexval, collision-ext, and canon-collision-ext. [0419] At 1733, the SELECT statement fills in these three columns of the AllCollExt table with values from the LexValues table, namely lexval, lexval-ext, and canon-ext, respectively. [0420] The WHERE clause at 1732 states that data is read from the LexValues table only for the rows where either lexval-ext is not NULL, or canon-ext is not NULL. These are the rows in the LexValues table for UIDs that had been rehashed to resolve a collision.
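The operations at 1731-1733 can be sketched in SQLite through Python; underscores replace the hyphens of the column names in the text, and the row contents are invented.

```python
import sqlite3

# Sketch of 1703/1450: build AllCollExt from the LexValues rows whose UIDs
# were rehashed, i.e. rows with a non-NULL lexval-ext or canon-ext.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE LexValues (lexval TEXT, lexval_ext TEXT, canon_ext TEXT);
INSERT INTO LexValues VALUES
  ('<urn:a>', '1', NULL),   -- rehashed: included
  ('<urn:b>', NULL, NULL),  -- never collided: excluded
  ('<urn:c>', NULL, '2');   -- canonical form rehashed: included
""")
con.execute("""
CREATE TABLE AllCollExt AS
SELECT lexval, lexval_ext AS collision_ext, canon_ext AS canon_collision_ext
FROM LexValues
WHERE lexval_ext IS NOT NULL OR canon_ext IS NOT NULL
""")
rows = [r[0] for r in con.execute("SELECT lexval FROM AllCollExt ORDER BY lexval")]
```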
[0421] As noted earlier for 1452, blank nodes are filled in with an augmented lexical value based on the model identifier, so that blank nodes from different models will not be confused in the LexValues table. Thus, the operations shown at 1703 may have picked up some records from models that are not the model being bulk-loaded. These are now removed from the AllCollExt table with the operations shown at 1705. [0422] The WHERE clause in 1751 states which rows are to be deleted from the AllCollExt table. [0423] There are two conditions in the WHERE clause, saying that only rows that meet both of these two conditions will be deleted: [0424] The lexval value in the row starts with the characters "_:", as shown at 1752. These are the starting characters for the special augmentation used for blank nodes--these characters make these lexical values different from other lexical values, such as URI and typed literal strings. [0425] The lexval value does not have the name of the model model_id that is currently being loaded, as shown at 1753. [0426] Thus, with the DELETE operation in 1751, all rows for blank nodes for other models will be deleted from the list in AllCollExt.
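The deletion at 1751-1753 can be sketched as below. The "_:m<model-id>m" augmentation format is taken from the description of 1772 later in this section; the model identifier and row contents are invented for illustration.

```python
import sqlite3

# Sketch of 1705/1452: remove from AllCollExt the blank-node rows (lexval
# starting with "_:") that belong to models other than the one being loaded.
model_id = '7'
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE AllCollExt (lexval TEXT, collision_ext TEXT);
INSERT INTO AllCollExt VALUES
  ('_:m7mabc', '1'),   -- blank node for this model: kept
  ('_:m9mdef', '2'),   -- blank node for another model: deleted
  ('<urn:a>',  '3');   -- not a blank node: kept
""")
# Delete rows that start with "_:" but whose augmentation does not name
# the model currently being loaded.
con.execute("""
DELETE FROM AllCollExt
WHERE substr(lexval, 1, 2) = '_:'
  AND substr(lexval, 3, length(?) + 2) <> 'm' || ? || 'm'
""", (model_id, model_id))
left = [r[0] for r in con.execute("SELECT lexval FROM AllCollExt ORDER BY lexval")]
```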
[0427] As a further step, 1707 shows SQL pseudo-code for 1454. Those rows for blank nodes--the rows with the special lexical value strings--are converted to remove the special augmentation with the model-id for the RDF model. This makes it possible subsequently to match this blank node lexical value with the occurrences of the same blank node in the StagingTable. [0428] At 1773 is the WHERE clause, stating that only the rows in AllCollExt are to be updated, where the lexval value starts with the characters "_:". [0429] The replace operation at 1772 replaces the special value string--which consists of the two characters "_:", an `m` character, the model identifier, another `m` character, followed by the non-zero-length alphanumeric string--with the two characters `_:` followed by the non-zero-length alphanumeric string. [0430] The UPDATE operation at 1771 then updates all the selected rows.
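The replace operation at 1772 can be sketched as a small Python function; it assumes, purely for illustration, that model identifiers are numeric.

```python
import re

# Sketch of 1707/1454: strip the "_:m<model-id>m" augmentation from a
# blank-node lexical value, leaving "_:" plus the original label so the
# value matches its occurrences in the StagingTable. Non-blank-node
# values are returned unchanged.
def strip_blank_node_augmentation(lexval: str) -> str:
    return re.sub(r'^_:m\d+m', '_:', lexval)
```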
Ancillary Application Tables in Bulk Loading
[0431] As noted in the discussion of prior art, it is useful in many applications to support optional application tables for each RDF model, for holding information that is not inference data, but which is associated with particular RDF triples in the model. FIG. 9 shows an SQL statement or API 901 for the preferred embodiment for creating the StagingTable table: as can be seen, it creates a StagingTable table with three columns for the subject, predicate, and object values of the RDF triples: each value may be up to 4000 bytes in size. None of these columns may be NULL, as each RDF triple must be well-formed and complete. API 902 in FIG. 9 is a similar API for creating the StagingTable table with additional columns for the bulk loading of this ancillary information as part of the bulk loading process: the additional columns are for an internal row UID, and a column source for optional information about the source or provenance of the row: these columns may contain NULL values, as in this example it is not required that all RDF triples have this ancillary information.
[0432] In the preferred embodiment, an ancillary application table for a model is created as part of the set-up step of creating the local StagingTable table used during bulk loading, by means of the API 902. The application table for the given model is populated with data as each triple is added to the IdTriples table, as previously described.
Locality and Partitioning in Bulk Loading
[0433] As noted in the discussion of prior art, the global IdTriples table is partitioned on the model-id column, so that each model is stored in a separate partition in the RDBMS. The bulk-loading techniques described for this system make use of this partitioning to provide performance advantages in various situations, which include: [0434] Independent/concurrent bulk loading of separate models [0435] Index building in bulk loading
Independent/Concurrent Bulk Loading of Separate Models
[0436] Because the partitions can be accessed and updated independently, a new model can be bulk loaded concurrently with queries and operations on other models: one model can be updated via bulk load, or a new model can be bulk loaded, while other applications and users continue to make use of other RDF models stored in the RDBMS.
[0437] The hash-based UIDs disclosed herein play a part in this concurrent access. The LexValues table is shared among all models: e.g. a particular lexical value and its associated hash-based UID may be used in more than one model. The hash-collision-resolution techniques ensure that data once placed in the LexValues table will not be changed for a rehash: if this were not the case, then a bulk load of model data could require that data in other models be changed to take account of change in the LexValues table, and thus interfere with attempts to query or access those other models.
Index Building in Bulk Loading
[0438] As noted in the previous discussion of simple bulk loading of the prior art, for reasons of efficiency in this bulk loading, any indices on the table to be bulk-loaded may first be dropped/deleted, the new data loaded, and then the necessary indices re-constructed from the bottom up; this is generally faster than updating the existing indices as each row is loaded. Bulk loading of the prior art is generally applied to an entire table.
[0439] In the techniques described here, all RDF triples are stored in a single table IdTriples. However, the bulk-loading techniques disclosed here load only one model at a time, and thus are able to exploit the partitioning to achieve some of the same performance benefits as if the models were stored in separate tables. In addition to other efficiencies, the indices can be dropped/deleted for one partition, and reconstructed for one partition, without requiring that all indexing on the RDF store be updated or reconstructed.
Computing the Canonical Form for a Lexical Value
[0440] The following describes the steps in FIG. 18, showing how canonical forms of values are calculated in the preferred embodiment. In the preferred embodiment, canonicalization is only done for literal values, and literal values may only be used in the object value of an RDF triple. Thus, canonicalization need only be considered for lexical values in the obj column 1014 of the StagingTable table 1001.
[0441] For clarity, FIG. 18 illustrates the canonicalization of typed literal values, and uses exemplary names for built-in functions and internal data types. As is readily apparent, canonicalization for other data formats can easily be implemented in a similar fashion. For example, plain literals with language tags can be processed similarly. Examples of plain literals with language tags include the following: [0442] "red"@en-US [0443] "chartreuse"@en-US
[0444] These examples consist of a value part, such as the word red, followed by an internal delimiter @, and by a language tag part, such as en-US or en-us, which indicates that the language is American English. Canonicalization for such values may be done, for example, by changing the language tags to all lowercase.
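The language-tag canonicalization just described can be sketched as a small Python function; splitting on the final '@' delimiter is an assumption for illustration.

```python
# Sketch: canonicalize a plain literal with a language tag by lowercasing
# the tag part after the final '@' delimiter; values without a tag are
# returned unchanged.
def canonicalize_lang_tag(lexval: str) -> str:
    value, sep, tag = lexval.rpartition('@')
    if sep:
        return value + '@' + tag.lower()
    return lexval
```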
[0445] FIG. 18 shows how the canonical form of a typed literal value is calculated in a preferred embodiment, starting at 1801.
[0446] 1803 shows the start of the steps to determine whether the value lexval is a typed literal. The canonical form will be stored in the variable canon_value. [0447] At 1805, the string of lexval is parsed to determine whether it contains the character sequence "^^", or two carets. [0448] This character sequence is an internal delimiter in a typed literal format, between the first value part of the typed literal string, and the final type part that indicates the data type of the typed literal. [0449] The "^^" character sequence must be internal to the value string, that is, it may not start at the first character of the string, and may not include the last character of the string. [0450] 1807 shows a test for whether there is such an internal delimiter. [0451] If not, the processing continues to 1809, where the canon_value variable is set to be the same as the lexval value, indicating that there is no special canonical form. The steps are now complete, and continue to 1860. [0452] As shown at 1811, if there is such a delimiter, the variable Type is set to the last part of the parsed lexval string, namely the type part of the typed literal.
[0453] 1812 shows the start of processing to perform the correct canonicalization for the typed literal. The value of the Type variable is checked to select the particular canonicalization code that is appropriate. [0454] 1820 shows a test to determine whether the Type string is the particular string used to indicate a DATETIME type. [0455] If it is the string for a DATETIME type, the branch is taken to 1822. [0456] If it is not the string for a DATETIME type, processing continues to 1830. [0457] At 1822, a standard built-in function or other function, here shown as ConvertToInternalDate, converts the lexval string to the internal RDBMS representation for a date, and stores it in a variable internal_value. [0458] At 1824, a second standard built-in function or other function, here shown as DateToString, converts the value of the variable internal_value to a string format suitable for printing or other use. This function DateToString is a standard function, and will always produce values in the same format and the same form. [0459] The steps are now complete, and continue to 1860.
[0460] Continuing at 1830, we have the steps for the next type of typed literal that may be canonicalized, namely TIME values. The processing steps are analogous to those for DATETIME values. [0461] 1830 shows a test to determine whether the Type string is the particular string used to indicate a TIME type. [0462] If it is the string for a TIME type, the branch is taken to 1832. [0463] If it is not the string for a TIME type, processing continues further, as shown. [0464] At 1832, a standard built-in function or other function, here shown as ConvertToInternalTime, converts the lexval string to the internal RDBMS representation for a time, and stores it in a variable internal_value. [0465] At 1834, a second standard built-in function or other function, here shown as TimeToString, converts the value of the variable internal_value to a string format suitable for printing or other use. This function TimeToString is a standard function, and will always produce values in the same format and the same form. [0466] The steps are now complete, and continue to 1860.
[0467] At the dotted line from the "No" branch of 1830 to element 1850, canonicalization of other data types is done. These steps are analogous to those already shown, and as they are readily apparent, they are omitted for clarity.
[0468] As shown at 1850, if the lexval value is determined to be a typed literal, but is not a typed literal of any type for which canonicalization is done in the particular implementation, the canon_value variable is set to be the same as the lexical value lexval.
[0469] Processing is now complete, as shown at 1860.
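The DATETIME branch of the FIG. 18 flow can be sketched in Python, with datetime.fromisoformat and isoformat standing in for the ConvertToInternalDate and DateToString built-ins; the type URI and the quoting conventions here are assumptions for illustration.

```python
from datetime import datetime

# Assumed type URI for illustration only.
DATETIME_TYPE = '<http://www.w3.org/2001/XMLSchema#dateTime>'

# Sketch of FIG. 18: detect an internal "^^" delimiter, and for a DATETIME
# typed literal round-trip the value through an internal date representation
# so that equivalent spellings yield one canonical string.
def canonicalize(lexval: str) -> str:
    pos = lexval.find('^^', 1)           # "^^" must not start the string
    if pos == -1 or pos + 2 >= len(lexval):
        return lexval                    # not a typed literal (1809)
    value, type_part = lexval[:pos], lexval[pos + 2:]
    if type_part == DATETIME_TYPE:
        # ConvertToInternalDate / DateToString analogues (1822, 1824)
        internal = datetime.fromisoformat(value.strip('"'))
        return '"%s"^^%s' % (internal.isoformat(), type_part)
    return lexval                        # no canonicalization for this type (1850)
```

Two equivalent spellings of the same timestamp then canonicalize to the same string.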
[0470] Other techniques and variations for canonicalizing values may be employed as a matter of design choice. For example, an RDBMS system may have standardized functions such as ToInternal and ToString that employ type inspection and can convert, respectively, any known typed literal to the appropriate internal representation, and an internal value to an appropriate output string. In this case, these functions may be employed, thus eliminating the need to test for specific values of the Type string variable and call distinct functions. Canonicalization can also be done for other data formats, depending on the particular implementation, or as a matter of design choice.
Processing of Literal Values that are not in Canonical Form
[0471] Details of the processing for literal values which are determined not to be in canonical form, and for which a canonical form is determined, will now be described.
[0472] As noted previously, literal values are permitted in the object position of the RDF triples in the StagingTable. When records are first added to the BatchLexValues table: [0473] If a lexical value is known not to be a literal, or if the canonical form of a literal value is identical to the original value, then [0474] the canon-lexval and canon-Id fields in the BatchLexValues table are set to NULL. [0475] the lexval and id fields are set to the original value and to the hash value computed for the original value, respectively. [0476] If a canonical value for a literal is computed, and it is different from the original value, then [0477] the canonical form is stored in the canon-lexval field, and the hash value computed for the canonical form is stored in the canon-id field. [0478] the lexval and id fields are set to the original value and to the hash value computed for the original value, respectively. [0479] The lexval-ext and canon-lexval-ext fields are set initially to NULL.
[0480] Thus, a non-NULL value in the canon-id field in the BatchLexValues table indicates that there is a canonical form for the lexval value, and the values of the canon-lexval and canon-id fields give the canonical form, and the UID for the canonical form, respectively.
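The record-filling rules above can be sketched as follows; toy_uid stands in for the hash-based UID computation described earlier and is purely illustrative, as are the underscore field names.

```python
# NOT the real hash; a stand-in for the hash-based UID computation.
def toy_uid(s: str) -> int:
    return sum(map(ord, s))

# Sketch of the rules for a new BatchLexValues record: canon fields are
# NULL (None) unless a differing canonical form was computed; the ext
# fields always start out NULL.
def make_batch_record(lexval: str, canonical: str) -> dict:
    rec = {'lexval': lexval, 'id': toy_uid(lexval),
           'canon_lexval': None, 'canon_id': None,
           'lexval_ext': None, 'canon_lexval_ext': None}
    if canonical != lexval:              # a differing canonical form exists
        rec['canon_lexval'] = canonical
        rec['canon_id'] = toy_uid(canonical)
    return rec
```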
[0481] Once all lexical values have been entered into the BatchLexValues table, [0482] A query is done to determine whether there are any entries in the BatchLexValues table with a non-NULL value in the canon-id field. [0483] If so, then a further query obtains a list of all the distinct values in the canon-lexval column and corresponding canon-id values where canon-id is non-NULL, and these values are added as records to the BatchLexValues table. [0484] In the new records, the lexval field is set to the canon-lexval value and the id field is set to the corresponding canon-id value from the list entry, and the canon-lexval, canon-id, lexval-ext and canon-lexval-ext fields are set to NULL.
[0485] Thus, any canonical values that were not already in the BatchLexValues table as lexval values, are added with their initial UID values as additional records.
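The queries described above can be sketched in SQLite through Python; underscore column names and all UIDs are illustrative.

```python
import sqlite3

# Sketch: find distinct non-NULL canonical values not already present as
# lexval values, and add them to BatchLexValues as plain records.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE BatchLexValues (id INTEGER, lexval TEXT,
                             canon_lexval TEXT, canon_id INTEGER);
INSERT INTO BatchLexValues VALUES
  (11, '"01"', '"1"', 12),   -- canonical form '"1"' already present below
  (12, '"1"',  NULL,  NULL),
  (21, '"02"', '"2"', 22);   -- canonical form '"2"' is missing
""")
con.execute("""
INSERT INTO BatchLexValues (id, lexval, canon_lexval, canon_id)
SELECT DISTINCT canon_id, canon_lexval, NULL, NULL
FROM BatchLexValues
WHERE canon_id IS NOT NULL
  AND canon_lexval NOT IN (SELECT lexval FROM BatchLexValues)
""")
vals = sorted(r[0] for r in con.execute("SELECT lexval FROM BatchLexValues"))
```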
[0486] Subsequently, when resolving collisions, [0487] All resolved collisions will have been rehashed for the colliding lexval values in the BatchLexValues table, and the lexval-ext field for those records will be non-NULL. [0488] For all resolved collisions, a query is done to identify all records in the BatchLexValues table for which the canon-lexval value matches a lexval value for a record that was rehashed. [0489] For each such record, the canon-lexval-ext and canon-id values are set to the lexval-ext and id values from the rehashed lexval record.
[0490] When the BatchIdTriples table is populated, no reference is made to the BatchLexValues or LexValues tables: to determine the UIDs for any lexical values that were involved in a resolved hash collision, the AllCollExt table is referenced. [0491] As this table is very small or possibly even empty, this is generally much more efficient than a lookup or join to the BatchLexValues or LexValues tables, as has been described. [0492] Further, the canon-obj-id UID value 1056 for the canonical form for each lexical value present in the StagingTable, is determined by computing the canonical form again from the lexical value, and then computing the corresponding UID by the previously described mapping by means of the default hash function and if necessary the hash value generation information in the AllCollExt table. [0493] This computation is generally much more efficient than the alternative of looking up or doing a join with the BatchLexValues or LexValues tables to determine the canon-obj-id UID value.
[0494] Note that the LexValues table stores the UID for a canonical value in the canon-id field in the same fashion as does the BatchLexValues table. Thus [0495] If the canon-id field is non-NULL, it is a flag that indicates that the lexical value has a canonical form, and the canonical form is different from the original lexical value. [0496] If the canon-id field is NULL, then there is no differing canonical form for the original lexical value.
Table Compression
[0497] URI values constitute a significant portion of the data in RDF triples; thus improvements in the compression of URI data, and of indices on columns storing this data, can significantly reduce the amount of storage required overall, and also lead to improved performance.
Characteristics of URI Data
[0498] In the preferred embodiment, certain special properties of the URI data format are exploited so that the techniques of index key prefix compression and of table column compression of the underlying RDBMS can be used to achieve a substantial reduction in the amount of storage required: index key prefix compression and table column compression in an RDBMS are described further below.
[0499] The special properties of the URI format include the following: [0500] One special property of the URI format used in breaking the string into the two parts is that URIs contain a number of "/" and/or "#" internal delimiter characters, and that the strings can be divided into a prefix and suffix at the last such internal delimiter in each string. [0501] Another special property exploited in the techniques disclosed here is that in real-world RDF data, typically many of the distinct URI values used will have the same first part or prefix. [0502] Further, another special property is that in real-world RDF data, generally many of the prefix parts will be longer than the suffix parts.
[0503] In the preferred embodiment the prefix is the first part of a URI string, and the suffix is a last part of the URI string: together the prefix and the suffix make up the entire URI string. An example of the URI format is shown in FIG. 7. 741 shows a representative row of the LexValues table that contains a lexical value that is a URI, namely <> and the corresponding UID value 100. Note that the lexval column at 735 is a virtual column computed by concatenating the lexval-prefix column 733, holding for example <, and the lexval-suffix column 734, holding for example John>.
[0504] In FIG. 19, 1901 shows three further examples of possible URI values, such as might be used in a triple in N-Triple format. For the purposes of this system, URI values may employ internal delimiters consisting of a single "/" slash-mark character, or a single "#" number-sign character. [0505] 1911 and 1912 show the prefix and suffix of a URI value divided by a final internal delimiter "#". [0506] 1915 and 1916 show the prefix and suffix of a URI value divided by a final internal delimiter "/": 1913 contains other instances of a "/" delimiter, but they are not final internal delimiters in the URI value. [0507] 1913 and 1914 show the prefix and suffix of a URI value divided by a final internal delimiter "/": there is a "#" character delimiter at the end of 1914, but it is not an internal delimiter because it is at the end.
[0508] Note that prefix compression and parsing may be performed in a variety of manners. For example, depending on the format of the data, it may be appropriate to reverse the role of prefix and suffix: the last part of the value can be stored as a prefix in the RDBMS, and the first part stored as the suffix, in order to take advantage of optimizations and features of the RDBMS, while combining the two parts in the original order when reading them from the RDBMS.
Parsing URI Values into a Prefix and Suffix
[0509] In the preferred embodiment, URI values are parsed by scanning the URI string to locate the rightmost internal delimiter character in the string value. The part of the URI string up to and including this delimiter character is the prefix part of the string, and the remaining part of the string is the suffix part of the string. This operation is performed in SQL, as illustrated in the pseudo-code example in FIG. 16. In the description of FIG. 16, a URI delimiter is either of the characters "/" or "#".
[0510] 1601 in FIG. 16 shows pseudo-code for an initial SQL statement, executed before the other steps: [0511] An expression vname_expr is defined that is a copy of the URI value lex_value, as shown at 1611.
[0512] 1602 shows pseudo-code for the SQL statement to extract the prefix part of the URI value, and store it in a variable prefix_val. [0513] 1622 shows an initial test to check whether the URI value is too long to fit into the defined length MAX_SUFFIX_LENGTH, as calculated at 1621. MAX_SUFFIX_LENGTH is the size of the lexval-suffix column 734. [0514] If the URI value is not too long, the THEN clause of the statement is executed. [0515] If it is too long, the ELSE clause is executed. [0516] The THEN clause consists of a call to the NVL operation, which operates on two values. This operation checks whether the first value is NULL: if it is not, then NVL returns the first value. If however the first value is NULL, then NVL returns the second value. [0517] The first value is a nested substr(instr()) function expression at 1623. This expression determines the location of the final internal URI delimiter character in the vname_expr value. [0518] If there is such a URI delimiter, the NVL operation returns the results of the substr expression, which is the first part of the string up to and including the final delimiter character. [0519] If there is no such delimiter, the substr function returns NULL: the NVL operation will then return the second value, which is just the entire string. [0520] The second value for the NVL operation is shown at 1624: it is the original URI value. [0521] Thus, if the THEN clause is executed, the value returned for the prefix is either the URI string up to the final internal URI delimiter, or else the entire string if there is no such delimiter. [0522] At 1626, we have the ELSE clause, which is executed if the URI string may be too long for the lexval-suffix column 734. The substr expression 1625 returns as much of the URI string as will leave MAX_SUFFIX_LENGTH characters remaining. This will be concatenated with the return value from the NVL expression at 1627. [0523] 1627 shows the first value for the NVL expression.
This first value is a substr expression on the last MAX_SUFFIX_LENGTH part of the URI string, where it uses an instr expression to locate a final internal URI delimiter. [0524] If there is a final URI delimiter, it returns the first part of this section of the URI string. [0525] If there is no such delimiter found, the second expression at 1628 is returned, which is the rest of the URI string not returned at 1626. [0526] The second value for the NVL operation is the end of the URI string consisting of the last MAX_SUFFIX_LENGTH characters. [0527] Thus, if the ELSE clause is executed, the value returned for the prefix is either the URI string up to the final internal URI delimiter that will not leave the suffix part too long, or else the entire string.
[0528] 1603 shows in pseudo-code the computation for the suffix string. [0529] The substr expression at 1631 returns whatever part of the URI string follows the part returned for the prefix. [0530] If the prefix is the entire URI string, then the suffix string is empty.
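The core of the FIG. 16 parsing can be sketched in Python; the MAX_SUFFIX_LENGTH overflow branch at 1626-1628 is omitted here, so this covers only the THEN clause of 1602 together with 1603.

```python
# Sketch: split a URI at its rightmost internal '/' or '#' delimiter into
# a prefix (including the delimiter) and a suffix. A delimiter that is the
# very last character is not internal and is ignored; if there is no
# internal delimiter, the whole string is the prefix and the suffix is empty.
def split_uri(uri: str) -> tuple[str, str]:
    body = uri[:-1]                       # exclude a trailing delimiter
    cut = max(body.rfind('/'), body.rfind('#'))
    if cut == -1:
        return uri, ''
    return uri[:cut + 1], uri[cut + 1:]
```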
[0531] As shown in FIG. 7, these two parts are stored in separate columns in the LexValues table in the lexval-prefix 733 and lexval-suffix 734 columns.
[0532] As noted above, in RDF data the prefix part of the URI strings can be considered a less variable part, as many URI values in real-world data share the same first part of the string, or prefix, and differ in the suffix part.
[0533] This parsing is powerful and efficient. It functions for any type of URI data, without requiring that there be a list of known prefix strings.
[0534] A further property of RDF triple data, is that in real-world RDF triple data, many triples will describe facts about different objects, but a significant number of triples will have the same object and predicate--more so in fact if the object values are canonicalized--while relating to different subjects. For example, there are often a number of triples about many different subjects, stating that they are each members of the same RDF class. An example is shown in FIG. 15. [0535] 1501 in FIG. 15 shows 5 triples from an exemplary RDF model describing in part the employees in a company. [0536] The three triples shown at 1503 represent that David, Gertrude, and Ship-Lin are all managed by Charlotte. [0537] The two triples shown at 1505 represent that Charlotte and Albert are managed by Pat. [0538] As is shown, the three triples at 1503 all have the same predicate and the same object, namely "managedBy" and "Charlotte", respectively. [0539] Similarly, the two triples shown at 1505 have the same predicate and object, namely "managedBy" and "Pat".
[0540] The following short overviews of index key prefix and table column compressions in an RDBMS are provided for reasons of clarity. For further information on RDBMS techniques for compression of data and compression of indices, see
Oracle Database Objects 10g release 1 (10.1): Chapter 5 Schema Objects, download.oracle.com/docs/cd/B14117--01/server.101/b10743/schema.htm,
Index Key Prefix Compression.
[0541] Index key prefix compression is a feature of many RDBMS systems applicable to uniqueness constraints or indices.
[0542] If a uniqueness constraint or index is defined on multiple columns--one or more columns which are the prefix, and additional columns which are the suffix--and further, if there are several instances of the prefix part in the data which have the same value, then the indexing mechanisms of the RDBMS internally store the index in a more concise and efficient form. Internally, the RDBMS index is sorted by the prefix: the order in which records are added to the database thus does not affect the amount of compression achieved by index'key prefix compression.
[0543] In the preferred embodiment, a key prefix of length one is defined for the lexval uniqueness constraint and enforced by a unique index on the LexValues table for the lexval-prefix and lexval-suffix columns. As noted, in real-world RDF data many URIs will share the same prefix part, when parsed according to the technique described above. Thus, the storage of the LexValues table achieves substantial compression and increased performance in the uniqueness index constraint on the lexval virtual column.
[0544] Further, as noted previously, in real-world RDF data, generally a number of RDF triples will involve the same object and predicate, especially when the object value has been canonicalized.
[0545] In the preferred embodiment, key prefix compression of length 2 is also defined on the (pred-id, canon-obj-id, subj-id, model-id) columns of the IdTriples table.
Table Column Compression
[0546] Table column compression is a feature of certain RDBMS systems, by which repeating values in a column, repeating values in disparate columns, and repeating sequences of values in a sequence of columns are compressed when the values are stored in the same database block. One factor affecting the extent of actual compression is the order of arrival of data, as that affects which values are stored in which database blocks.
[0547] By parsing out the less variable prefix part of URI strings, into a separate column, there is a substantial probability that values in that column will be repeated in a database block. This in turn results in greater compression of the data in the RDBMS, and often improved performance as well.
Background of Table Column Compression in an RDBMS
[0548] There are several known techniques employed in RDBMS systems for compressing table columns. Of interest here is that many RDBMS systems can store repeated data in a database block more compactly, by use of techniques like the following: [0549] creating a symbol table of the repeated values used in the rows stored in the block, along with a numeric ID for each value. [0550] storing this symbol table internally in the database block (where it occupies a small amount of space). [0551] replacing actual values in the records in that block, with the numeric IDs.
[0552] Generally the numeric IDs require less storage than the original values, thus the records occupy less storage space.
[0553] A further known technique employed in many RDBMS systems relates to the storage when a value is NULL: this will be the case for the lexval-ext and other columns in the LexValues tables. [0554] A NULL is not a value, instead it means that there is no value in the column at all. [0555] An RDBMS system allocates no space for columns in a row that are NULL, merely an indicator of the column's existence. [0556] Thus, tables that contain many NULL values, require less storage space than if the columns contained a special value such as zero.
Background of Index and Index Key Compression in an RDBMS
[0557] There are also known techniques employed in RDBMS systems for compressing indices. One of those of interest has to do with prefix key compression.
[0558] If the key value used in an index is a string, and if the keys can be broken into two parts--one part called the prefix that generally does not change often in the index, and a second part called the suffix that does--then prefix key compression can result in the index being stored in less space. Somewhat similar in concept to column compression for repeated values, one aspect is that [0559] the less varying parts with the same value will be stored once, and [0560] the index changed so that the entire key (with both parts) is not stored.
[0561] This results in less space being required for storing the index.
[0562] Indices for which the keys have no suffix part can sometimes still be stored more efficiently using prefix key compression, as the RDBMS can use the internal row number of the table in the RDBS to replace the suffix.
[0563] A further set of known techniques employed in RDBMS systems relates to queries as to whether a particular column is or is not NULL. This will be the case for the lexval-ext and other columns in the LexValues tables.
[0564] The property of being NULL--of having no value at all--can occur quite often for data stored in an RDBMS. On technique employed in RDBMS systems is to [0565] store a special indicator in a database block, [0566] if for a particular column, all the records stored in that block have only NULL for the value in that column.
[0567] With this or a similar technique, a query that tests whether a given column is or is not NULL can first check the special indicator on the block, and thus frequently avoid having to process any of the actual records stored in the block. This can greatly speed up such types of queries.
Concurrent Bulk Loading
[0568] In the preferred embodiment, the bulk loading process loads data for one model. However, multiple instances of the bulk loading process can execute for different models concurrently. This is because in the preferred embodiment [0569] Separate local tables are created for each bulk loading process. Operations by each bulk loading process on its own StagingTable, BatchLexValues, BatchIdTriples, and AllCollExt tables--described below--and other local tables employed in a particular implementation, can thus be done concurrently and independently of other bulk loading processes. [0570] Models are partitioned in the global IdTriples table. One model/partition can be updated by locking the one partition when the BatchIdTriples data is merged with the data for that model, performing all updates, and then releasing the lock, [0571] Accesses to the global LexValues tables is interlocked: only one bulk loading process at a time is thus able to detect collisions with and to update the LexValues table, and it does not unlock the LexValues table until it has completed all updates to the table. [0572] Initial UID values are computed by default hashing which does not involve access to the LexValues table, thus in this step the LexValues need not be locked to prevent concurrency. Further, collision detection and resolution is done without updating the LexValues table, and no update to the LexValues table changes any existing entry: thus the LexValues table does not need to be read-locked during these steps to prevent read-concurrency. [0573] Once a process has resolved the collisions in its bulk-load batch, its BatchIdTriples table can be normalized without accessing the LexValues table. 
This is the case for two reasons: once an entry is added to Lexvalue, the entry never changes, and a process's AllCollExt table, which is made after the process has updated the Lexvalue table, preserves the state of LexValue as regards hash methods as of the time the process made the AllCollExt table. Thus, a process need not lock the LexValues table to prevent concurrency while normalizing its BatchIdTriples table.
[0574] Further, a single model can be updated by multiple bulk loading process instances: for example, different parts of a model, such as the first half of the triples and the second half of the triples, by two different instances. The locking techniques above serialize access to the object being locked: one instance waits for the other to finish the particular step before starting those steps that involve updates to the object, such as to the LexValues table and to the IdTriples table. Other operations in the bulk loading process operate only on the local tables for the particular instance of the bulk loading process--thus in these operations the two different instances do not interfere with each other and may execute concurrently.
[0575] FIG. 3 illustrates the operation of concurrent bulk loading FIG. 3 shows an exemplary first and second instance of a bulk loading process running, at 303 and 305, respectively. They share and coordinate access to the global tables, as shown at 301. Each process runs the same bulk loading program, and has its own set of working tables and local data. The two processes coordinate their access to the global tables.
[0576] The first instance 303 of a bulk loading process consists of the executing program for bulk loading 311, and its working tables, shown here as the StagingTable 313, the BatchLexValues table 315, the BatchIdTriples table 317, and the AllCollExt table 319. The executing program 311 is the only program that uses its working tables and local data: thus no coordination with other programs is required for these tables and data.
[0577] The second instance 305 of a bulk loading process consists of the executing program for bulk loading 321, and its working tables, shown here as the StagingTable 323, the BatchLexValues table 325, the BatchIdTriples table 327, and the AllCollExt table 329. The executing program 321 is the only program that uses its working tables and local: thus no coordination with other programs is required for these tables and data.
[0578] The global IdTriples table is shown at 341. As illustrated at 343, each of the two instances of the bulk loading program can access the IdTriples table, however the access is interlocked: by means of a locking function of the RDBMS or its operating system, each bulk loading process, such as process 311, will first attempt to lock the access to the specific partition of the IdTriples table for the model being bulk-loaded; if no other process has access to this partition of the table locked, the bulk loading process (311 in this example) gets to lock the access, and access the partition. Once the process has completed its use of or updating to the table, it unlocks access. Alternatively, the lock could be done on the entire IdTriples table, resulting in a somewhat lesser degree of concurrency.
[0579] If access to the table is already locked, the process attempting to lock waits, either automatically or in a loop, until the table is unlocked. At that point the process gets to lock the table for itself, and access the table.
[0580] Similarly, access to the global LexValues table 331 is locked, as shown at 333.
[0581] Thus, the use of each global table by the various instances, such as 311 and 321, of the bulk loading program are synchronized: only one process may access the locked table or resource at a time, and processes wait on each other for access.
[0582] While only one process at a time can thus access or update the IdTriples or the LexValues table, the other processes can be in other steps of processing concurrently, such as the steps for reading in files to their StagingTables, assigning initial UID values and canonicalizing values in their BatchLexValues tables, or filling in their BatchIdTriples tables with normalized UIDs. Further, one process can have locked and be accessing the global IdTriples table, while another has locked and is accessing the global LexValues table.
CONCLUSION
[0583] The foregoing Detailed Description has disclosed to those skilled in the relevant technologies how to generate hash values for instances of distinct data values according to the inventive techniques, how to make normalized representations of a batch of instances of data in a relational database management system according to those techniques, and how to compress data values which contain an internal delimiter according to those techniques. The Detailed Description has also disclosed the best mode presently known to the inventors of practicing their inventive techniques.
[0584] As disclosed in the Detailed Description, the inventive techniques are implemented in a relational database management system that includes tables that provide a normalized representation of one or more RDF models. The instances of distinct data values are lexical values from the RDF models and the lexical values are hashed according to the techniques of the invention to generate the UIDs for the normalized representation. Similarly, it is lexical values that contain URIs that are parsed according to the compression techniques of the invention. However, as has been already pointed out and as will be readily apparent to those skilled in the relevant technologies, the inventive techniques may be employed in any situation in which distinct data values need to be mapped to UIDs, and thus for any situation in which normalized representations of data items are required. In the database context, the techniques permit normalization of data items without the use of JOIN operations to determine which UID corresponds to the data item being normalized. Determination of a prefix for compression by parsing may be employed not only with URIs, but also with any kind of data that includes internal delimiters.
[0585] As is apparent from the foregoing, aspects of the inventive techniques may be applied in environments other than those provided by RDBMS or other database systems. Details of the embodiment of the Detailed Description further depend on characteristics of the RDMS in which it is embodied and will differ for embodiments implemented using other RDBMS or database systems and for embodiments in environments other than database systems. Finally, as is well understood by those skilled in the relevant technologies, software allows countless ways of implementing the principles of the inventive techniques and the implementations will vary according to the purpose for which the inventive techniques are being used and the system upon which they are being implemented.
[0586].
Patent applications by Eugene Inseok Chong, Concord, MA US
Patent applications by Melliyal Annamalai, Nashua, NH US
Patent applications by Souripriya Das, Nashua, NH US
Patent applications by Zhe Wu, Westford, MA US
Patent applications by ORACLE INTERNATIONAL CORPORATION
User Contributions:
Comment about this patent or add new information about this topic: | http://www.faqs.org/patents/app/20120084271 | CC-MAIN-2014-10 | refinedweb | 26,571 | 55.47 |
I need help fixing this! I just keep getting a million errors urgh
/* The formula for computing the number of ways of choosing r different things from a set of n things is the following: C(n, r) = n! / (r! * (n-r)!) Write a recursive program (name the class that contains the main method Activity2) that executes C(n, r) three times and prints the results. Your execution should use the (n, r) pairs as follows: (2, 4), (5, 3), and (24, 12). Hard-code these values and calls to the recursive method into your program. (Do not prompt the user to enter them.) Note that you will need to devise and program a recursive method that calculates the factorial of a value. In the event any of the pairs perform a calculation that throws any exception, catch the exception, print a suitable informative message and continue processing. */ import java.math.BigInteger; public class Activity2 { public static void main(String[] args) { try { int n=2; int r=4; System.out.print("C("+n+","+r+"): "); int fn=factorial(n); int fr=factorial(r); int fnr=factorial(n-r); int ncr=fn/(fr*fnr); System.out.println(ncr); } catch (Exception e) {} } static int factorial(int n){ if (n==0) return 1; else return(n*factorial(n-1)); } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/17117-basic-recursion-problem-code-needs-modified.html | CC-MAIN-2015-35 | refinedweb | 215 | 54.73 |
UTILITIES AND SETTINGS
System Monitor and System Resource Meter
System Monitor is an
Figure 24.2: System Monitor.
Many other useful tools, including the Windows 9x Resource Meter, which is used to monitor system,
Task Manager
The Task Manager is by far one of the most useful tools available in most Microsoft operating systems. Pressing Ctrl+Alt+Delete displays the Task Manager and programs that are currently running on the operating system. If a process or program is stalled or not responding, you can select the program and press the End Task radio button to remove it. Pressing Ctrl+Alt+Delete twice restarts the system. Figure 24.3 displays some of the many system processes in the Task Manager of Windows 2000.
Figure 24.3: Windows Task Manager.
Dr. Watson
Dr. Watson is a program error debugger tool used in Windows 9x, NT, and 2000 to detect and log critical error information pertaining to system halts. Dr. Watson also attempts to point you in the right direction by offering possible tips for problem and error resolution. The question at hand does not seem to be “Dr. Watson, I presume?” Instead, the question seems to be, “Dr. Watson, where are you storing your log files?” The Dr. Watson tool stores its information in log files located in various places. In Windows 9x, the Dr. Watson log file is called WATSONXXX.WLG and is stored in C:\WINDOWS\DRWATSON. In Windows NT, Dr. Watson creates two log files named DRWTSN32.LOG and USER.DMP that are stored in C:\WINNT. When Dr. Watson intercepts a program fault in Windows 2000, the file DRWTSN32.LOG is produced and is stored in the C:\DOCUMENTS AND SETTINGS\ALL USERS\DOCUMENTS\DRWATSON. Dr. Watson offers a standard view and an advanced view for diagnostic reporting purposes.
Device Manager
The Device Manager is probably the most useful utility ever created for viewing, troubleshooting, and installing devices that are attached to a computer system. As mentioned earlier in this book, the Device Manager can be used to view or change system resources, such as IRQs, I/Os, and DMAs. The Device Manager is available in Windows 9x and Windows 2000. It is not available in Windows NT.
There are two easy ways to navigate to the Device Manager utility in Windows 9x. A quick way to access it is to right-click the Desktop icon My Computer, select Properties, and choose Device Manager. You can also access Device Manager by clicking Start > Settings > Control Panel, double-clicking the System icon, and choosing the Device Manager tab. The Device Manager opens, and you see a display similar to that shown in Figure 24.4.
Figure 24.4: Windows 9x Device Manager.
The Properties button depicted in Figure 24.4 allows you to view more information about the device that you select. If you click Computer and select Properties, you can view information regarding the IRQs, DMAs, I/Os, and memory of your particular system. The Refresh button forces the system to refresh the device through a process called enumeration. This means the system will simply start the plug-and-play process for the device. The Remove button forces the device to be deleted from the operating system’s Registry. The Print button prints out a report based on the devices listed in Device Manager. You can expand and view more information for a particular device by clicking on the plus (+) sign that is located to the right of it.
The Device Manager places specific symbols on its list of devices to notify you if a particular device is having a problem or has been disabled. The most common Device Manager symbols and their meanings are as
A black exclamation point (!) in a yellow circle represents a device in a problem state. The device in question may still be operational; the error may be
relatedto the system’s ability to detect the device, or it may be a device driver issue.
A red “X” means that the device in question has been disabled by the system. A resource conflict or a damaged device usually causes this error.
A blue “I” on a white background is used to show that a device’s system resources have been manually configured. It is the least common symbol of the three and is used for informational purposes only.
What do you do if you are having trouble with devices in the Device Manager? Here are a few important tips:
It is important to understand that the Device Manager may not always be able to list a device’s properties. If you have a device that seems to be running properly but you cannot list its properties in the Device Manager, you are most likely using a CONFIG.SYS file to load older Real Mode drivers for the device. This often occurs when running older CD-ROM devices with Windows 9x.
If a device is displaying a black exclamation point (!) on a yellow circle, you should check the properties of the device and identify any resource conflicts. You may have to
reassignan IRQ for the device in question before the system can utilize it. You can also troubleshoot this error by starting the Hardware Conflict Troubleshooter that is located in Windows Help. To practice using the Windows 98 troubleshooters, select Start > Help and select Troubleshooting.
The Windows 95 installation CD-ROM contains several useful hardware diagnostic tools, such as MSD.EXE and HWDIAG.EXE. MSD.EXE is based on the old DOS diagnostic reporting tool. HWDIAG.EXE is a more robust diagnostic tool that will provide detailed information about hardware devices. Neither of these tools loads by default; your best bet is to use the Device Manager.
Windows Update
All current versions of Windows operating systems include the Windows update feature known as Windows Update Manager. This utility is used to keep your operating system up to date with current patches, fixes, security updates, service releases, and other information
Here are some important notes regarding Windows Update:
It is recommended that you view the Web pages on the Windows Update site at an 800 x 600 or higher screen resolution.
When you enter the Microsoft Windows update site, ActiveX controls are downloaded to your system. These controls are used to check your system for specific updates that your system may require. After checking your system, Windows update provides you with a list of updates, software, and drivers that are suggested in order to keep your system up to date.
If you reinstall windows or upgrade you system, it is recommended that you reinstall any
componentsyou had previously installed using Windows Update.
Display Settings
When you experience display-related problems, start by navigating to the Windows 9x Display Properties Window. Select Start > Settings > Control Panel, and double-click on the Display icon. The Display Properties Settings Window should appear by default (Figure 24.5). You can also get to this window by right-clicking the Windows Desktop, selecting Properties, and clicking the Settings tab.
Figure 24.5: Windows 9x Display Properties Settings Window.
Figure 24.5 shows the Display Properties Settings window for the second time in this bookand for good reason. The current as well as past A+ Operating System exams focus on your ability to resolve display-related issues with this graphic. For example, If you are running a 640 4 80-pixel display, as shown in Figure 24.5, and you are unable to view entire Web pages through your Internet browser, or you want to fit more icons on your Desktop, simply use the mouse to move the “Screen area” bar to 800 600 pixels or more. If your video card supports a higher resolution, you will be able to fit more into the viewable screen area with this method. If you select the Advanced > Performance button, you will have the option of changing the Graphics Hardware acceleration settings.{% if main.adsdop %}{% include 'adsenceinline.tpl' %}{% endif %}
Changing these settings can be useful if you are having trouble with how fast your system is handling graphics. If you select Advanced > Adapter, the system’s video Adapter/Driver information will look like that shown in Figure 24.6. If you select the Change radio button on this screen, the Update Device Driver Wizard window will appear and lead you through the process of updating the video adapter driver. Remember, there are usually several ways to achieve a single goal in a Windows operating system. You can update your video adapter driver, as well as many other device drivers, through Device Manager.
Figure 24.6: Graphics Adapter Properties Window .
Virtual Memory Settings
In Chapter 23, we discussed virtual memory and swap files. You should recall that Windows 9x has the ability to use a portion of free hard disk space as a temporary storage area or memory buffer area for programs that need more memory than is available in RAM. This temporary hard disk memory area is called virtual memory or swap file. The actual name for this memory in Windows 9x is called WIN386.SWP.
To view or change your virtual memory settings in Windows 9x, select Start > Settings > Control Panel > System > Performance > Virtual Memory. You will be presented with a window similar to that shown in Figure 24.7. As stated in the Virtual Memory window, you should be very cautious when changing your system’s virtual memory settings. If you are not sure of what these settings should be, you should obviously let Windows manage your settings for you. If Windows is managing your virtual memory settings for you, it
Figure 24.7: Windows 9x Virtual Memory Settings.
The Recycle Bin
The Windows 9x Recycle Bin is a Desktop icon that represents a directory in which files are stored on a temporary basis. When you delete a file from a computer system’s hard drive, it is moved to the Recycle Bin. To restore a file that has been deleted to its original location on the hard drive, right-click on the file in the Recycle Bin and select Restore, or select File from the menu bar and then select Restore. If you want to remove a file from the Recycle Bin to free up hard drive space, select File from the menu bar, and select Empty Recycle Bin. You can also delete entries in the Recycle Bin by selecting File from the menu bar and selecting Delete. When you delete a file from the Recycle Bin, its associated entry in the hard drive’s FAT is removed. It is still possible to recover the deleted file with many available third-party utility programs.
Disk Cleanup Utilities and More
Your hard drive can get bogged down after a while with unnecessary files. Applications and programs can leave temporary files
Windows 98 has an excellent tool known as Disk Cleanup. This built-in utility allows you to get rid of those unnecessary files and free up space. To access the Disk Cleanup in Windows 98, click Start > Programs > Accessories > System Tools > Disk Cleanup. A window appears that asks you to select the drive you want to clean up. The default is C:. Select the OK radio button, and the window shown in Figure 24.8 will be displayed. You can select the options you wish to have removed from your drive by inserting a check mark
Figure 24.8: The Disk Cleanup Window.
If you select the More Options tab in the Disk Cleanup window, you will have the options of removing optional Windows components and other programs that you do not often use. If you choose to remove Windows components from within this window, you will automatically be directed to the Add/Remove Programs Properties/Windows Setup Window shown in Figure 24.9. This window lets you add or remove a Windows component. Notice the Install/Uninstall tab. The Install/Uninstall window lets you install applications and programs from a floppy or CD-ROM, or uninstall registered software that you do not use. The Add/Remove Programs Properties window can also be accessed by clicking Start > Settings > Control Panel > Add/Remove Programs.
You should also take note of the Startup Disk tab in Figure 24.9. With the Startup Disk window, you can create a Windows 9x bootable troubleshooting floppy disk that you can use later for diagnostic purposes.
Figure 24.9: Add/Remove Programs Properties Window.
Windows 9x allows you to schedule routine maintenance jobs easily through the use of the Maintenance Wizard. This utility is a handy tool that can be used to automatically run utilities such as Defrag, ScanDisk, or Disk Cleanup at times that are
Backup Utility
Windows operating systems come with a Backup utility program that is used to backup information to a tape storage device for future restoration. It is of utmost importance that you backup your critical information in the event of an operating system failure or accidental deletion of files.
A good backup program consists of a backup schedule that can be created using the Backup utility or a third-party backup utility program. In Windows 9x, the Backup utility can be accessed by selecting Start > Programs > Accessories > System Tools > Backup. If the Backup utility is not installed, you can install it through the Add/Remove programs applet located in the Control Panel.
There are several backup types and strategies that you can implement. The backup type and strategy that you use depends on the amount of storage capacity you have, the time it takes to back up files, and the time it takes to restore files. The following types of
Copy backs up only selected files. The Copy backup turns the Backup archive bit off or resets it.
Full backup backs up everything on your hard drive. If you have to restore an entire system, it is the best backup to have. During this backup, the backup archive bit is turned off. This simply means that every file will be
backedup again whether or not its contents have changed. The disadvantages of this type of backup are that it takes longer to run and it is often redundant because most system files do not change.
Incremental backup backs up all the files that have changed or been created since the last backup job and have their archive bits set to on. This backup type uses less tape storage space and
spreadsthe storage of files across several tapes. With an incremental backup, files are backed up much faster than with a differential backup, but they take much longer to restore. To do a proper incremental restore, you will need the last full backup tape and multiple incremental tapes.
Differential backup backs up all files that have been created or have changed since the last backup and does not reset the archive bit. The archive bit is left on. Differential backup takes much longer to do, but is much faster to restore. To do a proper restore, you will need the last full backup tape and the last differential backup tape.
A Zip drive is a popular information storage device. The average Zip drive can hold about 100MB of data, which can be very useful for the daily storage and backup of important files. Users of Windows can use a third-party utility, such as WinZip, to store files in compressed form and then back them up. Windows stores compressed files in .ZIP format. | http://flylib.com/books/en/2.182.1.288/1/ | CC-MAIN-2014-42 | refinedweb | 2,570 | 63.29 |
c# Where can find System.Windows.Controls.dll
I have problem with dll file and have project which need this file System.Windows.Controls.dll for
listBox1.ItemsSource
error fix , and add reference with this dll to fix error. Where i can find this dll file? Is there any download link ? Share please ! Thanks !
In "Add Reference" it doesn't exist !
Solution:
Answers
Here are the steps:
- Right click on References in the Solutions Explorer (Solutions explorer is on the right of your IDE)
- Select Add Reference
- In the window that opens, Select Assemblies > Framework
- Check the PresentationFramework component box and click ok
This should be in the PresentationFramework.dll but that control is in the System.Windows.Controls namespace.
You can add it by going to your project, right-clicking on References > Add Reference > .NET tab, and selecting this DLL
I was able to find the dll file by searching my computer for "System.Windows.Controls.dll". I found it under the following file location... "C:\Program Files (x86)\Microsoft SDKs\Silverlight\v3.0\Libraries\Client\System.Windows.Controls.dll"
Hope this helps!
Xmonad/Frequently asked questions
From HaskellWiki
Revision as of 23:19, 23 February 2010

-- 0.9 main:
main = xmonad =<< dzen defaultConfig
-- 0.8.1 main: .
3.10 Floating a window or sending it to a specific workspace by default
See General xmonad.hs config tips regarding manageHook, and the section here about 'xprop' for this.
3.15 I need to find the class title or ...
3.18 ... from XMonad.Hooks.DynamicLog (added after the 0.6 release), which can also display the current workspace, window name, layout, and even arbitrary
3.19 How can I make xmonad use UTF8?
TODO: is this still accurate? Doesn't xmonad-0.8 and greater always use UTF-8 with no extra imports or configuration changes? ... its output ppOutput like this:
import qualified System.IO.UTF8
-- lots of other stuff
4.12 Problems with Java applications, Applet java console
- Using the free blackdown java runtime also seems to work correctly.
4.12.5 Use JDK 7
- Using JDK 7 also seems to work well.
4.18 OpenOffice looks bad
Strangely, OpenOffice won't use the GTK look unless the following environment variable is set:
OOO_FORCE_DESKTOP=gnome
Use this if you don't like the default look of OpenOffice in xmonad.
Created 01-16-2017 10:03 AM
Hello All,
I need to import and parse XML files in Hadoop.
I have an old Pig REGEX_EXTRACT script parser that works fine but takes some time to run, around 10-15 minutes.
In the last 6 months, I have started to use Spark, with large success in improving run time. So I am trying to move the old Pig script into Spark using the Databricks XML parser, as mentioned in the following posts. The version used is:
The script I try to run is similar to:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.orc._
import org.apache.spark.sql._
import org.apache.hadoop.fs._
import com.databricks.spark
import com.databricks.spark.xml
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

// drop table
val dfremove = hiveContext.sql("DROP TABLE FileExtract")

// Create schema
val xmlSchema = StructType(Array(
  StructField("Text1", StringType, nullable = false),
  StructField("Text2", StringType, nullable = false),
  StructField("Text3", StringType, nullable = false),
  StructField("Text4", StringType, nullable = false),
  StructField("Text5", StringType, nullable = false),
  StructField("Num1", IntegerType, nullable = false),
  StructField("Num2", IntegerType, nullable = false),
  StructField("Num3", IntegerType, nullable = false),
  StructField("Num4", IntegerType, nullable = false),
  StructField("Num5", IntegerType, nullable = false),
  StructField("Num6", IntegerType, nullable = false),
  StructField("AnotherText1", StringType, nullable = false),
  StructField("Num7", IntegerType, nullable = false),
  StructField("Num8", IntegerType, nullable = false),
  StructField("Num9", IntegerType, nullable = false),
  StructField("AnotherText2", StringType, nullable = false)
))

// Read file
val df = hiveContext.read.format("com.databricks.spark.xml")
  .option("rootTag", "File")
  .option("rowTag", "row")
  .schema(xmlSchema)
  .load("hdfs://MyCluster/RawXMLData/RecievedToday/File/Files.tar.gz")

// select
val selectedData = df.select("Text1", "Text2", "Text3", "Text4", "Text5",
  "Num1", "Num2", "Num3", "Num4", "Num5", "Num6",
  "AnotherText1", "Num7", "Num8", "Num9", "AnotherText2")

selectedData.write.format("orc").mode(SaveMode.Overwrite).saveAsTable("FileExtract")
The xml file looks similar to:
<?xml version="1.0"?>
<File>
  <row>
    <Text1>something here</Text1>
    <Text2>something here</Text2>
    <Text3>something ...</Text3>
    ...
  </row>
  <row>
    <Text1>something here</Text1>
    <Text2>something else here</Text2>
    <Text3>something new ...</Text3>
    ...
  </row>
  ...
</File>
Many xml files are zipped together. Hence the tar.gz file.
This runs. However for a 400MB file it takes 50mins to finish.
Does anyone have an idea why it is so slow, or how I may speed it up? I am running on a 7 machine cluster with about 120GB Yarn memory, with hortonworks HDP-2.5.3.0 and spark 1.6.2.
Many thanks in Advance!
One problem may be partitioning: the Spark app may not know how to divide processing of the .tar.gz among many workers, so it hands it off to one. That's a problem with .gz files in general.
I haven't done any XML/tar processing work in Spark myself, so I am not confident about where to begin. You could look at the history server to see how work was split up. Otherwise: try throwing the work at Spark as a directory full of XML files (maybe gzipped individually), rather than a single .tar.gz. If that speeds things up, then it could be a sign that partitioning is the problem. It would then become a matter of working out how to split up those original 400MB source files into a smaller set (e.g. 20 x 20MB files) and seeing if that parallelizes better.
Please see my post below
I gave it a quick try and created 50 xml files according to your structure each having 60MB. Tested on 3 workers (each 7 core 26GB per worker)
1) The tar.gz file had 450MB and took 14min with 1 (!) executor. Since it is a tar file, only one executor reads the file.
2) Putting all files as single xml.gz in one folder and starting the job again I had 3 executors involved and the job got done in under 5 min (roughly the 14 min / 3 since no shuffle required)
So I see two issues here:
1) Don't use tar.gz
2) 50 min compared to 14 min: How fast is your machine (cores, ...)?
Please see my post below
Created 01-18-2017 10:04 AM
Thanks for a quick reply.
I am using a mixed environment (for dev):
The reason why we tar.gz the files is that we receive many small XML files, about 25,000. Loading these files into Hadoop individually would take over 4 hours. Using tar.gz reduces the load time to around 10 minutes, as well as reducing the size from 14GB to 0.4GB.
I have tried removing the tar.gz; the run time becomes 1h45m. This is likely to be the result of the many small files.
To add, the Pig parser may be faster because the XML structure is hardcoded. This wants to be avoided because we have experienced machines changing the way the XML is produced, so the Spark parsing is more robust.
Ideally, we would like to use the more robust Spark parser but keep the load into Hadoop at around 10 minutes and the processing time at around 10 minutes.
Any ideas? One idea is to tar.gz into multiple files, i.e. 25,000 into 10 files. the load time would be ~10mins, processing time somewhere in between 10mins and 50mins.
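One way to sketch that idea (a hypothetical Python helper using the standard tarfile module; the paths, glob pattern and archive count are placeholders, not values from this thread) is to distribute the small files round-robin across N compressed archives, so each archive can be read by a different executor:

```python
# Pack many small XML files into n_archives gzip'd tar archives so that
# several executors can each read one archive in parallel.
import glob
import tarfile

def pack_in_chunks(pattern, n_archives, prefix="part"):
    names = sorted(glob.glob(pattern))
    archives = []
    for i in range(n_archives):
        out = "%s-%02d.tar.gz" % (prefix, i)
        # round-robin: file 0 -> archive 0, file 1 -> archive 1, ...
        chunk = names[i::n_archives]
        if not chunk:
            continue
        with tarfile.open(out, "w:gz") as tar:
            for name in chunk:
                tar.add(name)
        archives.append(out)
    return archives

# e.g. pack_in_chunks("RawXMLData/*.xml", 10) for ten roughly equal archives
```

With roughly equal archives, no single task should end up with most of the input, which is one possible cause of the single 50-minute task.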
Does anyone have:
Created 01-25-2017 11:12 AM
So I have changed the way I tar.gz the files.
At first I tried to create files of about 128MB (about 4 files), then 64MB (about 8-10 files), and then 1MB (100+).
Obviously, this alters the number of tasks that run. The tasks run faster the smaller the file, except one!
One task always takes ~50mins.
Why does this happen? How do I speed up this task?
Created 03-20-2017 02:45 PM
Thanks Mark. I have looked into your suggestions.
This has led me to LZO compression;
I think this may be something I try next. Do you have any suggestions on this? Doesn't HDP already come with LZO? The link is a good few years old; should I try something else before I spend a few hours on this? My company is not keen on me spending a few hours writing a Java SequenceFile jar.
When submitting papers to scientific journals one quite frequently needs to enumerate the different subplots of a figure with A, B, ... .
This sounds like a very common problem and I was trying to find an elegant way to do that automatically with matplotlib, but I was surprised to find nothing on it. But maybe I am not using the right search terms. Ideally, I am searching for a way to annotate such that the letters stay in place relative to the subplot if the figure is resized or the subplot is moved via
fig.subplots_adjust
fig.tight_layout
If you want the annotation relative to the subplot, then plotting it using ax.text seems the most convenient way to me.
Consider something like:
import numpy as np
import matplotlib.pyplot as plt
import string

fig, axs = plt.subplots(2, 2, figsize=(8, 8))
axs = axs.flat

for n, ax in enumerate(axs):
    ax.imshow(np.random.randn(10, 10), interpolation='none')
    ax.text(-0.1, 1.1, string.ascii_uppercase[n], transform=ax.transAxes,
            size=20, weight='bold')
I'm trying to set up Trac to test out its functionality, and the only guides I can find online talk about setting up a VirtualHost. Right now I am under the impression that I need access to a DNS server to properly use the VirtualHost directive, and for various reasons I don't have access to one. Is it possible to set up Trac without setting up a VirtualHost? I haven't had any luck. If I run the site with tracd, it works - which means that at least part of it is set up properly.
Right now all I have is an Apache Directory directive pointing to /pathToTracSite/htdocs/, and when I visit the trac location, all I get when viewing the site from a browser is an empty directory (which makes sense, because htdocs/ is empty).
My server is running Apache2
I know I'm missing a lot here, because I don't understand Apache the Trac system very well - any help would be appreciated.
If you want Trac to run faster, use mod_wsgi (which is faster than mod_python; both are faster than CGI). This can be installed as an Apache module from source or from a binary package (see yum or apt-get). When installing MoinMoin, I found the difference between mod_python and WSGI to be significant. Just noticed: your stumbling block is that Python web apps have to be configured in Apache before they will run (it doesn't work like a PHP or CGI application).
Trac
To setup trac for WSGI:
Apache conf
WSGIScriptAlias /trac /trac/apache/trac.wsgi
## This is required if you plan to use HTTP authorization. Without it the
## user name won't be passed
WSGIPassAuthorization On
<Directory /trac/apache >
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
AuthName "Trac at My Company"
AuthType Basic
AuthUserFile /var/secure/authfiles/trac-authfile
Require valid-user
</Directory >
trac.wsgi
import sys
sys.stdout = sys.stderr
import os
os.environ['TRAC_ENV'] = '/trac'
os.environ['PYTHON_EGG_CACHE'] = '/trac/eggs'
import trac.web.main
application = trac.web.main.dispatch_request
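For context, a WSGI application such as trac.web.main.dispatch_request is just a callable with a fixed signature; this toy example (not Trac itself, just the protocol shape mod_wsgi expects from the name "application") illustrates it:

```python
# Minimal WSGI application: the server calls this "application" callable
# once per request and iterates over the returned body chunks.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Anything exposing such a callable can be mounted with WSGIScriptAlias, which is why the trac.wsgi file above is so short.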
To setup Trac for mod_python, you could follow the instructions at TracModPython, copied here for your reading pleasure:
<Location /projects/myproject>
SetHandler mod_python
PythonInterpreter main_interpreter
PythonHandler trac.web.modpython_frontend
PythonOption TracEnv /var/trac/myproject
PythonOption TracUriRoot /projects/myproject
</Location>
Trac also works fine in a <Location> block.
You don't need to make DNS changes to use a virtualhost, an /etc/hosts entry on your client machine will also work fine. You also don't need to run Apache; tracd will do the job just fine for testing purposes.
Great tip!! I had problems running Trac on mod_python and I used these instructions to change it to mod_wsgi. Shall add some corrections though
Type: Posts; User: abeginner
The location pointed by "ptr" may be getting written by another thread. But why left-shift "k"?
What's the purpose of the code:
void wait ( int *ptr, int k) { while ((1 << k) & (*ptr) == 0) {} }
I get a syntax error for referring to a previously defined class.
file first.cpp
---------
namespace NS {
class C : public B {
int m;
}
I have placed both short/long versions of my story/questions here.
Short story:
I have a job offer from one of the top 10 semiconductor companies in the SF Bay Area, and after about 10 years of experience in...
Fine, but a common compiler such as gcc implements it with the vptr/vtbl mechanism. For gcc, does anyone know why it should not be 1 vptr but 2?
Shouldn't it be a just 1 vptr, not 2?
Where are non-virtual non-static member functions stored?
Where are static member functions stored?
Why is sizeof(ABCDerived) 24 and not 20 (if iMem does not override, then 16 bytes for iMem...
thanks ninja9578.
I read somewhere that non-virtual non-static member functions do not add to the size of a class object, but can someone tell me where they are stored in memory. Static member functions, which also...
I overloaded op delete in the following way, but from the output I get it seems op delete does not get called for "delete a2". Does anyone know why? Running on Visual Studio.
---
#include <iostream>...
I understand that; however, my original concern is different. If someone asks you a design question, I suspect that starting out like "these are the classes and the member functions" is not the...
what do you mean by order, order, order? and data and processes?
Any link I can refer to quickly absorb this methodology/key steps?
How is one supposed to go about thinking and doing object-oriented design for a system? If someone asks me to design some utility in an OO language, I would start by defining classes that I deem...
I have the following statement in my vc++ code, so I could use npos instead of string::npos
--
using namespace std::string;
--
But I get the following error. Similar error is issued if I do...
same logic: change m2 to m3
sorry not sure i follow. If a new non-POD object is added as a member say std::vector, why does memset break the code - would not std::vector member's size get included in sizeof(*this)
Paul Mckenzie,
Whether a ctor is correct or not depends upon what it is expected to do per design. It is not clear whether the designer expected the ctor to allocate memory for long *p. If yes, then...
nuzzle,
(d) would recompile by the same logic as (a).
class A {
void a(int i, int j ) {...} // m1
void a(int i) {..} // m2
}
If you add m3 to A:
In (a) - the newly added constructor may be a candidate for constructor call among many overloaded constructors, thus becoming a viable candidate function for some constructor calls.
class A {
...
Any help, please!
The other one was:
What's wrong with the code?
class T {
public:
T();
private:;
char c[7];
long *p;
};
T::T() { memset(this, 0, sizeof(*this)); }
I ran into the following question:
You have a class that many libraries depend on. Now you need to modify the class for one application. Which of the following changes require recompiling all...
Hi,
I was trying to write an overloaded == operator for a string class. When I use only "s" instead of "&s" in the statement "this == &s", I get the following error. My question is: isn't "s" a...
How to sort the objects in a list in Python
Example
my_list = [1, 5, 2, 6, 0] my_list.sort() print(my_list) my_list.sort(reverse=True) print(my_list)
Output
This will give the output −
[0, 1, 2, 5, 6] [6, 5, 2, 1, 0]
Since tuples are immutable, they don't have an in-place sort function that can be called directly on them; you have to use the sorted function, which returns a sorted list. Similarly, if you don't want to sort a list in place, use sorted in place of the list class method sort.
example
my_list = [1, 5, 2, 6, 0] print(sorted(my_list)) print(sorted(my_list, reverse=True))
Output
This will give the output −
[0, 1, 2, 5, 6] [6, 5, 2, 1, 0]
If you have a list of objects without the __cmp__ method implemented in their class, you can use the key argument to specify how to compare two elements. For example, if you have dictionaries in a list and want to sort them based on the key size, you could do the following:
Example
def get_my_key(obj): return obj['size'] my_list = [{'name': "foo", 'size': 5}, {'name': "bar", 'size': 3}, {'name': "baz", 'size': 7}] my_list.sort(key=get_my_key) print(my_list)
Output
This will give the output −
[{'name': 'bar', 'size': 3}, {'name': 'foo', 'size': 5}, {'name': 'baz', 'size': 7}]
It would call the function specified for each entry and sort based on this value for each entry. You could also specify the same function for an object as well, by returning an attribute of the object.
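The same key mechanism works for arbitrary objects, not just dictionaries; in this sketch (the Item class is made up for illustration) the key function returns an attribute, either via a lambda or operator.attrgetter:

```python
from operator import attrgetter

class Item:
    def __init__(self, name, size):
        self.name = name
        self.size = size
    def __repr__(self):
        return "%s(%d)" % (self.name, self.size)

items = [Item("foo", 5), Item("bar", 3), Item("baz", 7)]

# a plain function (here a lambda) works as the key ...
items.sort(key=lambda obj: obj.size)
print(items)            # [bar(3), foo(5), baz(7)]

# ... and operator.attrgetter expresses the same thing more directly
items.sort(key=attrgetter("size"), reverse=True)
print(items)            # [baz(7), foo(5), bar(3)]
```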
Hash Tables—Theory and Practice
The first time I heard about hash tables was after taking a compilers course during my BSc. The truth is, I was not able to understand and appreciate their usefulness fully back then. Now that I know more about hash tables, I decided to write about them so others will see their importance as well.
Hash tables can be implemented in any programming language, including Awk. However, the choice of programming language is not the most important thing compared to other critical choices. Hash tables are used in compilers, databases, caching, associative arrays and so on. Hash tables are one of the most important data structures in computer science.
The Problem
The problem that will serve as an example for this article is finding out how many words from one text file appear in another text file. All programs in this article use a text file (Pride and Prejudice) for populating the hash table. Another text file (The Adventures of Tom Sawyer) will be used for testing the performance of the hash table. You can download both text files from Project Gutenberg.
The following output shows how many words each file contains:
$ wc AofTS.txt
  9206  73845 421884 AofTS.txt
$ wc PandP.txt
 13426 124589 717573 PandP.txt
As you can see, both text files are relatively large, which is good for benchmarking. Your real-life hash tables may not be as big. In order to remove various control characters, as well as punctuation marks and numbers, both text files were processed further:
$ strings PandP.txt > temp.INPUT
$ awk '{for (i = 1; i <= NF; i++) print $i}' temp.INPUT > new.INPUT
$ cat new.INPUT | tr -cd '![a-zA-Z]\n' > INPUT
$ strings AofTS.txt > temp.CHECK
$ awk '{for (i = 1; i <= NF; i++) print $i}' temp.CHECK > new.CHECK
$ cat new.CHECK | tr -cd '![a-zA-Z]\n' > empty.CHECK
$ sed '/!/d' empty.CHECK > temp.CHECK
$ sed '/^\s*$/d' temp.CHECK > CHECK
The reason for simplifying both files is that some control characters made the C programs crash. As the purpose of this article is to showcase hash tables, I decided to simplify the input instead of spending time trying to figure out the problem and modifying the C code.
After constructing the hash table using the first file (INPUT) as input, the second one (CHECK) will be used for testing the hash table. This will be the actual use of the hash table.
Theory
Let me start with the definition of a hash table. A hash table is a data structure that stores one or more key and value pairs. A hash table can store keys of any type.
A hash table uses a hash function to compute an index into an array of buckets or slots, from which the correct value can be found. Ideally, the hash function will assign each key to a unique bucket. Unfortunately, this rarely happens. In practice, more than one of the keys will hash to the same bucket. The most important characteristic of a hash table is the number of buckets. The number of buckets is used by the hashing function. The second most important characteristic is the hash function used. The most crucial feature of the hash function is that it should produce a uniform distribution of the hash values.
You can say that the search time is now O(n/k), where n is the number of keys, and k is the size of the hash array. Although the improvement looks small, you should realize that for a hash array with 20 buckets, the search time is now 20 times smaller.
It is important for the hash function to behave consistently and output the same hash value for identical keys. A collision happens when two keys are hashing to the same index—that's not an unusual situation. There are many ways to deal with a collision.
A good solution is to use separate chaining. The hash table is an array of pointers, each one pointing to the next key with the same hash value. When a collision occurs, the key will be inserted in constant time to the head of a linked list. The problem now is that when you have to search a hash value for a given key, you will have to search the whole linked list for this key. In the worst case, you might need to traverse the entire linked list—that's the main reason the linked list should be moderately small, giving the requirement for a large number of buckets.
As you can imagine, resolving collisions involves some kind of linear search; therefore, you need a hash function that minimizes collisions as much as possible. Other techniques for resolving collisions include open addressing, Robin Hood hashing and 2-choice hashing.
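The implementations in this article use separate chaining, but a minimal open-addressing sketch shows the alternative. This hypothetical Python class resolves collisions with linear probing: on a collision, the key goes into the next free slot instead of a linked list (deletion, which requires tombstone markers, is omitted for brevity):

```python
# Minimal open-addressing hash table using linear probing.

class ProbingTable:
    def __init__(self, nbuckets=11):
        self.slots = [None] * nbuckets

    def _bucket(self, key):
        return hash(key) % len(self.slots)

    def insert(self, key):
        i = self._bucket(key)
        for step in range(len(self.slots)):
            j = (i + step) % len(self.slots)
            if self.slots[j] is None or self.slots[j] == key:
                self.slots[j] = key
                return True
        return False            # table is full

    def lookup(self, key):
        i = self._bucket(key)
        for step in range(len(self.slots)):
            j = (i + step) % len(self.slots)
            if self.slots[j] is None:
                return False    # an empty slot ends the probe sequence
            if self.slots[j] == key:
                return True
        return False
```

Note how the table degrades as it fills: probe sequences get longer, which is why open-addressing tables are usually resized well before they are full.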
Hash tables are good at the following:
In a hash table with the "correct" number of buckets, the average cost for each lookup is independent of the number of elements stored in the table. In addition, you can reduce the average lookup cost by a careful choice of the hash function, bucket table size and internal data structures.
Hash tables also have some disadvantages:
They are not good at keeping sorted data. It is not efficient to use a hash table if you want your data sorted.
Hash tables are not effective when the number of entries is very small, because despite the fact that operations on a hash table take constant time on average, the cost of a good hash function can be significantly higher than the inner loop of the lookup algorithm for a sequential list or search tree.
For certain string processing applications, such as spell-checking, hash tables may be less efficient than trees or finite automata.
Although the average cost per operation is constant and fairly small, the cost of a single operation may be fairly high. In particular, if the hash table uses dynamic resizing, inserting or deleting a key may, once in a while, take time proportional to the number of entries. This can be a serious drawback in applications where you want to get results fast.
Hash tables become quite inefficient when there are many collisions.
As I'm sure you understand, not every problem can be solved equally well with the help of a hash table. You always should consider and examine all your options before deciding what to use.
Figure 1 shows a simple hash table with its keys and values. The hash function is the modulo 10 function; therefore, ten buckets are needed because only ten results can come from a modulo 10 calculation. Having only ten buckets is not considered very good, especially if the number of values grows large, but it is fine for illustrative purposes.
Figure 1. A Simple Hash Table
To summarize, a hash table should follow these principles:
Do not have too many buckets, just as many as needed.
It is good for the hash function to take into account as much information provided by the key as possible. This is not a trivial task.
The hash function must be able to hash similar keys to different hash values.
Each bucket should have the same number of keys or at least as close to being equal as possible (this is a very desirable property).
Following some principles will make collisions less likely. First, you should use a prime number of buckets. Second, the bigger the size of the array, the smaller the probability of collisions. Finally, you should make sure that the hash function is smart enough to distribute its return values as evenly as possible.
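A small helper can pick the bucket count for you. This Python sketch (plain trial division, which is fine at the sizes used in this article) returns the smallest prime that is at least the requested size:

```python
# Choose the smallest prime >= the desired bucket count, following the
# "use a prime number of buckets" principle.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(n):
    while not is_prime(n):
        n += 1
    return n

print(next_prime(10))    # 11
print(next_prime(100))   # 101
print(next_prime(1000))  # 1009
```

This is how table sizes such as 97, 277 and 997 in the benchmarks below can be derived from round targets like 100, 250 and 1,000.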
Delete, Insert and Lookup
The main operations on a hash table are insertion, deletion and lookup. You use the hash value to determine where in the hash table to store a key. Later, you use the same hash function to determine where in the hash table to search for a given key.
Once the hash table is populated, searching is the same as doing an insertion. You hash the data you are searching for, go to that place in the array, look down the list that starts from that location, and see if what you are looking for is in the list. The number of steps is O(1). The worst-case search time for a hash table is O(n), which can happen when all keys are stored in the same bucket. Nevertheless, the probability of that happening is so small that both the best and average cases are considered to be O(1).
You can find many hash table implementations on the Internet or in several books on the topic. The tricky part is using the right number of buckets and choosing an efficient hash function that will distribute values as uniformly as possible. A distribution that is not uniform definitely will increase the number of collisions and the cost of resolving them.
A C Implementation
The first implementation will be stored in a file named ht1.c. The implementation uses separate chaining, because separate chaining is a reasonable choice. For simplicity, both input and output filenames are hard-coded inside the program. After reading the input and building the hash table, the program reads the second file, word by word, and checks whether each word can be found in the hash table.
Listing 1 shows the full C code of the ht1.c file.
Listing 1. ht1.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

#define TABLESIZE 5

// Linked list
typedef struct node {
    char *data;
    struct node *next;
} node;

// A hash function: the returned hash value will be the
// ASCII value of the first character of the string
// modulo the size of the table.
unsigned int hash(const char *str, int tablesize)
{
    int value;

    // Get the first letter of the string
    value = toupper(str[0]) - 'A';
    return value % tablesize;
}

static int lookup(node *table[], const char *key)
{
    unsigned index = hash(key, TABLESIZE);
    const node *it = table[index];

    // Try to find a matching key in the list
    while (it != NULL && strcmp(it->data, key) != 0) {
        it = it->next;
    }
    return it != NULL;
}

int insert(node *table[], char *key)
{
    if (!lookup(table, key)) {
        // Find the desired linked list
        unsigned index = hash(key, TABLESIZE);
        node *new_node = malloc(sizeof *new_node);
        if (new_node == NULL)
            return 0;
        new_node->data = malloc(strlen(key) + 1);
        if (new_node->data == NULL)
            return 0;

        // Add the new key and link it to the front of the list
        strcpy(new_node->data, key);
        new_node->next = table[index];
        table[index] = new_node;
        return 1;
    }
    return 0;
}

// Populate the hash table
// First parameter: the hash table variable
// Second parameter: the text file with the words
int populate_hash(node *table[], FILE *file)
{
    char word[50];
    int c;    // fscanf() returns an int, so this must not be a char

    do {
        c = fscanf(file, "%s", word);
        // IMPORTANT: remove the newline character
        size_t ln = strlen(word) - 1;
        if (word[ln] == '\n')
            word[ln] = '\0';
        insert(table, word);
    } while (c != EOF);
    return 1;
}

int main(int argc, char **argv)
{
    char word[50];
    int c;
    int found = 0;

    // Initialize the hash table
    node *table[TABLESIZE] = {0};

    FILE *INPUT;
    INPUT = fopen("INPUT", "r");
    // Populate the hash table
    populate_hash(table, INPUT);
    fclose(INPUT);
    printf("The hash table is ready!\n");

    int line = 0;
    FILE *CHECK;
    CHECK = fopen("CHECK", "r");
    do {
        c = fscanf(CHECK, "%s", word);
        // IMPORTANT: remove the newline character
        size_t ln = strlen(word) - 1;
        if (word[ln] == '\n')
            word[ln] = '\0';
        line++;
        if (lookup(table, word)) {
            found++;
        }
    } while (c != EOF);
    printf("Found %d words in the hash table!\n", found);
    fclose(CHECK);

    return 0;
}
An Even Better C Implementation
The second implementation will be stored in a file named ht2.c. This implementation uses separate chaining as well. Most of the C code is the same as in ht1.c except for the hash function. The C code for the modified hash function is the following:
int hash(char *str, int tablesize)
{
    int sum = 0;

    // Is it a valid string?
    if (str == NULL) {
        return -1;
    }

    // Calculate the sum of all characters in the string
    for ( ; *str; str++) {
        sum += *str;
    }

    // Return the sum mod the table size
    return (sum % tablesize);
}
What this hash function does better than the previous one is that it takes into account all the letters of the string instead of just the first one. As a result, the produced values are larger and more varied, which makes it possible to take advantage of hash tables with a larger number of buckets.
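To experiment with the two hash functions outside of C, here are direct Python ports (the words and table sizes below are illustrative). One caveat worth noting: the sum-of-characters approach always sends anagrams to the same bucket, since addition is order-independent:

```python
# Python ports of the two hash functions from ht1.c and ht2.c.

def hash1(key, tablesize):
    # ht1.c: only the first letter matters
    return (ord(key[0].upper()) - ord('A')) % tablesize

def hash2(key, tablesize):
    # ht2.c: sum of all character codes
    return sum(ord(ch) for ch in key) % tablesize

print(hash1("pride", 5), hash1("prejudice", 5))    # 0 0  (same bucket)
print(hash2("pride", 97), hash2("prejudice", 97))  # 47 82 (different buckets)

# Anagrams always collide under the sum-of-characters hash:
print(hash2("listen", 97) == hash2("silent", 97))  # True
```

The anagram collisions rarely matter for natural-language input, but they show why production hash functions usually mix in character positions as well as values.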
Benchmarks
The presented benchmarks are far from accurate or scientific. They are just an indication of what is better, what works and what doesn't and so on. Keep in mind that finding the optimal hash table size is not always easy.
All programs were compiled as follows:
$ gcc -Wall program.c -o program
The trusty time command produced the following output after executing ht1 with four different hash table sizes:
$ grep define ht1.c
#define TABLESIZE 101
$ time ./ht1
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.401s
user    0m0.395s
sys     0m0.004s

$ grep define ht1.c
#define TABLESIZE 10
$ time ./ht1
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.794s
user    0m0.788s
sys     0m0.004s

$ grep define ht1.c
#define TABLESIZE 1001
$ time ./ht1
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.410s
user    0m0.404s
sys     0m0.004s

$ grep define ht1.c
#define TABLESIZE 5
$ time ./ht1
The hash table is ready!
Found 59843 words in the hash table!

real    0m1.454s
user    0m1.447s
sys     0m0.004s
Figure 2 shows a plot of the execution times from the four different values of the TABLESIZE variable of the ht1.c program. The bad thing about ht1.c is that its performance with a hash table of 101 buckets is almost the same as with one with 1,001 buckets!
Figure 2. Execution Times from the Four Different Vales of the TABLESIZE Variable of the ht1.c Program
Next, here are the results from the execution of the ht2.c program:
$ grep define ht2.c
#define TABLESIZE 19
$ time ./ht2
INPUT CHECK
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.439s
user    0m0.434s
sys     0m0.003s

$ grep define ht2.c
#define TABLESIZE 97
$ time ./ht2
INPUT CHECK
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.116s
user    0m0.111s
sys     0m0.003s

$ grep define ht2.c
#define TABLESIZE 277
$ time ./ht2
INPUT CHECK
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.072s
user    0m0.067s
sys     0m0.003s

$ grep define ht2.c
#define TABLESIZE 997
$ time ./ht2
INPUT CHECK
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.051s
user    0m0.044s
sys     0m0.003s

$ grep define ht2.c
#define TABLESIZE 22397
$ time ./ht2
INPUT CHECK
The hash table is ready!
Found 59843 words in the hash table!

real    0m0.049s
user    0m0.044s
sys     0m0.003s
Figure 3 shows a plot of the execution times for the five different values of the TABLESIZE variable used in the ht2.c program. All hash table sizes are prime numbers. The reason for using prime numbers is that they behave better with the modulo operation: because a prime number has no positive divisors other than one and itself, it shares no common factors with the computed key sums, so regular patterns in the input are far less likely to pile many keys into the same few buckets.
Figure 3. A Plot of the Execution Times from the Five Different Values of the TABLESIZE Variable Used in the ht2.c Program
As you can see, the new hash function performs much better than the hash function found in ht1.c, and with it, using more buckets greatly improves the performance of the hash table. Nevertheless, since the number of unique words in the input file is finite, there is no point in using more buckets than that.
It is useful to examine the distribution of keys in the hash table for the ht2 implementation using two different numbers of buckets. The following C function prints the number of keys in each bucket:
void printHashTable(node *table[], const unsigned int tablesize)
{
    node *e;
    unsigned int i;

    printf("Printing a hash table with %u buckets.\n", tablesize);
    for (i = 0; i < tablesize; i++) {
        // Get the first node of the linked list
        // for the given bucket.
        e = table[i];
        int n = 0;
        while (e != NULL) {
            n++;
            e = e->next;
        }
        printf("Bucket %u has %d keys\n", i, n);
    }
}
Figure 4 shows the number of keys in each bucket for two hash tables: one with 97 buckets and the other with 997 buckets. The hash table with 997 buckets appears to follow a pattern in how it fills its buckets, whereas the hash table with 97 buckets is more evenly distributed. Nevertheless, the bigger hash table has fewer keys in each bucket, which is what you really want, because fewer keys in each linked list means less time spent searching it.
Figure 4. The Number of Keys in Each Bucket for Two Hash Tables with Different Numbers of Buckets
Summary
Hash tables are an important part of computer science and programming. I hope this article helps you understand their importance and clarifies some things about them. | https://www.linuxjournal.com/content/hash-tables%E2%80%94theory-and-practice | CC-MAIN-2022-05 | refinedweb | 2,927 | 73.58 |
Hi! I'm working on a Java assignment in my beginning Java class. The assignment is to write a program using the sqrt() method in the Math class and a for loop to produce the output results. My output is correct according to the chart the instructor gave. My only problem is the extra carriage returns between the lines of output. I don't want any spaces between the output.
This is how the assignment should look:
Number  SquareRoot
0       0.0000
2       1.4142
4       2.0000
6       2.4495
8       2.8284
10      3.1623
12      3.4641
14      3.7417
16      4.0000
18      4.2426
20      4.4721
Which part of my program is causing the extra spaces? How do I fix it?
public class Week5Assignment {

    public static void main(String[] args) {
        // Print title of square root chart
        System.out.println("Number" + "\tSquare Root");

        // Print square root of even numbers 0 - 20
        for (int x = 0; x <= 20; x++) {
            if (x % 2 == 0)
                System.out.printf(" " + x + "\t " + "%.4f", x + Math.sqrt(x));
            System.out.println("\n");
        }
    }
}
This is part of the output from my code (I didn't post everything through #20 since it's long):
Number  Square Root
 0       0.0000

 2       3.4142

 4       6.0000

 6       8.4495
I'm wondering why I'm getting all the spaces between each line. Which part of the code is doing that? | http://www.javaprogrammingforums.com/whats-wrong-my-code/16605-extra-spaces-output-why-how-fix.html | CC-MAIN-2014-10 | refinedweb | 237 | 89.34 |
May 4, 2016
Bioconductors:
We are pleased to announce Bioconductor 3.3, consisting of 1211 software packages, 293 experiment data packages, and 916 up-to-date annotation packages.
There are 107 new software packages, and many updates and improvements to existing packages; Bioconductor 3.3 is compatible with R 3.3, and is supported on Linux, 32- and 64-bit Windows, and Mac OS X. This release includes an updated Bioconductor Amazon Machine Image and Docker containers.
Visit for details and downloads.
To update to or install Bioconductor 3.3:
Install R 3.3. Bioconductor 3.3 has been designed expressly for this version of R.
Follow the instructions at .
There are 107 new packages in this release of Bioconductor.
AneuFinder - This package implements functions for CNV calling, plotting, export and analysis from whole-genome single cell sequencing data.
bacon - Bacon can be used to remove inflation and bias often observed in epigenome- and transcriptome-wide association studies. To this end bacon constructs an empirical null distribution using a Gibbs Sampling algorithm by fitting a three-component normal mixture on z-scores.
BadRegionFinder - BadRegionFinder is a package for identifying regions with a bad, acceptable and good coverage in sequence alignment data available as bam files. The whole genome may be considered as well as a set of target regions. Various visual and textual types of output are available.
BasicSTARRseq -.
BatchQC -.
BgeeDB - A package for the annotation and gene expression data download from Bgee database, and TopAnat analysis: GO-like enrichment of anatomical terms, mapped to genes by expression patterns.
biomformat -.
BioQC - BioQC performs quality control of high-throughput expression data based on tissue gene signatures
biosigner - ‘restricted’ models are returned, enabling future predictions on new datasets. A Galaxy implementation of the package is available within the Workflow4metabolomics.org online infrastructure for computational metabolomics.
cellity - A support vector machine approach to identifying and filtering low quality cells from single-cell RNA-seq datasets.
cellTree - This package computes a Latent Dirichlet Allocation (LDA) model of single-cell RNA-seq data and builds a compact tree modelling the relationship between individual cells over time or space.
Chicago - A pipeline for analysing Capture Hi-C data.
chromPlot - Package designed to visualize genomic data along the chromosomes, where the vertical chromosomes are sorted by number, with sex chromosomes at the end.
CHRONOS - A package used for efficient unraveling of the inherent dynamic properties of pathways. MicroRNA-mediated subpathway topologies are extracted and evaluated by exploiting the temporal transition and the fold change activity of the linked genes/microRNAs.
CINdex -.
clustComp -.
ClusterSignificance - The ClusterSignificance package provides tools to assess whether clusters have a separation different from random or permuted data. ClusterSignificance investigates clusters of two or more groups by first projecting all points onto a one-dimensional line. Cluster separations are then scored, and the probability that the observed separation is due to chance is evaluated using a permutation method.
CONFESS - Single Cell Fluidigm Spot Detector.
consensusSeekeR - This package compares genomic positions and genomic ranges from multiple experiments to extract common regions. The size of the analyzed region is adjustable, as well as the number of experiments in which a feature must be present in a potential region to tag that region as a consensus region.
contiBAIT - Using strand inheritance data from multiple single cells from the organism whose genome is to be assembled, contiBAIT can cluster unbridged contigs together into putative chromosomes, and order the contigs within those chromosomes.
CountClust - Fits grade of membership models (GoM, also known as admixture models) to cluster RNA-seq gene expression count data, identifies characteristic genes driving cluster memberships, and provides a visual summary of the cluster memberships.
CrispRVariants -.
dada2 - The dada2 package provides “OTU picking” functionality, but instead of picking OTUs the DADA2 algorithm exactly infers sample sequences. The dada2 pipeline starts from demultiplexed fastq files, and outputs inferred sample sequences and associated abundances after removing substitution and chimeric errors. Taxonomic classification is also available via a native implementation of the RDP classifier method.
dcGSA - Distance-correlation based Gene Set Analysis for longitudinal gene expression profiles. In longitudinal studies, the gene expression profiles were collected at each visit from each subject and hence there are multiple measurements of the gene expression profiles for each subject. The dcGSA package could be used to assess the associations between gene sets and clinical outcomes of interest by fully taking advantage of the longitudinal nature of both the gene expression profiles and clinical outcomes.
debrowser - Bioinformatics platform containing interactive plots and tables for differential gene and region expression studies. Allows visualizing expression data more deeply in an interactive and faster way. By changing the parameters, users can easily explore different parts of the data. Manually creating and inspecting such plots takes time; with this system, users can prepare plots without writing any code. Differential expression, PCA and clustering analyses are made on site, and the results are shown in various plots such as scatter, bar, box, volcano and MA plots and heatmaps.
DEFormats - Covert between different data formats used by differential gene expression analysis tools.
diffloop - A suite of tools for subsetting, visualizing, annotating, and statistically analyzing the results of one or more ChIA-PET experiments.
DNAshapeR - DNAhapeR is an R/BioConductor package for ultra-fast, high-throughput predictions of DNA shape features. The package allows to predict, visualize and encode DNA shape features for statistical learning.
doppelgangR -.
DRIMSeq - The package.
EBSEA - Calculates differential expression of genes based on exon counts of genes obtained from RNA-seq sequencing data.
EGAD - The package implements a series of highly efficient tools to calculate functional properties of networks based on guilt by association methods.
EGSEA - This package implements the Ensemble of Gene Set Enrichment Analyses (EGSEA) method for gene set testing.
EmpiricalBrownsMethod -.
epivizrData - Serve data from Bioconductor Objects through a WebSocket connection.
epivizrServer - This package provides objects to manage WebSocket connections to epiviz apps. Other epivizr packages use this infrastructure.
epivizrStandalone - This package imports the epiviz visualization JavaScript app for interactive visualization of genomic data. The ‘epivizrServer’ package is used to provide a web server running completely within R. This standalone version allows browsing arbitrary genomes through genome annotations provided by Bioconductor packages.
ExpressionAtlas - This package is for searching for datasets in EMBL-EBI Expression Atlas, and downloading them into R for further analysis. Each Expression Atlas dataset is represented as a SimpleList object with one element per platform. Sequencing data is contained in a SummarizedExperiment object, while microarray data is contained in an ExpressionSet or MAList object.
FamAgg - Framework providing basic pedigree analysis and plotting utilities as well as a variety of methods to evaluate familial aggregation of traits in large pedigrees.
flowAI - The package is able to perform an automatic or interactive quality control on FCS data acquired using flow cytometry instruments. By evaluating three different properties: 1) flow rate, 2) signal acquisition, 3) dynamic range, the quality control enables the detection and removal of anomalies.
garfield -.
genbankr - Reads Genbank files.
GenoGAM - This package allows statistical analysis of genome-wide data with smooth functions using generalized additive models based on the implementation from the R-package ‘mg.
genphen - Given a set of genetic polymorphisms in the form of single nucleotide polymorphisms or single amino acid polymorphisms and corresponding phenotype data, we are often interested in quantifying their association so that we can identify the causal polymorphisms. Using statistical learning techniques such as random forests and support vector machines, this tool provides the means to estimate genotype-phenotype associations. It also provides visualization functions which enable the user to visually inspect the results of such a genetic association study and conveniently select the genotypes which have the highest strength of association with the phenotype.
GenRank - Methods for ranking genes based on convergent evidence obtained from multiple independent evidence layers. This package adapts three methods that are popular for meta-analysis.
GenVisR - Produce highly customizable publication quality graphics for genomic data primarily at the cohort level.
ggcyto - With the dedicated fority.
Glimma - This package generates interactive visualisations of RNA-sequencing data based on output from limma, edgeR or DESeq2. Interactions are built on top of popular static displays from the limma package, providing users with access to gene IDs and sample information. Plots are generated using d3.js and displayed in HTML pages.
globalSeq - The method may be conceptualised as a test of overall significance in regression analysis, where the response variable is overdispersed and the number of explanatory variables exceeds the sample size.
GMRP - Perform Mendelian randomization analysis of multiple SNPs to determine risk factors causing the disease under study and to exclude confounding variables, and perform path analysis to construct the path from risk factors to the disease.
GSALightning - GSALightning provides a fast implementation of permutation-based gene set analysis for two-sample problem. This package is particularly useful when testing simultaneously a large number of gene sets, or when a large number of permutations is necessary for more accurate p-values estimation.
Harman -.
HDF5Array - This package implements the HDF5Array class for convenient access and manipulation of HDF5 datasets. In order to reduce memory usage and optimize performance, operations on an HDF5Array object are either delayed or executed using a block processing mechanism. The delaying and block processing mechanisms are independent of the on-disk backend and implemented via the DelayedArray class. They even work on ordinary arrays where they can sometimes improve performance.
iCARE - An R package to compute Individualized Coherent Absolute Risk Estimators.
iCOBRA - This package provides functions for calculation and visualization of performance metrics for evaluation of ranking and binary classification (assignment) methods. It also contains a shiny application for interactive exploration of results.
IHW -.
ImmuneSpaceR - Provides a convenient API for accessing data sets within ImmuneSpace (), the data repository and analysis platform of the Human Immunology Project Consortium (HIPC).
InteractionSet - Provides the GInteractions, InteractionSet and ContactMatrix objects and associated methods for storing and manipulating genomic interaction data from Hi-C and ChIA-PET experiments.
ISoLDE - This package provides ISoLDE a new method for identifying imprinted genes. This method is dedicated to data arising from RNA sequencing technologies. The ISoLDE package implements original statistical methodology described in the publication below.
isomiRs - Characterization of miRNAs and isomiRs, clustering and differential expression.
JunctionSeq - A Utility for Detection and Visualization of Differential Exon or Splice-Junction Usage in RNA-Seq data.
kimod -.
Linnorm - Linnorm is an R package for the analysis of RNA-seq, scRNA-seq, ChIP-seq count data or any large scale count data. Its main function is to normalize and transform these datasets for parametric tests. Examples of parametric tests include using limma for differential expression analysis or differential peak detection, or calculating Pearson correlation coefficient for gene correlation study. Linnorm can work with raw count, CPM, RPKM, FPKM and TPM. Additionally, Linnorm provides the RnaXSim function for the simulation of RNA-seq raw counts for the evaluation of differential expression analysis methods. RnaXSim can simulate RNA-seq dataset in Gamma, Log Normal, Negative Binomial or Poisson distributions.
lpsymphony -.
LymphoSeq -.
MBttest - MB.
Mergeomics -aCCA -.
MethPed -.
miRNAmeConverter - Package containing an S4 class for translating mature miRNA names to different miRBase versions, checking names for validity and detecting miRBase version of a given set of names (data from).
MMDiff2 - This package detects statistically significant differences between read enrichment profiles in different ChIP-Seq samples. To take advantage of shape differences it uses Kernel methods (Maximum Mean Discrepancy, MMD).
multiClust - Whole:
1. A function to read in gene expression data and format appropriately for analysis in R.
2. Four different ways to select the number of genes: a. Fixed, b. Percent, c. Poly, d. GMM.
3. Four gene ranking options that order genes based on different statistical criteria: a. CV_Rank, b. CV_Guided, c. SD_Rank, d. Poly.
4. Two ways to determine the cluster number: a. Fixed, b. Gap Statistic.
5. Two clustering algorithms: a. Hierarchical clustering, b. K-means clustering.
6. A function to calculate average gene expression in each sample cluster.
7. A function to correlate sample clusters with clinical outcome.
Order of function use:
1. input_file, a function to read in the gene expression file and assign gene probe names as the rownames.
2. number_probes, a function to determine the number of probes to select for in the gene feature selection process.
3. probe_ranking, a function to select gene probes using one of the available gene probe ranking options.
4. number_clusters, a function to determine the number of clusters to be used to cluster genes and samples.
5. cluster_analysis, a function to perform K-means or hierarchical clustering analysis of the selected gene expression data.
6. avg_probe_exp, a function to produce a matrix containing the average expression of each gene probe within each sample cluster.
7. surv_analysis, a function to produce Kaplan-Meier survival plots of selected gene expression data.
MultiDataSet -.
normalize450K - ‘.idat’ files. The normalization corrects for dye bias and biases related to signal intensity and methylation of probes using local regression. No adjustment for probe type bias is performed to avoid the trade-off of precision for accuracy of beta-values.
nucleoSim - This package can generate a synthetic map with reads covering the nucleosome regions as well as a synthetic map with forward and reverse reads emulating next-generation sequencing. The user has choice between three different distributions for the read positioning: Normal, Student and Uniform.
odseq -.
OncoScore - OncoScore is a tool to measure the association of genes to cancer based on citation frequency in biomedical literature. The score is evaluated from PubMed literature by dynamically updatable web queries.
oppar - The.
PanVizGenerator -.
pbcmc - The.
pcaExplorer - This package provides functionality for interactive visualization of RNA-seq datasets based on Principal Components Analysis. The methods provided allow for quick information extraction and effective data exploration. A Shiny application encapsulates the whole analysis.
PCAN - Phenotypes comparison based on a pathway consensus approach. Assess the relationship between candidate genes and a set of phenotypes based on additional genes related to the candidate (e.g. Pathways or network neighbors).
pqsfinder - The main functionality of.
profileScoreDist - Regularization and score distributions for position count matrices.
psygenet2r - Package to retrieve data from PsyGeNET database () and to perform comorbidity studies with PsyGeNET’s and user’s data.
PureCN - This package estimates tumor purity, copy number, loss of heterozygosity (LOH), and status of short nucleotide variants (SNVs). PureCN is designed for hybrid capture next generation sequencing (NGS) data, integrates well with standard somatic variant detection pipelines, and has support for tumor samples without matching normal samples.
QuaternaryProd - QuaternaryProd is an R package that performs causal reasoning on biological networks, including publicly available networks such as String-db. QuaternaryProd is a free alternative to commercial products such as Quiagen-db.
QUBIC -.
R4RNA -.
recoup -.
RGraph2js - Generator of web pages which display interactive network/graph visualizations with D3js, jQuery and Raphael.
RImmPort - The RImmPort package simplifies access to ImmPort data for analysis in the R environment. It provides a standards-based interface to the ImmPort study data that is in a proprietary format.
ROTS - Calculates the Reproducibility-Optimized Test Statistic (ROTS) for differential testing in omics data.
SC3 - Interactive tool for clustering and analysis of single cell RNA-Seq data.
scater - A collection of tools for doing various analyses of single-cell RNA-seq gene expression data, with a focus on quality control.
scde -.
scran - This package implements a variety of low-level analyses of single-cell RNA-seq data. Methods are provided for normalization of cell-specific biases, assignment of cell cycle phase, and detection of highly variable and significantly correlated genes.
SMITE -.
SpidermiR - The, miRandola, Pharmaco-miR, DIANA, Miranda, PicTar and TargetScan) and the use of standard analysis (igraph) and visualization methods (networkD3).
splineTimeR - This package provides functions for differential gene expression analysis of gene expression time-course data. Natural cubic spline regression models are used. Identified genes may further be used for pathway enrichment analysis and/or the reconstruction of time dependent gene regulatory association networks.
sscu - The package can calculate the selection on codon usage in bacterial species. First and most important, the package can calculate the strength of selected codon usage bias (sscu) based on Paul Sharp’s method. The method takes the background mutation rate into account and focuses only on codons with universal translational advantages in all bacterial species, so the sscu index is comparable among different species. In addition, detailed optimal codon (selected codon) information can be calculated by the optimal_codons function, so users have a more accurate selective scheme for each codon. Furthermore, we added one more function, optimal_index, to the package. The function has a similar mathematical formula to the s index, but estimates the amount of GC-ending optimal codons for the highly expressed genes in the four- and six-codon boxes. It takes the background mutation rate into account and is comparable with the s index. However, since the set of GC-ending optimal codons is likely to differ among species, the index cannot be compared among different species.
SwathXtend - It contains utility functions for integrating spectral libraries for SWATH and statistical data analysis for SWATH generated data.
tofsims -.
transcriptR -.
tximport -.
Uniquorn -.
Package maintainers can add NEWS files describing changes to their packages since the last release. The following package NEWS is available:
Changes in version 2.11.2 (2016-04-29):
Changes in version 2.11.1 (2016-03-29):
Vignette changes to avoid Windoze build issues?
Changes in NAMESPACE
Changes in version 1.43.2:
Changes in version 1.43.1-9000 (2016-04-05):
Changes in version 1.43.1 (2016-02-28):
Changes in version 1.43.0 (2015-10-23):
Changes in version 1.10.0:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.34.0:
NEW FEATURES
export ALIAS2EG symbol in NAMESPACE for frog, mosquito, chimp and rhesus OrgDbs
add how to use EnsDb section to vignette
MODIFICATIONS
work on code base and exported functions - break up code in geneCentricDbs file - re-name and export functions used by select() in AnnotationForge and GenomicFeatures - re-name and export functions used by annotation package builders in AnnotationForge
remove library(RSQLite) from dbFileConnect()
modify unit tests for new PFAM identifiers in Bioconductor 3.3
use elementNROWS() instead of elementLengths()
modify mapIds() to preserve data type returned from select()
reduce time of AnnDbPkg-checker.Rd example
BUG FIXES
bugfix for select() and mapIds() when there are many:many mappings
bugfix in test_generateExtraRows unit test.
Changes in version 1.14.0:
NEW FEATURES
MODIFICATIONS
update AnnDbPkg template man to reference select() interface
work on makeOrgPackageFromNCBI: - error early when tax id is not found in gene_info.gz - add ‘rebuildCache’ arg to control repeated downloads - remove old code and comments - update man page
add PFAM and PROSITE man pages to NCBICHIP and NCBIORG package templates
allow passing of directory location in wrapBaseDBPackages()
change format of licence; report current version of AnnotationDbi
modify appendArabidopsisGenes() to check for null ‘gene_id’
add DBI to ‘Suggests’; load DBI in _dbconn man page
load SQLite in vignettes; no longer free from AnnotationDbi::dbFileConnect()
Changes in version 2.4.0:
NEW FEATURES
add new status codes ‘4’ and ‘5’ to ‘statuses’ mysql table; change ‘status_id’ field to ‘4’ for all removed records to date
add getRecordStatus() generic
add package() generic
create ‘Hub’ VIRTUAL class - add new .Hub() base constructor for all hubs - add getAnnotationHubOption() and setAnnotationHubOption() - promote cache() to generic - add getHub() getter for AnnotationHubResource class - add getUrl(), getCache(), getDate() getters - export as few db helpers as possible
add ‘EpigenomeRoadmapNarrowAllPeaks’ and ‘EpigenomeRoadmapNarrowFDR’ classes
MODIFICATIONS
distinguish between broad and narrow peak files in EpigenomeRoadmapFileResource dispatch class
don’t use cache for AnnotationHub SQLite connection - originally introduced so could be closed if needed, but creates complexity - instead, open / close connection around individual queries (not a performance concern) - expose hub, cache, proxy in AnnotationHub constructor - document dbconn,Hub-method, dbfile,Hub-method, .db_close
snapshotDate now uses timestamp (last date any row was modified) instead of rdatadateadded
.require fails rather than emits warning - unit test on .require() - also, cache(hub[FALSE]) does not create spurious error
work on removed records and biocVersion - .uid0() was reorganized and no longer groups by record_id - metadata is returned for records with biocversion field <= current biocVersion instead of an exact match with the current version - metadata is not returned for removed records
BUG FIXES
Changes in version 1.2.0:
NEW FEATURES
add makeEnsemblTwoBit()
add hubError(), hubError<- generics and methods
create ‘HubMetadata’ class which ‘AnnotationHubMetadata’ inherits from
MODIFICATIONS
export ensemblFastaToTwoBitFile()
modifications due to changes in httr::HEAD(): - AFAICT httr::HEAD() 1.1.0 and higher accepts https only, not ftp - use xml2 instead of XML for parsing (httr >= 1.1.0 dependency change)
work on recipes: - clean up ChEA and Gencode - don’t export tracksToUpdate(); was broken and not used - reorg man pages; combine Ensembl Fasta and TwoBit on single man page
work on updateResources(): - push data to S3 before inserting metadata in db - isolate pushResources() and pushMetadata() from updateResources() - NOTE: Epigenome unit test is failing due to bad url. If not fixed by the host the recipe will need to change.
update makedbSNPVCF() to look in new clinvar location
BUG FIXES
Changes in version 1.11.1:
Changes in version 3.1.1 (2016-01-06):
Package requires R (>= 2.15.2).
CLEANUP: robustSmoothSpline() no longer generates messages that “.nknots.smspl() is now exported; use it instead of n.knots()” for R (>= 3.1.1).
Changes in version 3.1.0 (2015-10-23):
Changes in version 1.3:
Bioconductor Release (3.3) Version
NEW FEATURES
Fragment Length Filter: Filtering of paired end read pairs by TLEN field with a minimum and maximum TLEN
SAMFLAG Filtering: Filter out reads with a certain SAMFLAG set, e.g. 1024 for marked optical duplicates
BUG FIXES
Path extension for relative paths, e.g. resolve “~” to /home/$USER in UNIX
Changes in version 1.0.0:
Changes in version 1.0.0:
Summary and Sample Diagnostics
Differential Expression Plots and Analysis using LIMMA
Principal Component Analysis and plots to check batch effects
Heatmap plot of gene expressions
Median Correlation Plot
Circular Dendrogram clustered and colored by batch and condition
Shape Analysis for the distribution curve based on HTShape package
Batch Adjustment using ComBat
Surrogate Variable Analysis using sva package
Function to generate simulated RNA-Seq data
Changes in version 0.99.6 (2016-04-28):
Changes in version 0.99.4 (2016-04-26):
Changes in version 0.99.3 (2016-04-22):
Changes in version 0.0.4 (2016-03-20):
Changes in version 0.0.1 (2016-03-13):
Changes in version 1.17.0:
Changes in version 2.31:
NEW FEATURES
tab completion implemented for eSet classes
head and tail.AnnotatedDataFrame methods introduced
Changes in version 2.0.0:
Changes in version 0.3.13:
USER-VISIBLE CHANGES
Changes in version 0.3.12:
USER-VISIBLE CHANGES
BUG FIXES
Unit test changes to work with upcoming R release and new testthat version.
This solves Issue 4:
Changes in version 0.3.11:
USER-VISIBLE CHANGES
BUG FIXES
Clarified license and project in the README.md
Added TODO.html, README.html, and TODO.md to .Rbuildignore (requested by CRAN)
Moved biom-demo.Rmd to vignettes/
Updated inst/NEWS (this) file to official format
Removed pre-built vignette HTML so that it is re-built during package build. This updates things like the build-date in the vignette, but also ensures that the user sees in the vignette the results of code that just worked with their copy of the package.
Changes in version 0.3.10:
USER-VISIBLE CHANGES
These changes should not affect any package behavior.
Some of the top-level documentation has been changed to reflect new development location on GitHub.
BUG FIXES
Minor fixes for CRAN compatibility
This addresses Issue 1:
Changes in version 0.3.9:
SIGNIFICANT USER-VISIBLE CHANGES
NEW FEATURES
The biom_data parsing function now uses a vectorized (matrix-indexed) assignment while parsing sparse matrices.
Unofficial benchmarks estimate a few 100X speedup.
Changes in version 0.3.8:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1-5:
Add C-level implementation of Wilcoxon-Mann-Whitney rank sum test
Documentation and vignettes updated to be ready for Bioconductor submission.
Changes in version 0.99.12:
PACKAGE MODIFICATION
Welcome.
Changes in version 0.99.11:
BUG FIXED
Changes in version 0.99.10:
PACKAGE MODIFICATION
Changes in version 0.99.9:
PACKAGE MODIFICATION
Changes in version 0.99.8:
PACKAGE MODIFICATION
adding the import of the following functions in NAMESPACE: abline arrows axis box boxplot dev.new dev.off head image layout median mtext par pdf rect tail title var
defining the getAccuracyMN and getSignatureLs accessors
Changes in version 0.99.7:
PACKAGE MODIFICATION
Changes in version 0.99.6:
PACKAGE MODIFICATION
Changes in version 0.99.5:
PACKAGE MODIFICATION
importing of packages in NAMESPACE fixed
use of S4 methods (instead of S3)
Changes in version 0.99.4:
PACKAGE MODIFICATION
Changes in version 0.99.0:
PACKAGE MODIFICATION
Changes in version 1.19.1:
NEW FEATURES
Changes in version 3.1:
NEW FEATURES
BUG FIXES
Changes in version 1.27.2:
BUG FIXES
Changes in version 1.3.3:
BUG FIXES
Subsetting the S4 part of Binmat objects by row is now an error
Providing non-positive m/z values to ‘readImzML’ is now an error
Elements of ‘imageData’ that fail to ‘combine’ or which are missing from one or more of the objects are now dropped from the result with warning rather than failing
Moved unit tests in ‘inst/tests’ to ‘tests/testthat’
Changes in version 1.3.2:
NEW FEATURES
Added ‘image3D’ method for plotting 3D images
Added ‘batchProcess’ method for batch pre-processing
Changes in version 1.3.1:
SIGNIFICANT USER-VISIBLE CHANGES
Added ‘mass.accuracy’ and ‘units.accuracy’ arguments for controlling the m/z accuracy when reading ‘processed’ imzML
Function ‘reduceDimension.bin’ now takes argument ‘units’ with value ‘ppm’ or ‘mz’, and new corresponding defaults
BUG FIXES
Fixed bug in reading ‘processed’ imzML format that caused mass spectra to be reconstructed in the wrong order
Improved speed accessing columns of Hashmat sparse matrices
In ‘pixelApply’ and ‘featureApply’, zero-length return values are no longer returned as a list when ‘.simplify=FALSE’
Function ‘peakAlign.diff’ should be more memory efficient now
Changes in version 1.3.0 (2015-12-16):
NEW FEATURES
Added experimental Binmat class for working with on-disk matrices
Added experimental support for 3D files from benchmark datasets
Added experimental support for plotting 3D images
Added experimental support for ‘processed’ imzML format
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Fixed bug in plotting 3D image slices in the z dimension
Fixed bug where large imzML files could not be read due to byte offsets being stored as ints; they are now stored as doubles.
Fixed bug with strip labels in 3D plotting and with mixed labels
Fixed bug with unique m/z feature names for high mass resolutions
Changes in version 0.99.2 (2016-02-22):
Changes in version 1.9.8:
Make ChAMP package suitable for both 450K array and EPIC array.
Updated BMIQ normalization to newest version.
Add champ.refbase function to do reference-based cell proportion detection and correction.
Add champ.reffree function to do reference-free EWAS.
Removed champ.lasso function, but added champ.DMR function, which combines the bumphunter and probe lasso algorithms to detect DMRs.
Updated the vignette to a new version.
Changes in version 3.5.18:
Changes in version 3.5.17:
Changes in version 3.5.16:
Changes in version 3.5.15:
Changes in version 3.5.14:
remove NA for featureAlignedDistribution
handle NA and infinite values for featureAlignedSignal
Changes in version 3.5.13:
Changes in version 3.5.12:
Changes in version 3.5.11:
Changes in version 3.5.10:
add new function findEnhancers
normalize the output of gene region for binOverFeature
change the documentation to avoid time-out error
Changes in version 3.5.9:
merge the peaksNearBDP and bdp functions.
improve oligoSummary.
update the documentation.
Changes in version 3.5.8:
change toGRanges from function to method
update the documentation.
Changes in version 3.5.7:
change test from RUnit to testthat
add new function addMetadata
change the output and parameters of annoPeaks
simplify the parameter output of annotatePeakInBatch
allow bdp function to accept GRanges annotation
add error bar function for binOverFeature function
remove the log file after plot for makeVennDiagram function
add private function trimPeakList
update the documentation of annoPeaks and annotatePeakInBatch
Changes in version 3.5.5:
update the documentation to fix the typo in quickStart.
change the default value of annoPeaks.
update annoGR class to fix the error: identical(colnames(classinfo), colnames(out)) is not TRUE
Changes in version 3.5.2:
update the documentation to fix the error on windows (import bigwig error)
avoid the output of addGeneIDs as factors
Changes in version 3.5.1:
update the documentation
NEW FEATURES
toGRanges can accept connection object
add annoPeaks function
add xget function
update the peakPermTest algorithm to produce more reasonable results.
add oligoSummary function
add IDRfilter function
add reCenterPeaks function
Changes in version 1.7.2:
Changes in version 1.7.1:
Changes in version 1.7.15:
Changes in version 1.7.14:
Changes in version 1.7.13:
Changes in version 1.7.12:
fixed R check <2016-03-05, Sat>
implement list_to_dataframe that mimics ldply and remove the ldply dependency <2016-03-05, Sat>
Changes in version 1.7.11:
Changes in version 1.7.10:
Changes in version 1.7.9:
covplot support GRangesList <2016-02-24, Wed>
update ReactomePA citation info <2016-02-17, Wed>
Changes in version 1.7.8:
Changes in version 1.7.7:
Changes in version 1.7.6:
introduce ‘overlap’ parameter in annotatePeak; by default overlap=”TSS” and only genes overlapping the TSS will be reported as the nearest gene. If overlap=”all”, any gene overlapping the peak will be reported as the nearest gene, whether or not the overlap is at the TSS region. <2016-01-12, Tue>
fixed a bug in finding overlaps with peaks that have strand info. <2016-01-12, Tue>
Changes in version 1.7.5:
add parameters sameStrand, ignoreOverlap, ignoreUpstream and ignoreDownstream in annotatePeak <2016-01-10, Sun>
fixed a bug in peak orientation <2016-01-10, Sun>
Changes in version 1.7.4:
stop if the input list of csAnno objects has no name attribute (affects plotAnnoBar and plotDistToTSS)
[covplot] xlim now not only restricts the window of the data but also sets the limit of the graphic object <2015-12-30, Wed>
Changes in version 1.7.3:
Changes in version 1.7.2:
use geom_rect instead of geom_segment in covplot <2015-11-30, Mon>
exposed the lower parameter (default = 1) to specify the lower cutoff of the coverage signal <2015-11-29, Sun>
fixed covplot to work with empty RleViews of a specific chromosome <2015-11-29, Sun>
addFlankGeneInfo now works with level=”gene” <2015-11-19, Thu>
Changes in version 1.7.1:
fixed extracting the ID type from a TxDb object after the change in metadata(TxDb); now using grep to extract it. <2015-10-27, Tue>
add vp parameter to set the viewport of vennpie on top of upsetplot, by user request <2015-10-26, Mon>
getBioRegion function <2015-10-20, Tue>
Changes in version 1.7.0:
Changes in version 1.10.0:
OTHER CHANGES
Changes in version 2.99.2:
add keyType parameter in enrichKEGG, enrichMKEGG, gseKEGG and gseMKEGG <2016-05-03, Tue>
search_kegg_species function <2016-05-03, Tue>
bitr_kegg function <2016-05-03, Tue>
Changes in version 2.99.1:
go2ont function <2016-04-28, Thu>
go2term function <2016-04-21, Thu>
export buildGOmap <2016-04-08, Fri>
fixed enrichDAVID <2016-04-08, Fri>
Changes in version 2.99.0:
Changes in version 2.5.6:
Changes in version 2.5.5:
maxGSSize parameter <2016-03-09, Wed>
update show method of compareClusterResult <2016-03-06, Sun>
fixed R check <2016-03-06, Sun>
update ReactomePA citation info <2016-02-17, Wed>
Changes in version 2.5.4:
add use_internal_data=FALSE parameter in gseKEGG <2016-01-19, Tue>
fixed a bug in simplify where the organism extracted from OrgDb was not consistent with GOSemSim. <2016-01-05, Tue>
update vignette <2015-12-29, Tue>
re-designed internal function <2015-12-20, Sun>
Changes in version 2.5.3:
Changes in version 2.5.2:
Changes in version 2.5.1:
read.gmt function for parsing gmt file for enricher and GSEA <2015-10-28, Wed>
gofilter to filter results at a specific GO level <2015-10-23, Fri>
simplify function <2015-10-21, Wed>
Changes in version 2.5.0:
Changes in version 3.3:
BUG FIXES
NEW FEATURES
Add the pairwise whole genome alignment pipeline
Add a new class “GRangePairs”
Changes in version 1.5.2 (2016-03-31):
Changes in version 1.5.1 (2015-10-30):
Changes in version 1.5.0 (2015-10-14):
Changes in version 1.9.7:
Legend() function, which is more flexible for generating different types of legends.
Changes in version 1.9.6:
color_mapping_legend(): there are now ticks on the continuous color bar
Changes in version 1.9.5:
add a section in the vignette to show how to adjust positions of column names when there are bottom annotations.
fixed a bug where character NA values could not be assigned with na_col
extra character ‘at’ and ‘labels’ in legends will be removed automatically
all arguments which are passed to make_layout() are now explicitly put in draw() instead of using ...
Changes in version 1.9.4:
Changes in version 1.9.3:
graphic parameters are correctly recycled in row annotations
if there is only one row after splitting, there will be no dendrogram
add range option in densityHeatmap()
when gap is set for the main heatmap, other heatmaps also adjust their gap values to it
fixed a bug that when rownames/colnames are not complete, dendrograms are corrupted
alter_fun now supports adding graphics grid by grid
add show_pct option in oncoPrint()
add column_order in densityHeatmap()
Changes in version 1.9.2:
anno_link()
Changes in version 1.9.1:
width of the heatmap body is calculated correctly if it is set as a fixed unit
there is no dendrogram if the number of rows in a row-slice is 1
add anno_link() annotation function
bottom annotations are attached to the bottom edge of the heatmap if there is additional blank space
colors for NA can be set by “NA” in annotations
row_dend_reorder and column_dend_reorder are set to TRUE by default again
optimize the way to specify na_col in heatmap annotations
correct wrong viewport names in decorate_* functions
Changes in version 1.0.0 (2016-04-28):
NEW FEATURES
Cell detection and signal estimation model for images produced using the Fluidigm C1 system.
We applied this model to a set of HeLa cells expressing fluorescence cell cycle reporters.
Accompanying dataset available in the CONFESSdata package.
Changes in version 1.5.1:
USER VISIBLE CHANGES
Changes in version 1.5.1:
Changes in version 1.5.10:
Restored normalize() as a S4 method returning a RangedSummarizedExperiment object.
Modified asDGEList() to use any available normalization data in the input object.
Generalized S4 methods to apply on SummarizedExperiment objects.
Removed the rescue.ext option for PE handling, to maintain consistent totals calculations.
Removed the fast.pe option for PE data handling, in favour of improved default processing.
Removed dumpPE(), which is not required without the fast.pe option.
Removed makeExtVector() in favour of list/DataFrame specification.
windowCounts() and regionCounts() now compute and store the mean PE size and read length.
Minor fix in correlateReads() for end-of-chromosome behaviour.
Modified checkBimodality() so that the width argument behaves like ext in windowCounts().
extractReads() with as.reads=TRUE for PE data now returns a GRangesList.
Added the controlClusterFDR(), clusterWindows() and consolidateClusters() functions to automate control of the cluster-level FDR.
Added protection against NA values in the cluster IDs for combineTests(), getBestTest(), upweightSummits().
All read extraction methods are now CIGAR-aware and will ignore soft-clipped parts of the alignment.
Changes in version 1.9.3:
Changes in version 1.9.2:
Changes in version 1.9.1:
Changes in version 0.99.0:
Changes in version 1.1.8:
Changes in version 1.1.7:
Changes in version 1.1.6:
added CITATION file containing the information on the PeerJ preprint
BUG FIXES
added require(mgcv) to the examples from plotProfiles to pass check() from devtools
also added mgcv to Suggests
Changes in version 1.1.5:
BUG FIXES
Changes in version 1.1.4:
BUG FIXES
Changes in version 1.1.3:
BUG FIXES
fixed a wrong preview of the sample annotation table in the vignette
fixed a wrong reference in the runTesting function help
made small changes to the export statement in the roxygen docs
Changes in version 1.1.2:
BUG FIXES
Changes in version 1.1.1:
BUG FIXES
Changes in version 1.1.0.0:
INITIAL RELEASE OF THE PACKAGE
Converter functions between the ‘DESeqDataSet’ and ‘DGEList’ objects from the DESeq2 and edgeR packages, respectively.
S4 generic for the ‘DGEList’ constructor and a corresponding method for ‘RangedSummarizedExperiment’ objects.
Changes in version 1.5.39:
BUG FIXES
Changes in version 1.5.37:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.5.27:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.5.13:
BUG FIXES
Changes in version 1.5.11:
SIGNIFICANT USER-VISIBLE CHANGES
Now coverageToExon(), regionMatrix() and railMatrix() can take an ‘L’ argument of length equal to the number of samples in case not all samples have the same read length.
railMatrix() has a new argument called ‘file.cores’ for controlling how many cores are used for loading the BigWig files. In theory this allows using railMatrix() with ‘BPPARAM.custom’ equal to a BatchJobsParam() to submit 1 job per chromosome, then ‘file.cores’ determines the number of cores for reading the files. This is a highly experimental feature.
Changes in version 1.5.9:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.5.8:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.5.7:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.5.6:
NEW FEATURES
Changes in version 1.5.8:
BUG FIXES
plotRegionCoverage() used to take into account the strand of the regions for finding transcripts that overlapped the regions. This was not a problem with DERs from derfinder since they have strand * by default but it is a problem when using it with stranded regions.
plotCluster() will also now ignore strand for finding neighboring regions.
Changes in version 1.5.4:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.12.0:
Added DESeqDataSetFromTximport() to import counts using tximport.
Added vst() a fast wrapper for the VST.
Added support for IHW p-value adjustment.
Changes in version 1.11.42:
Update summary() to be IHW-results-aware.
Small change to fitted mu values to improve fit stability when counts are very low. Inference for high count genes is not affected.
Galaxy script inst/script/deseq2.R moves to Galaxy repo.
Changes in version 1.11.33:
Changes in version 1.11.18:
Changes in version 1.11.5:
Changes in version 1.1 (2015-11-27):
Implemented rank correlation distance and cosine distance
Added updateObject method
Changes in version 1.17.28:
Added support for estimating fold change rates with continuous variables.
Reduced RAM usage when parallelizing
Changes in version 2.0.0:
Feature changes
Change default analysis method to DESeq2 for its more conservative normalization
Designate DESeq method as obsolete (in favor of DESeq2); alter documentation and vignette accordingly.
Change default FDR threshold to 0.05
Add bNot parameter to dba.contrast to remove ! contrasts by default
Remove bReturnPeaksets parameter from dba.plotVenn (does this by default)
Change bCorPlot default to FALSE (no more automatic clustering heatmaps)
Internal changes
Bump version number to 2.0
Update vignette
Remove $allvectors and $vectors; replace with $merged (without score matrix) and $binding
Upgrade peaksort to use inplace peakOrder
Optimize peak merging memory usage
Change PCA method from princomp() to prcomp()
maxGap implemented in merge
Include the beginnings of some unit tests using the testthat package.
Bug fixes
Fix bug in retrieving SummarizedExperiment
Fix bug when no peaks
Fix bugs with non-standard chromosome names and chromosome order
Fix bugs in Called logical vectors
Ensure loading sample sheet doesn’t treat hash as comment char.
Tildes in file paths now accepted.
Spaces trimmed from entries in sample sheets (with warning).
Functions added to importFrom stats to satisfy BiocCheck.
Changes in version 1.3.5:
Deprecated DIList objects and methods in favour of InteractionSet objects.
marginCounts() now returns a RangedSummarizedExperiment for all bins.
Added the max.height argument to the rotPlaid() and rotDI() functions.
Added the diClusters() function for post-hoc cluster-level FDR control.
Added the annotatePairs() function for convenient annotation of (clusters of) interactions.
Fixed a bug in plotPlaid() when the interaction space was empty.
Fixed a bug in preparePairs() where unmapped chimeric segments led to the loss of the entire pair.
Updated user’s guide, documentation and tests.
Changes in version 1.0.0:
Initial release to Bioconductor.
Added NEWS file.
Fixes to documentation.
Improved automated plotting, including different colors
Enhanced loop annotation
Changes in version 1.7.2:
Major changes. WGBS pipeline is now implemented with DSS as a regression step instead of limma. 450K pipeline is the same, but with slight cosmetic changes in anticipation of the transition to the EPIC array.
DMR.plot() has been completely rewritten, now with Gviz and inbuilt transcript annotation for hg19, hg38 and mm10.
DMRs are now ranked by the Stouffer transformations of the limma- and DSS- derived FDRs of their constituent CpG sites.
Changes in version 0.99.7:
Changes in version 0.99.6:
Changes in version 0.99.5:
Upgraded R dependency version to 3.3.
Added importFrom methods in NAMESPACE.
Changes in version 0.99.4:
Changes in version 0.99.3:
Fixed warnings from R CMD Check.
Evaluated more vignette chunks.
Changes in version 0.99.1:
Added documentation for the normalizeShape function. A manual entry is now also available for most non-exported functions.
Added citation entry.
Changes in version 0.99.0:
normalization parameter in encodeSeqShape.R changed to normalize.
The y-axis plot range in plotShape can now be defined by the user.
Completed package documentation.
DNA shape feature matrix returned by encodeSeqShape is now normalized by default.
Changes in version 1.07:
NEW FEATURES
Added scanone.assoc to perform genome wide association mapping.
Added calc.genoprob2 to run forward/backward algorithm once cluster parameters have been estimated.
CHANGES
Fixed bug in assoc.map where the SDPs were not in the correct order.
Change assoc.map to use the Sanger VCF file rather than our own custom SNP file.
Changes in version 2.9.7:
barplot accepts x and colorBy parameters as in dotplot <2016-04-13, Wed>
gsfilter function for restricting results by minimal and maximal gene set size <2016-03-31, Thu>
Changes in version 2.9.6:
add maxGSSize parameter for hypergeometric test <2016-03-09, Wed>
for gene sets with size > 500, the probability of being called significant by GSEA rises quite dramatically.
fixed R check <2016-03-05, Sat>
Changes in version 2.9.5:
Changes in version 2.9.4:
Changes in version 2.9.3:
update enrichMap to scale category sizes <2016-01-04, Mon>
update ‘show’ methods of enrichResult and gseaResult <2015-12-29, Tue>
Changes in version 2.9.2:
Changes in version 2.9.1:
GSEA: test bimodal separately <2015-10-28, Wed>
add NES column in GSEA result <2015-10-28, Wed>
use NES instead of ES in calculating p-values. <2015-10-28, Wed>
duplicated gene IDs in enrich.internal are not allowed; added unique to remove duplicated IDs. <2015-10-20, Tue>
Changes in version 2.9.0:
Changes in version 2.7.7:
Changes in version 2.7.6:
Changes in version 2.7.5:
Changes in version 2.7.4:
Changes in version 2.7.3:
Changes in version 2.7.2:
Changes in version 2.7.1:
Changes in version 2.7.0:
Changes in version 2.6.3:
Changes in version 2.6.2:
Fixed issues with the gff3 synthetic transcript generation. Several mRNA lines per mRNA were kept and the feature selection was failing for features other than mRNA (e.g. tRNA or miRNA)
Extended the SimpleRNASeq vignette
Changes in version 2.6.1:
Upgraded the dependencies
Introduced the new vignette (SimpleRNASeq) structure
Fixed a cosmetic issue
Corrected man pages
Fixed issues with the synthetic transcript generation from gtf file. Thanks to Sylvain Foisy for reporting this one.
Changes in version 4.14.0:
NEW FEATURES
‘boundary’ argument to ‘filter2()’ for specifying behaviour at image boundaries
the ‘hist’ method now returns a (list of) “histogram” object(s)
‘colormap()’ function for mapping a greyscale image to color using a color palette
PERFORMANCE IMPROVEMENTS
BUG FIXES
fixed the ‘log’ method for Image objects
‘affine’: fixed handling of images containing an alpha channel
retain NA’s in morphological operations: ‘dilate’, ‘erode’, ‘opening’, ‘closing’, ‘whiteTopHat’, ‘blackTopHat’, ‘selfComplementaryTopHat’
fix to potential unsafe code in C function ‘affine()’ (thanks Tomas Kalibera)
medianFilter.c: use proper rounding rather than truncation during float to int coercion
Changes in version 1.11.1:
Changes in version 3.14.0:
estimateDisp(), estimateCommonDisp(), estimateTrendedDisp(), estimateTagwiseDisp(), splitIntoGroups() and equalizeLibSizes() are now S3 generic functions.
The default method of estimateGLMTrendedDisp() and estimateGLMTagwiseDisp() now only return dispersion estimates instead of a list.
Add fry method for DGEList objects.
Import R core packages explicitly.
New function gini() to compute Gini coefficients.
New argument poisson.bound for glmQLFTest(). If TRUE (default), the p-value returned by glmQLFTest() will never be less than what would be obtained for a likelihood ratio test with NB dispersion equal to zero.
New argument samples for DGEList(). It takes a data frame containing information for each sample.
glmFit() now protects against zero library sizes and infinite offset values.
glmQLFit.default() now avoids passing a NULL design to .residDF().
cpm.default() now outputs a matrix of the same dimensions as the input even when the input has 0 row or 0 column.
DGEList() pops up a warning message when zero lib.size is detected.
Bug fix to calcNormFactors(method=”TMM”) when two libraries have identical counts but the lib.sizes have been set unequal.
Add a CRISPR-Cas9 screen case study to the users’ guide and rename Nigerian case study to Yoruba.
Changes in version 0.99.2:
Changes in version 0.99.1:
Changes in version 0.99.0:
Changes in version 1.3.0:
NEW FEATURES
Added quiet param to the queryEncode and searchEncode functions.
Added the Roadmap datasets.
Updated the encode_df object with the latest changes in the ENCODE database.
BUG FIXES
Solved a bug that caused get_schemas function to fail because a directory was added in the ENCODE database schemas.
Solved a bug with the searchEncode function caused by changes in the ENCODE REST API.
Changes in version 1.7.5:
Changes in version 1.7.3:
Changes in version 1.6.0:
Improved code for parallel computing
Added a function freqpoly
Changes in version 1.1.7:
Changes in version 1.1.5:
normalizeToMatrix: modified default of trim to 0
makeMatrix: w0 mode can deal with overlapping signals
Changes in version 1.1.4:
the color of error areas is set to a 25% lighter shade of the corresponding line's color.
support raster image for heatmap body
Changes in version 1.1.3:
parameter name was wrong when constructing ht@layout$graphic_fun_list
add row_order option in EnrichedHeatmap()
normalizeToMatrix() supports self-defined smoothing function
the heatmap can have no upstream and/or downstream regions
smoothing function can be self-defined
Changes in version 1.1.2:
normalizeToMatrix(): more options can be passed to locfit::locfit()
anno_enrich(): exclude NA values
Changes in version 1.1.1:
add getSignalsFromList(), which summarizes signals from a list of matrices
export copyAttr()
Changes in version 2.1.15:
Changes in version 2.1.14:
Changes in version 2.1.12:
Changes in version 2.1.10:
Changes in version 1.3.20:
BUG FIXES
methods transcripts, genes etc don’t result in an error when columns are specified which are not present in the database and the return.type is GRanges.
Removed the transcriptLengths method implemented in ensembldb in favor of using the one from GenomicFeatures.
Changes in version 1.3.19:
BUG FIXES
Changes in version 1.3.18:
NEW FEATURES
Changes in version 1.3.17:
BUG FIXES
Changes in version 1.3.16:
BUG FIXES
Changes in version 1.3.15:
NEW FEATURES
GRangesFilter now supports GRanges of length > 1.
seqlevels method for GRangesFilter.
New methods exonsByOverlaps and transcriptsByOverlaps.
Changes in version 1.3.14:
NEW FEATURES
seqlevelsStyle getter and setter methods to enable easier integration of EnsDb objects with UCSC-based packages. supportedSeqlevelsStyle method to list possible values. Global option “ensembldb.seqnameNotFound” allows adapting the behaviour of the mapping functions when a seqname cannot be mapped properly.
Added a seqlevels method for EnsDb objects.
SIGNIFICANT USER-VISIBLE CHANGES
Add an example to extract transcript sequences directly from an EnsDb object to the vignette.
Add examples to use EnsDb objects with UCSC chromosome names to the vignette.
BUG FIXES
Changes in version 1.3.13:
NEW FEATURES
EnsDb: new “hidden” slot to store additional properties and a method updateEnsDb to update objects to the new implementation.
New method “transcriptLengths” for EnsDb that creates a data.frame similar to that of the same-named function in the GenomicFeatures package.
BUG FIXES
Changes in version 1.3.12:
NEW FEATURES
ensDbFromGff and ensDbFromAH functions to build EnsDb objects from GFF3 files or directly from AnnotationHub resources.
getGenomeFaFile does now also retrieve Fasta files for the “closest” Ensembl release if none is available for the matching version.
SIGNIFICANT USER-VISIBLE CHANGES
Removed argument ‘verbose’ in ensDbFromGRanges and ensDbFromGtf.
Updated parts of the vignette.
Removed method extractTranscriptSeqs again due to some compatibility problems with GenomicRanges.
BUG FIXES
Changes in version 1.3.11:
BUG FIXES
Changes in version 1.3.10:
NEW FEATURES
Implemented methods columns, keys, keytypes, mapIds and select from AnnotationDbi.
Methods condition<- and value<- for BasicFilter.
Changes in version 1.3.9:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.3.7:
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Changes in version 1.3.6:
BUG FIXES
Changes in version 1.3.5:
NEW FEATURES
Added GRangesFilter enabling filtering using a (single!) GRanges object.
Better usability and compatibility with chromosome names: SeqnameFilter and GRangesFilter support both Ensembl and UCSC chromosome names; if option ucscChromosomeNames is set to TRUE, returned chromosome/seqnames are in UCSC format.
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Changes in version 1.3.4:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
Added a section to the vignette describing the use of ensembldb in Gviz.
Fixed the vignette to conform to the “Bioconductor style”.
Added argument use.names to exonsBy.
BUG FIXES
Fixed bug with getGeneRegionTrackForGviz with only chromosome specified.
Fixed an internal problem subsetting a seqinfo.
Changes in version 1.3.3:
NEW FEATURES
BUG FIXES
Changes in version 1.3.2:
NEW FEATURES
Changes in version 1.3.1:
BUG FIXES
Changes in version 1.12.0:
NEW FEATURES
add support for Ensembl release 82
add support for Ensembl release 84
MODIFICATIONS
elementLengths was renamed -> elementNROWS in S4Vectors
mark as unsupported on windows; Ensembl release 84 requires tabix
Changes in version 2.0.0:
Move socket connection and data serving code outside of package to new packages.
Use new ‘epivizrServer’ and ‘epivizrData’ packages.
Move standalone to package ‘epivizrStandalone’.
Use simplified ‘plot’ and ‘visualize’ interface to add charts.
Changes in version 1.9.4:
Changes in version 999.999:
Changes in version 999.999:
Changes in version 0.99.9:
BUG FIXES
Fixes to the Vignette and addition of citation.
Small fixes mainly to the Vignette and one left over problem from the git/svn conflict merge.
Changes in version 0.99.8:
NEW FEATURES
FAData constructor recognizes *.ped and *.fam files and imports their pedigree information correctly.
export method to export pedigree information from a FAData to a ped or fam file.
Add methods getFounders and getSingletons.
SIGNIFICANT USER-VISIBLE CHANGES
Founders are now represented by NA in columns ‘father’ and ‘mother’. This fixed potential problems when IDs are character strings and not numeric.
Added column ‘family’ to the results of the probability test.
genealogicalIndexTest: renamed argument prune to rm.singletons.
Removed prune argument for methods calculating per-individual statistics.
Re-formated and re-structured the vignette.
BUG FIXES
Validation of pedigree information in FAData improved.
[ subsetting now ensures that father or mother IDs which are not available in column ‘id’ are set to NA.
clique names are no longer dropped when setting cliques(object) <- value.
Changes in version 0.99.6:
NEW FEATURES
Monte Carlo simulation to assess significance of familial incidence rate and familial standardized incidence rates.
New FAIncidenceRateResults object along with its methods.
New FAStdIncidenceRateResults object along with its methods.
New function factor2matrix to convert a factor into a matrix.
SIGNIFICANT USER-VISIBLE CHANGES
Vignette: extended content and adapted to use Bioconductor style.
Results from the kinship sum test are now, in addition to the p-value, also sorted by the kinship sum.
BUG FIXES
Changes in version 0.99.5:
SIGNIFICANT USER-VISIBLE CHANGES
Some changes related to the github repository.
Added a readme.org file.
Changes in version 0.99.4:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 0.99.3:
NEW FEATURES
BUG FIXES
Fixed a bug in plotPed related to optional labels.
Fixed a bug in familialIncidenceRate: self-self kinship was not excluded.
Changes in version 0.99.2:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 0.99.1:
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Changes in version 0.99.0:
SIGNIFICANT USER-VISIBLE CHANGES
Improved the vignette.
Fixed several issues in the documentation.
Changes in version 1.1.11:
Improvement: Various speed improvements
Fix namespace clash between Matrix and S4Vectors
Changes in version 1.1.10:
Feature: New class pgSlim for handling pangenomes with no reference to sequence data
Bug fix: safeAAread/safeDNAread would return wrong sequences when number of fasta files exceeded 2000
Changes in version 1.1.9:
Feature: Threshold for core group classification can now be set (defaults to 1)
Improvement: Only investigate the neighbors to groups that have changed during iteration in neighborhoodMerge.
Changes in version 1.1.8:
Changes in version 1.1.7:
Feature: cdhitGrouping creates initial grouping based on cdhit algorithm
Feature: neighborhoodSplit now refines the splitting as a final step by merging highly similar groups sharing gene group up- or downstream
Feature: gpcGrouping can now precluster using CD-Hit
Feature: Key algorithms now reports progress and timing information
Feature: Custom linearKernel function that takes an upper similarity threshold to speed up comparisons.
Feature: Updated vignette, focusing on recommended workflow
Improvement: More performant pcGraph, neighborhoodSplit, pgMatrix methods
Improvement: pgMatrix now returns a sparseMatrix for lower memory footprint
Improvement: pangenome matrix no longer stored in pgInMem
Bug fix: Remove zero-length genes upon pangenome creation.
Bug fix: Batch accessing fasta files to avoid “too many open connections” error
Changes in version 1.1.6:
Changes in version 1.1.5:
Changes in version 1.1.4:
Changes in version 1.1.3:
Minor optimization of code
getRep now names genes by group name
transformSim now works on sparseMatrix rather than matrix objects - avoids coercing huge sparse matrices down to matrix format
Changes in version 1.1.1:
Changes in version 1.1.0:
Changes in version 1.37.6:
read.FCS
supports FCS that has diverse bit widths across parameters/channels
supports FCS that uses big integer (i.e. uint32 > R’s integer.max)
write.FCS
Changes in version 1.8.0:
Changes in version 1.7.18-1.7.22:
NEW FEATURES
LZMA compression algorithm is available in the GDS system (LZMA, LZMA_RA)
faster implementation of variable-length string: the default string becomes string with the length stored in the file instead of null-terminated string (new GDS data types: dStr8, dStr16 and dStr32)
UTILITIES
improve the read speed of characters (+18%)
significantly improve random access of characters
correctly interpret factor variables in digest.gdsn() when action="Robject", since factors are not integers
Changes in version 1.7.0-1.7.17:
NEW FEATURES
digest.gdsn() to create hash function digests (e.g., md5, sha1, sha256, sha384, sha512), requiring the package digest
new function summarize.gdsn()
show() displays the content preview
define C MACRO ‘COREARRAY_REGISTER_BIT32’ and ‘COREARRAY_REGISTER_BIT64’ in CoreDEF.h
new C functions GDS_R_Append() and GDS_R_AppendEx() in R_GDS.h, allowing efficient concatenation of compressed blocks (i.e., ZIP_RA and LZ4_RA)
v1.7.12: add a new data type: packedreal24
define C MACRO ‘COREARRAY_SIMD_SSSE3’ in CoreDEF.h
v1.7.13: GDS_Array_ReadData(), GDS_Array_ReadDataEx(), GDS_Array_WriteData() and GDS_Array_AppendData() return void* instead of void in R_GDS.h
v1.7.15: GDS_Array_ReadData() and GDS_Array_ReadDataEx() allow Start=NULL or Length=NULL
v1.7.16: new C function GDS_Array_AppendStrLen() in R_GDS.h
UTILITIES
paste(..., sep="") is replaced by paste0(...) (requiring R >= v2.15.0)
DEPRECATED AND DEFUNCT
BUG FIXES
v1.7.7: fix a potential issue of uninitialized value in the first parameter passed to ‘LZ4_decompress_safe_continue’ (detected by valgrind)
v1.7.14: fix an issue of ‘seldim’ in
assign.gdsn(): ‘seldim’ should
allow NULL in a vector
Changes in version 1.6.0-1.6.2:
the version number was bumped for the Bioconductor release version 3.2
‘attribute.trim=FALSE’ in print.gdsn.class() by default
NEW FEATURES
diagnosis.gds() returns detailed data block information
BUG FIXES
v1.6.2: fix a rare bug that stopped the program when getting Z_BUF_ERROR; the GDS kernel now ignores Z_BUF_ERROR in deflate() and inflate()
Changes in version 0.99.9:
USER FACING CHANGES
BUGFIXES
Changes in version 0.99.7:
MAJOR USER FACING CHANGES
USER VISIBLE CHANGES
BUGFIXES
Changes in version 0.99.6:
USER VISIBLE CHANGES
Added support for creating of TxDb objects from GenBankRecord objects
Added support for genpept files to readGenBank and parseGenBank
BUGFIXES
Changes in version 0.99.5:
MAJOR USER VISIBLE CHANGES
Move to single-class model with nullable sequence slot, class name changed to GenBankRecord. Removed methods, etc for old classes
parseGenBank now accepts ret.anno and ret.seq to control whether the sequence and annotations are parsed.
Change how repeated annotation fields within a single feature entry are handled. Known multi-value fields (currently db_xref and EC_number) always return a CharacterList. Others return a CharacterList only in case of duplicate entries, with a warning.
BUGFIXES
correctly support cases where arbitrary annotations appear more than once within a single feature.
fix regression problem with handling GenBank files with no sequence information when ret.seq is TRUE
fix problem where tests weren’t being invoked properly during check
More informative error message when non-existent filename is passed to readGenBank or parseGenBank.
Fix bug when all variation features in a file lack /replace (for VRanges alt is a mandatory field).
Changes in version 0.99.4:
BUGFIXES
Changes in version 0.99.2:
USER VISIBLE CHANGES
Version number jump as part of Bioconductor submission process
Added runnable examples to (most) help files
Hook existing unit tests into Bioconductor testing harness
Add fastpass to extract sequence, in the form of seq.only argument to parseGenBank, and getSeq methods for GBAccession and GenBankFile classes
BUGFIXES
Changes in version 1.54.0:
DEPRECATED AND DEFUNCT
remove deprecated anyNA(); contradicted base::anyNA
remove deprecated allNA()
Changes in version 1.13.3:
Changes in version 1.13.2:
Changes in version 1.13.1:
Changes in version 1.3.3:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.3.2:
IMPROVEMENTS AND BUG FIXES
strands of reads in paired-end BAM files are inferred from the strand of the first alignment of the pair; this is the default setting of the strandMode argument in the readGAlignmentPairs function.
added a new argument, library.size, to the ScoreMatrix, ScoreMatrixBin and ScoreMatrixList functions, indicating the total number of aligned reads in a BAM file for normalization.
removed argument stranded from ScoreMatrix, ScoreMatrixBin and ScoreMatrixList functions
Changes in version 1.3.1:
IMPROVEMENTS AND BUG FIXES
Changes in version 1.8.0:
NEW FEATURES
SIGNIFICANT USER-LEVEL CHANGES
DEPRECATED AND DEFUNCT
After being deprecated in BioC 3.2, the left() and right() getters and strand() setter for GAlignmentPairs objects are now defunct.
After being deprecated in BioC 3.2, the ‘invert.strand’ argument of the first() and last() getters for GAlignmentPairs objects are now defunct.
After being deprecated in BioC 3.2, the ‘order.as.in.query’ argument of the “grglist” method for GAlignmentPairs objects is now defunct.
After being deprecated in BioC 3.2, the ‘order.as.in.query’ argument of the “rglist” and “grglist” methods for GAlignmentsList objects are now defunct.
Remove the “mapCoords” and “pmapCoords” methods (were defunct in BioC 3.2).
Remove the readGAlignment*FromBam() functions (were defunct in BioC 3.2).
BUG FIXES
Changes in version 1.24:
NEW FEATURES
Add mapRangesToIds() and mapIdsToRanges() for mapping genomic ranges to IDs and vice-versa.
Support makeTxDbFromUCSC(“hg38”, “knownGene”) (gets “GENCODE v22” track).
Add pmapToTranscripts,GRangesList,GRangesList method.
SIGNIFICANT USER-VISIBLE CHANGES
Rename the ‘vals’ argument of the transcripts(), exons(), cds(), and genes() extractors -> ‘filter’. The ‘vals’ argument is still available but deprecated.
Rename the ‘filters’ argument of makeTxDbFromBiomart() and makeTxDbPackage() -> ‘filter’.
When grouping the transcripts by exon or CDS, transcriptsBy() now returns a GRangesList object with the “exon_rank” information (as an inner metadata column).
For transcripts with no exons (like in the GFF3 files from GeneDB), makeTxDbFromGRanges() now infers the exons from the CDS.
For transcripts with no exons and no CDS (like in the GFF3 files from miRBase), makeTxDbFromGRanges() now infers the exon from the transcript.
makeTxDbFromGRanges() and makeTxDbFromGFF() now support GFF/GTF files with one (or both) of the following peculiarities:
- The file is GTF and contains only lines of type transcript but no transcript_id tag (not clear this is valid GTF but some users are working with this kind of file).
- Each transcript in the file is reported to be on its own contig and spans it (start=1) but no strand is reported for the transcript. makeTxDbFromGRanges() now sets the strand to “+” for all these transcripts.
makeTxDbFromGRanges() now recognizes features of type miRNA, miRNA_primary_transcript, SRP_RNA, RNase_P_RNA, RNase_MRP_RNA, misc_RNA, antisense_RNA, and antisense as transcripts. It also now recognizes features of type transposable_element_gene as genes.
makeTxDbFromBiomart() now points to the Ensembl mart by default instead of the central mart service.
Add some commonly used alternative names (Mito, mitochondrion, dmel_mitochondrion_genome, Pltd, ChrC, Pt, chloroplast, Chloro, 2uM) for the mitochondrial and chloroplast genomes to DEFAULT_CIRC_SEQS.
DEPRECATED AND DEFUNCT
Remove the makeTranscriptDb*() functions (were defunct in BioC 3.2).
Remove the ‘exonRankAttributeName’, ‘gffGeneIdAttributeName’, ‘useGenesAsTranscripts’, ‘gffTxName’, and ‘species’ arguments from the makeTxDbFromGFF() function (were defunct in BioC 3.2).
BUG FIXES
Changes in version 1.5.3:
NEW FEATURES
SIGNIFICANT USER-LEVEL CHANGES
DEPRECATED AND DEFUNCT
Changes in version 1.24.0:
NEW FEATURES
Add the GPos class, a container for storing a set of “genomic positions” (i.e. genomic ranges of width 1). Even though a GRanges object can be used for that, using a GPos object can be much more memory-efficient, especially when the object contains long runs of adjacent positions.
Add a bunch of “invertStrand” methods to support strand inversion of any “stranded” object (i.e. any object with a strand() getter and setter). E.g. invertStrand() works on GRanges, GRangesList, GAlignments, GAlignmentPairs, GAlignmentsList, and RangedSummarizedExperiment objects.
Add “is.unsorted” method for GenomicRanges objects (contributed by Pete Hickey).
base::rank() gained a new ‘ties.method=”last”’ option and base::order() a new argument (‘method’) in R 3.3. Thus so do the “rank” and “order” methods for GenomicRanges objects.
Add “selfmatch” method for GenomicRanges objects.
Add “union” method for GRangesList objects.
SIGNIFICANT USER-LEVEL CHANGES
Remove old SummarizedExperiment class from the GenomicRanges package (this class is now defined in the SummarizedExperiment package).
Move the following generic functions from the GenomicRanges package to the SummarizedExperiment package:
- SummarizedExperiment
- exptData, “exptData<-“
- rowRanges, “rowRanges<-“
- colData, “colData<-“
- assayNames, “assayNames<-“
- assays, “assays<-“
- assay, “assay<-“
Rename “pintersect” and “psetdiff” methods for GRangesList objects -> “intersect” and “setdiff” without changing their behavior (they still do mendoapply(intersect, x, y) and mendoapply(setdiff, x, y), respectively). The old names were misnomers (see svn commit message for commit 113793 for more information).
Remove the ellipsis (…) from all the setops methods, except from:
- “punion” method for signature GRanges#GRangesList;
- “pintersect” and “psetdiff” methods for signature GRangesList#GRangesList;
- “pgap” method for GRanges objects.
Use DESeq2 instead of DESeq in the vignettes (better late than never).
DEPRECATED AND DEFUNCT
Remove GIntervalTree class and methods (were defunct in BioC 3.2).
Remove mapCoords() and pmapCoords() (were defunct in BioC 3.2).
Changes in version 1.5:
New findOverlaps,GTuples,GTuples-method when type = “equal” gives 10-100x speedup by using data.table.
After being deprecated from GenomicRanges in BioC 3.1, mapCoords() and pmapCoords() are now defunct.
After being deprecated in BioC 3.1, the “intervaltree” algorithm in findOverlaps() is now defunct.
Changes in version 1.27.0:
Changes in version 1.0 (2016-01-01):
Introduction
Additional functions are added sporadically.
This news file reports changes that have been made as the package has been developed.
To do
Add practical examples where genphen has been used.
Implement multi-core execution.
Implement genphen for categorical phenotypes.
Changes in version 0.99.20:
Changes in version 0.99.19:
Added option for custom sort of gene/sample in waterfall plot
Added waterfall specific vignette
Changes in version 0.99.18:
Changes in version 0.99.17:
Changes in version 0.99.15:
Changes in version 0.99.10:
Changes in version 0.99.0:
Changes in version 1.19.1:
NEW FEATURES
Changes in version 1.3.16:
geom_treescale() supports family argument <2016-04-27, Wed>
update fortify.phylo to work with phylo that has missing value of edge length <2016-04-21, Thu>
support passing textConnection(text_string) as a file <2016-04-21, Thu>, contributed by Casey Dunn casey_dunn@brown.edu
Changes in version 1.3.15:
geom_tiplab2 supports parameter hjust <2016-04-18, Mon>
geom_tiplab and geom_tiplab2 support using geom_label2 by passing geom=”label” <2016-04-07, Thu>
geom_label2 that support subsetting <2016-04-07, Thu>
geom_tiplab2 for adding tip label of circular layout <2016-04-06, Wed>
use plot$plot_env to access ggplot2 parameter <2016-04-06, Wed>
geom_taxalink for connecting related taxa <2016-04-01, Fri>
geom_range for adding range of HPD to present uncertainty of evolutionary inference <2016-04-01, Fri>
Changes in version 1.3.14:
geom_tiplab works with NA values, compatible with collapse <2016-03-05, Sat>
update theme_tree2 due to a reported issue <2016-03-05, Sat>
offset works in align=FALSE with annotation_image function <2016-02-23, Tue>
subview and inset now supports annotating with img files <2016-02-23, Tue>
Changes in version 1.3.13:
add example of rescale_tree function in treeAnnotation.Rmd <2016-02-07, Sun>
geom_cladelabel works with collapse <2016-02-07, Sun>
Changes in version 1.3.12:
exchange function name of geom_tree and geom_tree2 <2016-01-25, Mon>
solved issues of geom_tree2 <2016-01-25, Mon>
colnames_level parameter in gheatmap <2016-01-25, Mon>
raxml2nwk function for converting raxml bootstrap tree to newick format <2016-01-25, Mon>
Changes in version 1.3.11:
solved issues of geom_tree2 <2016-01-25, Mon>
change compute_group() to compute_panel() in geom_tree2() <2016-01-21, Thu>, fixing the related issue
support phyloseq object <2016-01-21, Thu>
update geom_point2, geom_text2 and geom_segment2 to support setup_tree_data <2016-01-21, Thu>
implement geom_tree2 layer that support duplicated node records via the setup_tree_data function <2016-01-21, Thu>
rescale_tree function for rescaling branch length of tree object <2016-01-20, Wed>
upgrade set_branch_length, now branch can be rescaled using feature in extraInfo slot <2016-01-20, Wed>
Changes in version 1.3.10:
remove dependency of gridExtra by implementing multiplot function instead of using grid.arrange <2016-01-20, Wed>
remove dependency of colorspace <2016-01-20, Wed>
support phylip tree format and update vignette of phylip example <2016-01-15, Fri>
Changes in version 1.3.9:
optimize getYcoord <2016-01-14, Thu>
add ‘multiPhylo’ example in ‘Tree Visualization’ vignette <2016-01-13, Wed>:
add example of viewClade in ‘Tree Manipulation’ vignette <2016-01-13, Wed>
add viewClade function <2016-01-12, Tue>
support obkData object defined by OutbreakTools <2016-01-12, Tue>
update vignettes <2016-01-07, Thu>
05 advance tree annotation vignette <2016-01-04, Mon>
export theme_inset <2016-01-04, Mon>
inset, nodebar, nodepie functions <2015-12-31, Thu>
Changes in version 1.3.7:
Changes in version 1.3.6:
MRCA function for finding Most Recent Common Ancestor among a vector of tips <2015-12-22, Tue>
geom_cladelabel: add bar and label to annotate a clade <2015-12-21, Mon> - remove annotation_clade and annotation_clade2 functions.
geom_treescale: tree scale layer. (add_legend was removed) <2015-12-21, Mon>
Changes in version 1.3.5:
Changes in version 1.3.4:
rename beast feature when name conflict with reserve keywords (label, branch, etc) <2015-11-27, Fri>
get_clade_position function <2015-11-26, Thu>
get_heatmap_column_position function <2015-11-25, Wed>
support NHX (New Hampshire X) format via read.nhx function <2015-11-17, Tue>
bug fixed in extract.treeinfo.jplace <2015-11-17, Thu>
Changes in version 1.3.3:
support color=NULL in gheatmap, then no colored line will draw within the heatmap <2015-10-30, Fri>
add angle also for rectangular layout, so that it is available for layout=’rectangular’ followed by coord_polar() <2015-10-27, Tue>
Changes in version 1.3.2:
update vignette, add example of ape bootstrap and phangorn ancestral sequences <2015-10-26, Mon>
add support of ape bootstrap analysis <2015-10-26, Mon>
add support of ancestral sequences inferred by phangorn <2015-10-26, Mon>
Changes in version 1.3.1:
change angle to angle + 90, so that labels are in the radial direction <2015-10-22, Thu>
na.rm should always be passed to layer(); fixed it in geom_hilight and geom_text2 <2015-10-21, Wed>
matching beast stats with tree using internal node number instead of label <2015-10-20, Tue>
Changes in version 0.99.4:
Added sample colours for MD plot.
Added DESeqResults method for glMDPlot
Changes in version 0.99.3:
Changes in version 0.99.2:
Changes in version 0.99.1:
Changes in version 1.5.5:
BUG FIX
Changes in version 1.5.4:
NEW FEATURES
Changes in version 1.5.3:
GENERAL UPDATES
Changes in version 1.5.2:
BUG FIX
Changes in version 1.5.1:
GENERAL UPDATES
Changes in version 1.29.2:
Changes in version 1.29.1:
Changes in version 1.17.6 (2016-04-26):
Changes in version 1.17.5 (2016-04-25):
Changes in version 1.17.4 (2016-04-04):
Changes in version 1.17.3 (2016-04-04):
Changes in version 1.3.6:
Changes in version 1.3.5:
Changes in version 1.3.4:
Changes in version 1.3.3:
Changes in version 1.3.2:
support ‘compact’ mode to arrange chromosomes
add customized functions to add points/lines/…
Changes in version 1.3.1:
Changes in version 1.9.4:
Changes in version 1.9.3:
Changes in version 2.3.6:
USER VISIBLE CHANGES
Changes in version 1.17.9:
Replace ncdf with ncdf4
Deprecate plinkToNcdf and convertVcfGds (use SNPRelate functions instead)
Add function kingIBS0FSCI to define expected IBS0 spread of full siblings based on allele frequency.
Changes in version 1.17.8:
Changes in version 1.17.7:
Changes in version 1.17.6:
Changes in version 1.17.5:
Changes in version 1.17.4:
Changes in version 1.17.3:
Changes in version 1.17.1:
Changes in version 1.5.2 (2016-04-18):
Changes in version 1.5.1 (2016-03-02):
Minor fixes to vignette.
Updating R code.
Changes in version 1.5.1:
Changes in version 1.8.0:
Changes in version 1.7.0-1.7.7:
hlaCheckAllele(), hlaAssocTest(), hlaConvSequence() and summary.hlaAASeqClass()
Changes in version 1.1.6:
Changes in version 1.1.5:
Changes in version 1.1.4:
Changes in version 1.1.3:
Changes in version 1.1.2:
add examples in vignette
add hc_polygon()
hc_map(): support borders
HilbertCurve(): start position can be selected, and the orientation of the first segment can also be selected
Changes in version 1.1.1:
Changes in version 1.15.1:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
Change default behavior of binningC function to use sum of intervals instead of median
Hi-C colors are now defined as a vector, so that more than 3 colors can be used for the gradient
BUG FIXES
Changes in version 4.1.15:
Changes in version 4.1.13:
Changes in version 4.1.11:
Changes in version 4.1.10:
Changes in version 4.1.9:
Changes in version 4.1.8:
Changes in version 4.1.7:
Changes in version 4.1.6:
fixed a buglet (absence of summary_alignment.tab file) that prevented test.callVariantsVariantTools.genotype() from running
moved the loading of genomic_features into countGenomicFeatures rather than countGenomicFeaturesChunk, for speedup
R CMD check ok
Changes in version 4.1.5:
Changes in version 4.1.4:
Changes in version 4.1.3:
Changes in version 4.1.2:
activate read quality trimming for GATK-rescaled
move sanger quality to the end of the list of possible qualities, as we do not do quality trimming for real sanger data. Current and recent data from Illumina should always come with the illumina 1.5 or 1.8 range, so we want to make sure that read trimming is triggered.
Changes in version 4.1.1:
Changes in version 4.1.0:
Changes in version 0.13.1 (2016-01-12):
Changes in version 0.13.0 (2015-10-23):
Changes in version 0.99.0:
Changes in version 1.3.3:
CHANGES
Optional plotting of additional parameter in FCS files
Minor changes in export of meta-cluster features
BUGFIXES
Problems with clustering of 1-D data sets
Changes in version 1.3.1:
CHANGES
The normalization step within the meta-clustering process is improved and extended, which also affects the related parameter settings
Changes in version 1.3.2:
BUG FIXES
Changes in version 1.3.1:
Changes in version 1.1.4:
Changes in version 1.1.3:
Changes in version 1.1.2:
Parallel computation is now managed completely within the BiocParallel framework. The “newINSPEcT” function and the “modelRates” method take as input the argument BPPARAM, which is used to handle the parallelization. By default, BPPARAM is assigned with the bpparam() function of the BiocParallel package, which guarantees use of the maximum number of available cores, forking on Linux and MacOS-X, and the Snow package on Windows machines.
nCores methods and arguments are now deprecated.
Changes in version 1.1.1:
Re-introduced inferKBetaFromIntegralWithPre, which disappeared in the devel version following 1.0.1 (excluded)
selection of the best model is now done by applying the Brown test on the pairs of models where at least one of the two has a chi-squared test lower than the threshold. This is done because, in case only one rate leads the dynamics, all the models that don’t involve that rate won’t have a low chi-squared and no comparison will be made. This leads to Brown p-values of 1 on that specific rate (change in method “ratePvals”)
in newINSPEcT, the guess of new rates can be done without assuming that degradation does not occur during the pulse
Solved two problems. One occurred during modeling for genes with estimated variance within replicates equal to zero: in these cases the variance is estimated within the time course. A second problem was encountered in the parameter initialization of the impulse model: h1 cannot be zero in order to evaluate a finite value.
Evaluate ‘modelRates’ within the vignette in parallel only in Linux and Darwin environments. This is done to avoid timeouts in the build process on Bioc servers.
Better estimation of rates in case degDuringPulse=TRUE (newINSPEcT function). Added controls on input arguments in newINSPEcT function. Fixed a bug in the saturation of values out of breaks in inHeatmap method (this change could cause a different clustering order of genes in the heatmap). Added the palette argument to inHeatmap method.
Fixed a bug in ‘[’ method and updated the NAMESPACE and DESCRIPTION files according to the update of the ‘unlist’ method, which is exported from BiocGenerics and not from GenomicRanges anymore.
Changes in version 0.99.0:
Changes in version 2.6.0:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
Remove ‘algorithm’.
Restore ‘maxgap’ special meaning (from BioC < 3.1) when calling findOverlaps() (or other member of the family) with ‘type’ set to “within”.
No more limit on the max depth of on-the-fly NCList objects. Note that the limit remains and is still 100000 when the user explicitly calls the NCList() or GNCList() constructor.
Rename ‘ignoreSelf’ and ‘ignoreRedundant’ argument of the findOverlaps,Vector,missing method -> ‘drop.self’ and ‘drop.redundant’. The old names are still working but deprecated.
Rename grouplength() -> grouplengths() (old name still available but deprecated).
Modify “replaceROWS” method for IRanges objects so that the replaced elements in ‘x’ get their metadata columns from ‘value’. See this thread on bioc-devel:
Optimized which.min() and which.max() for atomic lists.
Remove the ellipsis (…) from all the setops methods, except the methods for Pairs objects.
Add “togroup” method for ManyToOneGrouping objects and deprecate default method.
Modernize “show” method for Ranges objects: now they’re displayed more like GRanges objects.
Coercion from IRanges to NormalIRanges now propagates the metadata columns when the object to coerce is already normal.
Don’t export CompressedHitsList anymore from the IRanges package. This doesn’t seem to be used at all and it’s not clear that we need it.
DEPRECATED AND DEFUNCT
Deprecate RDApplyParams objects and rdapply().
Deprecate RangedDataList objects.
Deprecate the “reduce” method for RangedData objects.
Deprecate GappedRanges objects.
Deprecate the ‘ignoreSelf’ and ‘ignoreRedundant’ arguments of the findOverlaps,Vector,missing method in favor of the new ‘drop.self’ and ‘drop.redundant’ arguments.
Deprecate grouplength() in favor of grouplengths().
Default “togroup” method is deprecated.
Remove IntervalTree and IntervalForest classes and methods (were defunct in BioC 3.2).
Remove mapCoords() and pmapCoords() generics (were defunct in BioC 3.2).
Remove all “updateObject” methods (they were all obsolete).
BUG FIXES
Fix segfault when calling window() on an Rle object of length 0.
Fix “which.min” and “which.max” methods for IntegerList, NumericList, and RleList objects when ‘x’ is empty or contains empty list elements.
Fix mishandling of zero-width ranges when calling findOverlaps() (or other member of the family) with ‘type’ set to “within”.
Various fixes to “countOverlaps” method for Vector#missing. See svn commit message for commit 116112 for the details.
Fix validity method for NormalIRanges objects (was not checking anything).
Changes in version 1.17.1:
Changes in version 1.6.0:
Changes in version 1.5.4:
importing the apcluster package to avoid method clashes
improved and completed change history in inst/NEWS and package vignette
Changes in version 1.5.3:
correction in prediction via feature weights for very large sparse explicit representation
adaption of vignette template
vignette engine changed from Sweave to knitr
Changes in version 1.5.2:
correction in distance weights for mixed distance weighted spectrum and gappy pair kernel
allow featureWeights as numeric vector for method getPredictionProfile
correction for plot of single prediction profile without legend
change of copyright note
namespace fixes
Changes in version 1.5.1:
new method to compute prediction profiles from models trained with mixture kernels
correction for position specific kernel with offsets
corrections for prediction profile of motif kernel
additional hint on help page of kbsvm
Changes in version 1.5.0:
Changes in version 3.28.0:
Improved capabilities and performance for fry(). fry() has two new arguments, ‘geneid’ and ‘standardize’. The index argument of fry() can now be a list of data.frames containing identifiers and weights for each set. The options introduced by the standardize argument allow fry() to be more statistically powerful when genes have unequal variances. fry() now accepts any arguments that would be suitable for lmFit() or eBayes(). The ‘sort’ argument of fry() is now the same as for mroast().
roast(), mroast(), fry() and camera() now require the ‘design’ matrix to be set. Previously the design matrix defaulted to a single intercept column.
Two changes to barcodeplot(): new argument ‘alpha’ to set semitransparency of positional bars in some circumstances; the default value for ‘quantiles’ is increased slightly.
kegga() has several new arguments and now supports any species supported by KEGG. kegga() can now accept annotation as a data.frame, meaning that it can work from user-supplied annotation.
New functions getGeneKEGGLinks() and getKEGGPathwayNames() to get pathway annotation from the rest.kegg website.
topKEGG() now breaks tied p-values by number of genes in pathway and by name of pathway.
goana() now supports species=”Pt” (chimpanzee).
plotRLDF() now produces more complete output and the output is more completely documented. It can now be used as a complete LDF analysis for classification based on training data. The argument ‘main’ has been removed as an explicit argument because it and other plotting arguments are better passed using the … facility. New arguments ‘ndim’ and ‘plot’ have been added. The first allows all possible discriminant functions to be computed even if only two are plotted. The second allows discriminant functions to be computed without a plot.
plotRLDF() also now uses squeezeVar() to estimate by how much the within-group covariance matrix is moderated. It has new arguments ‘trend’ and ‘robust’ as options to the empirical Bayes estimation step. It also has new argument ‘var.prior’ to allow the prior variances to be optionally supplied. Argument ‘df.prior’ can now be a vector of genewise values.
new argument ‘save.plot’ for voom().
diffSplice() now returns genewise residual variances.
removeExt() has new ‘sep’ argument.
Slightly improved algorithm for producing the df2.shrunk values returned by fitFDistRobustly(). fitFDistRobustly() now returns tail.p.value, prob.outlier and df2.outlier as well as previously returned values. The minimum df.prior value returned by eBayes(fit, robust=TRUE) may be slightly smaller than previously.
tmixture.vector() now handles unequal df in a better way. This will be seen in slightly changed B-statistics from topTable when the residual or prior df are not all equal.
If a targets data.frame is provided to read.maimages() through the ‘files’ argument, then read.maimages() now overwrites its rownames on output to agree with the column names of the RGList object.
More explicit handling of namespaces. Functions needed from the grDevices, graphics, stats and utils packages are now imported explicitly into the NAMESPACE, avoiding any possible masking by other packages. goana(), alias2Symbol() and alias2SymbolTable() now load the namespaces of GO.db and the relevant organism package org.XX.eg.db instead of loading the packages. This keeps the user search path cleaner.
Various minor bug fixes and improvements to error messages.
Changes in version 1.1:
Adding plotRDAMulti and topRDAhits
Improve plotRegion
Improve caseExample and MEAL vignettes
Return a sorted data.frame in correlationMethExprs
Changes in version 0.0-1:
Changes in version 1.6.1:
Changes in version 1.5.1:
Changes in version 2.3.0:
NEW FEATURES
Refactored the bootstrap approach to improve memory usage and reduce calculation time.
Added a plot_metagene function to produce a metagene plot from a data.frame to avoid always having to rely on the metagene object.
Deprecated the range and bin_size params.
Added new sections in the vignettes: “Managing large datasets” and “Comparing profiles with permutations”.
BUG FIXES
Removed params that no longer works with ggplot2 > 2.2.0.
Changed the seqlevels checks to match changes in GenomicAlignments.
Added a check to stop early and output a clear error message when a position in a GRanges is greater than the size of a chromosome.
Changes in version 1.13:
Upgrade support for biom-format vs. 2.0
Fixed issue - “MRtable, etc will report NA rows when user requests more features than available”
Fixed s2 miscalculation in calcZeroComponent
Changes in version 1.5.2:
Changes in version 1.4.3:
Changes in version 1.4.2:
Changes in version 1.4.1:
Changes in version 1.5.1:
Changes in version 2.9.9:
BUG FIXES
Changes in version 2.9.8:
BUG FIXES
constructBins(), generateWig(), extractReads(): ‘readGAlignments’ replaces the defunct ‘readGAlignmentsFromBam’.
extractReads(), findSummit(), adjustBoundary(), filterPeak(): more safeguards added.
constructBins(): fix typo.
Changes in version 2.9.7:
BUG FIXES
mosaicsPeak(): fix the error that chrID is missing.
adjustBoundary(): fix the error that chrID is incorrectly processed.
constructBins(), generateWig(), extractReads(): fix the error for the processing of BAM files for PET data.
Changes in version 2.9.6:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 2.9.5:
SIGNIFICANT USER-VISIBLE CHANGES
extractReads(): Users can now choose whether to keep read-level data with the argument ‘keepReads’.
Object sizes are significantly decreased for the output of extractReads(), findSummit(), adjustBoundary(), and filterPeak().
MosaicsPeak class definition is modified to reflect the changes above.
In the peak lists, now, logMinP and logAveP (i.e., -log10 transformation of minP and aveP, respectively) are reported instead of minP and aveP, respectively.
show() method becomes significantly faster.
Changes in version 2.9.4:
SIGNIFICANT USER-VISIBLE CHANGES
Peak list now incorporates mean(-log10(PP)), summitSignal, and summit.
In the peak list, the counts of control samples and the log ratio of ChIP over control counts are adjusted by the ratio of sequencing depth, instead of the ratio of sum of ChIP and control counts.
postProb(): Return posterior probabilities for arbitrary peak regions.
export() becomes significantly faster.
constructBins(): calculate sequencing depth and keep this information in the first line (commented) of bin-level files.
seqDepth(): returns sequencing depth information, which can be applied to all of BinData, MosaicsFit, MosaicsHMM, MosaicsPeak class objects.
Name of method coverage() is changed to readCoverage().
BUG FIXES
findSummit() & adjustBoundary(): fix the error that an average point of multiple summit ties lying apart is reported as a summit. Now, the first summit block is chosen first and then the average point of the first summit block is reported as the summit. Also, fix some minor numerical issues regarding the calculation of summit locations.
filterPeak(): fix the error that the improvement of ChIP over control samples was set to zero when there was no control signal at the position. Now, in this case, the control signal is treated as zero.
adjustBoundary(): fix the error “multiple methods tables found for ‘coverage’” in R CMD check.
Changes in version 2.9.3:
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
generateWig(): fix the error that values in the exported files are written in scientific notation.
constructBins(): fix the error that values in the exported files are written in scientific notation.
Changes in version 2.9.2:
BUG FIXES
extractReads(): fix the error that strands were incorrectly handled when loading read-level data.
export(): fix the error that users were incorrectly asked to run exportReads() when it was not needed.
Changes in version 2.9.1:
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Changes in version 2.9.0:
SIGNIFICANT USER-VISIBLE CHANGES
extractReads(): Load read-level data and extract reads corresponding to each peak region.
findSummit(): Find a summit within each peak, based on local ChIP profile.
adjustBoundary(): Adjust peak boundaries (designed for histone modification peaks).
filterPeak(): Filter peaks based on their peak lengths and signal strengths.
mosaicsPeakHMM: Posterior decoding is set to default (decoding="posterior").
mosaics package now additionally depends on GenomicRanges, GenomicAlignments, Rsamtools, GenomeInfoDb, and S4Vectors packages.
BUG FIXES
Changes in version 1.4.0:
Changes in version 1.3.7:
Changes in version 1.3.6:
msaPrettyPrint() now also accepts dashes in file names
added section about pretty-printing wide alignments to package vignette
Changes in version 1.3.5:
Changes in version 1.3.4:
added function for checking and fixing sequence names for possibly problematic characters that could lead to LaTeX errors when using msaPrettyPrint()
corresponding changes in documentation
minor namespace fix
Changes in version 1.3.3:
added function for converting multiple sequence alignments for use with other sequence alignment packages
corresponding changes in documentation
Changes in version 1.3.2:
further fixes in Makefiles and Makevars files to account for changes in build system
update of citation information
Changes in version 1.3.1:
Changes in version 1.3.0:
Changes in version 1.19.24:
Changes in version 1.19.23:
Changes in version 1.19.22:
more unit tests <2016-04-23 Sat> <2016-04-24 Sun> <2016-04-26 Tue>
remove readIspyData functions <2016-04-23 Sat>
Fixed bug in error catching in utils.mergeSpectraAndIdentificationData <2016-04-23 Sat>
Changes in version 1.19.21:
Changes in version 1.19.20:
Changes in version 1.19.19:
Changes in version 1.19.18:
Changes in version 1.19.17:
Changes in version 1.19.16:
Changes in version 1.19.15:
limit readMSData unit test due to Windows-only error <2016-03-09 Wed>
fix unit test for utils.colSd(x, na.rm=TRUE)
Changes in version 1.19.14:
Changes in version 1.19.13:
Fixed bug in bin_Spectrum, reported by Weibo Xie <2016-02-16 Tue>
Added unit test for bug above <2016-02-16 Tue>
Merged ‘apply functions columnwise by group’ <2016-02-16 Tue>
In nQuants, the fcol argument has been replaced with groupBy to make the signature consistent with featureCV <2016-02-16 Tue>
Changes in version 1.19.12:
Changes in version 1.19.11:
Fix trimws generic/methods with useAsDefault (see issue #75) <2016-02-02 Tue>
add exprs,MSnSet-method alias (since exprs is now exported) <2016-02-02 Tue>
Changes in version 1.19.10:
Changes in version 1.19.9:
Changes in version 1.19.8:
readMSnSet2 now also accepts a data.frame as input <2016-01-29 Fri>
selective ggplot2 import <2016-01-29 Fri>
Changes in version 1.19.7:
new sampleNames<- for pSet and MSnExp objects <2015-12-15 Tue>
Fix bug preventing to write MS1 to mgf (fixes issue #73 reported by meowcat) <2015-12-18 Fri>
Changes in version 1.19.6:
MSnExp featureNames are now X01, X02 (0 after X) to maintain numerical sorting of ASCII characters; contributed by sgibb <2015-12-14 Mon>
Update MSnbase:::subsetBy to use split instead of lapply, which makes topN faster. This nevertheless changes the order of the resulting MSnSet (see issue #63 for details and background); contributed by sgibb <2015-12-14 Mon>
Changes in version 1.19.5:
Changes in version 1.19.4:
Changes in version 1.19.3:
Changes in version 1.19.2:
Changes in version 1.19.1:
Changes in version 1.19.0:
Changes in version 3.3.11:
New functionalities: calculation of the LOD and LOQ via 1) linear_quantlim, 2) nonlinear_quantlim, and 3) plot_quantlim; two example datasets, SpikeInDataLinear and SpikeInDataNonLinear, are also available.
Update for featureSelection='highQuality' in dataProcess
allow colons (":") in the peptide sequence
fix the bug for 'fill in incomplete rows' in dataProcess. When only one feature had incomplete rows, dataProcess failed to retrieve the run and feature IDs and did not show the list. Now it works.
change the default for blimp in dataProcessPlots for profile plots and QC plots. The upper limit of the y-axis with ylimUp=FALSE is calculated as the maximum log2(intensity) across all proteins after normalization, plus 3, rounded to the nearest integer.
Changes in version 3.3.10:
fix the bug for dataProcess
When the numbers of proteins in $ProcessedData and $RunlevelData differed, a bug occurred when calculating the percentages of missingness and imputation.
fix the bug for groupComparison
when one condition is completely missing, or in other special cases, .fit.model.single now handles the data so that its output is not a try-error; the output for fitted values and residuals is then updated accordingly.
Changes in version 3.3.9:
Condition plot from dataProcessPlots: condition plots are now drawn with run-level summarized intensities per condition.
ComparisonResult from groupComparison
flag about missingness and imputation: the calculation of the MissingPercentage and ImputationPercentage columns has changed. 1) MissingPercentage: number of measured intensities / total number of intensities (the number of features times the number of runs in a protein) in the conditions used for comparison (from the 'Label' column), by protein. Therefore, different comparisons (Label in the output) for the same protein can have different percentages of missingness. 2) ImputationPercentage: number of imputed intensities / total number of intensities in the conditions used for comparison (from the 'Label' column), by protein. Therefore, different comparisons for the same protein can have different percentages of imputation.
new column, ‘issue’, shows special cases, such as completely missing in a condition or all conditions for comparisons.
VolcanoPlot
flag the proteins that have a condition with completely missing values: a '*' appears to the left of the protein name in the volcano plot.
Changes in version 3.3.8:
normalization: overall median -> median of medians. For all MSstats workflows the result should not change, but log(sum) will give a slightly different result.
flag about missingness and imputation
RunlevelData from dataProcess includes two or three more columns: 1) NumMeasuredFeature: number of measured features in a run; 2) MissingPercentage: number of measured features / total number of features, by run; 3) NumImputedFeature: number of imputed intensities in a run. This column is shown only if users allow imputation of missing values.
ComparisonResult from groupComparison: one or two columns will be added. 1) MissingPercentage: number of measured intensities / total number of intensities (the number of features times the number of runs in a protein), by protein; 2) ImputationPercentage: number of imputed intensities / total number of intensities, by protein.
Changes in version 3.3.4:
Changes in version 3.3.3:
Changes in version 3.3.2:
ProteinName=TRUE in groupComparisonPlots now shows only the names of significant proteins, with label locations adjusted; the ggrepel package is used.
change the featureSubset option in 'dataProcess' for selecting high-quality features: featureSubset='highQuality'
Fix the bug for ‘fillIncompleteRows=TRUE’ for label-based experiments.
change the 'quantification' function: use run summarization from dataProcess. If there are technical replicates, median run summarization is used for each subject.
Changes in version 3.3.1:
fix the bug for volcano plot in groupComparisonPlots, with logbase=2.
update all plots for ggplot2
Change the default for ‘cutoffCensored’. Now the default is “minFeature”.
for imputing the censored peak intensities, remove the features that have only one measurement, for the survreg function.
Changes in version 3-2-16:
Changes in version 2-13-16:
Changes in version 2-11-16:
Changes in version 1.9.1:
Changes in version 2.5.8:
Changes in version 2.5.7:
Changes in version 2.5.6:
Changes in version 2.5.3:
Changes in version 2.5.2:
Changes in version 2.5.1:
Changes in version 2.16.0 (2016-02-11):
NOISeqBIO has been modified when few replicates are available, and the computation time has been drastically reduced.
Gene clustering in NOISeqBIO when few replicates are available: it is now done when the total number of samples is 9 or less (instead of 10 or less).
Fixed a bug in “biotype detection” plot. It failed when none of the genes in the sample had values = 0.
Corrected an error in the calculation of standard deviation of D statistic in NOISeqBIO.
Changes in version 0.99.0 (2016-03-16):
Changes in version 2.2.0:
Plots of genotypes.
Stacked area and stream plots (code from Marc Taylor).
Example of modules and no epistasis.
Removed requirement of Root in geneToModule.
More tests (and reorganized them)
Miscell. improvements and typos fixed in documentation and vignette.
Added mutationPropGrowth as argument.
Some minor bug fixes and additional checks for user errors.
Changes in version 2.1.6 (2016-04-14):
Changes in version 2.1.5 (2016-04-09):
Changes in version 2.1.4 (2016-04-09):
Changes in version 2.1.3 (2016-04-04):
Changes in version 2.1.2 (2016-03-27):
Arguments to BNB_Algo5 explicit.
Example of modules and no epistasis.
Removed requirement of Root in geneToModule.
More tests (and reorganized them)
Miscell. improvements in documentation and vignette.
Changes in version 2.1.1 (2016-03-07):
Added mutationPropGrowth as argument.
Stacked area and stream plots (code from Marc Taylor).
Plots of genotypes.
Expanded vignette.
Changes in version 1.14.0:
MODIFICATIONS
replace with
import 'mcols', 'mcols<-' from S4Vectors
follow name change for GenomicFeatures:::.set_group_names()
add biomaRt, rtracklayer to ‘Suggests’; used in unit tests/man pages
elementLengths was renamed -> elementNROWS in S4Vectors
replace require() with requireNamespace()
adjustments in response to the ‘vals’ -> ‘filter’ renaming in GenomicFeatures
update unit tests to reflect new PFAM data
load RSQLite in unit tests; no longer free from AnnotationDbi::dbFileConnect
use newly exported functions from AnnotationDbi related to select() and building annotation packages
Changes in version 1.5.1 (2015-10-20):
GENERAL
NEW FEATURES
IMPROVEMENTS
MODIFICATIONS
BUG FIXES
Changes in version 0.99.0:
Submission for Bioconductor
Added vignette
Added panviz method for use of functionality in R (instead of just shiny)
Make gene ontology a subsequent download instead of bundle with package
Check usability of current GO before building PanViz
Changes in version 1.1.3:
Changes in version 1.1.2:
Change of maintainers' email address
Minor changes in the package vignette (e.g. section arrangement)
Updated citation
Changes in version 1.1.1:
Changes in package title and description
Changes and additional explanations in the package vignette
Changes in version 1.10.2:
Changes in version 1.10.1:
Changes in version 0.11.3:
Changes in version 0.11.2:
Changes in version 0.11.1:
Changes in version 0.11.0:
Changes in version 0.99.5:
DEPENDENCIES
Changes in version 0.99.4:
DEPENDENCIES
Depends, Imports and NAMESPACE entries were modified following codetoolsBioC suggestions (Thanks to Valerie Obenchain).
cowplot was used in the subjectReport function because a textGrob update broke the code.
DOCUMENTATION
CODE
Changes in version 0.99.3:
DEPENDENCIES
BiocParallel (>= 1.3.13) has been updated to give the user feedback with a progress bar if verbose=TRUE in permutate (Thanks to Valerie Obenchain).
DESCRIPTION
StatisticalMethod has been removed from biocViews.
Changes in version 0.99.2:
DEPENDENCIES
R (>= 3.x.y) has been updated to 3.2
Imports: BiocParallel has replaced parallel (Thanks to Valerie Obenchain).
Changes in version 0.99.1:
DOCUMENTATION & CODE
Minor modifications to cope with BiocCheck policies.
RUnit tests were added.
Changes in version 0.99.0:
DOCUMENTATION
NEWS file was added.
First functional version
Changes in version 0.99.1:
OTHER NOTES
Changes in version 0.99.0:
OTHER NOTES
Changes in version 1.12:
Changes in version 1.10.2:
DOCUMENTATION
Changes in version 1.10.1:
BUG FIXES
Changes in version 1.3.1:
added missing method readVariantInfo() for signature ‘character’, ‘GRanges’
minor streamlining of source code of readGenotypeMatrix()
corrections of namespace imports
Changes in version 1.3.0.0:
Novel algorithm for identification of potential intramolecular G-quadruplex (G4) patterns in DNA sequence.
Supports multiple defects in G-runs like bulges or mismatches.
Provides the most accurate results currently available.
Highly customizable to detect even novel G4 types that might be discovered in the future.
Changes in version 2.0.0:
updated models PrOCoilModel and PrOCoilModelBA that have been trained with newer data and up-to-date methods
general re-design of classes and functions to allow for multiple predictions per run; this comes with a more streamlined and versatile interface for supplying sequences and registers to predict().
predictions and plots are now performed by the ‘kebabs’ package; this led to a major performance increase.
the integration of the 'kebabs' package also allowed for inheriting functions like heatmap() and accessors such as sequences(), baselines(), and profiles()
added a fitted() method to allow for easy extraction of predictions
addition of small example model file inst/examples/testModel.CCModel
streamlining/simplification of some man pages
several corrections and updates of man pages and package vignette
changed vignette building engine from Sweave to knitr
removal of reference to Git-SVN bridge
Changes in version 1.11.23:
New gomarkers functionality for adding annotation information to spatial proteomics data and accompanying new vignette <2016-04-18 Mon>
Added unit tests <2016-04-20 Wed> <2016-04-21 Thu>
Moved makeNaData2 and whichNA to MSnbase <2016-04-21 Thu>
Renamed addGoMarkers and orderGoMarkers to addGoAnnotations and orderGoAnnotations and all associated documentation. Vignette also renamed to pRoloc-goannotations <2016-04-21 Thu>
Changes in version 1.11.22:
Changes in version 1.11.21:
Changes in version 1.11.20:
Changes in version 1.11.19:
Update dunkley2006params <2016-04-01 Fri>
Update dunkley2006params <2016-04-01 Fri>
Changes in version 1.11.18:
Update dunkley2006params <2016-03-30 Wed>
Update dunkley2006params <2016-03-30 Wed>
Changes in version 1.11.17:
Selective imports <2016-03-20 Sun>
Selective imports <2016-03-20 Sun>
Changes in version 1.11.16:
Changes in version 1.11.15:
Changes in version 1.11.14:
Changes in version 1.11.13:
new method argument added to knntlOptimisation that allows optimisation of class weights as per Wu and Dietterich’s original k-NN TL method <2016-02-19 Fri>
seed argument added to knntlOptimisation for reproducibility <2016-02-22 Mon>
New section in tl vignette describing preparation of auxiliary PPI data <2016-02-29 Mon>
Changes in version 1.11.12:
Changes in version 1.11.11:
Changes in version 1.11.10:
Changes in version 1.11.9:
Changes in version 1.11.8:
New Lisa cols and changed default unknown col <2016-02-03 Wed>
mrkVecToMat has been updated so that the column order reflects the factor levels of fcol, rather than calling unique on fcol. This change means that the order of the classes in fcol are now consistent between plot2D and new visualisation apps that rely on mrkVecToMat. <2016-02-03 Wed>
Changes in version 1.11.7:
Changes in version 1.11.6:
Changes in version 1.11.5:
highlightOnPlot supports labels = TRUE to use featureNames as labels <2015-12-21 Mon>
selective ggplot2 import <2015-12-21 Mon>
highlightOnPlot also supports a vector of feature names in addition to an instance of class FeaturesOfInterest <2015-12-21 Mon>
Changes in version 1.11.4:
Changes in version 1.11.3:
Update dunkley2006params to use plant_mart_30 <2015-12-16 Wed>
API change in plot2D: to plot data as is, i.e. without any transformation, method can be set to “none” (as opposed to passing pre-computed values to method as in previous versions). If object is an MSnSet, the untransformed values in the assay data will be plotted. If object is a matrix with coordinates, then a matching MSnSet must be passed to methargs. <2015-12-16 Wed>
Changes in version 1.11.2:
Changes in version 1.11.1:
New orgQuants function and update to getPredictions <2015-10-13 Tue>
Deprecate minClassScore replaced by getPredictions <2015-10-19 Mon>
Add pRolocVisMethods and check for new apps in pRolocGUI <2015-10-19 Mon>
new fDataToUnknown function <2015-10-23 Fri>
New section in vignette describing readMSnSet2 <2015-11-30 Mon>
Changes in version 1.11.0:
Changes in version 1.5.6:
Removing plotMat2D app (closes issue #69) <2016-03-11 Fri>
add package startup msg <2016-03-11 Fri>
instruct users to install the latest version from GitHub
Changes in version 1.5.5:
Changes in version 1.5.4:
Changes in version 1.5.3:
Changes in version 1.5.2:
Updated pca app <2016-01-11 Mon>
Updated vignette <2016-01-12 Tue>
Fixed bugs, pca app renamed main app, removed profiles app <2016-01-14 Thu>
new compare app <2016-01-30 Sat>
updated vignette <2016-02-03 Wed>
Changes in version 1.5.1:
New shiny apps <2015-10-12 Mon>
New vignette <2015-10-29 Thu>
Fixed bugs and updated examples in classify app <2015-11-09 Mon>
Changes in version 1.5.0:
Changes in version 1.1.2:
Major rewrite of the data preparation, now relying on simple dcf files as input and intermediate PAHD objects. <2015-12-17 Thu>
import read.table <2016-03-29 Tue>
replace curl code by RCurl::getURL <2016-03-29 Tue>
Changes in version 1.1.1:
Changes in version 1.1.0:
Changes in version 1.3.3:
Changes in version 1.3.2:
Changes in version 1.3.1:
Changes in version 1.3.0:
Changes in version 3.1.1:
include static consensus profiles function
include activating/inhibiting edges in dynamic consensus net
Changes in version 1.9.1:
Changes in version 1.8.0 (2016-04-15):
RELEASE
IMPROVEMENTS
estimateCorrection(), segmentBins(), createBins(), and calculateBlacklist() now support parallel computing (see vignette for more details)
callBins() can now also use cutoffs instead of CGHcall
binReadCounts() now contains parameter pairedEnds to specify when using paired-end data, so that expected variance can be calculated correctly
segmentBins() now allows seeds for random number generation to be specified for reproducibility
binReadCounts() supports chunked processing of bam files
estimateCorrection() now also allows correcting for only GC content or mappability
BUG FIXES
applyFilters() and highlightFilters() now work properly when using a numerical value for parameter residual
highlightFilters() no longer highlights entire chromosomes for which the residual filter is missing altogether, which matches the behavior of applyFilters()
getBinAnnotations() now allows custom bin annotations to be loaded via the path parameter even when an annotation package has been installed
phenodata files with a single variable are now handled correctly
calculateMappability() now retains correct chromosome order even when bigWigAverageOverBed reorders them
calculateBlacklist() now correctly handles non-integer chromosome names
Changes in version 2.60:
USER VISIBLE CHANGES
Changes in version 2.40:
BUG FIXES
Changes in version 2.20:
USER VISIBLE CHANGES
BUG FIXES
Changes in version 1.0.0:
Changes in version 1.4.0:
Updates: Fixed some import issues.
Changes in version 1.15.6:
Changes in version 1.15.5:
add maxGSSize parameter <2016-03-10, Thu>
update ReactomePA citation info <2016-02-17, Wed>
Changes in version 1.15.4:
Changes in version 1.15.3:
Changes in version 1.15.2:
Changes in version 1.15.1:
Changes in version 0.99.4 (2016-04-02):
NEW FEATURES
Added support for BigWig files which greatly increases coverage reading speed.
Added plyr import to reduce memory footprints in some averaging calculations.
Added another plot type which shows correlation between average coverage in summarized genomic regions and respective plot control parameters.
Plotting of confidence intervals for profile (and the newly added correlation) plots (geom_ribbon) is now an option.
The setter function now supports setting multiple arguments at once.
Object slicing is now also performed on genomic position instead of only reference regions and samples.
Stopped automatic width in heatmaps and passed control to the user through the use of … and also using … for plots rendered with ggplot.
Moved sumStat and smooth options from binParams to plotParams as smoothing should be available for reusing/replotting recoup objects. sumStat remained in binParams to be used for region binning over e.g. gene bodies.
Added documentation for recoup_test_data
Added small BAM files for testing of the preprocessRanges() function.
Updated vignettes.
BUG FIXES
Fixed bug when reading reads from bed files. GenomeInfoDb is used to fill the seqinfo slot of the produced GenomicRanges. Credits to Orsalia Hazapis, BSRC ‘Alexander Fleming’.
Fixed bug when region is “custom” and the intervals were not of fixed width. Credits to Orsalia Hazapis, BSRC ‘Alexander Fleming’.
Fixed bug in custom heatmap ordering.
Fixed bug in calculation of average profiles in recoupCorrelation when using mean/median instead of splines.
Changes in version 0.4.0 (2016-02-02):
NEW FEATURES
Removed bigmemory/biganalytics, as this storage approach proved troublesome.
Added (almost) full reusability of recoup objects. Now the most serious, memory- and time-consuming calculations need to be performed only once.
Added slicing/subsetting of recoup list objects.
Split the code into more collated files.
k-means design function is now much more flexible.
BUG FIXES
Changes in version 0.2.0 (2016-01-28):
NEW FEATURES
Added a global sampling factor to reduce the size of the total reads and genomic areas for fast overviews.
Coverages and profiles are not recalculated when only ordering (k-means or other) changes.
Exported kmeansDesign and removeData functions.
Added full reusability of the output list object of recoup. Upon a call using this object, only the required elements are recalculated according to the change in input parameters, saving a lot of computation time for simple plotting/profile ordering/binning changes.
Reduced binned coverage calculation time by a factor of two
Switched to bigmemory and biganalytics packages to handle coverage profile matrices.
Added sample subsetting when plotting. In this way, profiles are computed once and the user may choose what to plot later.
Added a simple setter and getter for easier manipulation and reusability of recoup objects.
BUG FIXES
Changes in version 0.1.0 (2016-01-18):
NEW FEATURES
Changes in version 1.5.48:
BUG FIXES
Changes in version 1.5.43:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.5.33:
NEW FEATURES
Changes in version 1.5.19:
NEW FEATURES
Changes in version 1.5.12:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
Added a ‘digits’ argument to control how to round some numerical variables in all type of reports.
Added a ‘theme’ argument to allow setting the ggplot2 theme for the plots.
BUG FIXES
Changes in version 1.5.6:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.3.3:
documentation was improved.
test files are moved to tests/testthat
Changes in version 1.3.2:
Changes in version 1.3.1:
fixed a bug when the gene symbol annotated by GREAT is empty ('')
print warnings when GREAT gives warnings
Changes in version 2.16.0:
NEW FEATURES
New access of HDF5 files by file, group and dataset handles. HDF5 groups and datasets can be read and written by the $-operator (e.g. h5f$A) and the [-operator can be used for partial reading and writing of datasets (e.g. h5d[3,,]).
New low level general library function H5Dget_create_plist implemented.
Removed #include <R.h> from external C code, to be compatible with the newest C compilers and current R-devel
Changes in version 1.1.6:
replaced IRanges::as.table with as.table, since it is not exported by IRanges anymore
motifSize in countShiftReads and codonInfo can be only 3, 6 or 9
in countShiftReads and codonInfo, motifs of size 6 or 9 correspond to overlapping windows, sliding 1 codon at a time
in countShiftReads, for motifSize = 6, the P-site is the first codon of the 2; this means that only the reads of the 1st codon in the 2-codon motif are counted
in countShiftReads, for motifSize = 9, the P-site is the second codon of the 3; this means that only the reads of the 2nd codon in the 3-codon motif are counted
adapted the RiboProfiling.Rnw to explain the treatment of motifs of codons (1, 2, or 3) as opposed to the previous treatment of unique codons.
changed S4Vectors::elementLengths to S4Vectors::elementNROWS in riboSeqFromBAM, aroundPromoter
Changes in version 1.1.5:
Changes in version 1.1.4:
Changes in version 1.1.3:
the codonInfo function was modified and optimized for motifs of 1 or 2 consecutive codons
the getCodons (utils), countShiftReads, and codonInfo functions were modified to allow looking at sequence motifs not only for single codons (3 nucleotides) but for other sizes as well; a motifSize parameter has been added to these functions to specify the number of nucleotides over which to compute usage and coverage.
the readsToReadStart function was replaced by readsToStartOrEnd; this new function handles not only the read 5' (start) but also the read 3' (end)
a parameter offsetStartEnd was added to riboSeqFromBAM; it specifies whether the offset is to be computed on the 5' or 3' end of the read.
Changes in version 1.1.2:
Changes in version 1.1.1:
Changes in version 1.3.8:
Changes in version 1.3.7:
Changes in version 1.3.6:
Support for Illumina EPIC array and related bugfixes
Added a section on whitelist and blacklist in the Preprocessing module
Changes in version 1.3.5:
Fixes with ggplot in 450K array QC plotting
Added an option ‘differential.report.sites’ to decrease runtime by skipping the differential site methylation section in the report
Changes in version 1.3.4:
Changes in version 1.3.3:
Changes in version 1.3.2:
Performance and memory improvements during loading, qc and preprocessing of large bisulfite-derived RnBSets
Added optional arguments to meth() and covg() allowing for partial retrieval of methylation and coverage information from RnBSets. Particularly makes sense for large, disk-based datasets.
Added option “disk.dump.bigff.finalizer” for setting the finalizer for temporary BigFfMat objects
Changes in version 1.3.1:
Added “remove.regions” function for RnBSet class
summarize.regions now summarizes for each sample individually in order to reduce memory consumption for large WGBS datasets
import.skip.object.check option for keeping the memory profile low while loading huge datasets
NA sites (filtering.missing.value.quantile) are now removed even if they were previously masked (filtering.low.coverage.masking)
Added “nsites” method for quickly extracting the number of sites for large RnBSet objects without having to retrieve the full methylation matrix
Added “hasCovg” method for quickly determining whether coverage information is present in large RnBSet objects without having to retrieve the full coverage matrix
Changes in version 1.7.6:
BUG FIXES
Changes in version 1.7.2:
Tests now performed with testthat
BUG FIXES
Changes in version 1.7.1:
NEW FEATURES
BUG FIXES
Changes in version 1.99.1:
Changes in version 1.99.0:
Changes in version 1.13.1:
Changes in version 1.13.0:
Changes in version 2.0.0:
Changes in version 1.3.20:
Changes in version 1.3.19:
BUG FIXED
Changes in version 1.3.18:
NEW FEATURE
BUG FIXED
Changes in version 1.3.17:
BUG FIXED
Changes in version 1.3.16:
BUG FIXED
Changes in version 1.3.15:
NEW FEATURES
Switching to S4 class
Evaluating the metabolomics workflow in the vignette
Changes in version 1.3.14:
NEW FEATURES
BUG FIXED
Changes in version 1.3.13:
NEW FEATURES
BUG FIXED
Changes in version 1.3.12:
NEW FEATURES
BUG FIXED
Changes in version 1.3.11:
NEW FEATURES
opls: for OPLS(-DA), vipVn and orthoVipVn are now computed as the VIP4,p and VIP4,o described in Galindo-Prieto et al (2014)
plot.opls: changes in palette: black/grey colors for diagnostics and other colors for scores
Changes in version 1.3.10:
BUG FIXED
Changes in version 1.3.9:
NEW FEATURES
Changes in version 1.3.8:
NEW FEATURES
Changes in version 1.3.7:
NEW FEATURES
Changes in version 1.3.6:
NEW FEATURES
strF: object size now displayed in Mb (instead of bytes); minor corrections to handle all matrices and data frames (whatever the dimensions, mode, row and column names)
unit tests: new tests added to increase test coverage
Changes in version 1.3.5:
NEW FEATURES
Changes in version 1.3.4:
NEW FEATURES
Changes in version 1.3.3:
NEW FEATURES
Changes in version 1.3.2:
NEW FEATURES
Changes in version 1.3.1:
NEW FEATURES
opls: now takes either a (numeric) data frame or matrix as ‘x’ input (instead of matrix only)
predict: now takes either a (numeric) data frame or matrix as ‘newdata’ input (instead of matrix only)
Changes in version 1.0.0:
Added log fold change in output.
Added default plotting for ROTS class objects
Added support for paired tests
Updated vignette
Bug fixes
Modified for Bioconductor release
Changes in version 1.27.41 (2016-04-14):
Changes in version 1.6:
NEW FEATURES
USER VISIBLE CHANGES
Changes in version 1.23:
NEW FEATURES
filterBam can filter one source file into multiple destinations by providing a vector of destination files and a list of FilterRules.
phred2ASCIIOffset() helps translate PHRED encodings (integer or character) to ASCII offsets for use in pileup()
BUG FIXES
scanBam() fails early when param seqlevels not present in file.
Rsamtools.mk for Windows avoids spaces in file paths
Changes in version 1.22.0:
NEW FEATURES
featureCounts() is able to count reads of up to 250kb long.
A parameter juncCounts is added to the featureCounts() function to report counts for exon-exon junctions in RNA-seq data.
A parameter nonSplitOnly is added to the featureCounts() function to count non-split alignments only.
Improved parsing of gzipped fastq files in align() and subjunc() aligners.
Improved screen output and error reporting for align(), subjunc() and featureCounts().
Changes in version 1.5:
When data are not integer, only a warning is thrown and not an error.
Added an example on how to use RUV with DESeq2.
Added volume and pages to citation.
Changes in version 0.10.0:
NEW FEATURES
Add SelfHits class, a subclass of Hits for representing objects where the left and right nodes are identical.
Add utilities isSelfHit() and isRedundantHit() to operate on SelfHits objects.
Add new Pairs class that couples two parallel vectors.
head() and tail() now work on a DataTable object and behave like on an ordinary matrix.
Add as.matrix.Vector().
Add “append” methods for Rle/vector (they promote to Rle).
SIGNIFICANT USER-VISIBLE CHANGES
Transposition of Hits objects now propagates the metadata columns.
Rename elementLengths() -> elementNROWS() (the old name was clearly a misnomer). For backward compatibility the old name still works but is deprecated (now it’s just an “alias” for elementNROWS()).
Rename compare() -> pcompare(). For backward compatibility the old name still works but is just an “alias” for pcompare() and is deprecated.
Some refactoring of the Rle() generic and methods: remove the ellipsis from the argument list of the generic; dispatch on 'values' only; the 'values' and 'lengths' arguments now have explicit default values logical(0) and integer(0), respectively; methods no longer have a 'check' argument, but the new low-level (non-exported) constructor new_Rle() does and should now be used by code that needs this feature.
Optimize subsetting of an Rle object by an Rle subscript: the subscript is no longer decoded (i.e. expanded into an ordinary vector). This reduces memory usage and makes the subsetting much faster, e.g. it can be 100x faster or more if the subscript has many (e.g. thousands of) long runs.
Modify “replaceROWS” methods so that the replaced elements in ‘x’ get their metadata columns from ‘value’. See this thread on bioc-devel:
Remove ellipsis from the argument list of the “head” and “tail” methods for Vector objects.
pc() (parallel combine) now returns a List object only if one of the supplied objects is a List object, otherwise it returns an ordinary list.
The “as.data.frame” method for Vector objects now forwards the ‘row.names’ argument.
DEPRECATED AND DEFUNCT
Deprecate elementLengths() in favor of elementNROWS(). The new name reflects the true semantics.
Deprecate compare() in favor of pcompare().
After being deprecated in BioC 3.2, the “ifelse” methods for Rle objects are now defunct.
Remove “aggregate” method for vector objects which was an undocumented bad idea from the start.
BUG FIXES
Fix and improve the elementMetadata/mcols setter method for Vector objects so that the specific methods for GenomicRanges, GAlignments, and GAlignmentPairs objects are not needed anymore and were removed. Note that this change also fixes setting the elementMetadata/mcols of a SummarizedExperiment object with NULL or an ordinary data frame, which was broken until now.
Fix bug in match,ANY,Rle method when supplied ‘nomatch’ is not NA.
Fix findMatches() for Rle table.
Changes in version 0.99.3 (2016-02-29):
Package added to Bioconductor
Bioc-submission branch merged with master
Changes in version 0.99.2 (2016-02-21):
Changes in version 1.11.19-1.11.22:
utilizes the official C API R_GetConnection() to accelerate text import and export, requiring R (>= v3.3.0); an alternative version (backward compatible with R v2.15.0) is also available on GitHub
~4x speedup in the sequential version of seqVCF2GDS(), and seqVCF2GDS() can run in parallel
variables in “annotation/format/” should be two-dimensional, as mentioned in the vignette.
Changes in version 1.11.0-1.11.18:
rewrite seqSummary()
a new vignette file with Rmarkdown format (replacing SeqArray-JSM2013.pdf)
bug fix in seqBED2GDS() if the total number of genotypes > 2^31 (integer overflow)
bug fixes in seqMerge() if chromosome and positions are not unique
seqStorage.Option() is renamed to seqStorageOption()
new function seqDigest()
seqVCF.Header() is renamed to seqVCF_Header(), seqVCF.SampID() is renamed to seqVCF_SampID()
seqSetFilter(): ‘samp.sel’ is deprecated since v1.11.12, please use ‘sample.sel’ instead
accelerate reading genotypes with SSE2(+13%) and AVX2(+23%)
new function seqSystem()
allow “$dosage” in seqGetData() and seqApply() for the dosages of the reference allele
accelerate seqSetFilterChrom() and allow a selection with multiple regions
new methods \S4method{seqSetFilter}{SeqVarGDSClass, GRanges}() and \S4method{seqSetFilter}{SeqVarGDSClass, GRangesList}()
‘as.is’ in seqApply() allows a ‘connection’ object (created by file, gzfile, etc.)
seqSummary(f, "genotype")$seldim returns a vector with 3 integers (ploidy, # of selected samples, # of selected variants) instead of 2 integers
Changes in version 1.10.0-1.10.6:
the version number was bumped for the Bioconductor release version 3.2
fix a memory issue in seqAlleleFreq() when ‘ref.allele’ is a vector
seqSetFilter() allows numeric vectors in ‘samp.sel’ and ‘variant.sel’
seqSummary() returns ploidy and reference
seqStorage.Option() controls the compression level of FORMAT/DATA
seqVCF2GDS() allows extracting part of VCF files via ‘start’ and ‘count’
seqMerge() combines multiple GDS files with the same samples
export methods for compatibility with VariantAnnotation
a new argument ‘.useraw’ in seqGetFilter()
a new argument ‘allow.duplicate’ in seqOpen()
fix a bug in seqParallel() and optimize its performance
‘gdsfile’ could be NULL in seqParallel()
Changes in version 1.9.11:
Add Firth test option to regression
Bug fix for refFracPlot: hets significantly different from 0.5 plotted as triangles, median line shown
Changes in version 1.9.10:
Changes in version 1.9.9:
Changes in version 1.9.8:
Changes in version 1.9.4:
Changes in version 1.9.2:
Changes in version 1.1.16:
NEW FEATURES
Full support for API V2, user-friendly call from R
CWL Draft 2+ generator in R, create JSON/YAML tool directly
5 Vignettes added for comprehensive tutorials and reference
Three examples included under inst/docker for cwl app examples
Auth configuration file to maintain multiple platforms and user accounts
Works for multiple Seven Bridges supported platforms
More features like task hook function to ease the automation
Changes in version 1.0.0:
Initial version
All the APIs of the SBG platform are supported
First vignette added
Changes in version 1.6.0:
New vignette
Added predictVariantEffects() for predicting the effect of a splice variant on annotated protein-coding transcripts
Changes in the SGVariants and SGVariantCounts class. Instances created with previous versions of SGSeq have to be updated.
Replaced functions for accessing assay data with two generic functions counts() and FPKM()
Support BamFileLists in sample info
Changed behavior of the annotate() function when assigning gene names
Changed behavior of the min_denominator argument in analyzeVariants() and getSGVariantCounts(). The specified minimum count now has to be achieved at either the start or end of the event.
Adjacent exons no longer cause a warning in convertToTxFeatures()
Deprecated legacy classes TxVariants, TxVariantCounts
Bug fixes and other improvements
Changes in version 2.4:
Changed main plots to use -log10(p) as the x-axis
Added nolegend parameter to sigCheckPlot
Added title parameter to sigCheckPlot
Updated nkiResults data object.
Section added to vignette explaining how to get signatures from MSigDB.
Changes in version 1.3.1:
Changes in version 1.6.0:
Changes in version 1.5.0-1.5.2:
fix an issue in snpgdsVCF2GDS() if sample.id has white space
bug fix in snpgdsPCASampLoading() when the input is a SeqArray GDS file
improve snpgdsGetGeno()
Changes in version 1.5.10-13:
USER INVISIBLE CHANGES
added specLSet class support for the cdsw method
changed Rmd5 vignette style
added cdsw test case
introduced a new vignette for the cdsw method
Changes in version 1.5.9:
USER INVISIBLE CHANGES
Changes in version 1.5.5:
USER VISIBLE CHANGES
Changes in version 1.5.4:
USER INVISIBLE CHANGES
Changes in version 1.5.3:
USER INVISIBLE CHANGES
Changes in version 1.5.2:
USER INVISIBLE CHANGES
Find all signals having two or more in-silico fragment ions. Keep only the nearest fragment ion; if there are several, take the first in line.
Changes in version 1.15.0:
Changed signature for functions dnaRanges and write.annDNA
Added further explanations for calculations on HBond in vignette
BUG FIXES
Changes in version 1.2.0:
NEW FEATURES
Add ‘rowData’ argument to SummarizedExperiment() constructor. This allows the user to supply the row data at construction time.
The SummarizedExperiment() constructor function and the assay() setter now both take any matrix-like object as long as the resulting SummarizedExperiment object is valid.
Support r/cbind’ing of SummarizedExperiment objects with assays of arbitrary dimensions (based on a patch by Pete Hickey).
Add “is.unsorted” method for RangedSummarizedExperiment objects.
NULL colnames() supported during SummarizedExperiment construction.
readKallisto() warns early when files need names.
base::rank() gained a new ‘ties.method=”last”’ option and base::order() a new argument (‘method’) in R 3.3. Thus so do the “rank” and “order” methods for RangedSummarizedExperiment objects.
SIGNIFICANT USER-VISIBLE CHANGES
Re-introduce the rowData() accessor (was defunct in BioC 3.2) as an alias for mcols() and make it the preferred way to access the row data. There is now a pleasant symmetry between rowData and colData.
Rename SummarizedExperiment0 class -> SummarizedExperiment.
Improved vignette.
Remove updateObject() method for “old” SummarizedExperiment objects.
DEPRECATED AND DEFUNCT
BUG FIXES
Fix bug in “sort” method for RangedSummarizedExperiment objects when ‘ignore.strand=TRUE’ (the argument was ignored).
Fix 2 bugs when r/cbind’ing SummarizedExperiment objects:
- r/cbind’ing assays without names would return only the first element.
- r/cbind’ing assays with names in different order would stop() with ‘Assays must have the same names()’
Fix validity method for SummarizedExperiment objects reporting incorrect numbers when the nb of cols in assay(x) doesn’t match the nb of rows in colData(x).
assay colnames() must agree with colData rownames()
Fix bug where assays(se, withDimnames=TRUE) was dropping the dimnames of the 3rd and higher-order dimensions of the assays. Thanks to Pete Hickey for catching this and providing a patch.
A couple of minor tweaks to the rowData() setter to make it behave consistently with mcols()/elementMetadata() setters for Vector objects in general.
Changes in version 1.1.17:
DOCUMENTATION
Changes in version 1.1.16:
BUG FIXES
Changes in version 1.1.15:
DOCUMENTATION
Changes in version 1.1.14:
DOCUMENTATION
import_data: minor changes in documentation
defining several functions at the beginning of function script
Changes in version 1.1.13:
DOCUMENTATION
assess_fdr_byrun, assess_fdr_overall, filter_mscore_fdr: add sentence to manual page about FFT
vignettes: Add/change title
BUG FIXES
Changes in version 1.1.12:
NEW FEATURES
sample_annotation: remove option column.runid.
assess_fdr_byrun and assess_fdr_overall: add option to set range of plotting with n.range
DOCUMENTATION
Changes in version 1.1.11:
NEW FEATURES
Changes in version 1.1.10:
BUG FIXES
Changes in version 1.1.9:
DOCUMENTATION
DEPRECATED AND DEFUNCT
BUG FIXES
Changes in version 1.1.8:
BUG FIXES
Changes in version 1.1.7:
NEW FEATURES
added functions: count_analytes, plot_correlation_between_samples, plot_variation, plot_variation_vs_total, transform_MSstats_OpenSWATH.
filter_mscore_requant renamed to filter_mscore_freqobs
BUG FIXES
sample_annotation: Bug fix: upon error it reported conditions instead of filenames.
assess_fdr_overall: Correction of transition level column name from “id” to “transition_group_id”
assess_fdr_byrun: Correction of transition level column name from “id” to “transition_group_id”
plot.fdr_cube: added na.rm=TRUE to plotting functions
Changes in version 1.1.6:
BUG FIXES
Changes in version 1.1.5:
BUG FIXES
Changes in version 1.1.4:
BUG FIXES
Changes in version 1.1.1:
NEW FEATURES
Improved the function disaggregate() so that data with a number of transitions per precursor other than 6 can also be used
Added tests for disaggregate.R and convert4pythonscript.R
Changes in version 1.1.0:
NEW FEATURES
Changes in version 1.13.1:
Update call to nQuants to accommodate changes in MSnbase
Defunct synapterGUI <2016-02-29 Mon>
Changes in version 1.13.0:
Changes in version 1.1.6:
CODE
Changes in summaryIntervals in order to allow the exploration at pool levels, useful when a targeted sequencing involving several PCR pools was performed.
plotAttrPerform method was added. This function produces a ggplot graph illustrating relative and cumulative frequencies of features in attribute intervals. If the panel has several pools, then the graph shows the mentioned results for each pool.
Changes in version 1.1.2:
CODE
Changes in buildFeaturePanel in order to reduce the run time. Now pileupCounts is not called, thus the pileup matrix is not built. Instead, coverage and other Rsamtools and IRanges methods are used.
readPercentages and plotInOutFeatures methods were added in order to explore experiment efficiency.
biasExploration and plotMetaDataExpl were added in order to explore bias sources. The first allows exploration of GC content, feature length, or other source distributions. The second implements a plot in which the attribute distribution for each bias source quartile or group is explored.
VIGNETTE
Changes in version 1.1.1:
CODE
VIGNETTE
Changes in version 1.1.0:
DOCUMENTATION
Changes in version 1.1.26:
Changes in version 1.1.25:
Changes in version 1.1.24:
Changes in version 1.1.23:
Changes in version 1.1.22:
Changes in version 1.1.21:
Changes in version 1.1.20:
Changes in version 1.1.19:
Changes in version 1.1.18:
Changes in version 1.1.17:
Adding batch information to the package. The batch.info object is available to the user, and TCGAprepare automatically adds info to the SummarizedExperiment object
TCGAvisualize_starburst new parameter: circle, to draw or not the circles in the plot
Database update
Changes in version 1.1.16:
Bug fix: subsetByOverlaps was removed from the SummarizedExperiment package; TCGAbiolinks should not import it
Small fixes in documentation
TCGAanalyze_DMR is now saving the results in a csv file
Changes in version 1.1.15:
A ggplot2 update broke the package. Some small fixes were made, but the function TCGAvisualize_profilePlot is not working, as sjPlot is not updated yet.
small fixes in documentation
Changes in version 1.1.14:
Changes in version 1.1.11:
Changes in version 1.1.10:
TCGAPrepare: bug fix for bt.exon_quantification files from IlluminaHiSeq_RNASeqV2 platform
Database update TCGAbiolinks 1.1.8
TCGAvisualize_Heatmap Now it is using the Heatmap plus package and is calculating z-scores
TCGAvisualize_profilePlot Visualize the distribution of subgroups in the groups distributions
Database update
From version 1.0: small bug corrections in some plots and TCGAprepare_elmer, documentation improvement.
TCGAbiolinks 0.99.2 FIRST VERSION - FEATURES
TCGAanalyze_DEA Differentially expression analysis (DEA) using edgeR package.
TCGAanalyze_DMR Differentially methylated regions Analysis
TCGAanalyze_EA Enrichment analysis of a gene-set with GO [BP,MF,CC] and pathways.
TCGAanalyze_EAcomplete Enrichment analysis for Gene Ontology (GO) [BP,MF,CC] and Pathways
TCGAanalyze_Filtering Filtering mRNA transcripts and miRNA selecting a threshold.
TCGAanalyze_LevelTab Adding information related to DEGs genes from DEA as mean values in two conditions.
TCGAanalyze_Normalization normalization mRNA transcripts and miRNA using EDASeq package.
TCGAanalyze_Preprocessing Array Array Intensity correlation (AAIC) and correlation boxplot to define outliers
TCGAanalyze_survival Creates survival analysis
TCGAanalyze_SurvivalKM survival analysis (SA) univariate with Kaplan-Meier (KM) method.
TCGAbiolinks Download data of samples from TCGA
TCGAdownload Download the data from TCGA using as reference the output from TCGAquery
TCGAintegrate Filtering common samples among platforms from TCGAquery for the same tumor
TCGAinvestigate Find most studied TF in pubmed related to a specific cancer, disease, or tissue
TCGAprepare Read the data from level 3 the experiments and prepare it for downstream analysis into a SummarizedExperiment object.
TCGAquery Searches TCGA open-access data, providing also the latest version of the files.
TCGAquery_clinic Get the clinical information
TCGAquery_clinicFilt Filter samples using clinical data
TCGAquery_MatchedCoupledSampleTypes Retrieve multiple tissue types from the same patients.
TCGAquery_samplesfilter Filtering sample output from TCGAquery
TCGAquery_SampleTypes Retrieve multiple tissue types not from the same patients.
TCGAquery_Version Shows a summary (version, date, number of samples, size of the data) of all versions of data for a given tumor and platform.
TCGAsocial Finds the number of downloads of a package on CRAN or BIOC and finds questions on websites (“bioconductor.org”, “biostars.org”, “stackoverflow”).
TCGAvisualize_EAbarplot barPlot for a complete Enrichment Analysis
TCGAvisualize_meanMethylation Mean methylation boxplot
TCGAvisualize_PCA Principal components analysis (PCA) plot
TCGAvisualize_starburst Create starburst plot
TCGAvisualize_SurvivalCoxNET Survival analysis with univariate Cox regression package (dnet)
Changes in version 3.11.1:
Changes in version 3.3:
BUG FIXES
Adapt the runMEME to work with meme 4.10.x version.
Fix the scientific notation in run_MEME
Better error handling of the MEME wrapper
Changes in version 1.9.4:
NEW FEATURES
Add conversion from IUPAC string to matrix
toPWM, toICM work on PFMatrixList
BUG FIXES
Fix the seqLogo error when the frequency of each base is the same at a certain site. Thanks to Liz.
Fix the database interface to deal with the duplicated/missing records for certain TFBS in JASPAR2016.
Fix coercion method failure in certain cases.
Changes in version 1.24.2:
BUG FIXES
Changes in version 1.24.1:
BUG FIXES
Changes in version 1.7.9:
Changes in version 1.7.8:
fix the typo in documentation.
update the lollipop plot.
Changes in version 1.7.7:
Changes in version 1.7.6:
Changes in version 1.7.5:
Changes in version 1.7.4:
Changes in version 1.7.3:
Changes in version 1.7.2:
Changes in version 1.7.1:
adjust the fontsize for optimizing styles with theme.
add gene symbols if possible for geneModelFromTxdb.
Changes in version 1.15.1:
BUG FIXES
Changes in version 2.4.0:
Changes in version 0.99.0:
Changes in version 0.0.19:
Added summarizeToGene(), which breaks out the gene-level summary step, so it can be run by users on lists of transcript-level matrices produced by tximport with txOut=TRUE.
Changes in version 0.0.18:
Renamed the argument gene2tx to tx2gene. This order is more intuitive: linking transcripts to genes, and matches the geneMap argument of Salmon and Sailfish.
Changes in version 0.99.15 (2016-05-03):
Minor Update of statistics
Changes in version 0.99.14 (2016-05-03):
Major Update of statistics
Changes in version 0.99.13 (2016-04-08):
Fixed unit tests
Changes in version 0.99.12 (2016-04-08):
Fixed unit tests
Changes in version 0.99.11 (2016-04-01):
Fixed unit tests
Changes in version 0.99.10 (2016-03-30):
Added Unit tests
Changes in version 0.99.9 (2016-03-29):
Improved documentation
Changes in version 0.99.8 (2016-03-25):
Fixed Bug
Changes in version 0.99.7 (2016-03-24):
Fixed Bug
Changes in version 0.99.6 (2016-03-23):
Fixed Bug
Changes in version 0.99.5 (2016-03-22):
Fixed Bug
Changes in version 0.99.4 (2016-03-16):
Added features
Changes in version 0.99.3 (2016-03-02):
Fixed bugs
Changes in version 0.99.2 (2016-03-02):
Fixed bugs
Changes in version 0.99.1 (2016-03-02):
Fixed bugs
Fixed NEWS section
Fixed unit testing
Fixed the Vignette: replaced the github-dependent installation with the biocLite installation
Changes in version 0.99.0 (2015-01-27):
New features
Identify cancer cell lines
Show which cancer cell line are contained
Show which mutations are annotated for selected cancer cell lines
Show which mutations are overall included
Parse cancer cell line custom data -> add your own samples and identify these
Changes in version 1.1.7:
Changes in version 1.1.6:
Move packages from Depends to Imports
For clarity, replace = with <- in parts of examples and vignette
Stop cluster in examples to solve error on Windows machines
Changes in version 1.1.5:
Changes in version 1.1.4:
Changes in version 1.1.3:
Add details to vignette
Fix ggplot2 compatibility issues
Changes in version 1.1.2:
Changes in version 1.1.1:
add plotPercentBars() to visualize variance fractions for a subset of genes
add ESS() to compute effective sample size
fix x.labels argument in plotStratifyBy(). Previously, this argument was not used correctly
Changes in version 1.20.0:
NEW FEATURES
add SnpMatrixToVCF()
add patch from Stephanie Gogarten to support ‘PL’ in genotypeToSnpMatrix()
MODIFICATIONS
move getSeq,XStringSet-method from VariantAnnotation to BSgenome
update filterVcf vignette
remove ‘pivot’ export
work on readVcf():
- 5X speedup for readVcf (at least in one case) by not using “==” to compare a list to a character (the list gets coerced to character, which is expensive for huge VCFs)
- avoiding relist.list()
update summarizeVariants() to comply with new SummarizedExperiment rownames requirement
defunct VRangesScanVcfParam() and restrictToSNV()
use elementNROWS() instead of elementLengths()
togroup(x) now only works on a ManyToOneGrouping object so replace togroup(x, …) calls with togroup(PartitioningByWidth(x), …) when ‘x’ is a list-like object that is not a ManyToOneGrouping object.
drop validity assertion that altDepth must be NA when alt is NA; there are VCFs in the wild that use e.g. “*” for alt, but include depth
export PLtoGP()
VariantAnnotation 100% RangedData-free
BUG FIXES
Changes in version 1.18.0:
MODIFICATIONS
defunct VRangesScanVcfParam()
defunct restrictToSNV()
BUG FIXES
scanVcf,character,missing-method ignores blank data lines.
Build path for C code made robust on Windows.
Changes in version 1.8:
USER VISIBLE CHANGES
Changed human defaults of VariantFilteringParam() and updated documentation on MafDb packages to reflect newer package (shorter) names of the 1000 Genomes Project.
Sequence Ontology (SO) annotations have been updated to the latest version of the data available in April 2016.
Analysis methods (autosomalDominant(), etc.) now accept an argument called ‘svparam’ that takes an object produced by the ‘ScanVcfParam()’ function from the VariantAnnotation package. This allows one to parametrize the way in which VCF files are read, for instance when one wishes to analyze a specific set of genomic ranges only.
Analysis methods (autosomalDominant(), etc.) now accept an argument called ‘use’ that allows the user to select among three simple strategies to handle missing genotypes. See the corresponding help page for further information.
No messages from the AnnotationDbi::select() method are given anymore about the 1:1, 1:many or many:1 results obtained when fetching annotations.
BUG FIXES
Several bug fixes, including dealing with transcript-centric annotations anchored at Ensembl gene identifiers, avoiding querying for an OMIM column when working with organisms other than human, and correctly identifying unaffected individuals in the autosomal recessive heterozygous analysis.
Added methods to deal with the new ExAC MafDb packages that enable the user to query allele frequencies by position.
Changes in version 2.5.0:
Fixed a bug in plotSubstitutions: the transition type of interest was not correctly highlighted in some instances. Thanks to Charlotte Sonenson for pointing this out.
Vignette migrated to Rmarkdown.
Changes in version 1.47.3:
Changes in version 1.47.2:
BUG FIXES
Fix problem in getEIC on xcmsSet objects reported by Alan Smith in issue #7 and add a RUnit test case to test for this (test.issue7 in runit.getEIC.R).
Changed some unnecessary warnings into messages.
USER VISIBLE CHANGES
No packages were removed from the release.
17 packages were marked as deprecated, to be removed in the next release.
One package, sbgr, was renamed to sevenbridges.
Deprecated packages: | http://bioconductor.org/news/bioc_3_3_release/ | CC-MAIN-2018-39 | refinedweb | 23,278 | 50.63 |
fsync man page
fsync, fdatasync — synchronize a file's in-core state with storage device
Synopsis
#include <unistd.h>
int fsync(int fd);
int fdatasync(int fd);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
fsync():
glibc 2.16 and later: no feature test macros need be defined
glibc up to and including 2.15: _BSD_SOURCE || _XOPEN_SOURCE >= 500 || /* since glibc 2.8: */ _POSIX_C_SOURCE >= 200112L
fdatasync():
_POSIX_C_SOURCE >= 199309L || _XOPEN_SOURCE >= 500
Conforming to
POSIX.1-2001, POSIX.1-2008, 4.3BSD.
Availability
On POSIX systems on which fdatasync() is available, _POSIX_SYNCHRONIZED_IO is defined in <unistd.h> to a value greater than 0. (See also sysconf(3).)
Notes
The fsync() implementations in older kernels and lesser used filesystems do not know how to flush disk caches. In these cases disk caches need to be disabled using hdparm(8) or sdparm(8) to guarantee safe operation.
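In Python, the same guarantee is available through os.fsync(), which calls fsync(2) on a file descriptor (os.fdatasync() wraps fdatasync(2) on POSIX systems). A minimal durability sketch; the file name is arbitrary:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "journal.log")

# Append a record and force it to stable storage before continuing.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
try:
    os.write(fd, b"txn 42 committed\n")
    os.fsync(fd)  # returns only after data and metadata reach the device
finally:
    os.close(fd)

# After creating or renaming a file, the directory entry itself is made
# durable by fsync()ing a descriptor for the containing directory.
dirfd = os.open(tempfile.gettempdir(), os.O_RDONLY)
try:
    os.fsync(dirfd)
finally:
    os.close(dirfd)
```

As the Notes above warn, a successful return still does not guarantee durability on drives whose write caches cannot be flushed.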
See Also
sync(1), bdflush(2), open(2), posix_fadvise(2), pwritev(2), sync(2), sync_file_range(2), fflush(3), fileno(3), hdparm(8), mount(8)
Colophon
This page is part of release 5.01 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
aio(7), aio_error(3), aio_fsync(3), aio_return(3), bdflush(2), beanstalkd(1), close(2), cups-files.conf(5), explain(1), explain(3), explain_fsync(3), explain_fsync_or_die(3), fclose(3), fflush(3), fio(1), guestfish(1), guestfs(3), ioping(1), libssh2_sftp_fsync(3), mount(2), mount(8), mount.fuse(8), nbdkit-lua-plugin(3), nbdkit-perl-plugin(3), nbdkit-plugin(3), nbdkit-python-plugin(3), nbdkit-ruby-plugin(3), nbdkit-tcl-plugin(3), open(2), posix_fadvise(2), scpio(1), sftp(1), signal-safety(7), star(1), stress-ng(1), sync(1), sync(2), sync_file_range(2), syscalls(2), systemd.exec(5), wipe(1), write(2), xfs_io(8).
The man page fdatasync(2) is an alias of fsync(2). | https://www.mankier.com/2/fsync | CC-MAIN-2019-26 | refinedweb | 275 | 58.99 |
06 May 2008 10:42 [Source: ICIS news]
LONDON (ICIS news)--Borealis lifted first quarter operating profits by 7.9% to €137m ($211m) despite “challenging market conditions”, the Austria-based polyolefins producer said on Tuesday.
Net profit for the quarter was 15% higher year on year, on sales up 14.5% at €1.70bn.
The quarterly results were achieved by growth in infrastructure, automotive and advanced packaging polyolefins markets and the contribution from Borouge and the newly-formed base chemicals business, Borealis said.
Borouge is a joint venture with the Abu Dhabi National Oil Company (Adnoc).
Borealis said it had taken on more debt in the quarter with gearing rising to 36% from 30% in the first quarter of 2007.
"Our steadfast dedication to innovation, operational competitiveness, commercial excellence and safety combined with a strong focus on our key market segments continue to drive sales growth and bring us good results,” CEO Mark Garrett said commenting on the quarterly results.
Borealis’s largest European investment to date, a 350,000 tonne/year low-density polyethylene (LDPE) plant to serve the wire and cable market, is on track for completion in 2009, the company said in a statement.
In the first quarter it signed a memorandum of understanding with
It is also conducting a feasibility study for a further expansion of the Borouge joint venture, which would add approximately 2.5 million tonnes per year of polyolefins capacity by 2014.
Python Client Class Reference
This chapter describes how Caché classes and datatypes are mapped to Python code, and provides details on the classes and methods supported by the Caché Python binding. The following subjects are discussed:
Datatypes — how Caché datatypes, such as %Binary data, are mapped to Python.
Connections — methods used to create a physical connection to a namespace in a Caché database.
Database — methods used to open or create Caché objects, create queries, and run Caché class methods.
Objects — methods used to manipulate Caché objects by getting or setting properties, running object methods, and returning information about the objects.
Queries — methods used to run queries and fetch the results.
Times and Dates — methods used to access the Caché %Time, %Date, or %Timestamp datatypes.
Locale and Client Version — methods that provide access to Caché version information and Windows locale settings.
Datatypes
All Caché datatypes are supported. See the following sections for information on specific datatypes:
The Caché %Binary datatype corresponds to a Python list of integers. See Using %Binary Data for examples.
Collections such as %Library.ArrayOfDataTypes and %Library.ListOfDataTypes are handled through object methods of the Caché %Collection classes. See %Collection Objects for examples.
Caché %Library.List variables are mapped to Python lists. A list can contains strings, ordinary or unicode, integers, None, and doubles. See %List Variables for examples.
Caché %Time, %Date, and %Timestamp datatypes are supported by corresponding classes in the Python binding. See Times and Dates for a description of these classes.
Connections
Methods of the intersys.pythonbind.connection package create a physical connection to a namespace in a Caché database. A Connection object is used only to create a Database object, which is the logical connection that allows Python binding applications to manipulate Caché objects. See Connecting to the Caché Database for information on how to use the Connection methods.
Here is a complete listing of connection methods:
conn = intersys.pythonbind.connection() conn.connect_now(url,user,password, timeout)
See Connection Information later in this section for a detailed discussion of the parameters.
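For instance, a connection might be opened as follows; the server address and credentials are the defaults used by the sample programs, not requirements:

```python
import intersys.pythonbind

# Physical connection to the Samples namespace on a local server.
conn = intersys.pythonbind.connection()
conn.connect_now("localhost[1972]:Samples", "_SYSTEM", "SYS", None)

# The physical connection is only used to create the logical
# Database object through which objects are manipulated.
database = intersys.pythonbind.database(conn)
```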
conn = intersys.pythonbind.connection() conn.secure_connect_now(url, srv_principal, security_level, timeout)
Connection.secure_connect_now() returns the connection proxy that is used to get the proxy for the Caché namespace identified by url. This method takes the following parameters:
url — See Connection Information later in this section for a detailed description of the URL format.
srv_principal — A Kerberos "principal" is an identity that is represented in the Kerberos database, has a permanent secret key that is shared only with the Kerberos KDCs (key distribution centers), can be assigned credentials, and can participate in the Kerberos authentication protocol.
A "user principal" is associated with a person, and is used to authenticate to services which can then authorize the use of resources (for example, computer accounts or Caché services).
A "service principal" is associated with a service, and is used to authenticate user principals and can optionally authenticate itself to user principals.
A "service principal name" (such as srv_principal_name) is the string representation of the name of a service principal, conventionally of the form:
<service>/<instance>@<REALM>
For example:
cache/turbo.iscinternal.com@ISCINTERNAL.COM
On Windows, The KDCs are embedded in the domain controllers, and service principal names are associated with domain accounts.
See your system's Kerberos documentation for a detailed discussion of principals.
security_level — Sets the "Connection security level", which is an integer that indicates the client/server network security services that are requested or required. Security level can take the following values:
0 — None.
1 — Kerberos client/server mutual authentication, no protection for data.
2 — As 1, plus data source and content integrity protection.
3 — As 2, plus data encryption.
timeout — Number of seconds to wait before timing out.
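A Kerberos-authenticated connection requesting data integrity protection (security level 2) might therefore be opened like this; the service principal name is the illustrative one from the text above:

```python
import intersys.pythonbind

conn = intersys.pythonbind.connection()
conn.secure_connect_now(
    "localhost[1972]:Samples",                      # server and namespace
    "cache/turbo.iscinternal.com@ISCINTERNAL.COM",  # service principal name
    2,    # mutual authentication plus data source/content integrity
    10)   # seconds to wait before timing out
database = intersys.pythonbind.database(conn)
```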
Connection Information
The following information is needed to establish a connection to the Caché database:
URL — The URL specifies the server and namespace to be accessed as a string with the following format:
<address>[<port>]:<namespace>
For example, the sample programs use the following connection string:
"localhost[1972]:Samples"
The components of this string are:
<address> — The server TCP/IP address or Fully Qualified Domain Name (FQDN). The sample programs use "localhost" (127.0.0.1), assuming that both the server and the Python application are on the same machine.
<port> — The server TCP/IP port number for this connection. Together, the IP address and the port specify a unique Caché server.
<namespace> — The Caché namespace containing the objects to be used. This namespace must have the Caché system classes compiled, and must contain the objects you want to manipulate. The sample programs use objects from the SAMPLE namespace.
username — The username under which the connection is being made. The sample programs use "_SYSTEM", the default SQL System Manager username.
password — The password associated with the specified username. Sample programs use the default, "SYS".
Database
Database objects provide a logical connection to a Caché namespace. Methods of the intersys.pythonbind.Database package are used to open or create Caché objects, create queries, and run Caché class methods. Database objects are created by calling database = intersys.pythonbind.database(conn), where conn is a intersys.pythonbind.connection object. See Connecting to the Caché Database for more information on creating a Database object.
Here is a complete listing of Database methods:
obj = database.create_new(type, init_val)
Creates a new Caché object instance from the class named by type. Normally, init_val is None. See Objects for details on the objects created with this method.
obj = database.open(class_name, oid, concurrency, timeout, res)
Opens a Caché object instance using the class named by class_name and the oid of the object. The concurrency argument has a default value of -1. timeout is the ODBC query timeout.
obj = database.openid(class_name, id, concurrency, timeout)
Opens a Caché object instance using the class named by class_name and the id of the object. The concurrency argument has a default value of -1. timeout is the ODBC query timeout.
value = database.run_class_method(class_name, method_name, [LIST])
Runs the class method method_name, which is a member of the class_name class in the namespace that database is connected to. Arguments are passed in LIST. Some of these arguments may be passed by reference depending on the class definition in Caché. Return values correspond to the return values from the Caché method.
Objects
Methods of the intersys.pythonbind.object package provide access to a Caché object. An Object object is created by the intersys.pythonbind.database create_new() method (see Database for a detailed description). See Using Caché Object Methods for information on how to use the Object methods.
Here is a complete listing of Object methods:
value = object.get(prop_name)
Returns the value of property prop_name in Caché object object.
value = object.run_obj_method(method_name, [LIST])
Runs method method_name on Caché object object. Arguments are passed in LIST. Some of these arguments may be passed by reference depending on the class definition in Caché. Return values correspond to the return values from the Caché method.
object.set(prop_name, val)
Sets property prop_name in Caché object object to val.
Queries
Methods of the intersys.pythonbind.query package provide the ability to prepare a query, set parameters, execute the query, and and fetch the results. See Using Queries for information on how to use the Query methods.
A Query object is created as follows:
query = intersys.pythonbind.query(database)
Here is a complete listing of Query methods:
prepare query
query.prepare(string)
Prepares a query using the SQL string in string.
query.prepare_class(class_name, query_name)
Prepares a query in a class definition
set parameters
query.set_par(idx, val)
Assigns value val to parameter idx. The method can be called several times for the same parameter. The previous parameter value will be lost, and the new value can be of a different type. The set_par() method does not support by-reference parameters.
nullable = query.is_par_nullable(idx)
Returns 1 if parameter idx is nullable, else 0.
unbound = query.is_par_unbound(idx)
Returns 1 if parameter idx is unbound, else 0.
num = query.num_pars()
Returns number of parameters in query.
size = query.par_col_size(idx)
Returns size of parameter column.
num = query.par_num_dec_digits(idx)
Returns number of decimal digits in parameter.
type = query.par_sql_type(idx)
Returns sql type of parameter.
execute query
query.execute()
Generates a result set using any parameters defined by calls to set_par().
fetch results
data_row = query.fetch([None])
Retrieves a row of data from the result set and returns it as a list. When there is no more data to be fetched, it returns an empty list.
name = query.col_name(idx)
Returns name of column.
length = query.col_name_length(idx)
Returns length of column name.
sql_type = query.col_sql_type(idx)
Returns sql type of column.
num_cols = query.num_cols()
Returns number of columns in query.
Times and Dates
The PTIME_STRUCTPtr, PDATE_STRUCTPtr, and PTIMESTAMP_STRUCTPtr packages are used to manipulate Caché %TIME, %DATE, or %TIMESTAMP datatypes.
%TIME
Methods of the PTIME_STRUCTPtr package are used to manipulate the Caché %DATE data structure. Times are in hh:mm:ss format. For example, 5 minutes and 30 seconds after midnight would be formatted as 00:05:30. Here is a complete listing of Time methods:
time = PTIME_STRUCTPtr.new()
Create a new Time object.
hour = time.get_hour()
Return hour
minute = time.get_minute()
Return minute
second = time.get_second()
Return second
time.set_hour(hour)
Set hour (an integer between 0 and 23, where 0 is midnight).
time.set_minute(minute)
Set minute (an integer between 0 and 59).
time.set_second(second)
Set second (an integer between 0 and 59).
stringrep = time.toString()
Convert the time to a string: hh:mm:ss.
%DATE
Methods of the PDATE_STRUCTPtr package are used to manipulate the Caché %DATE data structure. Dates are in yyyy-mm-dd format. For example, December 24, 2003 would be formatted as 2003-12-24. Here is a complete listing of Date methods:
date = PDATE_STRUCTPtr.new()
Create a new Date object.
year = date.get_year()
Return year
month = date.get_month()
Return month
day = date.get_day()
Return day
date.set_year(year)
Set year (a four-digit integer).
date.set_month(month)
Set month (an integer between 1 and 12).
date.set_day(day)
Set day (an integer between 1 and the highest valid day of the month).
stringrep = date.toString()
Convert the date to a string: yyyy-mm-dd.
%TIMESTAMP
Methods of the PTIMESTAMP_STRUCTPtr package are used to manipulate the Caché %TIMESTAMP data structure. Timestamps are in yyyy-mm-dd<space>hh:mm:ss.fffffffff. format. For example, December 24, 2003, five minutes and 12.5 seconds after midnight, would be formatted as:
2003-12-24 00:05:12:500000000
Here is a complete listing of TimeStamp methods:
timestamp = PTIMESTAMP_STRUCTPtr.new()
Create a new Timestamp object.
year = timestamp.get_year()
Return year in yyyy format.
month = timestamp.get_month()
Return month in mm format.
day = timestamp.get_day()
Return day in dd format.
hour = timestamp.get_hour()
Return hour in hh format.
minute = timestamp.get_minute()
Return minute in mm format.
second = timestamp.get_second()
Return second in ss format.
fraction = timestamp.get_fraction()
Return fraction of a second in fffffffff format.
timestamp.set_year(year)
Set year (a four-digit integer).
timestamp.set_month(month)
Set month (an integer between 1 and 12).
timestamp.set_day(day)
Set day (an integer between 1 and the highest valid day of the month).
timestamp.set_hour(hour)
Set hour (an integer between 0 and 23, where 0 is midnight).
timestamp.set_minute(minute)
Set minute (an integer between 0 and 59).
timestamp.set_second(second)
Set second (an integer between 0 and 59).
timestamp.set_fraction(fraction)
Set fraction of a second (an integer of up to nine digits).
stringrep = timestamp.toString()
Convert the timestamp to a string yyyy-mm-dd hh:mm:ss.fffffffff.
Locale and Client Version
Methods of the intersys.pythonbind. default package provide access to Caché version information and Windows locale settings. Here is a complete listing of these methods:
clientver = intersys.pythonbind.get_client_version();
Identifies the version of Caché running on the Python client machine.
newlocale = intersys.pythonbind.setlocale(category, locale)
Sets the default locale and returns a locale string for the new locale. For example:
newlocale = intersys.pythonbind.setlocale(0, "Russian") # 0 stands for LC_ALL
would set all locale categories to Russian and return the following string:
Russian_Russia.1251
If the locale argument is an empty string, the current default locale string will be returned. For example, given the following code:
intersys.pythonbind.setlocale(0, "English") mylocale = intersys.pythonbind.setlocale(0, ""),"\n";
the value of mylocale would be:
English_United States.1252
For detailed information, including a list of valid category values, see the MSDN library () entry for the setlocale() function in the Visual C++ runtime library.
intersys.pythonbind.set_thread_locale(lcid)
Sets the locale id (LCID) for the calling thread. Applications that need to work with locales at runtime should call this method to ensure proper conversions.
For a listing of valid LCID values, see the "Locale ID (LCID) Chart" in the MSDN library (). The chart can be located by a search on "LCID Chart". It is currently located at:
For detailed information on locale settings, see the MSDN library entry for the SetThreadLocale() function, listed under "National Language Support". | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GBPY_CLASSES | CC-MAIN-2021-10 | refinedweb | 2,169 | 51.44 |
It's Hack Week again at SUSE. 🥳 An annual tradition where we all work on passion projects for a whole week. Some of us make music, others use the time to experiment with the latest technologies and start new Open Source projects.
My project this time was to see if there's a better way for us to do automated browser testing of our Mojolicious web applications. For a long time Selenium has been the de facto standard for browser automation, but these days there are more modern alternatives available, such as Playwright. But how good is Playwright really? Spoiler: It's very good. But keep reading to find out why and at what cost.
What is Playwright?
Playwright, just like Selenium before it, is a framework for browser automation. You can use it for all sorts of scripted interactions with websites, such as buying those Nvidia GPUs from online retailers faster than everyone else 😉, but it is most commonly used in test suites of web applications.
The code is being developed by Microsoft as an Open Source project with Apache 2.0 license and distributed as an NPM package. So all you need is Node, and you can install it with a one-liner.
$ npm i playwright
There are bindings for other languages, but to get the most out of Playwright you do want to be using JavaScript. Now when it comes to browser support, where Selenium would give you the choice to pick any WebDriver compatible browser as backend, Playwright will download custom builds of Chromium, Firefox and WebKit for you. And that's all you get.
They are doing it for pretty good reasons though. The browser binaries tend to work flawlessly on all supported platforms, which currently include Windows, macOS and Linux (x86). And when you are used to the sluggishness of Selenium, it almost seems magical how fast and reliable Playwright runs.
This is because where Selenium sticks to open protocols, Playwright will use every trick in the book for better performance. Including custom patches for those browsers, extending their DevTools protocols, and then using those protocols to control the browsers. I'm not a huge fan of the approach, but it's hard to argue with the results.
Short term there are huge benefits, but having to maintain these browser patches indefinitely, if they don't get merged upstream, might hamper the longevity of the project.
Using Playwright
import assert from 'assert/strict'; import { chromium } from 'playwright'; (async () => { const browser = await chromium.launch({ headless: false, slowMo: 50 }); const context = await browser.newContext(); const page = await context.newPage(); await page.goto(' await page.click('text=Documentation'); await page.click('text=Tutorial'); assert.equal(page.url(), ' await page.screenshot({ path: 'tutorial.png' }); await context.close(); await browser.close(); })();
If you've ever done web development before, the API will be very intuitive, and it was clearly designed with
async/await in mind, which i'm a huge fan of. You can have multiple isolated browser contexts, with their own cookies etc., and each context can have multiple pages.
Every interaction, such as
page.click(), will automatically wait for the element to become visible, with a timeout that defaults to 30 seconds. This is a huge step up from Selenium, where you have to build this logic yourself, and will get it wrong in many many entertaining ways. 😅
You can emulate devices such as iPhones, use geolocation, change timezones, choose between headless and headful mode for all browsers, and have the option to take screenshots or make video recordings at any time.
One of the latest features to be added was the GUI recorder, which opens a Chromium window, and then records all user interactions while generating JavaScript code as you go. I was a bit sceptical about this at first, but it can significantly speed up test development, since you don't have to think too much about CSS selectors anymore. Even if you just end up using parts of the generated code.
Playwright and Perl
Running Playwright against live websites is very straight forward. But for automated testing of web applications you also want your test scripts to start and stop the web server for you. And this is where things get a little bit tricky if your web application happens to be written in a language other than JavaScript.
use Mojolicious::Lite -signatures; get '/' => {template => 'index'}; app->start; __DATA__ @@ index.html.ep <!DOCTYPE html> <html> <body>Hello World!</body> </html>
What i needed to run my Perl app was a JavaScript superdaemon with support for socket activation. Unfortunately i've not been able to find a module for the job on NPM, and had to resort to writing my own. And now the Mojolicious organisation is not just on CPAN, but also on NPM. 😇
import assert from 'assert/strict'; import ServerStarter from '@mojolicious/server-starter'; import { chromium } from 'playwright'; (async () => { const server = await ServerStarter.newServer(); await server.launch('perl', ['test.pl', 'daemon', '-l', ' const browser = await chromium.launch(); const context = await browser.newContext(); const page = await context.newPage(); const url = server.url(); await page.goto(url); const body = await page.innerText('body'); assert.equal(body, 'Hello World!'); await context.close(); await browser.close(); await server.close(); })();
You might have noticed the odd listen location
That's a Mojolicious feature we've originally developed for
systemd deployment with .socket files. The superdaemon, in that case
systemd, would bind the listen socket very early during system startup, and then pass it to the service the
.socket file belongs to as file descriptor
3. This has many advantages, such as services being started as unprivileged users able to use privileged ports.
Anyway, our use case here is slightly different, but the same mechanism can be used. And by having the superdaemon activate the socket we can avoid multiple race conditions. The socket will be active before the web application process has even been spawned, meaning that
page.goto() can never get a connection refused error. Instead it will just be waiting for its connection to be accepted. And important for very large scale testing, with many tests running in parallel on the same machine, we can use random ports assigned to us by the operating system. Avoiding the possibility of conflicts as a result of bad timing.
Combining Everything
And for my final trick i will be using the excellent Node-Tap, allowing our JavaScript tests to use the Test Anything Protocol, which happens to be the standard used in the Perl world for testing.
#!/usr/bin/env node import t from 'tap'; import ServerStarter from '@mojolicious/server-starter'; import { chromium } from 'playwright'; t.test('Test the Hello World app', async t => { const server = await ServerStarter.newServer(); await server.launch('perl', ['test.pl', 'daemon', '-l', ' const browser = await chromium.launch(); const context = await browser.newContext(); const page = await context.newPage(); const url = server.url(); await page.goto(url); const body = await page.innerText('body'); t.equal(body, 'Hello World!'); await context.close(); await browser.close(); await server.close(); });
You might have noticed the shebang line
#!/usr/bin/env node. That's another little Perl trick. When the Perl interpreter encounters a shebang line that's not
perl it will re-exec the script. In this case with
node, and as a side effect we can use standard Perl testing tools like
prove to run our JavaScript tests right next to normal Perl tests.
$ prove t/*.t t/*.js t/just_a_perl_test.t ... ok t/test.js .. ok All tests successful. Files=3, Tests=4, 2 wallclock secs ( 0.03 usr 0.01 sys + 2.42 cusr 0.62 csys = 3.08 CPU) Result: PASS
In fact, you could even run multiple of these tests in parallel with
prove -j 9 t/*.js to scale up effortlessly. Playwright can handle parallel runs and will perform incredibly well in headless mode.
One More Thing
And if you've made it this far i've got one more thing for you. In the mojo-playwright repo on GitHub you can find a WebSocket chat application and mixed JavaScript/Perl tests that you can use for experimenting. It also contains solutions for how to set up test fixtures with wrapper scripts and how to run them in GitHub Actions. Have fun!
Discussion (3)
Awesome work, and very useful remarks, congrats Sebastian!
Combination with Mojolicious and JavaScript!
Impressive! 👍 | https://dev.to/kraih/playwright-and-mojolicious-21hn | CC-MAIN-2022-21 | refinedweb | 1,395 | 67.55 |
Display installed pip packages and their update status..
Project description
pip-check
pip-check gives you a quick overview of all installed packages and their update status. Under the hood it calls pip list --outdated --format=columns and transforms it into a more user friendly table.
Installation:
pip install pip-check
The last version that runs on Python 2.7 is v2.5.2. Install it with:
pip install pip-check==2.5.2
Usage:
$ pip-check -h usage: pip-check [-h] [-a] [-c PIP_CMD] [-l] [-r] [-f] [-H] [-u] [-U] A quick overview of all installed packages and their update status. optional arguments: -h, --help show this help message and exit -a, --ascii Display as ASCII Table -c PIP_CMD, --cmd PIP_CMD The pip executable to run. Default: `pip` -l, --local Show only virtualenv installed packages. -r, --not-required List only packages that are not dependencies of installed packages. -f, --full-version Show full version strings. -H, --hide-unchanged Do not show "unchanged" packages. -u, --show-update Show update instructions for updatable packages. -U, --user Show only user installed packages.
Testing:
Test against a variation of Python versions:
$ pip install tox tox-pyenv $ tox
Test against your current Python version:
$ python setup.py test
Recommeded Similar Tools
Changelog
v2.6 (201-12-12):
- Requires Python 3.5 or higher.
- Command error is shown if pip exits with a status code 1 (or larger).
- Error message is shown if pip is not able to load packages in case of network problems.
- Update instructions will now add --user in case the pip-check command should only show user packages as well.
v2.5.2 (2019-08-08):
- This is the last version that runs on Python 2.7. Install it with pip install pip-check==2.5.2
- Windows color fixes.
v2.5.1 (2019-08-08):
- Windows script fixes.
v2.5 (2019-08-08):
- A more robust installation that installs pip-check as a proper console script.
- Added new --disable-colors argument.
- Added tests for Python 3.7 and 3.8.
- Fixed Syntax warning happening with no outdated packages.
- Cleanup of the entire codebase.
v2.4 (2019-07-23):
- Added support to only show packages from the user or local package namespace.
v2.3.3 (2018-02-19):
- Visual fixes around --show-update
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pip-check/2.6/ | CC-MAIN-2021-25 | refinedweb | 409 | 69.28 |
PYTHON
An Introduction To Asynchronous Programming In Python
Overview
Asynchronous programming is a software paradigm which involves scheduling many small tasks that are invoked when events occurs. It is also known as event-driven programming. It is an alternative (although it can also be a complement) to both multi-threading and multiprocessing. Asynchronous programming is well suited to tasks which are IO bound and not CPU bound. It is well suited for IO bound applications because it allows other tasks to occurs while one task is blocked, waiting on some external process to complete. Because control is only given up explicitly with the
await keyword, you do not have to worry about common multi-threading issues such as data contention. It is not well suited to CPU bound applications because it does not make use of multiple cores/CPUs.
Two core parts of Pythons asynchronous capabilities a provided through the
await and
async keywords. The rest of the functionality is largely supplied by the
asyncio library. The
asyncio library provides event loops and functions to create, run and await tasks. Event loops are the “runners” of asynchronous functions. They keep track of all the coroutines which are currently blocked waiting for an event, and continue these coroutines from where they left off once the event occurs.
When you wait for an event with the
await keyword, Python can save the state of the function (i.e. the value of all the local variables, and the point of execution), and return to the active event loop. In the active event loop, the application can respond to other events while it is waiting. Once the specific event you waited on occurs, Python restores the state of the function and returns execution to that exact point is was saved at (they are very similar to Python generators).
Python’s style of asynchronous programming goes a long way to prevent call-back hell. Call-back hell was a common problem in Javascript (and many other languages) before the use of
futures and
promises became popular. It occurred because the only way to perform asynchronous programming was to provide callbacks (lambda functions). These nested within each other, broke the flow of the code, and severely hindered the readability of the software.
However smart are flexible asynchronous programming may be, synchronous programming is still the bread-and-butter of the Python language. Unfortunately, the two don’t mix that well (you can’t
await a synchronous function — and forgetting to await an asynchronous function will just return a
coroutine object). You can think of synchronous Python and asynchronous Python as two separate programming styles, and most of your libraries have to be specifically designed to work with the style you are using.
What Is A Coroutine?
A coroutine is a Python function that has the keyword
await before the
def, e.g.:
async def my_coroutine(): print('Hello')
Calling a coroutine normally won’t actually do what you expect!
my_coroutine() # "Nothing" happens!
It would be wrong to say that nothing at all happens. Instead of calling the function,
my_coroutine() creates and returns a
coroutine object. This
coroutine object can be waited on with:
await my_coroutine() # This time, 'Hello' will be printed
But please remember,
await can only be called within a asynchronous function. So in reality, the call would have to look something like this:
async def my_coroutine_1(): print('Hello') async def my_coroutine_2(): await my_coroutine_1() # This time, 'Hello' will be printed
So now you a probably thinking, since the parent function, and the parent’s parent function, and the parent’s parent’s parent function all have to defined with
async to be able to use
await…where does it stop? What if my
main() is not
async? And even if that was, how would I call it? This is where the
asyncio library comes into play.
So actually, I lied, you can actually call an
async function from a non-
async function, but you have to use
asyncio to do so. The simplest way is to use
asyncio.run(), which takes a coroutine, runs it in a new event loop, and then returns.
import asyncio async def my_coroutine(): await asyncio.sleep(1) print('Hello after 1s') def main(): # NOTE: main() is not async! # We can call an async function from a non-async function by using the asyncio library asyncio.run(my_coroutine) if __name__ == '__main__': main()
If you forget to await all coroutines, Python will print the warning:
main.py:6: RuntimeWarning: coroutine 'my_coroutine' was never awaited
Before Python v3.5
Before Python v3.5, the
async keyword is not available. You can however use a decorator to define a coroutine:
@asyncio.coroutine def my_coroutine(): print('Hello')
And instead of using
await to call the above coroutine, you would use the
yield from syntax:
yield from my_coroutine()
Calling Async Code From Sync
Invariably, at some point you will want to call asynchronous code from a synchronous function. What you can’t do is:
def main(): await my_coroutine() # ERROR: We can't use `await` inside a synchronous function (main() is synchronous)
However, remember that we can always pass control over to the event loop from synchronous code. The easiest way to do this is with
asyncio.run():
def main(): asyncio.run(my_coroutine) # Passes control to the event loop, which will run my_coroutine, and then return control to here.
Creating A Worker Model
Below is a Python snippet showing a worker/job application using asynchronous programming. 10 jobs are created. 3 workers are created which will process these 10 jobs. Each worker is started as a task with
asyncio.create_task(). The jobs are fed to the workers via a
asyncio.Queue. Each worker
awaits a job on the queue, processes the job, and then waits for another one. Once all of the jobs are processed, the workers are terminated and the application exits.
import asyncio import random async def worker_fn(id: str, job_queue: asyncio.Queue) -> None: while True: sleep_for = await job_queue.get() print(f'Worker {id} sleeping for {sleep_for:.2}s.') await asyncio.sleep(sleep_for) print(f'Worker {id} woke up.') job_queue.task_done() async def main() -> None: queue = asyncio.Queue() # Create jobs for workers to complete print(f'Creating jobs...') for i in range(0, 10): sleep_for_s = random.uniform(0.1, 1.0) queue.put_nowait(sleep_for_s) # Create three worker tasks print(f'Creating and starting workers...') workers = [] for i in range(3): worker = asyncio.create_task(worker_fn(i, queue)) workers.append(worker) print(f'Waiting for jobs to be completed.') await queue.join() print(f'Jobs finished.') print(f'Terminating workers...') for worker in workers: worker.cancel() await asyncio.gather(*workers, return_exceptions=True) print(f'Workers terminated. Example finished.') if __name__ == '__main__': asyncio.run(main()) exit(0)
Will produce the following output:
$ python worker_example.py Creating jobs... Creating and starting workers... Waiting for jobs to be completed. Worker 0 sleeping for 0.15s. Worker 1 sleeping for 0.39s. Worker 2 sleeping for 0.49s. Worker 0 woke up. Worker 0 sleeping for 0.12s. Worker 0 woke up. Worker 0 sleeping for 0.7s. Worker 1 woke up. Worker 1 sleeping for 0.52s. Worker 2 woke up. Worker 2 sleeping for 0.63s. Worker 1 woke up. Worker 1 sleeping for 0.98s. Worker 0 woke up. Worker 0 sleeping for 0.33s. Worker 2 woke up. Worker 2 sleeping for 0.39s. Worker 0 woke up. Worker 2 woke up. Worker 1 woke up. Jobs finished. Terminating workers... Workers terminated. Example finished.
Make sure that you terminate all the tasks before terminating the application. If you terminate while a task is still waiting on a queue you will get the following warning:
Task was destroyed but it is pending!
Related Content:
- September 2019 Updates
- Python Classes And Object Orientated Design
- Parsing Command-Line Arguments In Python
- Python Sets
- A Tutorial On geopandas
- Python
- programming
- programming languages
- software
- async
- await
- asyncio
- coroutines
- event loops
- asynchronous programming
- asynchronous
- IO bound
- CPU bound
- queues
- yield
- generators
- tasks | https://blog.mbedded.ninja/programming/languages/python/an-introduction-to-asynchronous-programming-in-python/ | CC-MAIN-2021-17 | refinedweb | 1,328 | 67.45 |
Docker Captains speak bluntly: “Containerd is basically the real engine behind Docker”
> Chanwit Kaewkasi?
Chanwit Kaewkasi: In our laboratory, we were searching for a virtualization layer to help manager our Big Data stack dated back to 2014. VM solutions were too heavy for us, and we were lucky enough to find Docker.
JAXenter: Docker is revolutionizing IT — that is what we read and hear very often. Do you think this is true? If we were to look beyond the hype, what’s so disruptive about Docker technology?
Chanwit Kaewkasi: It’s pretty true in my opinion. In the past, it was very hard to up and run a set of Web servers.
With Docker, we can just do it in a couple of minutes.
JAXenter: How is Docker different from a normal virtual machine?
Chanwit Kaewkasi: Docker is basically using the OS-level virtualization, Linux namespaces and control groups, for example. Its overhead is very thin compared to a virtualization technique, like Hypervisor used by virtual machines.
Containerd is basically the real engine behind Docker.
JAXenter: How do you use Docker in your daily work?
Chanwit Kaewkasi: I help companies in South-East Asia and Europe design and implement their application architectures using Docker, and deploy them on a Docker Swarm cluster.
JAXenter: What issues do you experience when working with Docker? What are the current challenges?
Chanwit Kaewkasi: Multi-cluster management is still not easy. I have to create my own tool to manage them.
It would be great if we can do this natively in Docker Swarm.
Multi-host networking is current good, but I still find some minor issues. However, it’s going to be better as many SDN vendors are implementing their own network stacks as Docker plugins. It’s a good news.
SEE ALSO: Machine Learning as a microservice in a Docker container on a Kubernetes cluster — say what?
JAXenter: Talking about the evolution of the Docker ecosystem: How do you comment on Docker’s decision to donate containerd runtime to CNCF?
Chanwit Kaewkasi: It’s a great and cool move. Containerd is basically the real engine behind Docker. The standardized container runtime benefits everyone in the community.
Multi-cluster management is still not easy.
JAXenter: Is there a special feature you would like to see in one of the next Docker releases?
Chanwit Kaewkasi: Sure. I hope to see cluster namespacing and the network layer stabilization in the near coming releases.
JAXenter: Could you share one of your favorite tips when using Docker?
Chanwit Kaewkasi:
`docker system prune -f` always makes my day.
Thank you! | https://jaxenter.com/docker-captains-interview-kaewkasi-138933.html | CC-MAIN-2021-31 | refinedweb | 431 | 58.89 |
How to learn jQuery
i want to learn Jquery i want to learn jquery can u plz guide me
Yes, you can learn these technologies by yourself. Go through the following links:
Ajax Tutorials
JSON Tutorials
JQuery Tutorials...) specifications from Sun Microsystem to provide the platform to run Java code... of downloading and Installing is easy and you can learn the process very
i need a quick response about array and string...
i need a quick response about array and string... how can i make a dictionary type using array code in java where i will type the word then the meaning will appear..please help me..urgent
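A common way to build a word-to-meaning lookup with plain arrays, as the question asks, is to keep two parallel arrays and search them linearly (a `java.util.HashMap<String, String>` would be the more idiomatic structure). A minimal sketch; the class name and the sample words and meanings below are made up:

```java
public class MiniDictionary {
    // Parallel arrays, since the question asks for an array-based approach;
    // java.util.HashMap<String, String> would be the more idiomatic choice.
    static String[] words = {"java", "array", "string"};
    static String[] meanings = {
        "an object-oriented programming language",
        "a fixed-size container of elements of one type",
        "an immutable sequence of characters"
    };

    // Linear search through the word array; returns null when the word is unknown
    static String lookup(String word) {
        for (int i = 0; i < words.length; i++) {
            if (words[i].equalsIgnoreCase(word)) {
                return meanings[i];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Type a word, the meaning appears:
        System.out.println(lookup("array"));
    }
}
```

In a real application the `main` method would read the word from a `Scanner` or a text field instead of hard-coding it.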
code and specification u asked - Java Beginners
can build in java are extensive and have plenty of capability built in.
We...();
display.dispose();
}
}
so i have sent u the code and specification...i think this will help u to solve my problem....expecting the solution
Where can I learn Java Programming
and
time to learn Java in a right way. This article is discussing the most asked
question which is "Where can I learn Java Programming?". We
have... have to search more for "Where can I learn Java
Programming?", just
want a program for cd writing in java - Java Beginners
want a program for cd writing in java Hi
Can u tell some body, doing a program on cd writing in java. I m facing some problem on it.
Thanks in advance.
Regards
sanjaya
want a project
want a project i want to make project in java on railway reservation using applets and servlets and ms access as database..please provide me code and how i can compile and run
can u plz try this program - Java Beginners
can u plz try this program Write a small record management application for a school. Tasks will be Add Record, Edit Record, Delete Record, List....
---------------------
<%@ page language="java
Can you provide me Hibernate configuration tutorial?
Can you provide me Hibernate configuration tutorial? Hello There,
I want to learn about Hibernate configuration, Can anyone provide me with hibernate configuration tutorial>
i want make a simple tools like turnitin using java
i want make a simple tools like turnitin using java it just simple tools can select the file like .doc,.pdf,.txt..the tools can read inside the file.....can u help me
required treenodes checkboxes are not clicking if u change the url - Java Server Faces Questions
=1001,1007,1002,1005,1008,1002,1009,1010,1008&client=0
I m passing album Ids and clientId in d... previous albums are showing clicked.
I want to modify d url and want to see d...required treenodes checkboxes are not clicking if u change the url
else if (selection = * 'M'); - Java Beginners
else if (selection = * 'M'); I am trying to get 2 numbers 2 multiply and i keep getting this error
illegal start of expression
else if (selection = * 'M');
^
this is my program - what am i?
Which java can i download?
Which java can i download? Hello,
i'm a beginner on java.
Which java can i download for to exercise with my codes?
Thanks in advance.
nobody.
And i download Eclipse java. But when i want to install
i want print the following out put
i want print the following out put Hello sir i want the following out put can u provide the program in c#
o/p;
HELLOLLEH
HELLLEH
HELEH
HEH
H
quick sort
quick sort sir, i try to modify this one as u sugess me in previous answer "array based problem" for run time input.but i am facing some problem.plz...(array[i]);
}
quick_srt(array,0,array.length-1);
System.out.print
To provide Help Option - Java Beginners
To provide Help Option hi i am writing one small application there i wanted to provide "Help" button to user.From there they will be geting how... theire own help article writen..." how we can provide our Help writen for our
i want to remove specific node in xml file
i want to remove specific node in xml file
<A>
<B>hi
<C>by
<A>
<B>hellow
<C>how r u ?
i want to delet node which is == hellow using java program please help me .
tanks in advance
The quick overview of JSF Technology
;
The quick Overview of JSF Technology
This section gives you an overview of Java
Server Faces technology, which simplifies the development of Java
Based web applications. In this JSF
Where to learn java programming language
Where to learn java programming language I am beginner in Java and want to learn Java and become master of the Java programming language? Where... oriented programming language. It's easy to start learning Java. You can learn how
provide me the program for that ques.
provide me the program for that ques. wtite a program in java...;Hi Friend,
You can try the following code:
import java.util.*;
class...=input.nextLine();
char ch[]=st.toCharArray();
for(int i=0;i<ch.length;i
Quick Sort in Java
Quick sort in Java is used to sort integer values of an array... into a sorted array.
Example of Quick Sort in Java:
public class QuickSort... QuickSort.java
C:\array\sorting>java QuickSort
RoseIndia
Quick Sort
Thank U - Java Beginners
Thank U Thank U very Much Sir,Its Very Very Useful for Me.
From SUSH
I need to launch multiple browsers installed on another windows M/C from my windows M/C over the network.
I need to launch multiple browsers installed on another windows M/C from my windows M/C over the network. Please share the java code for the same as soon as possible
provide source - Java Beginners
provide source please provide code for this program
Write...*;
public class DnldURL {
public static void main (String[] args) {
URL u;
InputStream is = null;
DataInputStream dis;
String s;
try {
u = new
I want to learn Hibernate tutorial quickly
I want to learn Hibernate tutorial quickly Hello,
I want to learn Hibernate tutorial quickly. Is there any way.. I want to learn Hibernate online.. Please help
want to get job on java - Java Beginners
want to get job on java want to get job on java what should...-programming.shtml
Quick Hibernate Annotation Tutorial
information in Hibernate. The Java 5 (Tiger) version has
introduced a powerful way... :-
Make sure you have Java 5.0 or higher version .
You should have Hibernate...Hibernate Annotations
what should i do next?? - Java Beginners
can learn servlets after that JSP.My personal interest iam saying that if u... if u don't know how to install the software of java also u can from him.He...what should i do next?? I know java basics.actully i passed the SCJP
i want to create dynamic calendar in java
i want to create dynamic calendar in java i want code and explanation
I want solution for this jsp program..
I want solution for this jsp program.. <%@ page language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>
<%@ page import="java.sql.*" %>
JSP Page
<%!
String
how to create text file from jsp and provide downlode option
how to create text file from jsp and provide downlode option HI Deepak ,
thanks for your post u really doing great work for people who is new in java/j2ee tech.
my question is who do i make a jsp output in the form of text
Learn Java online
can learn at his/her pace.
Learning Java online is not difficult because every...Learning Java is now as easy as never before. Many websites today provide.... If they cannot understand
any topic they can ask there query to java developers
provide me code - Java Beginners
provide me code can any body provide code for SNAKE XENZIA game in NOKIA mobile?
please urgently
Want to Package Applocation - Java Beginners
to my questions.
I have a FInished application that i want to deploy, the application is in Standard Edition Application but i dont want the user... before using it, i want a particular way of packaging so that the user or client
JDBC Training, Learn JDBC yourself
are providing free JDBC training material online.
You can learn JDBC at your own... programming language of java then fist
learn our following two tutorials:
Learn Java in a Day and
Master
how to print the average of each column of 2d n*m array using java
the content of file.now i want to calculate mean(average) of each column present...how to print the average of each column of 2d n*m array using java ... = new float[values.length];
for (int i = 0; i < values.length; i
java
of collection?
2.Actualy sir I m developing one shopping moll project.In that project i want to sell my product through my site online.In that project i want get... also...............?plz provide this process,coding(in java launguage) and your... why i thght der might b d prob with the db structure.Can u help me
Learn Java for beginners
Beginners can learn Java online though the complete Java courses provided... a
programmer from basic to advance level of Java.
New programmers prefer to learn Java... program can be used.
The details in which the Java programming is taught online
Best way to learn Java
Best way to learn Java is through online learning because you can learn... that
provide you Java guides or courses online, even providing certificates... can learn how
he/she can make a program on IDE, actually seeing it done
How to learn Java easily?
on how quick they learn the advanced phase.
Advanced Java cover more complex...If you are wondering how to learn Java easily, well its simple, learn...
Java learning.
The students opting to learn Java are of two types, either
Java Glossary Term - M - Java Important Terms - Java Programming Glossary
M
Java Map
In Java, a Map is a kind of generalized array. It
provides a more... duplicate keys.
Java Mail
JavaMail includes APIs
I want detail information about switchaction? - Struts
I want detail information about switch action? What is switch action in Java? I want detail information about SwitchAction
Learn Java for beginner
and is the
main reason behind its popularity.
More and more students are opting to learn Java... more.
One way to learn Java is through online training. Here a beginner in Java... helps you to learn Java and
clear your basic concepts. Once your concepts
reg : the want of source code
reg : the want of source code Front End -JAVA
Back End - MS Access
Hello Sir, I want College Student Admission Project in Java with Source code... -that can be Modified Administrator.(Admin Login)
MBA,MCA,MCOM
5) Fees Structure
Learn Hibernate programming with Examples
In this tutorial I will provide you the examples of Hibernate API which
teaches you to use the Hibernate in real life project. Here you will learn the
concepts...Learn the important concepts of Hibernate programming language
how can i connected jsp wih mysql - Java Beginners
how can i connected jsp wih mysql i have a code jsp file and connection file connection file will be compiling successful. i m using session in my... to connect this and how can i use it plz help me its very urgent plz send my to Learn Java
,
how to learn Java? Learning Java is not
that difficult just the right start is needed. The best way to learn Java today
without spending money in through... knowledge to be called a Java professional and
he/she can get an easy job
Learn Java in 24 hours
and programmers having knowledge of C, C++. They can have a basic idea for Java... it hard
to code in Java language.
RoseIndia brings you a quick Java guide... find it easy to understand the example.
To learn Java in 24 hours, you must clear
learn
learn how to input value in java.e.
provide code - Swing AWT
provide code Dear frnds please provide code for two player CHESS... );
chessBoard.setBounds(0, 0, boardSize.width, boardSize.height);
for (int i = 0; i < 64; i++)
{
JPanel square = new JPanel( new want java code for this xml file...please show me..
i want java code for this xml file...please show me..
xbrli:shares
xbrli:pure
iso4217:INR...;Does anyone know how to get it to be in java format instead of stick
Want Mini project - Development process
Want Mini project Hi Can u plz send me mini project.I want to learn..., HibernateDaoImpl, i.e;
I want mini project implementing all concepts like jsp ,struts, servlets,javascript, hibernate, ajax and backend oracle or sql
i want immediate code - Development process
i want immediate code Basic sales tax is applicable at a rate of 10... of 5%, with no
exemptions.
When I purchase items I receive a receipt which lists... string;
}
}
For more information on Java visit to :
http code for these programs
i want code for these programs Advances in operating system
Laboratory Work:
(The following programs can be executed on any... use of a bounded buffer (size can be prefixed at a suitable value
If Sessin Contain Array how can i get each value.
If Sessin Contain Array how can i get each value. sir,
i m trying... a value(array) from 'mysqlfetcharray()' returned i stored in session. bcoz i want to use this value in different conditions..so how can i get it..
Thank u
< | http://www.roseindia.net/tutorialhelp/comment/13451 | CC-MAIN-2015-11 | refinedweb | 2,241 | 65.52 |
working with badge.swf and air.swf (browser api)cjm771 Jun 29, 2008 8:48 PM
I basically want to install and run my app through the browser so ive been testing but can't manage to figure out how the air.swf api works. i am stuck at loading air.swf. the page below has a tutorial but a downloadable sample code would be perfect, not sure where the little snippets of code from the "Loading the air.swf file", "Checking from a web page if an AIR application is installed", "installing an AIR application from the browser", and "Launching an installed AIR application from the browser" go in my own codebase. Also im a bit unclear on where i get appid or developer id. if someone has an example app or more in-depth explantion of incorporating the given code , i would much appreciate it.
This content has been marked as final. Show 4 replies
1. Re: working with badge.swf and air.swf (browser api)Dr. Fred Mbogo Jun 29, 2008 9:51 PM (in response to cjm771)
quote:The main thing you must understand about air.swf is that its most important functionality can only be called from within a UI event handler, such as for a button click. It's very picky about this. You can't, for example, use the button click event handler to begin the loading of air.swf, then in the "loaded" callback do the air.swf API call. air.swf has to be loaded and ready at the time the event handler is called. So, load it on app startup. I even go to the extent of disabling the buttons that call into air.swf until it's loaded.
Originally posted by: cjm771
can't manage to figure out how the air.swf api works
quote:The appid is your application's unique ID, which you gave in setting up your project. Adobe recommends using something based on your web site's domain name, in reverse order as is done in Java and Objective C. If you're at foo.com, and call your program Qux, then com.foo.qux is a good appid. The use of domain-like names helps ensure that programs from different companies don't collide with each others' namespaces.
im a bit unclear on where i get appid or developer id
By default, the pubid is a random number assigned by the IDE. I forgot how you find out what number it used, just that there's a way. Or, you can assign it yourself, in the project settings for the AIR app. Right-click the project, go to the Run/Debug Settings section, edit the launch configuration for your AIR app. You'll find a Publisher ID field there. The documentation for ADL may be helpful for picking your own pubid.
quote:See my code in this thread: =1352505&highlight_key=y&keyword1=air%2Eswf
if someone has an example app or more in-depth explantion of incorporating the given code , i would much appreciate it.
2. Re: working with badge.swf and air.swf (browser api)cjm771 Jun 30, 2008 7:44 AM (in response to Dr. Fred Mbogo)thanks for the information i just have a few quick questions. I know you said to call checkForAirVersion(flexversion) before it is loaded, just wondering how that looks (whether it is under the function or within a button click attribute or etc) also, how do i get the version of flex, do i type it in manually or is there a function or variable that will automatically detect it? Thanks for the help, im familiar with java, javascript, and php but kinda new to flex/air.
3. Re: working with badge.swf and air.swf (browser api)Dr. Fred Mbogo Jun 30, 2008 2:25 PM (in response to cjm771)You're misunderstanding the purpose of the version check.
My application -- from which that code is extracted -- comes in two flavors, AIR and Flex, both with the same version number. This code lives in the Flex flavor of the app. When it sees that the AIR flavor of the app is installed, it checks its version number to see if it's the same as the Flex flavor's version number, which is always up to date since it's served from a single public web site. If not, it figures the Flex flavor of the app got updated since the last time you installed the AIR flavor, so it offers to upgrade your copy of the AIR app.
As for the "format" of that code, there's a command in Flex Builder to reformat the code. On doing that, you'll see that there are several functions, but they're all defined as closures within the one big public function, checkForAirVersion(). Thus, the entire feature is encapsulated. The Flex flavor of my app just calls this at startup, and it takes care of everything: it starts air.swf loading, then on successful load, it sets up the UI event handlers for calling back into air.swf from button clicks in the Flex app.
You may not have come across closures before. They're very powerful and useful, if a bit confusing at first. This bit of code shows why they're nice to have: we can define functions that use variables from an outer scope that don't get values until later, and ensure that those closures don't get called until the variables do get good values. You could mimic this by defining it all in a class, but, well, this is ActionScript, not C++. :) Classes are grafted onto AS, while closures are a first class feature.
4. Re: working with badge.swf and air.swf (browser api)Oliver Goldman
Jun 30, 2008 10:22 PM (in response to cjm771)You may also find this article helpful: | https://forums.adobe.com/thread/236400 | CC-MAIN-2018-13 | refinedweb | 984 | 74.39 |
I am having an issue with lines between para.I need the line starting with Inserts the string....to be on the line right under the bolded para. This is functions. Here is my example: <para><emphasis role="bold"><literal>void insertByName</literal> </emphasis><para> <para>
I can get it to work by using the following: <para><emphasis role="bold"><literal>void insertByName</literal> </emphasis><sbr/>> However, I get the validity of element "sbr" from namespace not allowed in this context, but it works for me. Will there be problems in the future. I tried the linegroup, however it indents, but works the way I would want it to but don’t want the indention. Thank you, Emily Forsyth Fluids & Machinery Engineering Department Propulsion & Energy Machinery Southwest Research Institute Tel: 210.522.2045 Email: emily.fors...@swri.org -----Original Message----- From: Shikareva, Ekaterina [mailto:eshikar...@luxoft.com] Sent: Wednesday, August 24, 2016 3:04 AM To: Stefan Knorr <skn...@suse.de> Cc: Bob Stayton <b...@sagehill.net>; docbook-apps@lists.oasis-open.org Subject: RE: [docbook-apps] Removing newlines between <para>s Hello Stefan! For me, this empty line appears even before any conversion, I can see it in XMLMind: see the same file in Notepad++ - XMLMind DocBook Editor v. 7.0 - Also visible in XMLMind 5.3.0. And btw, if you create an informaltable with XMLMind, the paras inside will be saved on the same line, as in the first row. For these screenshots, I had to manually insert a line break in the second row to reproduce the problem. For conversion, I use stylesheets 1.78.1, FOP 1.1, not sure which version of xsltproc.exe with xslt 1.0. As suggested by Bob, I will send him my stylesheets to check them. But if this is already visible before any conversion, I'm not sure it can help... Anyway, the problem in our system is being solved with scripting, so this investigation is just to know if this additional empty line is the expected behavior. -- Ekaterina Shikareva. -----Original Message----- From: Stefan Knorr [mailto:skn...@suse.de] Sent: Dienstag, 23. 
August 2016 17:54 To: Shikareva, Ekaterina Cc: Bob Stayton; docbook-apps@lists.oasis-open.org Subject: RE: [docbook-apps] Removing newlines between <para>s Hi Ekaterina, from your response to Bob, I finally understood your problem... this sounds like a bug. However, I can't reproduce this here; not with HTML and not with FO output. Spacing is exactly the same in between same-line paras and para with a line break in between. However, the space between the paras is indeed a bit large but that should be easy to fix with FO attributes/CSS. Screenshot from my PDF: (Input paras on the left as a screen, output on the right.) I used the following tools: Upstream XSLT 1.0 DocBook 5.0 Stylesheets 1.78.1, xsltproc/libxml 2.9.4, and FOP 2.1. I also verified that our profiling scripting does not interfere by reformatting XML. So, not sure what happens in your toolchain, but I guess something is different. Stefan. --- . SUSE Linux GmbH. Geschäftsführer: Felix Imendörffer, Jane Smithard, Graham Norton. HRB 21284 (AG Nürnberg). ________________________________. | https://www.mail-archive.com/docbook-apps@lists.oasis-open.org/msg20842.html | CC-MAIN-2017-04 | refinedweb | 535 | 68.87 |
From: "Andy Pandy"
Newsgroups: uk.finance uk.gov.social-security
Subject: Re: Pension planning
Date: Fri, 8 Apr 2005 18:35:30 +0100
"Martin Davies" wrote in message
news:Wgy5e.42127$C12.2945@fe1.news.blueyonder.co.uk...
> An option you might want to try is to stick money by regularly (perhaps
> cutting into luxury money if necessary) and invest it. Perhaps one of you
> into an ISA linked to stock market and one of you into another type of ISA -
> so benefitting from not paying tax.
> Force yourself to build the funds and not dip into them. If you can't resist
> the temptation to spend & spend, then its not a good option for you.
> The end result though can be a pension fund the pair of you have plenty of
> control over - for fees, for risk, for rate of return (in the case of cash
> ISA). A good book to get is 'The Richest Man in Babylon' by George Classon
> (or Clason), only a short read but a good personal finance book. Written 70
> years ago or so but just as relevant today.
> That money can come in handy for buying annuity or for simply living off
> part-capital, part interest in your old age, in addition to state pension.
> And unlike pension funds, can give far better return for far less fees. But
> you do need financial discipline not to go raiding the pot whenever you
> like - something a traditional pension at least tries to prevent.
But he would be daft not to work out the effect such savings would have on means
tested benefits.
--
Andy | http://www.info-mortgage-loans.com/usenet/posts/35881-84528.uk.finance.shtml | crawl-002 | refinedweb | 269 | 68.7 |
In wiki format:

12:07 < quaid> <meeting>
12:07 -!- quaid changed the topic of #fedora-meeting to: FDSCo mtg -- welcomes
12:07 < quaid> ... and welcome
12:08 -!- jmtaylor [n=jason@fedora/jmtaylor] has joined #fedora-meeting
12:08 -!- jmbuser [n=jmbuser@195.229.25.134] has joined #fedora-meeting
12:09 < quaid> roll call for easy record keeping, if you are here ...
12:09 < quaid> <- Karsten is here
12:09 < jmbuser> JohnBabich
12:09 < Sparks> Eric Christensen
12:09 -!- Ludvick [n=ludvick_@adsl-065-012-235-102.sip.mia.bellsouth.net] has joined #fedora-meeting
12:10 * quaid is getting agenda up on his screen
12:10 -!- mcepl [n=matej@adsl3050.in.ipex.cz] has left #fedora-meeting ["Bye bye!"]
12:10 < jmbuser> JohnBabich the psychic
12:10 < quaid> heh
12:10 * ianweller lurks
12:10 < quaid> ok, I saw couf join
12:10 < quaid> and jsmith is half-here
12:10 < quaid> stickster_afk is at a booth or dinner or something
12:11 * jsmith wishes he were eating dinner
12:11 -!- quaid changed the topic of #fedora-meeting to: FDSCo rollin' in the hood -- Elections!
12:11 < quaid> cool, we have everyone here to discuss elections, governance, and the like
12:11 < quaid> paul posted a bit on list
12:11 < quaid>
12:12 * quaid waits a moment for others to read the thread
12:13 < quaid> ok
12:13 < Sparks> There was also some additional conversation that was had but it didn't go much further
12:13 < quaid> some different ideas there, ditt and sparks
12:14 < quaid> what I propose is this:
12:14 < quaid> i. we discuss until :35 at the latest
12:14 < quaid> ii. see if we have a consensus
12:14 < quaid> iii.
if not, push the discussion contents back to the list and continue
12:15 < Sparks> +1
12:15 < quaid> I started the whole thing off because we are looking at how we govern in Fedora, and I think it makes sense to review on a subproj basis if we are following a formula that works for us or not
12:15 < jmbuser> +1
12:16 < jmbuser> continue
12:17 * quaid could talk for 20 minutes if he isn't careful :)
12:17 < quaid> simple idea:
12:17 < quaid> how do we turn from "the leader" into "a leader" and "A group of leaders"?
12:17 < quaid> eol
12:18 < jmbuser> We already seem to have a pretty motivated group of people
12:18 < jsmith> quaid: People don't learn to lead by watching a leader. They learn to lead by having adversity thrown at them
12:19 -!- fab [n=bellet@bellet.info] has quit [Read error: 113 (No route to host)]
12:19 < jsmith> The person you call "the leader" is simply the one that's experienced the most adversity, and done the best at getting through it
12:19 -!- spoleeba [n=one@fedora/Jef] has joined #fedora-meeting
12:19 < quaid> what is interesting to me is this ... we have a process we've defined, and we have a way we've grown organically ... and they don't necessarily match
12:19 < jmbuser> This is not that unusual
12:20 < quaid> do we fix the process then? dissolve it?
12:20 -!- wolfy [n=lonewolf@fedora/wolfy] has left #fedora-meeting ["The chief excitement in a woman's life is spotting women who are fatter than she is."]
12:20 < Sparks> In my opinion, I think the Steering Committee is too bulky for where I see the DocsProject is currently at
12:21 < jmbuser> Planned processes and the way things actually work out are usually two different things
12:21 < quaid> spoleeba: you might want to throw in here -- discussing governance of Docs, how to work with SIGs, etc.
12:21 < jmbuser> The solution is to have the process reflect reality
12:21 < quaid> spoleeba: or you might rightly say, "not my place, proceed" :)
12:21 < Sparks> If we defined a chair and a vice-chair I think they could "steer" the process
12:21 < quaid> reality is -- interested people show up at a meeting time, on list, etc.
12:22 < quaid> Sparks: I see that, as a group, Fedora appreciates where there is a named leader or two or three so people know who to "go to"
12:22 < Sparks> Exactly
12:22 < Sparks> But I don't think we have the following necessary for a committee to lead the project
12:23 * jsmith agrees
12:23 < quaid> oh good
12:23 < quaid> that's how I've been feeling :)
12:23 < jsmith> In fact, I'd gladly give up my seat on the said committee
12:23 < quaid> the committee weight is a bit heavy to maneuver with
12:23 < jsmith> (as I've been practically worthless lately)
12:23 < quaid> or
12:24 < quaid> make it "opt in"
12:24 < quaid> you want in, you are in
12:24 < quaid> you want out, just say you are disappearing for a while
12:24 < quaid> and let people "breathe" that way as per their life
12:24 < Sparks> That works
12:24 < quaid> I've been fortunate to have more Fedora time now, but I've always had weeks or months where I disappear into RHT work
12:25 < Sparks> We, as a project, should be able to say "we want this"... and we already do, really
12:25 < quaid> yep
12:25 < quaid> as for picking chair/v-chair stuff ... ideas that occur to me are:
12:25 < quaid> * have that as a general subproj election
12:25 < quaid> * have the opt-in FDSCo do it for everyone else
12:25 < quaid> sorry, that was 1 and 2
12:26 < jmbuser> "Is Fedora Docs going to remain a project or become a SIG?" is the question to ask, in my opinion
12:26 < quaid> 3. don't elect, just make sure things move around often enough
12:26 < quaid> 4.
don't elect but have a clear process to kick out people who become tyrants
12:26 * jmbuser is always out of phase lately - sorry
12:26 < quaid> jmbuser: now, there is a way to ask that question, but I think it is already answered
12:27 < jsmith> Let me throw out one other question... is this a case of "much ado about nothing"?
12:27 < quaid> I support the general scheme that spoleeba (Jef) has proposed.
12:27 < quaid> in that one, Docs is clearly a subproject
12:27 -!- mccann [n=jmccann@nat/redhat-us/x-4789468b54e83c36] has quit [Read error: 110 (Connection timed out)]
12:27 < quaid> and each SIG has a docs role to fill, with that person connecting back up to Docs the subproj
12:27 < jsmith> I mean, has the FDSCo really been that bad?
12:27 < quaid> jsmith: not bad, just ...
12:28 < quaid> jsmith: we said we'd have elections and stuff
12:28 < jsmith> quaid: And we have... at least I think I got elected somehow
12:28 < quaid> jsmith: so we need to be clear what we are doing, for those in the proj but not involved in leading, etc.
12:28 -!- mccann [n=jmccann@nat/redhat-us/x-fdef7a5fbd075095] has joined #fedora-meeting
12:28 < quaid> jsmith: I mean, it's time again for elections :)
12:28 < Sparks> +1 to quaid's list...
12:28 < quaid> turnout has been not very big nor grown across elections; in fact, I think it might have declined
12:28 < Sparks> I think we should "elect" or "appoint" a leader of some sort
12:29 < quaid> how about this as a scheme:
12:29 < jsmith> FDSCo senate?
12:29 < quaid> * FDSCo is opt-in, consisting of all who want to be involved in steering
12:29 < quaid> * FDSCo has the charge to make sure leadership remains relevant
12:29 < quaid> * FDSCo decides to elect or appoint
12:30 < quaid> * If project members have problems with any of that, the answer is obviously to opt-in to the process and help from within the steering
12:30 < jsmith> +1
12:30 < jsmith> WORKSFORME
12:30 < Sparks> +1
12:30 * quaid is thinking it looks OK and quite a bit like what we do already :)
12:31 < Sparks> It is... only less strict... more flexible
12:31 < jsmith> quaid: And yes, if you become an evil tyrant we'll kick you out ;-)
12:31 < jmbuser> In that anyone who wants to be on the steering committee generally gets elected, it doesn't seem to be much different
12:31 * quaid is happy we found a way to make Sparks' vote official, too :)
12:31 < jsmith> be right back
12:32 < jmbuser> than what you propose
12:32 < quaid> jmbuser: right, except we artificially constrained the SCo before, so people who cannot be active are "taking slots" from people who can be active right now; so yeah, better
12:32 -!- RodrigoPadula [i=c8c6c292@gateway/web/ajax/mibbit.com/x-15a8c9d839c8dfb7] has joined #fedora-meeting
12:33 < RodrigoPadula> hello guys!
12:33 < jmbuser> In other words, people get elected, then their life situation changes, then someone else becomes active between elections?
12:33 < spoleeba> let me ask this.. do you have a handle on the number of active people are in the fas groups you think should have a say in the direction of docs?
12:33 < quaid> I don't think so
12:33 < quaid> that said,
12:33 < quaid> most such people tend to come in there anyway in some fashion
12:34 < quaid> but we are not well represented from certain groups
12:34 < spoleeba> is that number big enough to support an election?
elections only make sense if you need representative governance..versus referendum
12:35 < spoleeba> if sigs grow doc roles...then maybe you'll need elections of some sort
12:37 < quaid> +1
12:37 -!- J5_ [n=quintice@66.187.234.199] has joined #fedora-meeting
12:37 < quaid> when it gets to where we have some actual contention to elect against :)
12:38 < quaid> right now it's like a girls club electing "officers"
12:38 < quaid> which was important
12:38 < quaid> back when we needed to make it clear RHT wasn't puppetizing things
12:38 < jmbuser> What about high-level decisions like not documenting closed-binary workarounds?
12:38 < quaid> now that we all know that RHT barely notices Docs (j/k ...
12:39 < quaid> jmbuser: where it's not clear from the overall project, SCo should be able to handle that
12:39 < jmbuser> Encouraging FOSS solutions instead?
12:39 < quaid> well, if in the future that becomes OK to do in Fedora, we'll follow suit.
12:40 < quaid> I mean, Fedora doesn't support closed binary workaround, so we don't have to, and really shouldn't
12:40 < quaid> if we do our job right and are visible enough, the rest of Fedora will make sure we don't drift, too :)
12:41 < quaid> ok, we went over the mark
12:41 < quaid> but I think we got some consensus, yes?
12:41 < jmbuser> please sum up
12:42 < quaid> ok, let's see ...
12:42 * jsmith stumbles back
12:42 < Sparks> +1
12:42 < quaid> 12:29 < quaid> * FDSCo is opt-in, consisting of all who want to be involved in steering
12:42 < quaid> 12:29 < quaid> * FDSCo has the charge to make sure leadership remains relevant
12:42 < quaid> 12:29 < quaid> * FDSCo decides to elect or appoint
12:42 < quaid> 12:30 < quaid> * If project members have problems with any of that, the answer is obviously to opt-in to the process and help from within the steering
12:42 < quaid> add to that:
12:42 < quaid> FDSCo elects or appoints leadership as they see fit.
12:42 < quaid> and what I propose:
12:43 < quaid> all FDSCo members say "I am a Fedora Docs Leader"
12:43 < quaid> and we emphasize points of contact that are subject matter focused rather than one big daddy
12:43 < quaid> (that is a grow-to strategy that includes better DocsProject pages to help others find their SME)
12:44 < quaid> eosummary
12:44 < quaid> SME == subject matter expert
12:44 < quaid> did I miss anything?
12:44 -!- kms [n=kms@mailgate.passback.co.uk] has joined #fedora-meeting
12:44 * jmbuser is starting to "get it"
12:45 < jsmith> quaid: You forget that we're going to elect you puppet dictator for life
12:45 < jmbuser> All hail, quaid!
12:45 < jsmith> quaid: But other than that minor issue, you've hit the issue squarely on the head
12:45 < quaid> hey, I have an ego, too
12:45 < Sparks> quaid quaid quaid quaid
12:45 < jsmith> quaid++
12:45 < jmbuser> MIB II reference :-)
12:45 < quaid> anyone who says they aren't proud of their roles in Fedora is probably lying :)
12:45 -!- JSchmitt [n=s4504kr@p4FDD1A55.dip0.t-ipconnect.de] has joined #fedora-meeting
12:46 < jsmith> quaid: I'm not proud of my role on FDSCo... does that count?
12:47 < Sparks> So that went twice as long as was "allowed"... :)
12:48 < quaid> word
12:48 < quaid> anything more?
12:48 -!- quaid changed the topic of #fedora-meeting to: FDSCo mtg rolls onward -- release notes 9.0.2-1
12:48 < quaid> anyone here know anything?
12:49 * jsmith doesn't know *anything*
12:49 < quaid> mdious isn't here, it's middle of night in .au
12:49 < quaid> stickster_afk is dining still
12:49 * quaid is joking, he doesn't know
12:49 < quaid> ok, moving on
12:49 < jsmith> ~hail gluttony!
12:49 -!- tiagoaoa [i=c8c6c292@gateway/web/ajax/mibbit.com/x-d005906909200f78] has joined #fedora-meeting
12:49 -!- quaid changed the topic of #fedora-meeting to: FDSCo is as FDSCo does -- Wiki gardening ...
12:49 < quaid> let's make this the final topic for now
12:49 < quaid> oh, sorry
12:49 < quaid> Sparks had some stuff too
12:50 < Sparks> Not really... It can wait.
12:50 < quaid> Sparks: are those sub topics to wiki gardening?
12:50 < Sparks> Yes
12:50 < couf> pong, sorry
12:50 < quaid> if you say yes, then go ahead, that's as good a place to start as any
12:50 < Sparks> Okay... So wiki gardening...
12:50 -!- quaid changed the topic of #fedora-meeting to: FDSCo is as FDSCo does -- Wiki gardening ... UG, SecG, Other, cleaning up projects list ...
12:50 < Sparks> I've been making a run through the DocsProject and Documentation pages...
12:51 < quaid> (it's been going pretty well, IMO, thanks to all who have been helping)
12:51 -!- jmbuser [n=jmbuser@195.229.25.134] has quit [Read error: 104 (Connection reset by peer)]
12:51 < Sparks> and I think I've hit most of the 'big' pages...
12:51 < quaid> +1 sweet
12:51 < Sparks> but if you want to see how many pages are actually attributed to the DocsProject...
12:51 < Sparks> just go to.
12:52 < Sparks> This brings up my first request...
12:52 < tiagoaoa> let me see if I can talk here
12:52 < Sparks> categories.
12:52 < tiagoaoa> yep.. not moderated, see?
12:52 < Sparks> tiagoaoa Go ahead
12:52 < tiagoaoa> nevermind
12:52 < quaid> Sparks: we can have categories in cats, right?
12:53 < Sparks> quaid: We can have anything we want.
12:53 < Sparks> Looks like Drkludge wrote something for our category...
12:53 < quaid> tiagoaoa: if you are having trouble talking in a #fedora-* channel, the channel topic there should point you at directions for registering your nick.
12:53 < quaid> Sparks: what are you thinking about for cats?
12:54 < Sparks> so that if anyone clicks on the category it will give them some information on what it is.
12:54 -!- ldimaggi_ [n=ldimaggi@66.187.234.199] has quit ["Leaving"]
12:54 < Sparks> There seems to be two...
DocsProject and Documentation
12:54 < quaid> they are different
12:54 < quaid> one is content useful for people, the other is the project that maintains that content
12:54 < Sparks> If we can flag all the Documentation as such then it would make it easier to maintain and have people find it
12:54 < quaid> true that
12:54 < Sparks> quaid: exactly
12:55 < quaid> do we want to move the actual docs out from the DocsProject cat?
12:55 < Sparks> I'd like to propose we also do one for the drafts.
12:55 < quaid> what about a namespace?
12:55 < Sparks> quaid: I don't know. That was one of my questions
12:55 < quaid> Docs: or something
12:55 < quaid> ianweller: can we have a page in multiple, non-nested categories?
12:55 -!- Ludvick [n=ludvick_@adsl-065-012-235-102.sip.mia.bellsouth.net] has quit ["Leaving"]
12:56 < quaid> ianweller: or should we have a Documentation cat, and a DocumetationDraft sub-cat?
12:56 < Sparks> quaid: yes... Check the security guide.
12:56 < ianweller> it depends on how you want to do it. do you want your drafts in [[Category:Documentation]]?
12:56 < quaid>
12:56 < quaid> ok, I see
12:56 < jsmith> Gotta run again...
12:56 -!- jsmith is now known as jsmith-away
12:56 < ianweller> if not, make them separate; if so, add [[Category:Documentation]] to the page for Category:DocumentationDraft
12:56 < quaid> ianweller: yes
12:57 < Sparks> ianweller: cool... hadn't thought of that.
12:57 < quaid> that seems clear enough
12:57 < quaid> Sparks: +1 to the general idea, fwiw
12:57 -!- J5 [n=quintice@nat/redhat-us/x-0f613b84aea53d37] has quit [Connection timed out]
12:57 < Sparks> Yeah, just trying to get a standard out there
12:57 < quaid> I want to see us leading others in how to use MediaWiki to our advantage
12:57 < Sparks> the cats make it VERY easy to maintain things
12:57 < quaid> ianweller: what is the advantage of a Namespace: over or alongside a Category: ?
12:58 < quaid> Sparks: can you write up a policy?
DocsProject/Categories or something
12:58 < Sparks> Sure
12:58 < quaid> policy/procedure/guideline whatever
12:58 < Sparks> guide
12:58 < Sparks> that's not a problem
12:58 -!- JSchmitt [n=s4504kr@fedora/JSchmitt] has quit ["Konversation terminated!"]
12:58 < Sparks> Anyone have anything else? If not I'll go on to the orphan pages and that will be it for me
12:59 < quaid> I want to talk about namespaces but need to grok it better
12:59 < quaid> so we can move on to orphaned, sure
12:59 < Sparks>
12:59 * quaid reading
12:59 -!- rdieter_away is now known as rdieter
12:59 < Sparks> So this page shows all the pages in the wiki that aren't linked to any other page in the wiki
12:59 < quaid> oooooh, nice Special: page
12:59 < ianweller> quaid: i'm trying to search for what would be a good reason to completely switch over to namespaces
12:59 < Sparks> Lots of fun stuff in here.
13:00 < quaid> wow, there are tons there
13:00 < quaid> for the MoinEditorBackup, ianweller or someone was looking at a way to mass delete them
13:00 < Sparks> Yeah, and if they aren't linked some how then they are only going to be found by a search which to me is inefficient
13:01 < quaid> that one is on the Migration Masters to-do list
13:01 < Sparks> yeah
13:01 < quaid> Sparks: well ...
13:01 < quaid> Sparks: one thing about MW is search is useful
13:01 < quaid> Sparks: also, they might be linked from the outside, which is legit
13:01 < quaid> I'd want to see a cross between this list and a Google frequency of some kind
13:01 < Sparks> I'm not saying we should go in and try to shoehorn all these pages in, but there are a lot of DocsProject files out there that need some love
13:01 < quaid> to use it as a basis for declaring orphans
13:01 < quaid> that is true
13:01 < Sparks> I agree
13:01 < quaid> ok, we are out of time
13:02 < Sparks> Yep, the orphan thing was just food for thought.
13:02 < Sparks> eof
13:02 < quaid> let's move this over to #fedora-docs to continue, a policy will take more discussion.
13:02 < quaid> ok, then, cool
13:02 < quaid> thanks everyone
13:02 < quaid> </meeting>
http://www.redhat.com/archives/fedora-docs-list/2008-May/msg00124.html
kubectl-delete - Man Page
Delete resources by filenames, stdin, resources and names, or by resources and label selector
Eric Paris Jan 2015
Synopsis
kubectl delete [Options]
Description

To force delete a resource, you must specify the --force flag. Note: only a subset of resources support graceful deletion. In absence of the support, --grace-period is ignored.

Options
-A, --all-namespaces=false If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
--cascade=true If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
--dry-run="none" Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
--field-selector="" Selector (field query) to filter on, supports '=', '==', and '!=' (e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-f, --filename=[] Filename, directory, or URL to files containing the resource to delete.
--force=false If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
-k, --kustomize="" Process a kustomization directory. This flag can't be used together with -f or -R.
--now=false If true, resources are signaled for immediate shutdown (same as --grace-period=1).
-o, --output="" Output mode. Use "-o name" for shorter output (resource/name).
--raw="" Raw URI to DELETE to the server. Uses the transport specified by the kubeconfig file.
-R, --recursive=false Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
-l, --selector="" Selector (label query) to filter on, not including uninitialized ones.
--timeout=0s The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object
--wait=true If true, wait for resources to be gone before returning. This waits for finalizers.
Options Inherited from Parent Commands
Example
# Delete a pod using the type and name specified in pod.json.
kubectl delete -f ./pod.json

# Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml.
kubectl delete -k dir

# Delete a pod with minimal delay
kubectl delete pod foo --now

# Force delete a pod on a dead node
kubectl delete pod foo --force

# Delete all pods
kubectl delete pods --all
See Also
kubectl(1),
History
January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since!
Referenced By
kubectl(1). | https://www.mankier.com/1/kubectl-delete | CC-MAIN-2021-10 | refinedweb | 409 | 58.79 |
shell's .load is pathSep picky (bug, now fixed)
(1.1) By Larry Brasfield (LarryBrasfield) on 2020-04-26 23:44:24 edited from 1.0 [source]
I see that the bug I described in this thread, and for which a simple fix is shown in that same post, remains in the source for loadext.c .
If it is not a bug, then the documentation for the API and SQL functions that reach sqlite3LoadExtension() needs a note added regarding the required path separator, which is always '/' for the last one now when the entry point default is taken and the given DLL location has more than a basename.
If it has been recorded as a bug, I missed noticing that. (I need to learn ticket searching skills.)
Amended 2020-04-26: Now fixed on trunk as of 22:04 check-in.
(2.1) By Larry Brasfield (LarryBrasfield) on 2020-04-26 15:23:32 edited from 2.0 in reply to 1.0 [link] [source]
Check-in bc3bf7c6 appears to be intended as a fix for the subject bug, but it does not work as intended.
The hunt for the beginning of the DLL basename when SQLITE_OS_WIN is true now reads:
for(iFile=ncFile-1; iFile>=0 && ((c=zFile[iFile]!='/')||c=='\\'); iFile--){}
This has two problems: (1) The search should stop, rather than continue, when the usual Windows path separator '\' is found; and (2) the value of c becomes the boolean result of the comparison to '/', which is always false when the comparison to '\\' is reached thus being equivalent to 0=='\\' for that test.
The intent of that code would be better expressed with this replacement:
for(iFile=ncFile-1; iFile>=0 && (c=zFile[iFile])!='/'&&c!='\\'; iFile--){}
I have built and tested both versions with these shell commands, (where the shell executable is not located in /Bin), with results as noted:
.load /Bin/natsort
.load \\Bin\\natsort sqlite3_natsort_init
These two succeed for all versions.
.load \\Bin\\natsort
This fails with "Error: The specified procedure could not be found." for check-in bc3bf7c6 (and earlier versions) and succeeds with the proposed replacement.
(3.1) By Larry Brasfield (LarryBrasfield) on 2020-04-26 15:24:38 edited from 3.0 in reply to 2.0 [link] [source]
I almost added that a good optimizer would eliminate the 0=='\\' test, but decided that is irrelevant to this thread. However, out of curiosity about the MSVC v19 code generator given optimization flags, I looked at assembler output for both the bc3bf7c6 check-in and the proposed replacement. Here are the results, lightly annotated in comments:
; Original code:
; Line 123565
lea r10d, DWORD PTR [rbx-1]
mov QWORD PTR [rax], rcx
mov ecx, ebx
sub rcx, 1
js SHORT $LN205@sqlite3Loa
$LL7@sqlite3Loa:
cmp BYTE PTR [rcx+rsi], 47 ; '/'
je SHORT $LN205@sqlite3Loa
dec r10d
sub rcx, 1
jns SHORT $LL7@sqlite3Loa
$LN205@sqlite3Loa:

; Revised code:
; Line 123567
; '/'
je SHORT $LN203@sqlite3Loa
cmp al, 92 ; '\\'
je SHORT $LN203@sqlite3Loa
dec r10d
sub rcx, 1
jns SHORT $LL7@sqlite3Loa
$LN203@sqlite3Loa:
As can be seen in the 'cmp al, ?' instructions, the 2nd test is optimized out when it could not succeed. The optimizer recognized this, not via constant expression analysis but by analyzing the possible values of c when the 2nd test was reached.
As can also be seen, the possibly performant conversion of code that might be written:
zFile[iFile]!='/'&&zFile[iFile]!='\\'
into something that avoids an indexing operation:
(c=zFile[iFile])!='/'&&c!='\\'
is not needed with this particular optimizing compiler. I dare say that recognizing common subexpressions such as 'zFile[iFile]' is an easier and more common optimizer feature than the one demonstrated in the above assembler output.
Of course, this code runs after a DLL has been loaded, after some file reads and many thousands of instructions run to (dynamically) link it, so the possible elimination of a load instruction is like a sneeze in a hurricane.
(4) By Larry Brasfield (LarryBrasfield) on 2020-04-26 21:22:23 in reply to 2.1 [link] [source]
I am sorry to have to report that check-in 57b16d8c is still not right.
Put briefly, the newest change's code fragment,
(c=zFile[iFile]!='/')
, assigns the comparison result, a truth value (1 or 0), to c rather than the char value that was probably intended. If that fragment was instead written,
(c=zFile[iFile])!='/'
, then c would be assigned the char value zFile[iFile] and the code then comparing c to '\\' would work.
If that is accepted as a fact, the rest of this post can be ignored because it only goes to prove what I claim above and show, in detail, why it is true.
I modified the subject code taken from loadext.c (in the amalgamation sqlite3.c) to read:
memcpy(zAltEntry, "sqlite3_", 8);
#if SQLITE_OS_WIN
# if defined(FAVOR_DRH_LOAD_EXT_ENTRY_FIX1)
  for(iFile=ncFile-1; iFile>=0 && ((c=zFile[iFile]!='/')||c=='\\'); iFile--){}
# elif defined(FAVOR_DRH_LOAD_EXT_ENTRY_FIX2)
  for(iFile=ncFile-1; iFile>=0 && ((c=zFile[iFile]!='/')&&c!='\\'); iFile--){} /* This is line 123567 */
# else
  for(iFile=ncFile-1; iFile>=0 && (c=zFile[iFile])!='/'&&c!='\\'; iFile--){} /* Note movement of 2nd ')' */
# endif
#else
  for(iFile=ncFile-1; iFile>=0 && zFile[iFile]!='/'; iFile--){}
#endif
iFile++;
I added this in Makefile.msc where it's timely:
# Temp defs to test bug fix options
!IF defined(BUGFIX_OPT)
TCC = $(TCC) -DFAVOR_DRH_LOAD_EXT_ENTRY_FIX$(BUGFIX_OPT)
!ENDIF
Here are excerpts from a subsequent OS shell session, minus much extraneous matter:
> build BUGFIX_OPT=0 clean
...
> build BUGFIX_OPT=2
... cl ... -DFAVOR_DRH_LOAD_EXT_ENTRY_FIX2 ...
> sqlite3m -bail -cmd ".load \\Bin\\natsort"
Error: The specified module could not be found.
> build BUGFIX_OPT=0 clean
...
> build BUGFIX_OPT=1
... cl ... -DFAVOR_DRH_LOAD_EXT_ENTRY_FIX1
> sqlite3m -bail -cmd ".load \\Bin\\natsort"
Error: The specified procedure could not be found.
> build BUGFIX_OPT=0 clean
...
> build BUGFIX_OPT=0
... cl ... -DFAVOR_DRH_LOAD_EXT_ENTRY_FIX0
> sqlite3m -bail -cmd ".load \\Bin\\natsort"
SQLite version 3.32.0 2020-04-23 20:45:46 (with modified shell)
Enter ".help [<command>]" for usage hints.
sqlite> .q
> alias build
nmake -f Makefile.msc SESSION=1 SQLITE3EXE=sqlite3m.exe DYNAMIC_SHELL=1 OPTIMIZATIONS=4 DEBUG=0 SYMBOLS=0 TEMP_STORE=2 OPT_FEATURE_FLAGS="-DSQLITE_DQS=0 -DSQLITE_ENABLE_FTS4=1 -DSQLITE_ENABLE_FTS5=1 -DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_RTREE=1 -DSQLITE_ENABLE_COLUMN_METADATA=1 -DSQLITE_DEFAULT_FOREIGN_KEYS=1 -DSQLITE_ENABLE_GEOPOLY -DSQLITE_ENABLE_SESSION _USE_URI -DSQLITE_OMIT_DEPRECATED -DSQLITE_USE_ALLOCA -DSQLITE_DEFAULT_SYNCHRONOUS=3 -DSQLITE_LIKE_DOESNT_MATCH_BLOBS -DSQLITE_DEFAULT_WORKER_THREADS=3" SHELL_EXTRA_OPTS="-Ox -I..\..\zlib-1.2.11 -DBLOB_IO=1 -DTSV_INIT=1 -DUSE_SYSTEM_SQLITE=1 -DSQLITE_ENABLE_SESSION=1 -DSQLITE_HAVE_ZLIB=1 -DNO_SELFTEST -DSQLITE_UNTESTABLE" LDOPTS="C:\Work\Projects\zlib-1.2.11\zlib.lib /NODEFAULTLIB:MSVCRT /INCREMENTAL:NO"
Here are excerpts from the assembly output with -DFAVOR_DRH_LOAD_EXT_ENTRY_FIX2 compilation:
_TEXT SEGMENT
...
sqlite3LoadExtension PROC
; File C:\Work\Projects\Sqlite\v3r32pr6\sqlite3.c
...
; Line 123567
lea r10d, DWORD PTR [rbx-1]
mov QWORD PTR [rax], rcx
mov ecx, ebx
sub rcx, 1
js SHORT $LN203@sqlite3Loa
$LL7@sqlite3Loa:
cmp BYTE PTR [rcx+rsi], 47 ; 0000002fH
je SHORT $LN203@sqlite3Loa
dec r10d
sub rcx, 1
jns SHORT $LL7@sqlite3Loa
$LN203@sqlite3Loa:
; Note that above comparison to 47 (aka '/') is to an indirectly accessed location. Also note that there is no comparison to 92 (aka '\') anywhere. The code that might do so has been optimized away because this
&& ((c=zFile[iFile]!='/')&&c!='\\')
is logically equivalent to this appearing at the top of the loop body:
if( !( c=zFile[iFile]!='/' ) ){  /* 1st test */
  if( !( c!='\\' ) )             /* 2nd test */
    break;
}
The compiler, enforcing the higher binding precedence of != relative to =, "stores" the 1st comparison's truth value into what would be c. (The quotes are because, in fact, the "c" value remains enregistered, not stored, as the above .asm excerpts show.) Because the 2nd test can only be reached when c would have been given the truth value "False", or 0 per C convention, and because 0!='\' is always true, the 2nd test condition can never be met; there is nothing achieved by executing such code with a constant outcome. And so it is optimized away.
Here, with the troublesome clause selected (at line 123569) as
&& (c=zFile[iFile])!='/'&&c!='\\'
, are excerpts from the assembly output:
; Line 123569
; 0000002fH
je SHORT $LN203@sqlite3Loa
cmp al, 92 ; 0000005cH
je SHORT $LN203@sqlite3Loa
dec r10d
sub rcx, 1
jns SHORT $LL7@sqlite3Loa
$LN203@sqlite3Loa:
; Note that above comparisons to 47 and 92 (aka '/' and '\\') are to a loaded register.
This last is the code that works as demonstrated by the shell session excerpts. It ends the loop if either of two successive comparisons are met.
I apologize if this is excessively belaboring the obvious. My first fix was not the one I first suggested, so I am sympathetic regarding the confusion.
(5) By Larry Brasfield (LarryBrasfield) on 2020-04-26 23:39:00 in reply to 4 [link] [source]
Check-in b73d9a7d6f7fec0f, using new DirSep(c) macro, works great. Thanks. | https://sqlite.org/forum/info/745021e7e5e6a54d | CC-MAIN-2022-05 | refinedweb | 1,433 | 64.41 |