Following is a quick and dirty C++ XML XPath expression parser.
I started the day checking the web looking for a quick and easy XPath parser for C++. Now, before I begin, I just want to say that I found plenty of XML libraries out there but nothing that quite suited my needs. What got me started on this XPath adventure, you ask? Well, it all started because I am currently writing a dev tool which requires some external settings. Usually, I would place these app settings in the Registry, but on this occasion (as I had a little time), I thought I would have a go at using XML to store my application settings. All was going swimmingly, and I had got to the part where I had written my settings.xml file, and now I wanted to read it back in and navigate my settings file. After doing some internet prodding, it seems that XPath is the easiest and simplest way to do this.
I had a quick play with the .NET classes using C++/CLI and found them a breeze to use, but I wanted to keep my application fully native C++ on this occasion, which is why I decided to write this quick and dirty XPathParser class.
XPathParser
Using the XPathParser class is very straightforward.
First, add the source files (XPathParser.cpp and XPathParser.h) into the directory you wish to use them from, and then add the files to your Visual Studio project. Include XPathParser.h in the file you want to use the class:
#include "XPathParser.h"
The next thing to do is to include the namespace XPathNS.
using namespace XPathNS;
At this point, we are ready to rock and roll. All we need to do is create an instance of the XPathParser, passing in the name of the XML file we are going to throw XPath expressions at.
XPathParser xPath( "books.xml" );
If you have made it this far, you'd be foolish to stop coding now. Now, we get to try out some XPath expressions (just like below):
// drill down and select all author elements
std::vector<XMLNode> nodeList1 = xPath.selectNodes( "//catalog//book//author" );
// select all author elements
std::vector<XMLNode> nodeList2 = xPath.selectNodes( "//author" );
// select all price elements
std::vector<XMLNode> nodeList3 = xPath.selectNodes( "//price" );
// select all books elements
std::vector<XMLNode> nodeList4 = xPath.selectNodes( "//book" );
// select all attributes named id
std::vector<XMLNode> nodeList5 = xPath.selectNodes( "//@id" );
// select the last book element
std::vector<XMLNode> nodeList6 = xPath.selectNodes( "//book[last()]" );
// select all book elements with id equal to bk103
std::vector<XMLNode> nodeList7 = xPath.selectNodes( "//book[@id='bk103']" );
// select all book titles with price > 35 quid
std::vector<XMLNode> nodeList8 =
xPath.selectNodes( "/catalog/book[price>35.00]/title" );
// select the first node matching author
XMLNode node1 = xPath.selectSingleNode( "//author" );
Simple as that, eh? Are we forgetting something? Ah yes... Each XPath expression yields a list of XMLNodes. An XMLNode is made up of the "XML name", the actual "XML value", and "any XML attributes" associated with the XMLNode. For reference, here is what the XMLNode class looks like:
struct XMLAttribute
{
std::string value_;
std::string name_;
};
struct XMLNode
{
std::string xml_;
std::string value_;
std::string name_;
std::vector<XMLAttribute> nodeAttributes_;
};
Getting back to some sample code, here is a complete example of using the XPathParser class:
using namespace XPathNS;
XPathParser xPath( "books.xml" );
std::vector<XMLNode> nodeList =
xPath.selectNodes( "/catalog/book[price>35.00]/title" );
// now let's output our nodeList
for ( size_t loopCnt = 0; loopCnt < nodeList.size(); loopCnt++ )
{
    XMLNode node = nodeList[loopCnt];
    std::cout << "name : " << node.name_ << std::endl;
    std::cout << "value: " << node.value_ << std::endl;
    std::cout << "num attributes : " << node.nodeAttributes_.size() << std::endl;
    for ( size_t attribCnt = 0; attribCnt < node.nodeAttributes_.size(); attribCnt++ )
    {
        XMLAttribute attrib = node.nodeAttributes_[attribCnt];
        std::cout << "attribute name : " << attrib.name_ << std::endl;
        std::cout << "attribute value: " << attrib.value_ << std::endl;
    }
}
Just to note, in real life, the XPathParser class is nothing more than a wrapper for MSXML 6.0, which is why this article is aptly titled quick & dirty. More information about MSXML can be found on Microsoft's website.
The sample XML file that I used came from the MSDN website.
The sample application was written using Visual Studio 2005 SP2.
No history.
On Fri, 2007-10-12 at 17:50 +0200, Jan Blunck wrote:> In case of somebody opens a file with dentry_open(dentry, NULL, ...) we don't> want to stumble on the NULL pointer mnt in struct file. ...> +++ b/fs/namespace.c> @@ -253,6 +253,9 @@ void mnt_drop_write(struct vfsmount *mnt> int must_check_underflow = 0;> struct mnt_writer *cpu_writer;> > + if (!mnt)> + return;I kinda wish we'd fix these in the callers. I know we do somethingsimilar to this with mntput(), but I worry a bit that this justdiscourages people from using the right interfaces.Do you have a case where we're actually getting a NULL mount in here?We had at least one in reiser4 that really revealed some nastiness inthe fs that needed fixing.-- Dave-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2007/10/12/279 | CC-MAIN-2015-22 | refinedweb | 153 | 65.01 |
weda82
@mikael Thanks for your reply.
You are right. I was missing the brackets at "nowPlayingItem". I already had them for title, but you need both.
weda82
Thank you for your ideas and I am sorry for my late reply.
I tried the previous suggestion and I think the query is not working properly. If I iterate the query result it returns all my media items. But I also do not see the problem. As far as I understood the Apple documentation, it looks correct to me.
I tried a different approach:
from objc_util import *

def main():
    NSBundle.bundleWithPath_(
        '/System/Library/Frameworks/MediaPlayer.framework').load()
    matchingItem = ["928428096"]  # StoreId of "Songs of innocence"
    MPMusicPlayerController = ObjCClass('MPMusicPlayerController')
    player = MPMusicPlayerController.systemMusicPlayer()
    player.setQueueWithStoreIDs(matchingItem)
    player.play()

main()
I got the StoreId from this web site:
This is working, but not as comfortable as I was looking for.
weda82
Hello
I am looking for a way to specify an existing album from my Music app and start playing it.
Is this possible with Pythonista 3 on iOS 12?
I appreciate any ideas or solutions.
Thank you | https://forum.omz-software.com/user/weda82 | CC-MAIN-2020-29 | refinedweb | 183 | 51.34 |
Initially only a few TBs of space is needed for a storage solution (<10TBs).
My initial plan is to have a NAS with other servers mounting the data stored on the NAS device, pushing and pulling data from it. Because this will be a small deployment the cost of a SAN can't be justified, and growth expectancy is unknown as of yet. So it will be NAS to being with, but it needs to be expandable.
I see a few problems with this design though;
Firstly; SMB/CIFS is not good with multiple servers having the same data store mounted, NFS seems like a better option here. Although, as far as I know, it's my only option. It would be a native Linux deployment, so are any better protocols available to me other than NFS, or is that my only choice here?
Secondly; As the NAS device gets short of either I/O capacity or space, which ever comes first, another will have to be added (and this process will repeat). How can I drop another NAS onto the network and extending the existing storage share (as far as the view from the other servers is concerned) to include this addition storage space? What is the "NAS equivalent" of adding another storage host in a SAN, and expanding the file system across it? (As far as I know this isn't possible, but I'm asking in case I'm wrong!).
Presumably what I have described in the scenario above, once the first NAS is at capacity, is the basic requirement for a SAN, is this correct? Would a more scalable approach be to add another NAS, and have the servers mount both storage shares, and have the application support the use of multiple storage spaces, rather then trying to implement an ever growing storage space?
SMB/CIFS is perfectly fine with multiple servers having the same data store mounted. It's a plenty concurrent file system that, despite being slower and higher-latency than NFS, will deal just fine with a good number of concurrent connections. The bigger concern may be user access, since all traffic to your CIFS mount will go across as the user that authenticated the mount, in contrast to NFS. In general, I do consider NFS the more robust solution for server-server file sharing.
If you're planning for expandability in a single namespace, it's much cheaper to scale up than scale out. Most vendors will provide some kind of SAS-based disk array solution. These can typically be daisy-chained, and you can run a single coherent filesystem across them using LVM or a similar volume manager (keeping in mind that if one disk shelf fails it will trash your entire volume, so you probably want multiple paths to your storage). This is probably the most cost-effective solution for you, but there's a ton of options. Your choice of filesystem does matter, so keep that in mind. I'm a fan of ZFS on Solaris 11, but your choices are more diverse if you're dedicated to using a free software solution.
If you're dead-set on scaling out, there's a number of parallel filesystems like Gluster and Ceph out there, but they're at varying degrees of maturity and compatibility and I wouldn't recommend them for general-purpose file sharing at this point.
If you use iSCSI you can turn your NAS into a SAN. After that use it with cLVM and ocfs to concurrently mount it on your systems. cLVM will give you the flexibility to expand at will.
Using the @stencil decorator
Stencils are a common computational pattern in which array elements
are updated according to some fixed pattern called the stencil kernel.
Numba provides the
@stencil decorator so that users may
easily specify a stencil kernel and Numba then generates the looping
code necessary to apply that kernel to some input array. Thus, the
stencil decorator allows clearer, more concise code and in conjunction
with the parallel jit option enables higher
performance through parallelization of the stencil execution.
Basic usage
An example use of the
@stencil decorator:
from numba import stencil

@stencil
def kernel1(a):
    return 0.25 * (a[0, 1] + a[1, 0] + a[0, -1] + a[-1, 0])
The stencil kernel is specified by what looks like a standard Python function definition but there are different semantics with respect to array indexing. Stencils produce an output array of the same size and shape as the input array although depending on the kernel definition may have a different type. Conceptually, the stencil kernel is run once for each element in the output array. The return value from the stencil kernel is the value written into the output array for that particular element.
The parameter
a represents the input array over which the
kernel is applied.
Indexing into this array takes place with respect to the current element
of the output array being processed. For example, if element
(x, y)
is being processed then
a[0, 0] in the stencil kernel corresponds to
a[x + 0, y + 0] in the input array. Similarly,
a[-1, 1] in the stencil
kernel corresponds to
a[x - 1, y + 1] in the input array.
Depending on the specified kernel, the kernel may not be applicable to the borders of the output array as this may cause the input array to be accessed out-of-bounds. The way in which the stencil decorator handles this situation is dependent upon which func_or_mode is selected. The default mode is for the stencil decorator to set the border elements of the output array to zero.
To invoke a stencil on an input array, call the stencil as if it were a regular function and pass the input array as the argument. For example, using the kernel defined above:
>>> import numpy as np
>>> input_arr = np.arange(100).reshape((10, 10))
>>> output_arr = kernel1(input_arr)
>>> output_arr
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0., 11., 12., 13., 14., 15., 16., 17., 18.,  0.],
       [ 0., 21., 22., 23., 24., 25., 26., 27., 28.,  0.],
       [ 0., 31., 32., 33., 34., 35., 36., 37., 38.,  0.],
       [ 0., 41., 42., 43., 44., 45., 46., 47., 48.,  0.],
       [ 0., 51., 52., 53., 54., 55., 56., 57., 58.,  0.],
       [ 0., 61., 62., 63., 64., 65., 66., 67., 68.,  0.],
       [ 0., 71., 72., 73., 74., 75., 76., 77., 78.,  0.],
       [ 0., 81., 82., 83., 84., 85., 86., 87., 88.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> input_arr.dtype
dtype('int64')
>>> output_arr.dtype
dtype('float64')
Note that the stencil decorator has determined that the output type
of the specified stencil kernel is
float64 and has thus created the
output array as
float64 while the input array is of type
int64.
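As a rough illustration of these semantics, the looping code generated for kernel1 behaves like the plain-Python sketch below. This is a simplified reference only — the function name is made up, and Numba's real implementation is compiled machine code, not an interpreted loop:

```python
# Hypothetical reference for what @stencil generates for kernel1:
# apply the relative-index kernel at each interior point and leave
# the 1-element border at the default cval of 0.
def apply_kernel1(a):
    rows, cols = len(a), len(a[0])
    out = [[0.0] * cols for _ in range(rows)]
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            # a[0, 1], a[1, 0], a[0, -1], a[-1, 0] relative to (x, y)
            out[x][y] = 0.25 * (a[x][y + 1] + a[x + 1][y]
                                + a[x][y - 1] + a[x - 1][y])
    return out
```

For the arange grid above, each interior output element is the average of its four neighbours, which for that particular input reproduces the input value as a float.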
Stencil Parameters
Stencil kernel definitions may take any number of arguments with the following provisions. The first argument must be an array. The size and shape of the output array will be the same as that of the first argument. Additional arguments may either be scalars or arrays. For array arguments, those arrays must be at least as large as the first argument (array) in each dimension. Array indexing is relative for all such input array arguments.
Kernel shape inference and border handling
In the above example and in most cases, the array indexing in the
stencil kernel will exclusively use
Integer literals.
In such cases, the stencil decorator is able to analyze the stencil
kernel to determine its size. In the above example, the stencil
decorator determines that the kernel is
3 x 3 in shape since indices
-1 to
1 are used for both the first and second dimensions. Note that
the stencil decorator also correctly handles non-symmetric and
non-square stencil kernels.
Based on the size of the stencil kernel, the stencil decorator is
able to compute the size of the border in the output array. If
applying the kernel to some element of input array would cause
an index to be out-of-bounds then that element belongs to the border
of the output array. In the above example, points
-1 and
+1 are
accessed in each dimension and thus the output array has a border
of size one in all dimensions.
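The inference described above can be sketched in a few lines of plain Python. This is a toy illustration with made-up names, not Numba's actual analysis code:

```python
# Toy sketch: the kernel's extent in each dimension is the min/max of
# the constant offsets it uses; the border width on each side is the
# largest absolute offset in that direction.
def infer_neighborhood(offsets, ndim):
    return tuple(
        (min(o[d] for o in offsets), max(o[d] for o in offsets))
        for d in range(ndim))

# kernel1 uses a[0, 1], a[1, 0], a[0, -1] and a[-1, 0]:
kernel1_offsets = [(0, 1), (1, 0), (0, -1), (-1, 0)]
```

Here infer_neighborhood(kernel1_offsets, 2) gives ((-1, 1), (-1, 1)) — a 3 x 3 kernel, hence a border of width one in each dimension.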
The parallel mode is able to infer kernel indices as constants from simple expressions if possible. For example:
@njit(parallel=True)
def stencil_test(A):
    c = 2
    B = stencil(
        lambda a, c: 0.3 * (a[-c+1] + a[0] + a[c-1]))(A, c)
    return B
Stencil decorator options
Note
The stencil decorator may be augmented in the future to provide additional
mechanisms for border handling. At present, only one behaviour is
implemented,
"constant" (see
func_or_mode below for details).
neighborhood
Sometimes it may be inconvenient to write the stencil kernel
exclusively with
Integer literals. For example, let us say we
would like to compute the trailing 30-day moving average of a
time series of data. One could write
(a[-29] + a[-28] + ... + a[-1] + a[0]) / 30 but the stencil
decorator offers a more concise form using the
neighborhood
option:
@stencil(neighborhood = ((-29, 0),))
def kernel2(a):
    cumul = 0
    for i in range(-29, 1):
        cumul += a[i]
    return cumul / 30
The neighborhood option is a tuple of tuples. The outer tuple’s length is equal to the number of dimensions of the input array. The inner tuple’s lengths are always two because each element of the outer tuple corresponds to minimum and maximum index offsets used in the corresponding dimension.
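What kernel2 computes at each eligible position can also be written as an ordinary function. The sketch below is a plain-Python reference for illustration (the name is made up), not the stencil machinery itself:

```python
# Reference for kernel2 at one output position i (valid for i >= 29):
# the mean of the trailing 30 samples, using the same relative
# indices -29..0 as the stencil kernel.
def trailing_mean_30(series, i):
    cumul = 0
    for k in range(-29, 1):
        cumul += series[i + k]
    return cumul / 30
```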
If a user specifies a neighborhood but the kernel accesses elements outside the specified neighborhood, the behavior is undefined.
func_or_mode
The optional
func_or_mode parameter controls how the border of the output array
is handled. Currently, there is only one supported value,
"constant".
In
constant mode, the stencil kernel is not applied in cases where
the kernel would access elements outside the valid range of the input
array. In such cases, those elements in the output array are assigned
to a constant value, as specified by the
cval parameter.
cval
The optional cval parameter defaults to zero but can be set to any
desired value, which is then used for the border of the output array
if the
func_or_mode parameter is set to
constant. The cval parameter is
ignored in all other modes. The type of the cval parameter must match
the return type of the stencil kernel. If the user wishes the output
array to be constructed from a particular type then they should ensure
that the stencil kernel returns that type.
standard_indexing
By default, all array accesses in a stencil kernel are processed as
relative indices as described above. However, sometimes it may be
advantageous to pass an auxiliary array (e.g. an array of weights)
to a stencil kernel and have that array use standard Python indexing
rather than relative indexing. For this purpose, there is the
stencil decorator option
standard_indexing whose value is a
collection of strings whose names match those parameters to the
stencil function that are to be accessed with standard Python indexing
rather than relative indexing:
@stencil(standard_indexing=("b",))
def kernel3(a, b):
    return a[-1] * b[0] + a[0] + b[1]
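The mixed indexing can be illustrated with a plain-Python reference (hypothetical name; x stands for the current output position):

```python
# kernel3 reference: 'a' is indexed relative to the current position x,
# while the auxiliary array 'b' uses standard (absolute) indexing.
def kernel3_ref(a, b, x):
    return a[x - 1] * b[0] + a[x] + b[1]
```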
StencilFunc
The stencil decorator returns a callable object of type
StencilFunc.
StencilFunc objects contains a number of attributes but the only one of
potential interest to users is the
neighborhood attribute.
If the
neighborhood option was passed to the stencil decorator then
the provided neighborhood is stored in this attribute. Else, upon
first execution or compilation, the system calculates the neighborhood
as described above and then stores the computed neighborhood into this
attribute. A user may then inspect the attribute if they wish to verify
that the calculated neighborhood is correct.
Stencil invocation options
Internally, the stencil decorator transforms the specified stencil kernel into a regular Python function. This function will have the same parameters as specified in the stencil kernel definition but will also include the following optional parameter.
out
The optional
out parameter is added to every stencil function
generated by Numba. If specified, the
out parameter tells
Numba that the user is providing their own pre-allocated array
to be used for the output of the stencil. In this case, the
stencil function will not allocate its own output array.
Users should assure that the return type of the stencil kernel can
be safely cast to the element-type of the user-specified output array
following the Numpy ufunc casting rules.
An example usage is shown below:
>>> import numpy as np
>>> input_arr = np.arange(100).reshape((10, 10))
>>> output_arr = np.full(input_arr.shape, 0.0)
>>> kernel1(input_arr, out=output_arr)
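The contract of the out parameter — write results into a caller-supplied buffer instead of allocating a new array — can be sketched in plain Python. This is toy code with hypothetical names, shown only to illustrate the idea:

```python
# Toy sketch of the out= contract: results go into a pre-allocated
# buffer supplied by the caller; no output allocation happens here.
def apply_into(kernel, a, out):
    for i in range(1, len(a) - 1):   # skip the 1-element border
        out[i] = kernel(a, i)
    return out

buf = [0.0] * 5
apply_into(lambda a, i: (a[i - 1] + a[i] + a[i + 1]) / 3, [3.0] * 5, buf)
```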
Abstract
This PEP introduces a new built-in function, enumerate() to simplify a commonly used looping idiom. It provides all iterable collections with the same advantage that iteritems() affords to dictionaries -- a compact, readable, reliable index notation.
Rationale
Python 2.2 introduced the concept of an iterable interface as proposed in PEP 234 [3]. The availability of generators makes it possible to improve on the loop counter ideas in PEP 212 [2]. Those ideas provided a clean syntax for iteration with indices and values, but did not apply to all iterable objects. Also, that approach did not have the memory friendly benefit provided by generators which do not evaluate the entire sequence all at once. The new proposal is to add a built-in function, enumerate(), which was made possible once iterators and generators became available. It provides all iterables with the same advantage that iteritems() affords to dictionaries -- a compact, readable, reliable index notation. Like zip(), it is expected to become a commonly used looping idiom. This suggestion is designed to take advantage of the existing implementation and require little additional effort to incorporate. It is backwards compatible and requires no new keywords. The proposal will go into Python 2.3 when generators become final and are not imported from __future__.
BDFL Pronouncements
The new built-in function is ACCEPTED.
Specification for a new built-in:
    def enumerate(collection):
        'Generates an indexed series:  (0,coll[0]), (1,coll[1]) ...'
        i = 0
        it = iter(collection)
        while 1:
            yield (i, it.next())
            i += 1

Note A: PEP 212 Loop Counter Iteration [2] discussed several proposals for achieving indexing. Some of the proposals only work for lists unlike the above function which works for any generator, xrange, sequence, or iterable object. Also, those proposals were presented and evaluated in the world prior to Python 2.2 which did not include generators. As a result, the non-generator version in PEP 212 had the disadvantage of consuming memory with a giant list of tuples. The generator version presented here is fast and light, works with all iterables, and allows users to abandon the sequence in mid-stream with no loss of computation effort.

There are other PEPs which touch on related issues: integer iterators, integer for-loops, and one for modifying the arguments to range and xrange. The enumerate() proposal does not preclude the other proposals and it still meets an important need even if those are adopted -- the need to count items in any iterable. The other proposals give a means of producing an index but not the corresponding value. This is especially problematic if a sequence is given which doesn't support random access such as a file object, generator, or sequence defined with __getitem__.

Note B: Almost all of the PEP reviewers welcomed the function but were divided as to whether there should be any built-ins. The main argument for a separate module was to slow the rate of language inflation. The main argument for a built-in was that the function is destined to be part of a core programming style, applicable to any object with an iterable interface. Just as zip() solves the problem of looping over multiple sequences, the enumerate() function solves the loop counter problem.

If only one built-in is allowed, then enumerate() is the most important general purpose tool, solving the broadest class of problems while improving program brevity, clarity and reliability.

Note C: Various alternative names were discussed:

    iterindexed() -- five syllables is a mouthful
    index()       -- nice verb but could be confused with the .index() method
    indexed()     -- widely liked however adjectives should be avoided
    indexer()     -- noun did not read well in a for-loop
    count()       -- direct and explicit but often used in other contexts
    itercount()   -- direct, explicit and hated by more than one person
    iteritems()   -- conflicts with key:value concept for dictionaries
    itemize()     -- confusing because amap.items() != list(itemize(amap))
    enum()        -- pithy; less clear than enumerate; too similar to enum
                     in other languages where it has a different meaning

All of the names involving 'count' had the further disadvantage of implying that the count would begin from one instead of zero. All of the names involving 'index' clashed with usage in database languages where indexing implies a sorting operation rather than linear sequencing.

Note D: This function was originally proposed with optional start and stop arguments. GvR pointed out that the function call enumerate(seqn,4,6) had an alternate, plausible interpretation as a slice that would return the fourth and fifth elements of the sequence. To avoid the ambiguity, the optional arguments were dropped even though it meant losing flexibility as a loop counter. That flexibility was most important for the common case of counting from one, as in:

    for linenum, line in enumerate(source,1):
        print linenum, line

Comments from GvR: filter and map should die and be subsumed into list comprehensions, not grow more variants. I'd rather introduce built-ins that do iterator algebra (e.g. the iterzip that I've often used as an example). I like the idea of having some way to iterate over a sequence and its index set in parallel. It's fine for this to be a built-in. I don't like the name "indexed"; adjectives do not make good function names. Maybe iterindexed()?

Comments from Ka-Ping Yee: I'm also quite happy with everything you proposed ... and the extra built-ins (really 'indexed' in particular) are things I have wanted for a long time.

Comments from Neil Schemenauer: The new built-ins sound okay. Guido may be concerned with increasing the number of built-ins too much. You might be better off selling them as part of a module. If you use a module then you can add lots of useful functions (Haskell has lots of them that we could steal).

Comments from Magnus Lie Hetland: I think indexed would be a useful and natural built-in function. I would certainly use it a lot. I like indexed() a lot; +1. I'm quite happy to have it make PEP 281 obsolete. Adding a separate module for iterator utilities seems like a good idea.

Comments from the Community: The response to the enumerate() proposal has been close to 100% favorable. Almost everyone loves the idea.

Author response: Prior to these comments, four built-ins were proposed. After the comments, xmap, xfilter and xzip were withdrawn. The one that remains is vital for the language and is proposed by itself. Indexed() is trivially easy to implement and can be documented in minutes. More importantly, it is useful in everyday programming which does not otherwise involve explicit use of generators. This proposal originally included another function iterzip(). That was subsequently implemented as the izip() function in the itertools module.
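The specification above is Python 2-era code: it.next() no longer exists, and PEP 479 forbids letting StopIteration escape a generator. A direct Python 3 adaptation, shown purely for illustration (the name enumerate_pep is made up), matches the behaviour of the enumerate() built-in:

```python
def enumerate_pep(collection):
    'Generates an indexed series:  (0,coll[0]), (1,coll[1]) ...'
    i = 0
    it = iter(collection)
    while 1:
        try:
            yield (i, next(it))   # Python 3 spelling of it.next()
        except StopIteration:
            return                # required since PEP 479
        i += 1
```

For example, list(enumerate_pep('abc')) yields [(0, 'a'), (1, 'b'), (2, 'c')], identical to the built-in.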
References
[1] PEP 255, Simple Generators
[2] PEP 212, Loop Counter Iteration
[3] PEP 234, Iterators
This document has been placed in the public domain. | http://www.python.org/dev/peps/pep-0279/ | crawl-002 | refinedweb | 1,150 | 54.52 |
Firebase is a mobile and web application development platform, and Firebase Storage provides secure file uploads and downloads for Firebase apps. In this post, you'll build an Android application with the ability to upload images to Firebase Storage.
Firebase Setup
If you don't have a Firebase account yet, you can create one at the Firebase home page.
Once your account is set up, go to your Firebase console, and click the Add Project button to add a new project.
Enter your project details and click the Create Project button when done. On the next page, click on the link to Add Firebase to your Android app.
Enter your application package name. My application package is com.tutsplus.code.android.tutsplusupload. Note that the package name is namespaced with a unique string that identifies you or your company. An easy way to find this is to open your
MainActivity file and copy the package name from the top.
When done, click on Register App. On the next page, you will be given a google-services.json to download to your computer. Copy and paste that file into the app folder of your application. (The path should be something like TutsplusUpload/app.)
Set Firebase Permissions
To allow your app access to Firebase Storage, you need to set up permissions in the Firebase console. From your console, click on Storage, and then click on Rules.
Paste the rule below and publish.
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true;
    }
  }
}
This will allow read and write access to your Firebase storage.
Create the Application
Open up Android Studio, and create a new project. You can call your project anything you want. I called mine TutsplusUpload.
Before you proceed, you'll need to add a couple of dependencies. On the left panel of your Android Studio, click on Gradle Scripts.
Open build.gradle (Project: TutsplusUpload), and add this line of code in the dependencies block.
classpath 'com.google.gms:google-services:3.0.0'
Next, open build.gradle (Module: app) to add the dependencies for Firebase. These go in the dependencies block also.
compile 'com.google.firebase:firebase-storage:9.2.1'
compile 'com.google.firebase:firebase-auth:9.2.1'
Finally, outside the dependencies block, add the plugin for Google Services.
apply plugin: 'com.google.gms.google-services'
Save when done, and it should sync.
Set Up the
MainActivity Layout
The application will need one activity layout. Two buttons will be needed—one to select an image from your device, and the other to upload the selected image. After selecting the image you want to upload, the image will be displayed in the layout. In other words, the image will not be set from the layout but from the activity.
In your
MainActivity layout, you will use two layouts—nesting the linear layout inside the relative layout. Start by adding the code for your relative layout.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

</RelativeLayout>
The
RelativeLayout takes up the whole space provided by the device. The
LinearLayout will live inside the
RelativeLayout, and will have the two buttons. The buttons should be placed side by side, thus the orientation to be used for the
LinearLayout will be horizontal.
Here is the code for the linear layout.
<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal">

    <Button
        android:id="@+id/btnChoose"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Choose" />

    <Button
        android:id="@+id/btnUpload"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Upload" />
</LinearLayout>
From the above code, you can see that both buttons have ids assigned. The ids will be used to target the button from the main activity such that when the button gets clicked, an interaction is initiated. You will see that soon.
Below the
LinearLayout, add the code for the
ImageView.
<ImageView
    android:id="@+id/imgView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
You can also see that the
ImageView has an
id; you will use this to populate the layout of the selected image. This will be done in the main activity.
Get
MainActivity Up
Navigate to your
MainActivity, and start by declaring fields. These fields will be used to initialize your views (the buttons and
ImageView), as well as the URI indicating where the image will be picked from. Add this to your main activity, above the
onCreate method.
private Button btnChoose, btnUpload;
private ImageView imageView;

private Uri filePath;

private final int PICK_IMAGE_REQUEST = 71;
PICK_IMAGE_REQUEST is the request code defined as an instance variable.
Now you can initialize your views like so:
//Initialize Views
btnChoose = (Button) findViewById(R.id.btnChoose);
btnUpload = (Button) findViewById(R.id.btnUpload);
imageView = (ImageView) findViewById(R.id.imgView);
In the above code, you are creating new instances of
Button and
ImageView. The instances point to the buttons you created in your layout.
You have to set a listener that listens for interactions on the buttons. When an interaction happens, you want to call a method that triggers either the selection of an image from the gallery or the uploading of the selected image to Firebase.
Underneath the initialized views, set the listener for both buttons. The listener looks like this.
btnChoose.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        chooseImage();
    }
});

btnUpload.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        uploadImage();
    }
});
This should be in the onCreate() method. As mentioned above, each button calls a different method: the Choose button calls the chooseImage() method, while the Upload button calls the uploadImage() method. Let's add those methods; both should be implemented outside the onCreate() method.
Let's start with the method to choose an image. Here is how it should look:
private void chooseImage() {
    Intent intent = new Intent();
    intent.setType("image/*");
    intent.setAction(Intent.ACTION_GET_CONTENT);
    startActivityForResult(Intent.createChooser(intent, "Select Picture"), PICK_IMAGE_REQUEST);
}
When this method is called, a new Intent instance is created. The intent type is set to image, and its action is set to get content. The intent opens an image chooser dialog that allows the user to browse the device gallery and select an image. startActivityForResult is used to receive the result, which is the selected image. To display this image, you'll make use of a method called onActivityResult.

onActivityResult receives a request code, a result code, and the data. In this method, you will check whether the request code equals PICK_IMAGE_REQUEST, the result code equals RESULT_OK, and the data is available. If all of this is true, you want to display the selected image in the ImageView.
Below the chooseImage() method, add the following code.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICK_IMAGE_REQUEST && resultCode == RESULT_OK
            && data != null && data.getData() != null) {
        filePath = data.getData();
        try {
            Bitmap bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), filePath);
            imageView.setImageBitmap(bitmap);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Uploading the File to Firebase
Now we can implement the method for uploading the image to Firebase. First, declare the fields needed for Firebase. Do this below the other fields you declared for your class.
// Firebase
FirebaseStorage storage;
StorageReference storageReference;
storage will be used to hold a FirebaseStorage instance, while storageReference will point to the uploaded file. Inside your onCreate() method, add the code to do that: create a FirebaseStorage instance and get the storage reference. References can be seen as pointers to a file in the cloud.
storage = FirebaseStorage.getInstance();
storageReference = storage.getReference();
Here is what the uploadImage() method should look like.
private void uploadImage() {
    if (filePath != null) {
        final ProgressDialog progressDialog = new ProgressDialog(this);
        progressDialog.setTitle("Uploading...");
        progressDialog.show();

        StorageReference ref = storageReference.child("images/" + UUID.randomUUID().toString());
        ref.putFile(filePath)
            .addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>() {
                @Override
                public void onSuccess(UploadTask.TaskSnapshot taskSnapshot) {
                    progressDialog.dismiss();
                    Toast.makeText(MainActivity.this, "Uploaded", Toast.LENGTH_SHORT).show();
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    progressDialog.dismiss();
                    Toast.makeText(MainActivity.this, "Failed " + e.getMessage(), Toast.LENGTH_SHORT).show();
                }
            })
            .addOnProgressListener(new OnProgressListener<UploadTask.TaskSnapshot>() {
                @Override
                public void onProgress(UploadTask.TaskSnapshot taskSnapshot) {
                    double progress = (100.0 * taskSnapshot.getBytesTransferred()
                            / taskSnapshot.getTotalByteCount());
                    progressDialog.setMessage("Uploaded " + (int) progress + "%");
                }
            });
    }
}
When the uploadImage() method is called, a new instance of ProgressDialog is initialized, and a notice showing the user that the image is being uploaded is displayed. storageReference.child() is then used to create a reference to the uploaded file inside the images folder; this folder is created automatically when an image is first uploaded. Listeners with toast messages are also added, and these messages are displayed depending on the state of the upload.
Set Permission in the App
Finally, you need to request the permissions your application will make use of. Without this, users of your application will not be able to browse their device gallery or connect to the internet from your application. Doing this is easy: simply paste the following into your AndroidManifest file, just above the application element tag.
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
This requests permission to use the internet and to read external storage.
Testing the App
Now go ahead and run your application! You should be able to select an image and successfully upload it to Firebase. To confirm the image uploaded, go back to your console and check in the Files part of your storage.
Conclusion
Firebase provides developers with lots of benefits, and file upload with storage is one of them. Uploading images from your Android application requires you to work with Activities and Intents, so by following along with this tutorial you have deepened your understanding of both. I hope you enjoyed it!
Check out some of our other posts for more background about Activities and Intents, or take a look at some of our other tutorials on using Firebase with Android!
| https://code.tutsplus.com/tutorials/image-upload-to-firebase-in-android-application--cms-29934 | CC-MAIN-2018-17 | refinedweb | 1,604 | 51.04 |
Re: window
Mar 30, 2011

What has worked well for me is to create a namespace based on one's domain name, or a domain name that one owns.
This is discussed in some detail by David Flanagan in his book, JavaScript: The Definitive Guide (O'Reilly), Paragraph 10.1. See Example 10.1, Creating a namespace based on a domain name.
In short, assuming domain name, 'mydomain.com', after going through the coding wickets, you end up with something like this:
// at global scope...
var com = com || {};
com.mydomain = {};
Then, to add (or "require") a new module 'abc' to your namespace:
if (!com.mydomain.abc) {
    com.mydomain.abc = (function () {
        /* lots of code that returns something */
    }());
}
--- In jslint_com@yahoogroups.com, Erik Eckhardt <erik@...> wrote:
>
> On Tue, Mar 29, 2011 at 8:57 AM, Douglas Crockford <douglas@...> wrote:
> >
> > --- In jslint_com@yahoogroups.com, Erik Eckhardt <erik@> wrote:
> >
> > > Is there some place an interested person could read about the good vs.
> > > bad uses of `window`?
> >
> > The principal misuse is to access global variables. Global variables
> > should be avoided.
Simple Example to Illustrate Chain Of Responsibility Design Pattern
The GoF Design Patterns book states the intent of the Chain of Responsibility pattern as: "Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it." In other words, the origin of the request need not worry about who handles its request or how, as long as it gets the expected outcome. By decoupling the origin of the request from the request handlers, we make sure that both can change easily and that new request handlers can be added without the origin of the request, i.e. the client, being aware of the changes. In this pattern we create a chain of objects such that each object holds a reference to another object, which we call its successor, and these objects are responsible for handling the request from the client. All the objects in the chain are created from classes that conform to a common interface, so the client only needs to be aware of the interface and not necessarily its concrete implementations. The client hands the request to the first object in the chain, and the chain is built in such a way that at least one object can handle the request, or the client is made aware that its request could not be handled.
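Before diving into the Java example that follows, here is a minimal, hypothetical sketch of the same idea in Python (the names LevelHandler, helpdesk, etc. are illustrative and not part of this article's example):

```python
class Handler:
    """A link in the chain: handle the request yourself or pass it on."""

    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, severity):
        # Default behavior: delegate to the successor, or report failure.
        if self.successor is not None:
            return self.successor.handle(severity)
        return "unhandled: %d" % severity


class LevelHandler(Handler):
    """Handles a request only if its severity is within this level's limit."""

    def __init__(self, name, max_severity, successor=None):
        super().__init__(successor)
        self.name = name
        self.max_severity = max_severity

    def handle(self, severity):
        if severity <= self.max_severity:
            return self.name                 # this link handles the request
        return super().handle(severity)      # otherwise pass it along


# Build the chain: helpdesk -> engineer -> manager.
chain = LevelHandler("helpdesk", 1,
        LevelHandler("engineer", 3,
        LevelHandler("manager", 5)))

print(chain.handle(2))  # engineer
print(chain.handle(9))  # unhandled: 9
```

The client only talks to the first link; whether "engineer" or "manager" ends up answering is invisible to it, which is exactly the decoupling the pattern is after.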
With this brief introduction, I would like to put forth a very simple example to illustrate this pattern. In this example we create a chain of file parsers such that, depending on the format of the file being passed in, each parser decides whether it is going to parse the file or pass the request to its successor parser. The parsers we chain are: a simple text file parser, a JSON file parser, a CSV file parser, and an XML file parser. The parsing logic in each of these parsers doesn't actually parse any file; instead it just prints out a message stating who is handling the request for which file. We then populate file names of different formats into a list and iterate through them, passing each file name to the first parser in the chain.
Let's define the Parser class. First, here is the class diagram for the Parser class:
The Java code for the same is:
public class Parser {

    private Parser successor;

    public void parse(String fileName) {
        if (getSuccessor() != null) {
            getSuccessor().parse(fileName);
        } else {
            System.out.println("Unable to find the correct parser for the file: " + fileName);
        }
    }

    protected boolean canHandleFile(String fileName, String format) {
        return (fileName == null) || (fileName.endsWith(format));
    }

    Parser getSuccessor() {
        return successor;
    }

    void setSuccessor(Parser successor) {
        this.successor = successor;
    }
}
We would now create different handlers for parsing the different file formats, namely: simple text file, JSON file, CSV file, and XML file. These extend the Parser class and override the parse method. I have kept the implementation of the different parsers simple: each method checks whether the file has the format it is looking for. If a particular handler is unable to process the request, i.e. the file format is not what it is looking for, then the parent method handles such requests. The handler method in the parent class just invokes the same method on the successor handler.
The simple text parser:
public class TextParser extends Parser {

    public TextParser(Parser successor) {
        this.setSuccessor(successor);
    }

    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".txt")) {
            System.out.println("A text parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}
The JSON parser:
public class JsonParser extends Parser {

    public JsonParser(Parser successor) {
        this.setSuccessor(successor);
    }

    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".json")) {
            System.out.println("A JSON parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}
The CSV parser:
public class CsvParser extends Parser {

    public CsvParser(Parser successor) {
        this.setSuccessor(successor);
    }

    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".csv")) {
            System.out.println("A CSV parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}
The XML parser:
public class XmlParser extends Parser {

    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".xml")) {
            System.out.println("A XML parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}
Now that we have all the handlers set up, we need to create a chain of handlers. In this example the chain we create is: TextParser -> JsonParser -> CsvParser -> XmlParser. If XmlParser is also unable to handle the request, then the Parser class prints a message stating that the request was not handled. Let's see the code for the client class, which creates a list of file names and then builds the chain just described.
import java.util.ArrayList;
import java.util.List;

public class ChainOfResponsibilityDemo {

    public static void main(String[] args) {
        // List of file names to parse.
        List<String> fileList = populateFiles();

        // No successor for this handler because it is the last in the chain.
        Parser xmlParser = new XmlParser();
        // XmlParser is the successor of CsvParser.
        Parser csvParser = new CsvParser(xmlParser);
        // CsvParser is the successor of JsonParser.
        Parser jsonParser = new JsonParser(csvParser);
        // JsonParser is the successor of TextParser.
        // TextParser is the start of the chain.
        Parser textParser = new TextParser(jsonParser);

        // Pass each file name to the first handler in the chain.
        for (String fileName : fileList) {
            textParser.parse(fileName);
        }
    }

    private static List<String> populateFiles() {
        List<String> fileList = new ArrayList<>();
        fileList.add("someFile.txt");
        fileList.add("otherFile.json");
        fileList.add("xmlFile.xml");
        fileList.add("csvFile.csv");
        fileList.add("csvFile.doc");
        return fileList;
    }
}
In the file name list above, I have intentionally added a file name for which there is no handler. Running the above code gives us the output:
A text parser is handling the file: someFile.txt
A JSON parser is handling the file: otherFile.json
A XML parser is handling the file: xmlFile.xml
A CSV parser is handling the file: csvFile.csv
Unable to find the correct parser for the file: csvFile.doc
Shailendra Singh replied on Wed, 2012/10/03 - 2:14am
To make it foolproof, the Parser class should be declared as abstract.
Mohamed Sanaulla replied on Wed, 2012/10/03 - 10:28am
in response to:
Shailendra Singh
Alessandro Carraro replied on Thu, 2012/10/04 - 7:39am
Why does canHandleFile return true if filename == null?
Btw, I'd rather make canHandleFile abstract (or return false in Parser) and let parsers override it (for example, the file extension might not be available, or not sufficient; think about a media player that selects the decoder by the first magic bytes of the file).
Here we show a simple example of how to use k-means clustering. We will look at crime statistics from different states in the USA to show which are the most and least dangerous.
We get our data from here.
(This tutorial is part of our Apache Spark Guide. Use the right-hand menu to navigate.)
The data looks like this. The columns are state, cluster, murder rate, assault, population, and rape. It already includes a cluster column which we will drop. That is because you can cluster data points into as many clusters as you like. This data set used some other value. Also notice that the population is a normalized value, meaning it is not the actual population. This makes it a small scale number, which is a common approach to statistics.
Delete the column headings in order to read in the data. Later we will put them back.
Below is the Scala code. We paste it into a Zeppelin notebook since that does graphs nicely. (You can read about using Zeppelin with Apache Spark in a previous post we wrote here.)
First we have our normal imports:
import org.apache.spark.mllib.clustering.{KMeans, KMeansModel}
import org.apache.spark.mllib.linalg.Vectors
import sqlContext.implicits._
import org.apache.spark.sql.types._
Then we create the crime data rdd from the crime_data.csv file, which we copied to Hadoop.
var rdd = sc.textFile("hdfs://localhost:9000/data/crime_data.csv")
Now, data sent to a machine learning algorithm has to be numbers. So this algorithm will convert a string to bytes and then to a double. Then we loop across each byte group and sum them to make a single total number that will be the encoded value of the text field state.
def stoDouble(s: String): Double = {
  return s.map(_.toByte.doubleValue()).reduceLeft((x, y) => x + y)
}
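For what it's worth, the same encoding is a one-liner in Python (a side-by-side sketch, not part of the Spark job). Note that the encoding is lossy: any two anagrams sum to the same value, so distinct state names could in principle collide.

```python
def sto_double(s):
    # Sum the byte values of the string, mirroring the Scala
    # map(_.toByte.doubleValue()) followed by reduceLeft(_ + _).
    return float(sum(s.encode("utf-8")))

print(sto_double("Alabama"))                  # 671.0
print(sto_double("ab") == sto_double("ba"))   # True: anagrams collide
```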
We obviously need to translate that number back into text so we can see what state we are dealing with. So we make a case class. You do that when you want to create a dataframe. You create a dataframe when you want to use SQL, which is easy to work with.
case class StateCode(State: String, Code: Double)

var lines = rdd.map(l => l.split(","))
var states = lines.map(l => StateCode(l(0), stoDouble(l(0)))).toDF()
states.show()
states.createOrReplaceTempView("states")
Here we take each line in the input data file and make it an array of doubles.
def makeDouble(s: String): Array[Double] = {
  var str = s.split(",")
  var a = stoDouble(str(0))
  return Array(a, str(2).toDouble, str(3).toDouble,
               str(4).toDouble, str(5).toDouble)
}

var crime = rdd.map(m => makeDouble(m))
Now we make an object called a Dense Vector. The Spark k-means classification algorithm requires that format. Then we run the train method to cause the machine learning algorithm to group the states into clusters based upon the crime rates and population. We tell it to use five clusters. We could have said 10 or any number. The larger the number of clusters, the more you have divided your data.
val crimeVector = crime.map(a => Vectors.dense(a(0), a(1), a(2), a(3), a(4)))
val clusters = KMeans.train(crimeVector, 5, 10)
Now we create another case class so that the end results will be in a dataframe with named columns. Here we have all the data, plus the Dense Vector object, as well as the prediction that Spark will make based upon the k-means algorithm. A point to note here is that Spark has no problem making a dense vector an SQL column. Notice that the clusters value from the training step is used to make the prediction; we add that to the Crime case class as well.
case class Crime(Code: Double, Murder: Double, Assault: Double,
                 UrbanPop: Double, Rape: Double,
                 PredictionVector: org.apache.spark.mllib.linalg.Vector,
                 Prediction: Double)

val crimeClass = crimeVector.map(a =>
  Crime(a(0), a(1), a(2), a(3), a(4), a, clusters.predict(a))).toDF()
crimeClass.show()
crimeClass.createOrReplaceTempView("crimes")
Finally, we have to join the two tables of states and predictions, because we want to show the state name and not the number we converted it to. Notice the %sql line: in a Zeppelin notebook that means we want to use SQL. We can do that because with crimeClass.createOrReplaceTempView("crimes") we created a temporary view that we can query with SQL. When you use %sql, Zeppelin automatically gives you the option to create different graphs.
%sql
select state, prediction, murder, assault, rape, urbanpop
from states, crimes
where states.code = crimes.code
order by state
Here we output the result as a table; we could also have made a pie chart, bar graph, or other visualization. As you can see, stay away from Alabama, as it is more dangerous than, for example, Colorado.
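To see what KMeans.train is doing conceptually, here is a minimal, self-contained k-means in plain Python (a toy sketch with made-up numbers, independent of Spark). It alternates between assigning each point to its nearest centroid and recomputing each centroid as the mean of its cluster.

```python
def kmeans(points, k, iterations=10):
    # Deterministic toy initialization: use the first k points as centroids.
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for j, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster is empty
                centroids[j] = tuple(sum(dim) / len(cluster)
                                     for dim in zip(*cluster))
    return centroids

# Toy rows of (murder rate, assault rate): two obvious groups.
data = [(13.2, 236), (0.8, 45), (10.0, 263),
        (2.1, 57), (8.1, 294), (2.2, 48)]
print(kmeans(data, k=2))
```

With k=5 on the real crime data, this is conceptually the grouping Spark computes; Spark just distributes the work across the cluster.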
22. UV Sensor(EF05021)
22.1. Introduction
It is able to measure the total UV intensity of the sunlight.
22.2. Characteristic
Designed in RJ11 connections, easy to plug.
22.3. Specification
22.4. Outlook
22.5. Quick to Start
22.5.1. Materials Required and Diagram
Connect the UV sensor to J1 port and the OLED to the IIC port in the Nezha expansion board as the picture shows.
22.6. MakeCode Programming
22.6.1. Step 1
22.6.2. Step 2
22.6.3. Code as below:
22.6.4. Link
Link:
You may also download it directly below:
22.6.5. Result
The detected value from the UV sensor displays on the OLED screen.
22.7. Python Programming
22.7.1. Step 1
Download the package and unzip it: PlanetX_MicroPython
Go to Python editor
We need to add enum.py and uvlevel.py for programming. Click “Load/Save” and then click “Show Files (1)” to see more choices, click “Add file” to add enum.py and uvlevel.py from the unzipped package of PlanetX_MicroPython.
22.7.2. Step 2
22.7.3. Reference
from microbit import *
from enum import *
from uvlevel import *

uvlevel = UVLEVEL(J1)
while True:
    display.scroll(int(uvlevel.get_uvlevel()))
22.7.4. Result
The detected value from the UV sensor displays on the micro:bit. | https://www.elecfreaks.com/learn-en/microbitplanetX/Plant_X_EF05021.html | CC-MAIN-2022-27 | refinedweb | 218 | 70.7 |
You also can re-open an existing class definition and inject your own methods in, or override ones that are already there. For example, I could redefine the .to_s method of the Fixnum class, which is the class that all integers take on by default, to return something like "I, for one, welcome our robot overlords" every time. (The wisdom of doing something like this is of course questionable.) For the sake of demonstrating how very open the entire structure of Ruby is, here's how you can do this:
Fixnum.class_eval do
def to_s
"I, for one, welcome our robot overlords."
end
end
q = 123
puts q.to_s
Of course, don't try this at home, or at least don't do it in production code.
For Further Study
The Ruby language offers many more features and goodies than I could cover in one article, but absorbing them a few at a time from here on will be no harder than what this tutorial has shown. For a quick perusal of how Ruby generally compares to C# and Java, see Table 1 below.
One of the particularly nice things about Ruby is the voluminous mass of documentation generated by the ever-expanding Ruby user community. If you would like to learn some more about Ruby, you should check out the industry standard tongue-in-cheek guide known as Why's (Poignant) Guide to Ruby. If humor isn't your ideal way to learn a language, then you might want to see the more formal Programming Ruby, which is actually quite authoritative on the subject. No matter what your preferences are, you can get wherever you want to go with Ruby from the official Ruby Language. | http://www.devx.com/RubySpecialReport/Article/34470/0/page/4 | CC-MAIN-2020-16 | refinedweb | 299 | 67.49 |
Can you help me to resolve this problem.I set classpath for the mysql connector jar (/home/aghiltu/apache
Connecting to MySQL
();
bds.setDriverClassName("com.mysql.jdbc.Driver");
bds.setUrl("jdbc:mysql...
In the lib folder place all required file ie. commons-collections.jar,
commons-dbcp.jar, commons-pool.jar, j2ee.jar and
mysql-connector-java-5.1.7-bin.jar
Mysql Insert
Mysql Insert
Mysql Insert is used to insert the records or rows to the table.
Understand with Example
The Tutorial illustrate an example from 'Mysql Insert'.To grasp
mysql with jsp - Java Beginners
mysql with jsp i wanted to insert a combo box value to mysql table usgin jsp. how to perform that. can anybody help me in urgent. Hi friend,
Code to help in solving the problem :
Date Format required to change in java
Date Format required to change in java can we change the format of date like sql as per our requirements???If yes then please give the detail solution
MySQL Tools
MySQL Tools
MySQL Migration Toolkit
The MySQL Migration Toolkit is a powerful framework that enables you to quickly migrate your proprietary databases to MySQL. Using a Wizard
mysql table construction - SQL
mysql table construction In MySql there is no pivot function.
using a function or Stored Procedure,
I want to loop through a column of data...
t,12,15,16,17
Please help.
MySQL For Window
, instructions on how to get started. This MySQL help page also gives more advanced... it here to help defray my costs incurred providing this important page.
MySQL...
MySQL For Window
MySQL
Connecting to MYSQL Database in Java
Connecting to MYSQL Database in Java I've tried executing the code... please help here coz I have tried to locate the problem but I can't find...("MySQL Connect Example.");
Connection conn = null;
String url
mysql problem - JDBC
mysql problem hai friends
please tell me how to store the videos in mysql
plese help me as soon as possible
thanks in advance
... = "jdbc:mysql://localhost:3306/test";
Connection con=null;
try
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/20369 | CC-MAIN-2013-20 | refinedweb | 2,535 | 74.49 |
(For more resources on Oracle, see here.)
APEX introduced AJAX support in version 2.0 (the product was called HTML DB back then). The support includes a dedicated AJAX framework that allows us to use AJAX in our APEX applications, and it covers both the client and the server sides.
AJAX support on the client side
The APEX built-in JavaScript library includes a special JavaScript file with the implementation of the AJAX client-side components. In earlier versions this file was called htmldb_get.js, and in APEX 3.1, it was changed to apex_get_3_1.js.
In version 3.1, APEX also started to implement JavaScript namespace in the apex_ns_3_1.js file. Within the file, there is a definition to an apex.ajax namespace.
I'm not mentioning the names of these files just for the sake of it. As the AJAX framework is not officially documented within the APEX documentation, these files can be very important and a useful source of information.
By default, these files are automatically loaded into every application page as part of the #HEAD# substitution string in the Header section of the page template. This means that, by default, AJAX functionality is available to us on every page of our application, without taking any extra measures.
The htmldb_Get object
The APEX implementation of AJAX is based on the htmldb_Get object and as we'll see, creating a new instance of htmldb_Get is always the first step in performing an AJAX request.
The htmldb_Get constructor function has seven parameters:
function htmldb_Get(obj,flow,req,page,instance,proc,queryString)
1—obj
The first parameter is a String that can be set to null, a name of a page item (DOM element), or an element ID.
- Setting this parameter to null will cause the result of the AJAX request to be assigned to a JavaScript variable. We should use this value every time we need to process the AJAX returned result, as in the cases where we return XML or JSON formatted data, or when we are relying on the returned result further along in our JavaScript code flow.
The APEX built-in JavaScript library defines, in the apex_builder.js file (which is also loaded into every application page, just like apex_get_3_1.js), a JavaScript global variable called gReturn. You can use this variable and assign it the AJAX returned result.
- Setting this parameter to the name (ID) of a page item will set the item value property with the result of the AJAX call. You should make sure that the result of the AJAX call matches the nature of the item value property. For example, if you are returning a text string into a text item it will work just fine. However, if you are returning an HTML snippet of code into the same item, you'll most likely not get the result you wanted.
- Setting this parameter to a DOM element ID, which is not an input item on the page, will set its innerHTML property to the result of the AJAX call.
Injecting HTML code, using the innerHTML property, is a cross-browser issue. Moreover, we can't always set innerHTML along the DOM tree. To avoid potential problems, I strongly recommend that you use this option with <div> elements only.
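The three assignment rules above can be illustrated with a small dispatcher. This is not the actual APEX implementation, just a sketch of the described behavior, with the document object passed in explicitly so the rules can be shown in isolation:

```javascript
// Illustrative sketch only: how htmldb_Get dispatches its AJAX
// result depending on the first (obj) parameter. This is NOT the
// real APEX code; `doc` stands in for the browser document.
function dispatchResult(obj, result, doc) {
  if (obj === null) {
    // Rule 1: null -> the caller keeps the result in a variable
    return result;
  }
  var el = doc.getElementById(obj);
  if (el && el.tagName === 'INPUT') {
    // Rule 2: a page (input) item -> assign to its value property
    el.value = result;
  } else if (el) {
    // Rule 3: any other DOM element (ideally a <div>) -> innerHTML
    el.innerHTML = result;
  }
  return result;
}
```

The sketch makes the cross-browser caveat concrete: only the third branch relies on innerHTML, which is why restricting it to <div> elements is the safe choice.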
2—flow
This parameter represents the application ID.
If we are calling htmldb_Get() from an external JavaScript file, this parameter should be set to $v('pFlowId') or its equivalent in version 3.1 or before ($x('pFlowId').value or html_GetElement('pFlowId').value ). This is also the default value, in case this parameter is left null.
If we are calling htmldb_Get() as part of an inline JavaScript code we can use the Substitution String notation &APP_ID. (just to remind you that the trailing period is part of the syntax).
Less common, but if you are using Oracle Web Toolkit to generate dynamic code (for dynamic content) that includes AJAX, you can also use the bind variable notation :APP_ID. (In this case, the period is just a punctuation mark.)
3—req
This String parameter stands for the REQUEST value. Using the keyword APPLICATION_PROCESS with this parameter allows us to name an application level On Demand—PL/SQL Anonymous Block process that will be fired as part of the AJAX server-side processing. For example: 'APPLICATION_PROCESS=demo_code'. This parameter is case sensitive, and as a String, should be enclosed with quotes.
If, as part of the AJAX call, we are not invoking an on-demand process, this parameter should be set to null (which is its default value).
4—page
This parameter represents an application page ID.
The APEX AJAX process allows us to invoke any application page, to run it in the background, on the server side, and then clip portions of the generated HTML code for this page into the AJAX calling page. In these cases, we should set this parameter to the page ID that we want to pull from.
The default value of this parameter is 0 (this stands for page 0). However, this value can be problematic at times, especially when page 0 has not been defined on the application, or when there are inconsistencies between the Authorization scheme, or the page Authentication (such as Public and Required Authentication) of page 0 and the AJAX calling page. These inconsistencies can fail the execution of the AJAX process.
In cases where you are not pulling information from another page, the safe bet is to set this parameter to the page ID of the AJAX calling page, using $v('pFlowStepId') or its equivalent for versions earlier than 3.1. In the case of an inline code, the &APP_PAGE_ID. Substitution String can also be used.
Using the calling page ID as the default value for this parameter can be considered a "good practice" even for upcoming APEX versions, where implementation of page level on-demand process will probably be introduced. I hope you remember that as of version 3.2, we can only define on-demand processes on the application level.
5—instance
This parameter represents the APEX session ID, and should almost always be left null (personally, I never encountered the need to set it otherwise). In this case, it will be populated with the result of $v('pInstance') or its equivalent in earlier versions.
6—proc
This String parameter allows us to invoke a stored or packaged procedure on the database as part of the AJAX process.
The common behavior of the APEX AJAX framework is to use the application level On Demand PL/SQL Anonymous Block process as the logic of the AJAX server-side component. In this case, the on-demand process is named through the third parameter—req—using the keyword APPLICATION_PROCESS, and this parameter—proc—should be left null. The parameter will be populated with its default value of 'wwv_flow.show' (the single quotes are part of the syntax, as this is a String parameter).
However, the APEX AJAX framework also allows us to invoke an external (to APEX) stored (or packaged) procedure as the logic of the AJAX server side. In this case, we can utilize an already existing logic in the database. Moreover, we can benefit from the "regular" advantages of stored procedures, such as a pre-complied code, for better performance, or the option to use wrapped PL/SQL packages, which can protect our business logic better (the APEX on-demand PL/SQL process can be accessed on the database level as clear text).
The parameter should be formatted as a URL and can be in the form of a relative URL. In this case, the system will complete the relative URL into a full path URL based on the current window.location.href property.
As with all stored or packaged procedures that we wish to use in our APEX application, the user (and in the case of using DAD, the APEX public user) should have the proper privileges on the stored procedure.
In case the stored procedure, or the packaged procedure, doesn't have a public synonym defined for it then the procedure name should be qualified with the owner schema. For example, with inline code we can use:
'#OWNER#.my_package.my_proc'
For external code, you should retrieve the owner and make it available on the page (e.g. assign it to a JavaScript global variable) or define a public synonym for the owner schema and package.
7—queryString
This parameter allows us to add parameters to the stored (packaged) procedure that we named in the previous parameter—proc. As we are ultimately dealing with constructing a URL, that will be POSTed to the server, this parameter should take the form of POST parameters in a query string—pairs of name=value, delimited by ampersand (&).
Let's assume that my_proc has two parameters: p_arg1 and p_arg2. In this case, the queryString parameter should be set similar to the following:
'p_arg1=Hello&p_arg2=World'
As we are talking about components of a URL, the values should be escaped so their code will be a legal URL. You can use the APEX built-in JavaScript function htmldb_Get_escape() to do that.
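The escaping step can be sketched with a small helper that assembles the name=value pairs. Note the assumption here: htmldb_Get_escape is part of the APEX library, and this sketch substitutes the standard encodeURIComponent in its place:

```javascript
// Build a queryString value for the htmldb_Get proc parameter.
// Assumption: htmldb_Get_escape behaves like the standard
// encodeURIComponent, which is used here as a stand-in.
function buildQueryString(params) {
  var pairs = [];
  for (var name in params) {
    if (Object.prototype.hasOwnProperty.call(params, name)) {
      pairs.push(name + '=' + encodeURIComponent(params[name]));
    }
  }
  return pairs.join('&');
}

// Values containing spaces or ampersands remain legal in a URL
var qs = buildQueryString({ p_arg1: 'Hello World', p_arg2: 'A&B' });
// qs is 'p_arg1=Hello%20World&p_arg2=A%26B'
```

Without the escaping, a value such as 'A&B' would be read as two separate parameters by the server.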
If you are using the req parameter to invoke an APEX on-demand process with your AJAX call, the proc and queryString parameters should be left null. In this case, you can close the htmldb_Get() syntax right after the page parameter. If, on the other hand, you are invoking a stored (packaged) procedure, the req parameter should be set to null.
Code examples
Let's see some examples of how to use the htmldb_Get class with various scenarios:
var ajaxReq = new htmldb_Get(null, $v('pFlowId'),
'APPLICATION_PROCESS=demo_code',0);
- With this code, we create a new object instance of htmldb_Get and assign it to ajaxReq.
- The first parameter is null, which means that the returned AJAX response should be assigned to a JavaScript variable.
- Next we set the Application ID using the $v('pFlowId') function. Using this version of the function means that we are on APEX 3.1 or higher instance, and that this code fits either inline code or external JavaScript file.
- We set the third parameter—req—to point to an application level On Demand PL/SQL Anonymous Block process, called demo_code, as the logic of the AJAX server-side component.
- We specifically set the page parameter to a value of 0 (zero).
- As we don't need any of the following parameters, we just closed the parameter list.
Although setting the page parameter to 0 is a very common practice, I mentioned earlier that this is not the best choice, as it can be problematic at times. I consider the following code, with the same functionality, to be the "best practice":
var ajaxReq = new htmldb_Get(null, $v('pFlowId'),
'APPLICATION_PROCESS=demo_code', $v('pFlowStepId'));
Let's review the following code:
var ajaxReq = new htmldb_Get('P10_COMMENTS', $v('pFlowId'),
'APPLICATION_PROCESS=demo_code', $v('pFlowStepId'));
With this code, we set the first parameter to 'P10_COMMENTS', which is a page text item. This means that the returned AJAX response will be assigned directly to the P10_COMMENTS text item. It is our responsibility, as developers, to make sure that the returned response is a simple text.
The following code looks almost the same as the previous one:
var ajaxReq = new htmldb_Get('cal1', $v('pFlowId'),
'APPLICATION_PROCESS=demo_code', $v('pFlowStepId'));
However, in this case we set the first parameter to 'cal1', which is the ID of a <div> tag on the application page. This means that the returned AJAX response will be set as the value of the innerHTML attribute of this <div>. It is our responsibility, as developers, to make sure that the returned response is a valid HTML code that can fit the <div> innerHTML.
In the following example we are invoking a packaged procedure as part of the AJAX process:
function formatItem(pThis) {
var params = 'p_arg1=' + htmldb_Get_escape($v(pThis));
var get = new htmldb_Get(null, null, null, &APP_PAGE_ID.,
null, '#OWNER#.my_package.my_proc', params);
. . .
}
- The JavaScript function takes pThis as a parameter, and later we are passing it into the packaged procedure as its parameter.
- The use of #OWNER# and &APP_PAGE_ID. implies that this snippet of code is part of an inline JavaScript code.
- We are invoking the my_proc procedure stored in the my_package package. In the example, we are using a full-qualified name of the procedure.
- The my_proc procedure accepts one parameter—p_arg1. The first line in our code retrieves the value of a DOM node based on the pThis parameter of the formatItem() function—$v(pThis). As we are going to use this value in a URL, we are escaping it—htmldb_Get_escape($v(pThis)). Now, we can complete the build of the queryString parameter for the htmldb_Get as a valid pair of an item name and a value, and assign it to the JavaScript variable params. We are using this variable as the last parameter of our call to a new htmldb_Get instance.
If the system doesn't find a complete match between the parameters of the stored (packaged) procedure we are invoking and the value of the queryString parameter, it will produce a 404 HTML error message telling us that the procedure we are calling is not found on the server. As you know, this is not the case, and you should check the arguments list first.
- As we are using the sixth parameter—proc—to call our packaged procedure, the third parameter—req—is set to null.
The htmldb_Get methods
The htmldb_Get object has several built-in methods that help us to utilize its functionality, and perform the actual AJAX call. In the following sub-sections, we'll review the more public ones; those we most likely need to use ourselves.
.add(name,val)
If we are invoking a stored (packaged) procedure we can use the queryString parameter to pass the necessary parameters to the procedure. But what about the on-demand PL/SQL process? It's an anonymous PL/SQL block without parameters. How can we pass necessary data from the JavaScript on the client side to the PL/SQL anonymous block on the server side? In general, the answer is Session State. We can use the add() method to set the Session State of application or page items, and those will be available to us in the PL/SQL code.
The add() method accepts two parameters; the first is a String parameter, representing a name of an application or page item, and the second is its value.
As we are dealing with a method, it should be associated with an object. We are going to use the ajaxReq object we created in our previous examples:
ajaxReq.add('TEMP1',$v('P10_SEARCH'));
In this example, we are using an application item called TEMP1 and we are setting its value with the value of a page item called P10_SEARCH. The value of TEMP1 will be set in Session State, and will be available for us to use in any PL/SQL code in the on-demand PL/SQL process we are invoking as part of the AJAX call. We can reference TEMP1 by using a bind variable notation—:TEMP1—or by using the APEX built-in v('TEMP1') function.
According to our needs, we don't have to only use (temporary) application items. We can also use page items, as used below:
ajaxReq.add('P10_SEARCH',$v('P10_SEARCH'));
In this case, the Session State value of the page item P10_SEARCH will be set according to its current DOM value, i.e. the value of the item that is being displayed on screen.
Setting Session State with the add() method does not depend on actually invoking an on-demand PL/SQL process. We can use the AJAX framework to just set Session State from JavaScript code without any other server-side activity. This can be used as the client-side equivalent to the APEX API procedure APEX_UTIL.SET_SESSION_STATE(p_name, p_value).
.addParam(name,val)
APEX 3.1 introduced 10 new pre-defined global package variables, which we can use with our AJAX calls without the need to define specific, temporary by nature, application items. In the client side we can reference them as x01 to x10, and in the server side, within the on-demand PL/SQL process we are invoking in the AJAX call, we should use apex_application.g_x01 to apex_application.g_x10.
In some code examples out there, you might see a reference of wwv_flow.g_x01 to wwv_flow.g_x10. That means these new variables are actually global variables in the wwv_flow package, which has a public synonym of apex_application. You can use both references as you see fit.
Now it's also easier to explain the difference between client-side and server-side references. In the client side, we are actually setting wwv_flow.show parameters, x01 to x10, while on the server side, in the PL/SQL code, we reference the actual package global variables, g_x01 to g_x10.
APEX 3.1 also exposed the addParam() method, which we can use to set the values of these new variables so that they will be available to us in the on-demand PL/SQL process that we are invoking in the AJAX process. We are invoking the addParam() method in a similar manner to the add() method, although it's important to remember that they don't have the same functionality. With addParam(), instead of defining a special application item—TEMP1—which we'll only use with the AJAX processes, we can use the following:
ajaxReq.addParam('x01',$v('P10_SEARCH'));
Now we can use this variable in the on-demand PL/SQL process, for example, as part of a WHERE clause:
select col1, . . .
from my_table
where col1 = apex_application.g_x01;
addParam() is not setting Session State. As such, the g_x01 to g_x10 variables have no persistence features. As package variables, their scope is only the on-demand PL/SQL process that we are invoking with the AJAX call. After the on-demand PL/SQL anonymous block has run its course, these variables will be re-initialized, just like any other package variable, upon a new database session.
General remarks
The following are some general remarks about the functionality and relationship of add() and addParam():
- Both add() and addParam() actually create a single, ampersand (&) delimited string composed of name=value pairs. Ultimately, this string acts as a parameter in one of the methods that initiates the AJAX process (the XMLHttpRequest send() method).
As such, we can call these methods as many times as we need in order to set all the variables we need. For example:
ajaxReq.add('P10_SEARCH',$v('P10_SEARCH'));
ajaxReq.addParam('x01',$v('P10_DEPT'));
. . .
- The addParam() method is not replacing the add() method. Each has its own unique role in the APEX AJAX framework.
We should use the add() method when we want to set the values of application or page items and save these values in Session State.
We can't use add() to set the APEX 3.1 and above x01 to x10 parameters. Doing so will ultimately lead to an error message.
In version 3.1 and above we should use the addParam() method to set the values of the x01 to x10 parameters.
We can't use addParam() to set the values of the application or page items to be used in the AJAX call. Doing so will ultimately lead to an error message.
- We can't use add() or addParam() to set the parameters of a stored (package) procedure that we want to invoke with AJAX. For that, we must use the queryString parameter of htmldb_Get().
If we set the queryString parameter of htmldb_Get(), the system will ignore any add() or addParam() calls, and their values will not be set.
- There are several more global variables, just like the g_x01 to g_x10, that are already defined in the wwv_flow package. It is not advisable at this point (APEX 3.1/3.2) to use these global variables as temporary variables in the AJAX related on-demand PL/SQL processes. Although it will not break anything in these versions, the APEX development team is going to use them in future versions for some other purposes. Using them now could expose your application to upgrade risks in future APEX versions.
.get(mode, startTag, endTag)
The get() method is the method that implements the AJAX call itself by generating the XMLHttpRequest object, and using its methods with the proper parameters that were constructed with the htmldb_Get object and its add() or addParam() methods.
The get() method implements a synchronous POST AJAX request. Until APEX 3.1, a synchronous AJAX call was the only mode that APEX supported. This means that the JavaScript code always waits for the server-side AJAX response before it continues with the JavaScript code flow.
A synchronous AJAX call, such as APEX uses, can cause the Web browser to freeze for a moment while it waits for the server-side response. In most cases, it probably will not be noticeable, but it really depends on the complexity of the server-side logic, the amount of the AJAX-returned data, and the quality and bandwidth of the communication lines.
The mode parameter
The first parameter of get() is a String that can be set to null or to 'XML'. This parameter determines the data format of the AJAX response. If it is set to null, the returned data will be a JavaScript String object, which should be assigned to a JavaScript variable.
JSON, in this context, is considered a JavaScript String object, so the mode parameter should be set to null.
If this parameter is set to 'XML', then the returned AJAX response must be formatted as a valid XML fragment. It's our responsibility, as developers, to make sure that the returned data that we are generating on the server side, as part of an on-demand PL/SQL process or a stored (packaged) procedure, is formatted properly. Failing to do so will also fail the AJAX process.
The startTag and endTag parameters
The second and third parameters are only relevant when we are pulling a clip of content from an application page using AJAX. In this case, the first parameter should be set to null and the startTag parameter should be set to a String that marks the starting point of the clipping; the endTag parameter should be set to a String that marks the ending point of the clipping.
Although the startTag and endTag parameters can be set to any string text on the pulled page, they should be unique so that the clipped area will be well-defined. As the clipped code is going to be injected into the AJAX caller page, using an innerHTML property, it's best to start the clipping with an HTML tag and end it with its closing tag. As HTML tags are usually not unique, it's best for us to embed our own unique tags to designate the starting and ending points of the clipping.
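Conceptually, the clipping amounts to a substring extraction between the two marker strings, inclusive of the markers. The following is only a sketch of that idea, not the actual APEX implementation:

```javascript
// Extract the fragment between startTag and endTag, inclusive,
// mimicking the clipping described above. Returns null when the
// markers are not found in the expected order.
function clip(html, startTag, endTag) {
  var start = html.indexOf(startTag);
  if (start === -1) { return null; }
  var end = html.indexOf(endTag, start + startTag.length);
  if (end === -1) { return null; }
  return html.substring(start, end + endTag.length);
}

var page = '<html><body><cal:clip><table>...</table></cal:clip></body>';
var fragment = clip(page, '<cal:clip>', '</cal:clip>');
// fragment is '<cal:clip><table>...</table></cal:clip>'
```

The sketch also shows why the markers must be unique: indexOf simply takes the first occurrence, so ambiguous tags would clip the wrong region.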
Code examples
Now, we can see a complete AJAX call:
var ajaxReq = new htmldb_Get(null, $v('pFlowId'),
'APPLICATION_PROCESS=demo_code', $v('pFlowStepId'));
ajaxReq.addParam('x01',$v('P10_SEARCH'));
ajaxReq.addParam('x02',$v('P10_DEPT'));
gReturn = ajaxReq.get();
ajaxReq = null;
The AJAX cycle starts by creating a new instance of htmldb_Get and assigning it to ajaxReq. While doing so, we are setting the req parameter to be 'APPLICATION_PROCESS=demo_code', which means that in this AJAX call we want to invoke an on-demand PL/SQL process called demo_code.
Next, we set two "temporary" variables—x01 and x02—with values from the AJAX calling page. The apex_application.g_x01 and apex_application.g_x02 will be available to us within the demo_code on-demand PL/SQL process.
We are firing the AJAX process by using the get() method. In this case, we are using get() without any parameters, which means that the AJAX returned response will be formatted as a JavaScript String object, hence we are assigning it into the gReturn variable, which I hope you remember is a global JavaScript variable, defined as part of the APEX supplied JavaScript library.
It's considered a "good practice" to set the pointer to the AJAX object to null once it has run its course. That allows the web browser engine to reclaim its memory and avoids memory leaks. In our example, we assign null to ajaxReq.
Let's review the following code:
var ajaxReq = new htmldb_Get(null, $v('pFlowId'),
'APPLICATION_PROCESS=filter_options', $v('pFlowStepId'));
ajaxReq.add('P10_SEARCH',$v('P10_SEARCH'));
gReturn = ajaxReq.get('XML');
ajaxReq = null;
In this example, we are calling an on-demand PL/SQL process called filter_options. We are setting the value of the page item P10_SEARCH using the add() method, so it will be available to the filter_options on-demand process.
We are firing the AJAX process by using get('XML'). This means that the AJAX server response, which will be assigned to gReturn, must be formatted as a valid XML fragment. It is our responsibility, as developers, to make sure that the returned information will be formatted properly within the filter_options on-demand process. Otherwise, the AJAX process will fail.
In the following example, we are using AJAX to clip content from one of our application pages:
var ajaxReq = new htmldb_Get('prev_cal',$v('pFlowId'),null,20);
ajaxReq.add('P20_CALENDAR_DATE',$v('P40_PREV_MONTH'));
ajaxReq.get(null,'<cal:clip>','</cal:clip>');
ajaxReq = null;
In this case, we are using the first parameter of htmldb_Get to determine that the AJAX returned data will be injected into a <div id="prev_cal"> element, using its innerHTML property. The third parameter—req—is set to null, as we are not invoking any on-demand PL/SQL process. The fourth parameter—page—is set to 20. This is the page ID that we want to pull.
In the next line of code, we are using the add() method to set the value of the page item P20_CALENDAR_DATE (on the pulled page) to the value of the page item P40_PREV_MONTH (on the AJAX calling page).
Next, we fire the AJAX process using the get() method. The first parameter is set to null as it's not relevant in this case. The second parameter is set to '<cal:clip>' and the AJAX process will start clipping the HTML code of page 20 from this tag. The clipping will end with the '</cal:clip>' tag, the value of the third parameter.
Restrictions with AJAX pulling content
When using AJAX to pull content from another application page, we should avoid clipping code that defines active page elements such as page items, buttons, or pagination components (which I'll address separately, as we can overcome this restriction).
When we create a page element on an application page, this element associates specifically with the page it was created on. This element can't be activated—i.e. submitted or POSTed—on any other page other than the one it was created on, unless we take special measures to allow it.
While clipping the HTML page code, the AJAX process includes all the code between the start and end tags given in the second and third parameters of the get() method. The clipping process can't differentiate between code that renders data and code that renders active page elements. The clipped code is injected into the AJAX calling page, using the innerHTML property of one of its DOM elements. If we are not careful enough, it can include code for active page element(s), which will then be rendered on the AJAX calling page. However, as I mentioned before, they can't be used in their new location. If such an element is referenced on the AJAX calling page, it will produce an error message saying that this element can't be found on the current page (as in the APEX metadata tables it is associated with a different page, the one it was created on).
However, sometimes we do need to use page items on the pulled page with the page logic, for example in a WHERE clause or as a starting date for a calendar. One way to use these page items is to make sure that they are laid out outside the content area we are going to clip. Another option is to not render them on the page at all. We can do that by conditioning their display to Never. The APEX engine allocates a proper variable for the page item, which can be referenced in the page logic. However, the page item itself will never be rendered on the page, hence its code can't be clipped as part of the AJAX call.
Pulling report with pagination
One of the more common uses of the AJAX capability to pull content from another application page is to pull the content of a report, for example, displaying a filtered report, based on a search term, without submitting the page.
This use of AJAX requires special attention as APEX reports may include, by default, some active elements that need to be taken care of. One element is the option to highlight rows in the report. Another is the report pagination element. Both of these elements include JavaScript functions that use the report region ID (the APEX engine internal number, not the one represented by #REGION_STATIC_ID#). Unfortunately for us, the APEX engine hardcodes the region ID into the HTML code of the application page. Moreover, the pagination element also uses the page hidden item pFlowStepId, which holds the application page ID. This value, naturally, is not the same on the AJAX calling page, which runs the pagination element after it was pulled, and on the original report page, which holds the report query, in which the pagination parameters have a meaning.
A very simple solution to these problems is to avoid the elements that cause them: don't use the report highlight feature, and avoid pagination.
.GetAsync(pVar)
The GetAsync() method, introduced in APEX 3.1, extends the htmldb_Get functionality to also support asynchronous AJAX requests. In an asynchronous AJAX request, the client side initiates a request and sends it to the server, and the JavaScript code flow continues without waiting for the server-side response. It's up to us, as developers, to monitor the status of the server response and act accordingly.
GetAsync() accepts a single parameter, which represents a function that will be fired each time the server-side response status changes. This is not a regular JavaScript function. It is actually assigned to the onreadystatechange property of the XMLHttpRequest object that GetAsync() creates and stores in a JavaScript global variable called p. We can use this p variable whenever we need to reference the XMLHttpRequest object or one of its properties.
One of the XMLHttpRequest object properties we need to reference, while using an asynchronous AJAX request, is the readyState property. This property reflects the status of the server-side response. It can have five different values, starting with 0 and growing sequentially to 4, which indicates that the server-side response has been completely received by the client side. In most cases, this is the status that interests us. However, each time the value of readyState changes, the function stored in onreadystatechange—the function that we used as the parameter of GetAsync()—is fired. Hence, this function should include code that can handle all the readyState values and take the proper action for each of them (which, for any status other than 4, may be doing nothing).
Another XMLHttpRequest object property we can use is responseText. In the APEX context, p.responseText holds the server-side AJAX response, and we can use it on the client side just as we use the synchronous AJAX response.
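The way an onreadystatechange handler walks through the readyState life cycle can be sketched outside APEX with a plain stand-in object. The variable p below mimics the global that GetAsync() maintains; the object and the simulated state walk are illustrative only, not part of the APEX API:

```javascript
// Minimal stand-in for the XMLHttpRequest that GetAsync() creates and
// exposes through the global variable p. The states 0..4 mirror the
// readyState life cycle described above.
var p = { readyState: 0, responseText: null, onreadystatechange: null };

var log = [];
p.onreadystatechange = function () {
  // Fired on every state change; only act when the response is complete.
  if (p.readyState === 4) {
    log.push("done: " + p.responseText);
  } else {
    log.push("waiting: " + p.readyState);
  }
};

// Simulate the server walking the request through its states.
[1, 2, 3].forEach(function (s) { p.readyState = s; p.onreadystatechange(); });
p.readyState = 4;
p.responseText = "unique";
p.onreadystatechange();

console.log(log.join(" | "));
// waiting: 1 | waiting: 2 | waiting: 3 | done: unique
```

Only the transition to 4 carries a usable responseText; every earlier transition is just a progress signal, which is why real handlers usually do nothing for states 1 to 3.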
The pVar function
The function that we use as the pVar parameter of GetAsync() can be defined as inline code or as a function external to the GetAsync() call.
The following is an example of using inline code:
ajaxReq.GetAsync(function(){return;});
In this case, the function doesn't do anything, regardless of the readyState value.
In the following example, we are using an external function:
ajaxReq.GetAsync(checkAll);
function checkAll() {
  switch (p.readyState) {
    case 1: /* The AJAX request has been set up */
      setStatusBar();
      break;
    case 2: /* The AJAX request has been sent */
    case 3: /* The AJAX request is in process */
      break;
    case 4: /* The AJAX request is complete */
      clearStatusBar();
      gReturn = p.responseText;
      . . .
      break;
  }
}
In this example, the checkAll function treats the various values of readyState differently. At the beginning of the AJAX process, when readyState changes to 1, it calls a function named setStatusBar(). It ignores the readyState changes to 2 and 3, and when readyState changes to 4, the state in which the AJAX request is complete, it calls the clearStatusBar() function and assigns the server-side response to the global gReturn variable. The rest of the code can implement any logic you need using the AJAX request result.
The function we are using as the pVar parameter is not a regular JavaScript function and it can't accept parameters. However, it can call a second JavaScript function, this time a regular JavaScript function, which can accept parameters. For example:
ajaxReq.GetAsync(checkAll);
function checkAll() {
  if (p.readyState == 4) {
    var p_page = $v('pFlowStepId');
    checkOnPage(p_page);
  }
}

function checkOnPage(p_page) {
  . . .
}
Always remember that the checkAll function will be fired for every status change of readyState. This means that if the logic of the function is meant to run only after the AJAX request is complete, it should be conditioned, as in the above example.
It's important to understand the principles and the reasoning behind the processes we are dealing with here. Please don't try to find any real logic in the above specific code. The checkAll, setStatusBar, clearStatusBar, and checkOnPage functions are all figments of my imagination, and I'm only using them to make a point.
Namespace for the APEX AJAX framework
In version 3.1, APEX started to implement a namespace strategy with its supplied JavaScript library to ensure smooth operation with other external JavaScript libraries. The apex_ns_3_1.js file contains the current APEX JavaScript namespace definitions, and it includes a namespace definition for some APEX AJAX elements—apex.ajax.
In some demo code out there, you can see references to apex.ajax.ondemand(). This method is a wrapper for an APEX asynchronous AJAX call. The method accepts two parameters. The first is a string parameter that includes the name of the on-demand PL/SQL process we wish to invoke in the AJAX process. As we are dealing with an asynchronous AJAX call, the second parameter points to the onreadystatechange function, which is fired each time the readyState value changes. Usually this function processes the server-side response.
You are encouraged to review the apex.ajax namespace in the apex_ns_3_1.js file to learn more about it.
AJAX support on the server side
So far, we covered the client-side aspects of the AJAX call. Now it's time to review the options available to us on the server side. These options should implement the logic we are seeking in the AJAX call.
Application on-demand PL/SQL process
The htmldb_Get third parameter—req—allows us to define the server-side PL/SQL process we want to invoke as part of the AJAX call. This process must be an application-level process of the type On Demand: Run this application process when requested by a page process. For short, we'll just call it an on-demand PL/SQL process.
In future versions of APEX, a full page-level on-demand PL/SQL process might be implemented, but for now (APEX 3.2 and earlier), on-demand processes are available to us only at the application level, although we can call them from the page level.
The on-demand PL/SQL process is a regular PL/SQL anonymous block, and it can contain any valid PL/SQL code we need to implement the logic of the server-side AJAX call.
The following is a simple example of an on-demand PL/SQL process code:
declare
  l_status varchar2(10);
begin
  begin
    select 'non-unique' into l_status
      from my_table
     where col1 = apex_application.g_x01;
  exception
    when NO_DATA_FOUND then
      l_status := 'unique';
  end;
  htp.prn(l_status);
end;
This process returns the value 'unique' if the value of apex_application.g_x01 is not found in col1 of the my_table table; it returns 'non-unique' if the value already exists in the table.
The on-demand PL/SQL process must be able to return the AJAX server-side response—the result of the PL/SQL logic—to the JavaScript function that initiated the AJAX call. We can do this by using the Oracle supplied procedure htp.prn. In this context, you can treat the htp.prn procedure as the return statement of the on-demand PL/SQL process.
Unlike the return statement of a function, you can call the htp.prn procedure as many times as you need, and the AJAX server-side response will be the concatenation of all the htp.prn calls.
Stored (packaged) procedure
Although using an application level on-demand PL/SQL process is the most common way to implement the AJAX server-side component, the htmldb_Get() constructor also allows us to invoke stored (packaged) procedures, which were defined outside the APEX environment, as part of the AJAX server-side logic. We can use the htmldb_Get() sixth parameter—proc—to name the stored (packaged) procedure we want to invoke, and the seventh parameter—queryString—to pass the needed parameters to the procedure (as we described in the htmldb_Get() section).
As with all the stored (packaged) procedures we want to invoke from within our APEX application, the AJAX-invoked procedures should also be "APEX compatible", i.e., they should use bind variables and the v()/nv() functions to access APEX items and Session State values.
Handling errors in the AJAX process
The APEX AJAX framework doesn't excel at error handling. The AJAX process doesn't generate an error report at the APEX level, which means that the APEX application is not stopped by an error in the AJAX process. It is up to us, as developers, to inspect the server-side response and determine whether the AJAX process was successful.
One indication of a failed AJAX process is a server-side response of null. This will happen if the APEX engine was not able to run the server-side logic, as in the case of security or privilege issues (including APEX authentication, authorization, and page access control failures), or any other error that the APEX engine found in the PL/SQL code that doesn't generate a specific database error. In cases where database or web server (mod_plsql) errors were generated, the server-side response will include them, but no other error message will be issued.
Debugging a failed AJAX process
Debugging a failed AJAX process should include several stages. The first is to use the JavaScript alert() function to display the server-side returned value. If we are lucky, and the returned value includes an error message, we should resolve it first. If, however, the returned response is empty, we should move to the next step.
We should determine whether the communication between the client side—the specific AJAX calling page—and the server side is working properly. We can do that by setting the on-demand PL/SQL process to very minimal and simple code. For example:
htp.prn('Hello from the server side');
If the returned value includes this message, the AJAX communication is working fine. If, however, the returned response is still empty, it probably means that you have a security issue. The most common error in this category is initiating an AJAX call from a public page while using, as the page parameter of htmldb_Get(), page 0 or another page that requires authentication. The solution in this case is very simple: replace the hardcoded page number with $v('pFlowStepId').
Using $v('pFlowStepId') as the default page parameter for htmldb_Get(), as we recommend, will prevent this type of error.
If the AJAX communication is working fine, and no specific error message is returned from the server, but we are still not getting the AJAX server-side response we expect, it usually means that the PL/SQL code is not working properly. One of the common problems in this case is a syntax error with a bind variable—the code contains a bind variable name that doesn't exist. No error message is generated, but the PL/SQL code doesn't work. In these cases, I recommend that you copy the PL/SQL code to the APEX SQL Commands utility and try to debug it in there. This is also what you need to do if the AJAX process returns a wrong response from the application logic point of view.
Summary
In this article, we reviewed the AJAX framework within APEX. We learned about the basic principles of the AJAX technology and how APEX implements them using both synchronous and asynchronous mode of communication.
Offset: 0,4
b(n) = A002620(n+2) = number of multigraphs with loops on 2 nodes with n edges [so g.f. for b(n) is 1/((1-x)^2*(1-x^2))]. Also number of 2-covers of an n-set; also number of 2 X n binary matrices with no zero columns up to row and column permutation. - Vladeta Jovovic, Jun 08 2000
a(n) is also the maximal number of edges that a triangle-free graph of n vertices can have. For n = 2m, the maximum is achieved by the bipartite graph K(m, m); for n = 2m + 1, the maximum is achieved by the bipartite graph K(m, m + 1). - Avi Peretz (njk(AT)netvision.net.il), Mar 18 2001
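The triangle-free maximum stated above (Mantel's theorem) can be confirmed by brute force for small n. The sketch below enumerates every graph on n vertices via an edge bitmask and keeps the largest triangle-free edge count; the function name is of course not part of the entry:

```javascript
// Brute-force check of Mantel's theorem for small n: the maximum number
// of edges in a triangle-free graph on n vertices is floor(n^2/4).
function maxTriangleFreeEdges(n) {
  var pairs = [];
  for (var i = 0; i < n; i++)
    for (var j = i + 1; j < n; j++) pairs.push([i, j]);
  var best = 0;
  // Each bitmask selects a subset of the possible edges.
  for (var mask = 0; mask < (1 << pairs.length); mask++) {
    var adj = [];
    for (var v = 0; v < n; v++) adj.push(new Array(n).fill(false));
    var edges = 0;
    for (var e = 0; e < pairs.length; e++) {
      if (mask & (1 << e)) {
        adj[pairs[e][0]][pairs[e][1]] = adj[pairs[e][1]][pairs[e][0]] = true;
        edges++;
      }
    }
    if (edges <= best) continue;
    var hasTriangle = false;
    for (var a = 0; a < n && !hasTriangle; a++)
      for (var b = a + 1; b < n && !hasTriangle; b++)
        for (var c = b + 1; c < n && !hasTriangle; c++)
          if (adj[a][b] && adj[b][c] && adj[a][c]) hasTriangle = true;
    if (!hasTriangle) best = edges;
  }
  return best;
}

for (var n = 2; n <= 5; n++) {
  console.log(n, maxTriangleFreeEdges(n), Math.floor((n * n) / 4));
}
```

For n = 5 the maximum of 6 edges is attained by the complete bipartite graph K(2,3), matching the K(m, m+1) construction described above.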
a(n) is the number of arithmetic progressions of 3 terms and any mean which can be extracted from the set of the first n natural numbers (starting from 1). - Santi Spadaro, Jul 13 2001
This is also the order dimension of the (strong) Bruhat order on the Coxeter group A_{n-1} (the symmetric group S_n). - Nathan Reading (reading(AT)math.umn.edu), Mar 07 2002
Let M_n denote the n X n matrix m(i,j) = 2 if i = j; m(i, j) = 1 if (i+j) is even; m(i, j) = 0 if i + j is odd, then a(n+2) = det M_n. - Benoit Cloitre, Jun 19 2002
Sums of pairs of neighboring terms are triangular numbers in increasing order. - Amarnath Murthy, Aug 19 2002
Also, from the starting position in standard chess, minimum number of captures by pawns of the same color to place n of them on the same file (column). Beyond a(6), the board and number of pieces available for capture are assumed to be extended enough to accomplish this task. - Rick L. Shepherd, Sep 17 2002
For example, a(2) = 1 and one capture can produce "doubled pawns", a(3) = 2 and two captures is sufficient to produce tripled pawns, etc. (Of course other, uncounted, non-capturing pawn moves are also necessary from the starting position in order to put three or more pawns on a given file.) - Rick L. Shepherd, Sep 17 2002
Terms are the geometric mean and arithmetic mean of their neighbors alternately. - Amarnath Murthy, Oct 17 2002
Maximum product of two integers whose sum is n. - Matthew Vandermast, Mar 04 2003
a(n+2) gives number of non-symmetric partitions of n into at most 3 parts, with zeros used as padding. E.g., a(7) = 12 because we can write 5 = 5 + 0 + 0 = 0 + 5 + 0 = 4 + 1 + 0 = 1 + 4 + 0 = 1 + 0 + 4 = 3 + 2 + 0 = 2 + 3 + 0 = 2 + 0 + 3 = 2 + 2 + 1 = 2 + 1 + 2 = 3 + 1 + 1 = 1 + 3 + 1. - Jon Perry, Jul 08 2003
a(n) gives number of distinct elements greater than 1 of non-symmetric partitions of n into at most 3 parts, with zeros used as padding, that appear in the middle. E.g., 5 = 5 + 0 + 0 = 0 + 5 + 0 = 4 + 1 + 0 = 1 + 4 + 0 = 1 + 0 + 4 = 3 + 2 + 0 = 2 + 3 + 0 = 2 + 0 + 3 = 2 + 2 + 1 = 2 + 1 + 2 = 3 + 1 + 1 = 1 + 3 + 1. Of these, 050, 140, 320, 230, 221, 131 qualify and a(5) = 6. - Jon Perry, Jul 08 2003
Union of square numbers (A000290) and oblong numbers (A002378). - Lekraj Beedassy, Oct 02 2003
Conjectured size of the smallest critical set in a Latin square of order n (true for n <= 8). - Richard Bean (rwb(AT)eskimo.com), Jun 12 2003 and Nov 18 2003
a(n) gives number of maximal strokes on complete graph K_n, when edges on K_n can be assigned directions in any way. A "stroke" is a locally maximal directed path on a directed graph. Examples: n = 3, two strokes can exist, "x -> y -> z" and "x -> z", so a(3) = 2. n = 4, four maximal strokes exist, "u -> x -> z" and "u -> y" and "u -> z" and "x -> y -> z", so a(4) = 4. - Yasutoshi Kohmoto, Dec 20 2003
Number of symmetric Dyck paths of semilength n+1 and having three peaks. E.g., a(4) = 4 because we have U*DUUU*DDDU*D, UU*DUU*DDU*DD, UU*DDU*DUU*DD and UUU*DU*DU*DDD, where U = (1, 1), D = (1, -1) and * indicates a peak. - Emeric Deutsch, Jan 12 2004
Number of valid inequalities of the form j + k < n + 1, where j and k are positive integers, j <= k, n >= 0. - Rick L. Shepherd, Feb 27 2004
See A092186 for another application.
Also, the number of nonisomorphic transversal combinatorial geometries of rank 2. - Alexandr S. Radionov (rasmailru(AT)mail.ru), Jun 02 2004
a(n+1) is the transform of n under the Riordan array (1/(1-x^2), x). - Paul Barry, Apr 16 2005
1, 2, 4, 6, 9, 12, 16, 20, 25, 30, ... specifies the largest number of copies of any of the gifts you receive on the n-th day in the "Twelve Days of Christmas" song. For example, on the fifth day of Christmas, you have 9 French hens. - Alonso del Arte, Jun 17 2005
a(n) = Sum_{k=0..n} Min{k, n-k}, sums of rows of the triangle in A004197. - Reinhard Zumkeller, Jul 27 2005
a(n+1) is the number of noncongruent integer-sided triangles with largest side n. - David W. Wilson [Comment corrected Sep 26 2006]
A quarter-square table can be used to multiply integers since n*m = a(n+m) - a(n-m) for all integer n, m. - Michael Somos, Oct 29 2006
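The multiplication identity above is the classical quarter-square trick; a small sketch (function names are just illustrative):

```javascript
// Quarter-square a(n) = floor(n^2 / 4); a(-n) = a(n), so negative
// arguments are fine.
function quarterSquare(n) {
  return Math.floor((n * n) / 4);
}

// Multiply two integers using only quarter-square lookups:
// n*m = a(n+m) - a(n-m).
function qsMultiply(n, m) {
  return quarterSquare(n + m) - quarterSquare(n - m);
}

console.log(qsMultiply(7, 9));   // 63
console.log(qsMultiply(12, -5)); // -60
```

Historically this let a single table of quarter-squares replace a full multiplication table, since one subtraction and two lookups suffice.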
The sequence is the size of the smallest strong critical set in a Latin square of order n. - G.H.J. van Rees (vanrees(AT)cs.umanitoba.ca), Feb 16 2007
Maximal number of squares (maximal area) in a polyomino with perimeter 2n. - Tanya Khovanova, Jul 04 2007
For n >= 3 a(n-1) is the number of bracelets with n+3 beads, 2 of which are red, 1 of which is blue. - Washington Bomfim, Jul 26 2008
Equals row sums of triangle A122196. - Gary W. Adamson, Nov 29 2008
Also a(n) is the number of different patterns of a 2-colored 3-partition of n. - Ctibor O. Zizka, Nov 19 2014
Also a(n-1) = C(((n+(n mod 2))/2), 2) + C(((n-(n mod 2))/2), 2), so this is the second diagonal of A061857 and A061866, and each even-indexed term is the average of its two neighbors. - Antti Karttunen
Equals triangle A171608 * ( 1, 2, 3, ...). - Gary W. Adamson, Dec 12 2009
a(n) gives the number of nonisomorphic faithful representations of the Symmetric group S_3 of dimension n. Any faithful representation of S_3 must contain at least one copy of the 2-dimensional irrep, along with any combination of the two 1-dimensional irreps. - Andrew Rupinski, Jan 20 2011
a(n+2) gives the number of ways to make change for "c" cents, letting n = floor(c/5) to account for the 5-repetitive nature of the task, using only pennies, nickels and dimes (see A187243). - Adam Sasson, Mar 07 2011
a(n) belongs to the sequence if and only if a(n) = floor(sqrt(a(n))) * ceiling(sqrt(a(n))), that is, a(n) = k^2 or a(n) = k*(k+1), k >= 0. - Daniel Forgues, Apr 17 2011
a(n) is the sum of the positive integers < n that have the opposite parity to n.
Deleting the first 0 from the sequence results in a sequence b = 0, 1, 2, 4, ... such that b(n) is sum of the positive integers <= n that have the same parity as n. The sequence b(n) is the additive counterpart of the double factorial. - Peter Luschny, Jul 06 2011
Third outer diagonal of Losanitsch's Triangle, A034851. - Fred Daniel Kline, Sep 10 2011
Written as a(1) = 1, a(n) = a(n-1) + ceiling(sqrt(a(n-1))), this is to ceiling as A002984 is to floor, and as A033638 is to round. - Jonathan Vos Post, Oct 08 2011
a(n-2) gives the number of distinct graphs with n vertices and n regions. - Erik Hasse, Oct 18 2011
Construct the n-th row of Pascal's triangle (A007318) from the preceding row, starting with row 0 = 1. a(n) counts the total number of additions required to compute the triangle in this way up to row n, with the restrictions that copying a term does not count as an addition, and that all additions not required by the symmetry of Pascal's triangle are replaced by copying terms. - Douglas Latimer, Mar 05 2012
a(n) is the sum of the positive differences of the parts in the partitions of n+1 into exactly 2 parts. - Wesley Ivan Hurt, Jan 27 2013
a(n) is the maximum number of covering relations possible in an n-element graded poset. For n = 2m, this bound is achieved for the poset with two sets of m elements, with each point in the "upper" set covering each point in the "lower" set. For n = 2m+1, this bound is achieved by the poset with m nodes in an upper set covering each of m+1 nodes in a lower set. - Ben Branman, Mar 26 2013
a(n+2) is the number of (integer) partitions of n into 2 sorts of 1's and 1 sort of 2's. - Joerg Arndt, May 17 2013
Alternative statement of Oppermann's conjecture: For n>2, there is at least one prime between a(n) and a(n+1). - Ivan N. Ianakiev, May 23 2013. [This conjecture was mentioned in A220492, A222030. - Omar E. Pol, Oct 25 2013]
For any given prime number, p, there are an infinite number of a(n) divisible by p, with those a(n) occurring in evenly spaced clusters of three as a(n), a(n+1), a(n+2) for a given p. The divisibility of all a(n) by p and the result are given by the following equations, where m >= 1 is the cluster number for that p: a(2m*p - 1)/p = p*m^2 - m; a(2m*p)/p = p*m^2; a(2m*p + 1)/p = p*m^2 + m. The number of a(n) instances between clusters is 2*p - 3. - Richard R. Forberg, Jun 09 2013
Apart from the initial term this is the elliptic troublemaker sequence R_n(1,2) in the notation of Stange (see Table 1, p.16). For other elliptic troublemaker sequences R_n(a,b) see the cross references below. - Peter Bala, Aug 08 2013
a(n) is also the total number of twin hearts patterns (6c4c) packing into (n+1) X (n+1) coins, the coins left is A042948 and the voids left is A000982. See illustration in links. - Kival Ngaokrajang, Oct 24 2013
Partitions of 2n into parts of size 1, 2 or 4 where the largest part is 4, i.e., A073463(n,2). - Henry Bottomley, Oct 28 2013
a(n+1) is the minimum length of a sequence (of not necessarily distinct terms) that guarantees the existence of a (not necessarily consecutive) subsequence of length n in which like terms appear consecutively. This is also the minimum cardinality of an ordered set S that ensures that, given any partition of S, there will be a subset T of S so that the induced subpartition on T avoids the pattern ac/b, where a < b < c. - Eric Gottlieb, Mar 05 2014
A237347(a(n)) = 3; A235711(n) = A003415(a(n)). - Reinhard Zumkeller, Mar 18 2014
Also the number of elements of the list 1..n+1 such that for any two elements {x,y} the integer (x+y)/2 lies in the range ]x,y[. - Robert G. Wilson v, May 22 2014
Number of lattice points (x,y) inside the region of the coordinate plane bounded by x<=n, 0<y<=x/2. For a(11)=30 there are exactly 30 lattice points in the region below:
6| .
.| . |
5| .__+__+
.| . | | |
4| .__+__+__+__+
.| . | | | | |
3| .__+__+__+__+__+__+
.| . | | | | | | |
2| .__+__+__+__+__+__+__+__+
.| . | | | | | | | | |
1| .__+__+__+__+__+__+__+__+__+__+
.|. | | | | | | | | | | |
0|.__+__+__+__+__+__+__+__+__+__+__+_________
0 1 2 3 4 5 6 7 8 9 10 11 .. n
0 0 1 2 4 6 9 12 16 20 25 30 .. a(n) - Wesley Ivan Hurt, Oct 26 2014
a(n+1) is the greatest integer k for which there exists an n x n matrix M of nonnegative integers with every row and column summing to k, such that there do not exist n entries of M, all greater than 1, and no two of these entries in the same row or column. - Richard Stanley, Nov 19 2014
In a tiling of the triangular shape T_N with row length k for row k = 1, 2, ..., N >= 1 (or, alternatively, row length N + 1 - k for row k) with rectangular tiles, there can appear rectangles (i, j), N >= i >= j >= 1, of a(N+1) types (and their transposed shapes obtained by interchanging i and j). See the Feb 27 2004 comment above from Rick L. Shepherd. The motivation to look into this came from a proposal of Kival Ngaokrajang in A247139. - Wolfdieter Lang, Dec 09 2014
Every positive integer is a sum of at most four distinct quarter-squares; see A257018. - Clark Kimberling, Apr 15 2015
a(n+1) gives the maximal number of distinct elements of an n X n matrix which is symmetric (w.r.t. the main diagonal) and symmetric w.r.t. the main antidiagonal. Such matrices are called bisymmetric. See the Wikipedia link. - Wolfdieter Lang, Jul 07 2015
For 2^a(n+1), n >= 1, the number of binary bisymmetric n X n matrices, see A060656(n+1) and the comment and link by Dennis P. Walsh. - Wolfdieter Lang, Aug 16 2015
a(n) is the number of partitions of 2n+1 of length three with exactly two even entries (see below example). - John M. Campbell, Jan 29 2016
a(n) is the sum of the asymmetry degrees of all 01-avoiding binary words of length n. The asymmetry degree of a finite sequence of numbers is defined to be the number of pairs of symmetrically positioned distinct entries. a(6) = 9 because the 01-avoiding binary words of length 6 are 000000, 100000, 110000, 111000, 111100, 111110, and 111111, and the sum of their asymmetry degrees is 0 + 1 + 2 + 3 + 2 + 1 + 0 = 9. Equivalently, a(n) = Sum(k*A275437(n,k), k>=0). - Emeric Deutsch, Aug 15 2016
a(n) is the number of ways to represent all the integers in the interval [3,n+1] as the sum of two distinct natural numbers. E.g., a(7)=12 as there are 12 different ways to represent all the numbers in the interval [3,8] as the sum of two distinct parts: 1+2=3, 1+3=4, 1+4=5, 1+5=6, 1+6=7, 1+7=8, 2+3=5, 2+4=6, 2+5=7, 2+6=8, 3+4=7, 3+5=8. - Anton Zakharov, Aug 24 2016
a(n+2) is the number of conjugacy classes of involutions (considering the identity as an involution) in the hyperoctahedral group C_2 wreath S_n. - Mark Wildon, Apr 22 2017
a(n+2) is the maximum number of pieces of a pizza that can be made with n cuts that are parallel or perpendicular to each other. - Anton Zakharov, May 11 2017
Also the matching number of the n X n black bishop graph. - Eric W. Weisstein, Jun 26 2017
The answer to a question posed by W. Mantel: a(n) is the maximum number of edges in an n-vertex triangle-free graph. Also solved by H. Gouwentak, J. Teixeira de Mattes, F. Schuh and W. A. Wythoff. - Charles R Greathouse IV, Feb 01 2018
Number of nonisomorphic outer planar graphs of order n >= 3, size n+2, and maximum degree 4. - Christian Barrientos and Sarah Minion, Feb 27 2018
Sergei Abramovich, Combinatorics of the Triangle Inequality: From Straws to Experimental Mathematics for Teachers, Spreadsheets in Education (eJSiE), Vol. 9, Issue 1, Article 1, 2016. See Fig. 3.
G. L. Alexanderson et al., The William Lowell Putnam Mathematical Competition - Problems and Solutions: 1965-1984, M.A.A., 1985; see Problem A-1 of 27th Competition.
T. M. Apostol, Introduction to Analytic Number Theory, Springer-Verlag, 1976, page 73, problem 25.
R. L. Graham, D. E. Knuth and O. Patashnik, Concrete Mathematics. Addison-Wesley, Reading, MA, 1990, p. 99.
D. E. Knuth, The Art of Computer Programming, Vol. 1, 3rd Edition, Addison-Wesley, 1997, Ex. 36 of Section 1.2.4.
J. Nelder, Critical sets in Latin squares, CSIRO Division of Math. and Stats. Newsletter, Vol. 38 (1977), p. 4.
N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, 1973 (includes this sequence).
N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence).
Franklin T. Adams-Watters, Table of n, a(n) for n = 0..10000
Suayb S. Arslan, Asymptotically MDS Array BP-XOR Codes, arXiv:1709.07949 [cs.IT], 2017.
J. A. Bate & G. H. J. van Rees, The Size of the Smallest Strong Critical Set in a Latin Square, Ars Combinatoria, Vol. 53 (1999) 73-83.
M. Benoumhani, M. Kolli, Finite topologies and partitions, JIS 13 (2010) # 10.3.5, Lemma 6 first line.
G. Blom and C.-E. Froeberg, Om myntvaexling (On money-changing) [Swedish], Nordisk Matematisk Tidskrift, 10 (1962), 55-69, 103. [Annotated scanned copy] See Table 4, row 3.
Washington G. Bomfim, Illustration of the bracelets with 8 beads, 2 of which are red, 1 of which is blue.
H. Bottomley, Illustration of initial terms
J. Brandts and C. Cihangir, Counting triangles that share their vertices with the unit n-cube, in Conference Applications of Mathematics 2013 in honor of the 70th birthday of Karel Segeth. Jan Brandts, Sergey Korotov, et al., eds., Institute of Mathematics AS CR, Prague 2013.
Jan Brandts, A Cihangir, Enumeration and investigation of acute 0/1-simplices modulo the action of the hyperoctahedral group, arXiv preprint arXiv:1512.03044 [math.CO], 2015.
P. J. Cameron, BCC Problem List, Problem BCC15.15 (DM285), Discrete Math. 167/168 (1997), 605-615.
P. J. Cameron, Sequences realized by oligomorphic permutation groups, J. Integ. Seqs. Vol. 3 (2000), #00.1.5.
Johann Cigler, Some remarks on Rogers-Szegö polynomials and Losanitsch's triangle, arXiv:1711.03340 [math.CO], 2017.
E. Fix and J. L. Hodges, Jr., Significance probabilities of the Wilcoxon test, Annals Math. Stat., 26 (1955), 301-312.
E. Fix and J. L. Hodges, Significance probabilities of the Wilcoxon test, Annals Math. Stat., 26 (1955), 301-312. [Annotated scanned copy]
A. Ganesan, Automorphism groups of graphs, arXiv preprint arXiv:1206.6279 [cs.DM], 2012. - From N. J. A. Sloane, Dec 17 2012
E. Gottlieb, M. Sheard, An Erdos-Szekeres result for set partitions, Slides from a talk, Nov 14 2014. [A006260 is a typo for A002620]
R. K. Guy, Letters to N. J. A. Sloane, June-August 1968
INRIA Algorithms Project, Encyclopedia of Combinatorial Structures 105
O. A. Ivanov, On the number of regions into which n straight lines divide the plane, Amer. Math. Monthly, 117 (2010), 881-888. See Th. 4.
T. Jenkyns and E. Muller, Triangular triples from ceilings to floors, Amer. Math. Monthly, 107 (Aug. 2000), 634-639.
Vladeta Jovovic, Number of binary matrices
Clark Kimberling and John E. Brown, Partial Complements and Transposable Dispersions, J. Integer Seqs., Vol. 7, 2004.
S. Lafortune, A. Ramani, B. Grammaticos, Y. Ohta and K.M. Tamizhmani, Blending two discrete integrability criteria: ..., arXiv:nlin/0104020 [nlin.SI], 2001.
W. Lanssens, B. Demoen, P.-L. Nguyen, The Diagonal Latin Tableau and the Redundancy of its Disequalities, Report CW 666, July 2014, Department of Computer Science, KU Leuven.
W. Mantel and W. A. Wythoff, Vraagstuk XXVIII, Wiskundige Opgaven, 10 (1907), pp. 60-61.
Rene Marczinzik, Finitistic Auslander algebras, arXiv:1701.00972 [math.RT], 2017 [Page 9, Conjecture].
Mircea Merca, Inequalities and Identities Involving Sums of Integer Functions, J. Integer Sequences, Vol. 14 (2011), Article 11.9.1.
Kival Ngaokrajang, Illustration of twin hearts patterns (6c4c): T, U, V
Brian O'Sullivan and Thomas Busch, Spontaneous emission in ultra-cold spin-polarised anisotropic Fermi seas, arXiv:0810.0231v1 [quant-ph], 2008. [Eq 8a, lambda=2]
N. Reading, Order Dimension, Strong Bruhat Order and Lattice Properties for Posets, Order, Vol. 19, no. 1 (2002), 73-100.
J. Scholes, 27th Putnam 1966 Prob. A1
N. J. A. Sloane, Classic Sequences
Sam E. Speed, The Integer Sequence A002620 and Upper Antagonistic Functions, Journal of Integer Sequences, Vol. 6 (2003), Article 03.1.4
K. E. Stange, Integral points on elliptic curves and explicit valuations of division polynomials arXiv:1108.3051v3 [math.NT], 2011-2014.
Eric Weisstein's World of Mathematics, Black Bishop Graph
Eric Weisstein's World of Mathematics, Matching Number
Thomas Wieder, The number of certain k-combinations of an n-set, Applied Mathematics Electronic Notes, vol. 8 (2008).
Wikipedia, Bisymmetric Matrix.
Index entries for two-way infinite sequences
Index entries for linear recurrences with constant coefficients, signature (2,0,-2,1).
Index entries for "core" sequences
a(n) = (2*n^2-1+(-1)^(n))/8. - Paul Barry, May 27 2003
G.f.: x^2/((1-x)^2*(1-x^2)).
E.g.f.: exp(x)*(2*x^2+2*x-1)/8+exp(-x)/8.
a(n) = 2*a(n-1) - 2*a(n-3) + a(n-4). - Jaume Oliver Lafont, Dec 05 2008
a(-n) = a(n) for all n in Z.
a(n) = a(n-1) + int(n/2), n > 0. Partial sums of A004526. - Adam Kertesz, Sep 20 2000
a(n) = a(n-1) + a(n-2) - a(n-3) + 1 [with a(-1) = a(0) = a(1) = 0], a(2k) = k^2, a(2k-1) = k(k-1). - Henry Bottomley, Mar 08 2000
0*0, 0*1, 1*1, 1*2, 2*2, 2*3, 3*3, 3*4, ... with an obvious pattern.
a(n) = Sum_{k=1..n} floor(k/2). - Yong Kong (ykong(AT)curagen.com), Mar 10 2001
a(n) = n*floor((n-1)/2) - floor((n-1)/2)*(floor((n-1)/2)+ 1); a(n) = a(n-2) + n-2 with a(1) = 0, a(2) = 0. - Santi Spadaro, Jul 13 2001
Also: a(n) = binomial(n, 2) - a(n-1) = A000217(n-1) - a(n-1) with a(0) = 0. - Labos Elemer, Apr 26 2003
a(n) = Sum_{k=0..n} (-1)^(n-k)*C(k, 2). - Paul Barry, Jul 01 2003
a(n) = (-1)^n * partial sum of alternating triangular numbers. - Jon Perry, Dec 30 2003
a(n) = A024206(n+1) - n. - Philippe Deléham, Feb 27 2004
a(n) = a(n-2) + n - 1, n > 1. - Paul Barry, Jul 14 2004
a(n) = Sum_{i=0..n} min(i, n-i). - Marc LeBrun, Feb 15 2005
a(n+1) = Sum_{k = 0..floor((n-1)/2)} n-2k; a(n+1) = Sum_{k=0..n} k*(1-(-1)^(n+k-1))/2. - Paul Barry, Apr 16 2005
a(n) = A108561(n+1,n-2) for n > 2. - Reinhard Zumkeller, Jun 10 2005
1 + 1/(1 + 2/(1 + 4/(1 + 6/(1 + 9/(1 + 12/(1 + 16/(1 + . . ))))))) = 6/(Pi^2 - 6) = 1.550546096730... - Philippe Deléham, Jun 20 2005
For n > 2 a(n) = a(n-1) + ceiling(sqrt(a(n-1))). - Jonathan Vos Post, Jan 19 2006
Sequence starting (2, 2, 4, 6, 9, ...) = A128174 (as an infinite lower triangular matrix) * vector [1, 2, 3, ...]; where A128174 = (1; 0,1; 1,0,1; 0,1,0,1; ...). - Gary W. Adamson, Jul 27 2007
a(n) = Sum_{i=k..n} P(i, k) where P(i, k) is the number of partitions of i into k parts. - Thomas Wieder, Sep 01 2007
a(n) = sum of row (n-2) of triangle A115514. - Gary W. Adamson, Oct 25 2007
For n > 1: gcd(a(n+1), a(n)) = a(n+1) - a(n). - Reinhard Zumkeller, Apr 06 2008
a(n+3) = a(n) + A000027(n) + A008619(n+1) = a(n) + A001651(n+1) with a(1) = 0, a(2) = 0, a(3) = 1. - Yosu Yurramendi, Aug 10 2008
a(2n) = A000290(n). a(2n+1) = A002378(n). - Gary W. Adamson, Nov 29 2008
a(n+1) = a(n) + A110654(n). - Reinhard Zumkeller, Aug 06 2009
a(n) = Sum_{k=0..n} (k mod 2)*(n-k); Cf. A000035, A001477. - Reinhard Zumkeller, Nov 05 2009
a(n-1) = (n*n - 2*n + n mod 2)/4. - Ctibor O. Zizka, Nov 23 2009
a(n) = round((2*n^2-1)/8) = round(n^2/4) = ceiling((n^2-1)/4). - Mircea Merca, Nov 29 2010
n*a(n+2) = 2*a(n+1) + (n+2)*a(n). Holonomic Ansatz with smallest order of recurrence. - Thotsaporn Thanatipanonda, Dec 12 2010
a(n+1) = (n*(2+n) + n mod 2)/4. - Fred Daniel Kline, Sep 11 2011
a(n) = A199332(n, floor((n+1)/2)). - Reinhard Zumkeller, Nov 23 2011
a(n) = floor(b(n)) with b(n) = b(n-1) + n/(1+e^(1/n)) and b(0)= 0. - Richard R. Forberg, Jun 08 2013
a(n) = Sum_{i=1..floor((n+1)/2)} (n+1)-2i. - Wesley Ivan Hurt, Jun 09 2013
a(n) = floor((n+2)/2 - 1)*(floor((n+2)/2)-1 + (n+2) mod 2). - Wesley Ivan Hurt, Jun 09 2013
Sum_{n>=2} 1/a(n) = 1 + Zeta(2) = 1+A013661. - Enrique Pérez Herrero, Jun 30 2013
Empirical: a(n) = floor(n/(e^(4/n)-1)). - Richard R. Forberg, Jul 24 2013
a(n) = A007590(n)/2. - Wesley Ivan Hurt, Mar 08 2014
A240025(a(n)) = 1. - Reinhard Zumkeller, Jul 05 2014
0 = a(n)*a(n+2) + a(n+1)*(-2*a(n+2) + a(n+3)) for all integers n. - Michael Somos, Nov 22 2014
a(n) = Sum_{j=1..n} Sum_{i=1..n} ceiling((i+j-n-1)/2). - Wesley Ivan Hurt, Mar 12 2015
a(4n+1) = A002943(n) for all n>=0. - M. F. Hasler, Oct 11 2015
a(n+2)-a(n-2) = A004275(n+1). - Anton Zakharov, May 11 2017
a(n) = floor(n/2)*floor((n+1)/2). - Bruno Berselli, Jun 08 2017
a(3) = 2, floor(3/2)*ceiling(3/2) = 2.
[ n] a(n)
---------
[ 2] 1
[ 3] 2
[ 4] 1 + 3
[ 5] 2 + 4
[ 6] 1 + 3 + 5
[ 7] 2 + 4 + 6
[ 8] 1 + 3 + 5 + 7
[ 9] 2 + 4 + 6 + 8
From Wolfdieter Lang, Dec 09 2014 (Start)
Tiling of a triangular shape T_N, N>=1 with rectangles:
N = 5, n=6: a(6) = 9 because all the rectangles (i, j) (modulo transposition, i.e., interchange of i and j) which are of use are:
(5, 1) ; (1, 1)
(4, 2), (4, 1) ; (2, 2), (2, 1)
; (3, 3), (3, 2), (3, 1)
That is (1+1) + (2+2) + 3 = 9 = a(6). Partial sums of 1, 1, 2, 2, 3, ... (A004526). (End)
Bisymmetric matrices B: 2 X 2, a(3) = 2 from B[1,1] and B[1,2]. 3 X 3, a(4) = 4 from B[1,1], B[1,2], B[1,3], and B[2,2]. - Wolfdieter Lang, Jul 07 2015
From John M. Campbell, Jan 29 2016: (Start)
Letting n=5, there are a(n)=a(5)=6 partitions of 2n+1=11 of length three with exactly two even entries:
(8,2,1) |- 2n+1
(7,2,2) |- 2n+1
(6,4,1) |- 2n+1
(6,3,2) |- 2n+1
(5,4,2) |- 2n+1
(4,4,3) |- 2n+1
(End)
A002620 := n->floor(n^2/4); G002620 := series(x^2/((1-x)^2*(1-x^2)), x, 60);
with(combstruct):ZL:=[st, {st=Prod(left, right), left=Set(U, card=r), right=Set(U, card<r), U=Sequence(Z, card>=1)}, unlabeled]: subs(r=1, stack): seq(count(subs(r=2, ZL), size=m), m=0..57) ; # Zerinvary Lajos, Mar 09 2007
A002620:=-1/(z+1)/(z-1)^3; # Simon Plouffe in his 1992 dissertation, leading zeros dropped
A002620 := n -> add(k, k = select(k -> k mod 2 <> n mod 2, [$1 .. n])): seq(A002620(n), n = 0 .. 57);
# Peter Luschny, Jul 06 2011
f[n_] := Ceiling[n/2]Floor[n/2]; Table[ f[n], {n, 0, 56}] (* Robert G. Wilson v, Jun 18 2005 *)
a = 0; Table[(a = n^2 + n - a)/2, {n, -1, 90}] (* Vladimir Joseph Stephan Orlovsky, Nov 18 2009 *)
a[n_] := a[n] = 2a[n - 1] - 2a[n - 3] + a[n - 4]; a[0] = a[1] = 0; a[2] = 1; a[3] = 2; Array[a, 60, 0] (* Robert G. Wilson v, Mar 28 2011 *)
LinearRecurrence[{2, 0, -2, 1}, {0, 0, 1, 2}, 60] (* Harvey P. Dale, Oct 05 2012 *)
f[n_] := Block[{c = 0, m = n+1}, Do[ If[ MemberQ[ Range[x, y], (x + y)/2], c++ ], {x, m - 1}, {y, x + 1, m}]; c] (* Robert G. Wilson v, May 22 2014 *)
(MAGMA) [ Floor(n/2)*Ceiling(n/2) : n in [0..40]];
(PARI) a(n)=n^2\4
(PARI) t(n)=n*(n+1)/2; for(i=1, 50, print1(", ", (-1)^i*sum(k=1, i, (-1)^k*t(k))))
(PARI) a(n)=n^2>>2 \\ Charles R Greathouse IV, Nov 11 2009
(PARI) x='x+O('x^100); concat([0, 0], Vec(x^2/((1-x)^2*(1-x^2)))) \\ Altug Alkan, Oct 15 2015
(Haskell)
a002620 = (`div` 4) . (^ 2) -- Reinhard Zumkeller, Feb 24 2012
(Maxima) makelist(floor(n^2/4), n, 0, 50); /* Martin Ettl, Oct 17 2012 */
(Sage)
def A002620():
    x, y = 0, 1
    yield x
    while True:
        yield x
        x, y = x + y, x//y + 1
a = A002620(); print [a.next() for i in range(58)] # Peter Luschny, Dec 17 2015
(GAP) # using the formula by Paul Barry
A002620 := List([1..10^4], n-> (2*n^2 - 1 + (-1)^n)/8); # Muniru A Asiru, Feb 01 2018
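(Python) # an editorial addition for illustration, not part of the original entry; a direct transcription of the closed form a(n) = floor(n^2/4):

```python
def A002620(n):
    # quarter-squares: a(n) = floor(n/2) * ceiling(n/2) = floor(n^2/4)
    return n * n // 4

print([A002620(n) for n in range(12)])  # [0, 0, 1, 2, 4, 6, 9, 12, 16, 20, 25, 30]
```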
A087811 is another version of this sequence.
Cf. A024206, A072280, A002984, A007590, A000212, A118015, A056827, A118013, A128174, A000601, A115514, A189151, A063657, A171608, A005044, A030179, A275437, A004526.
Differences of A002623. Complement of A049068.
a(n) = A014616(n-2) + 2 = A033638(n) - 1 = A078126(n) + 1. Cf. A055802, A055803.
Antidiagonal sums of array A003983.
Cf. A033436 - A033444. - Reinhard Zumkeller, Nov 30 2009
Cf. A008233, A008217, A014980, A197081, A197122.
Elliptic troublemaker sequences: A000212 (= R_n(1,3) = R_n(2,3)), A007590 (= R_n(2,4)), A030511 (= R_n(2,6) = R_n(4,6))), A033436 (= R_n(1,4) = R_n(3,4)), A033437 (= R_n(1,5) = R_n(4,5)), A033438 (= R_n(1,6) = R_n(5,6)), A033439 (= R_n(1,7) = R_n(6,7)), A184535 (= R_n(2,5) = R_n(3,5)).
Cf. A077043, A060656 (2^a(n)).
Sequence in context: A088900 A083392 A076921 * A087811 A025699 A224813
Adjacent sequences: A002617 A002618 A002619 * A002621 A002622 A002623
nonn,easy,nice,core
N. J. A. Sloane
approved | https://oeis.org/A002620 | CC-MAIN-2018-17 | refinedweb | 5,209 | 73.17 |
On 7/12/2012 7:20 AM, Tim Watts wrote:
> No offense, but this really isn't a list for teaching Java.
> But what the hell, I'll make one "exception":
>
> public class DAOException extends Exception {
> public DAOException() {
> super();
> }
>
> public DAOException(String msg) {
> super(msg);
> }
>
> public DAOException(String msg, Throwable cause) {
> super(msg, cause);
> }
>
> public DAOException(Throwable cause) {
> super(cause);
> }
> }
>
> That's all I'll say on this thread.
I know Tim, something was missing. Just 20 mins back I fixed the class; now
I get the cool stack trace. Damn it, the file upload stuff is breaking and
passing null values. I will fix it.
I know this is not a place to learn Java and that's not my intention. It's
time for me to implement a robust logging framework, so I thought let me ask
here which one is lightweight and yet can log exceptions, preferably both
handled and unhandled.
And thanks for the heads-up.
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org | http://mail-archives.apache.org/mod_mbox/tomcat-users/201207.mbox/%3C4FFE4573.1080707@poonam.org%3E | CC-MAIN-2015-27 | refinedweb | 177 | 56.55 |
Abstract data types in C
What's an abstract data type?
You're well acquainted with data types by now, like integers, arrays, and so on. To access the data, you've used operations defined in the programming language for the data type, for instance by accessing array elements by using the square bracket notation, or by accessing scalar values merely by using the name of the corresponding variables.
This approach doesn't always work on large programs in the real world, because these programs evolve as a result of new requirements or constraints. A modification to a program commonly requires a change in one or more of its data structures. For instance, a new field might be added to a personnel record to keep track of more information about each individual; an array might be replaced by a linked structure to improve the program's efficiency; or a bit field might be changed in the process of moving the program to another computer. You don't want such a change to require rewriting every procedure that uses the changed structure. Thus, it is useful to separate the use of a data structure from the details of its implementation. This is the principle underlying the use of abstract data types.
Here are some examples.
- stack: operations are "push an item onto the stack", "pop an item from the stack", "ask if the stack is empty"; implementation may be as array or linked list or whatever.
- queue: operations are "add to the end of the queue", "delete from the beginning of the queue", "ask if the queue is empty"; implementation may be as array or linked list or heap.
- search structure: operations are "insert an item", "ask if an item is in the structure", and "delete an item"; implementation may be as array, linked list, tree, hash table, ...
There are two views of an abstract data type in a procedural language like C. One is the view that the rest of the program needs to see: the names of the routines for operations on the data structure, and of the instances of that data type. The other is the view of how the data type and its operations are implemented. C makes it relatively simple to hide the implementation view from the rest of the program.
Implementation of abstract data types in C
In C, a complex data type is typically represented by a pointer to the information stored in the data type. A C function can pass the pointer to other functions without knowing the details of what the pointer points to. One may also use parts of a program that have been separately compiled. All that a part of a program need know about functions it calls is their names and the types they return.
Thus, the convention in C is to prepare two files to implement an abstract data type. One (whose name ends in ".c") provides the implementation view; it contains the complete declaration for the data type, along with the code that implements its associated operations. The other (whose name ends in ".h") provides the abstract view; it contains short declarations for functions, pointer types, and globally accessible data.
The abstract data type "stack" might then be represented as follows:
stack.h

typedef struct StackStructType *StackType;

StackType InitStack ( );
void Push (StackType stack, int item);
int Pop (StackType stack);
void PrintStack (StackType stack);

stack.c

#include "stack.h"
#include <stdlib.h>

#define STACKSIZE 5

struct StackStructType {           /* stack is implemented as */
    int stackItems [STACKSIZE];    /* an array of items */
    int nItems;                    /* plus how many there are */
};

/*
** Return a pointer to an empty stack.
*/
StackType InitStack ( )
{
    StackType stack;

    stack = (StackType) calloc (1, sizeof (struct StackStructType));
    stack->nItems = 0;
    return (stack);
}
...
Parts of the program that need to use stacks would then contain a line

#include "stack.h"
(Often the ".c" file also includes the corresponding ".h" file, as shown above, to insure consistency of function and type declarations.) The program would call InitStack to set up a stack, receiving in return a pointer of type StackType; it would access the stack by passing this pointer to Push, Pop, etc. Such a program would work no matter what the contents of stack.c, provided only that the implemented functions performed as specified in stack.h.
One more subtlety: It occasionally happens that a module #includes more than one .h file, and the second .h file also #includes the first. This produces compiler complaints about definitions occurring more than once. The way to avoid these complaints is to use C's conditional compiling facility (described in K&R section 4.11.3) to make sure definitions only appear once to the compiler. Here is an example of its use with the stack code above.

#ifndef STACK            /* any suggestive variable name is fine */
#define STACK            /* define it if it's not already defined */

typedef struct StackStructType *StackType;

StackType InitStack ( );
void Push (StackType stack, int item);
int Pop (StackType stack);
void PrintStack (StackType stack);

#endif
The first time the stack.h file is encountered, the STACK variable won't (shouldn't) have been defined, so the body of the #ifndef is compiled. Since the body provides a definition of STACK, subsequent inclusions of stack.h will bypass the body, thereby avoiding multiple definitions of InitStack, Push, Pop, and PrintStack. | http://inst.eecs.berkeley.edu/~selfpace/studyguide/9C.sg/Output/ADTs.in.C.html | CC-MAIN-2016-22 | refinedweb | 849 | 62.17 |
This project was showcased at Intel IoT Hackthon 2015, Pune INDIA.
So what's the idea and reason behind the project?
The project aims to provide medical assistance to the rural population with the help of electronic hardware and a cloud platform, so that doctors can remotely examine patients and study the reports.
Problem: Many rural areas are still deprived of primary medical check-ups, services, and certified doctors. Villagers often have to travel to urban hospitals in emergencies, or even for minor medical tests.
So let's go!
Step 1: What You Need?
Hardware:
- Intel Edison with an Arduino-type breakout board
- Temperature Sensor: Any one will work. I used the Grove sensor.
- Heartbeat Sensor: Here also I used a Grove sensor, but you can use any sensor that gives digital output pulses.
- 16x2 Display: To display the samples of temperature and heartbeat. I used the Grove RGB LCD. Remember that the coding of the Grove LCD is different from that of regular 16x2 LCDs.
- Grove Base Shield V2: Just for expanding the ports of the board. You can even go ahead without this.
- Power supply for the board
- 2 Micro-B to Type-A USB cables
- Wires and all
Software:
- Drivers and software package of Intel Edison installed on your PC. You can download all the Edison-related software from this page.
- Terminal Window: To communicate with Edison
- Notepad: For writing the Python script for Edison
- MIT App Inventor 2: This is an online Android app builder which doesn't require coding skills. Exciting, right? We will make a simple app using this.
- A website: The website will be used for observing live analytics and user interaction. No need to purchase your own domain; just get a free blog at blogger.com
Other:
- Android mobile
- Windows-based computer
Step 2: Setting Up Intel Edison
Assemble your board
You can follow the brief instructions from the Intel website here, or follow the video here.
Connect the board to your system
Plug in the power supply. Note: If you do not have a DC power supply, you can still power the board through a USB port. A green LED should light up on the expansion board; if it doesn't, check your connection. Find the microswitch in between the USB ports on the expansion board and switch it down towards the micro-USB ports, if it isn't already. Plug in one of the micro-USB cables to the middle USB connector on the expansion board, and plug the other end of the USB cable into your computer.
Voila! You've set up the hardware.
Step 3: Interfacing Sensors
So now we interface the two sensors mentioned earlier.
Connect the Grove base shield to the main board and connect the sensors through it. You can even connect the sensors directly, but it's easier if you go with the base shield.
1. Temperature Sensor
Just connect the Grove sensor to the base shield.
2. Light Sensor
Here a light sensor is used to detect the pulses reflected back from your finger. Adjust the pot suitably, by trial and error, to get cleanly toggled heartbeat pulses. You can attach the sensor to your finger with a rubber band or anything similar, so it stays stable. Connect this sensor to the Edison base shield.
Step 4: Create Thingspeak Channel
We have to upload the sensor data to a cloud platform so that it can be monitored remotely. We chose ThingSpeak as the data platform for the heartbeat and temperature sensor data.
You can also use other platforms from Microsoft, Amazon (AWS), or Intel itself.
So let's get started with ThingSpeak.
What is ThingSpeak?
ThingSpeak is an application platform for the Internet of Things. ThingSpeak allows you to build an application around data collected by sensors. Features of ThingSpeak include: real-time data collection, data processing, visualizations, apps, and plugins.
At the heart of ThingSpeak is a ThingSpeak Channel. A channel is where you send your data to be stored. Each channel includes 8 fields for any type of data, 3 location fields, and 1 status field. Once you have a ThingSpeak Channel you can publish data to the channel, have ThingSpeak process the data, and then have your application retrieve the data.
Posting on ThingSpeak
You put data into a ThingSpeak Channel by using HTTP POST.
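For example, an update is just an HTTP request to api.thingspeak.com carrying your channel's Write API Key and the field values. A minimal sketch of building that request (the key here is a placeholder, and the field numbers are whatever you mapped in your channel):

```python
def build_update_query(api_key, field1, field2):
    # ThingSpeak stores whatever you send as field1..field8 of your channel
    return "/update?api_key=%s&field1=%s&field2=%s" % (api_key, field1, field2)

query = build_update_query("YOUR_WRITE_API_KEY", 72, 37)
print(query)  # /update?api_key=YOUR_WRITE_API_KEY&field1=72&field2=37

# Sending it (Python 3 shown; the Edison script in Step 5 uses the
# Python 2 equivalent, httplib):
#   import http.client
#   conn = http.client.HTTPConnection("api.thingspeak.com")
#   conn.request("GET", query)
```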
Visit the ThingSpeak site to get detailed information about creating channels.
So once you create a channel, you can see graphs of your data.
Step 5: Coding the Edison
We have used a Python script instead of an Arduino sketch because of the readily available libraries. Some Grove sensors as well as LCDs are not supported in Arduino as of now.
This code simply takes input from both sensors, manipulates the sampled data, and posts it to the ThingSpeak channel.
Python Code:
import pyupm_i2clcd as lcdd
import mraa
import time
import httplib
import math
import socket
import sys

temppin = 0
heartbeatpin = 2

temp = mraa.Aio(temppin)
beat = mraa.Gpio(heartbeatpin)
beat.dir(mraa.DIR_IN)

lcd = lcdd.Jhd1313m1(0, 0x3E, 0x62)

count = 0
beatrate = 0
currtime = time.time()
key = "your_key_here"

def beatme(args):
    global count
    global beatrate
    global currtime
    count = count + 1
    if (time.time() - currtime) > 20:
        beatrate = 60 * count / (time.time() - currtime)
        currtime = time.time()
        count = 0
    return

beat.isr(mraa.EDGE_RISING, beatme, beatme)

B = 3975

def gettemp():
    a = temp.read()
    resistance = (float)(1023 - a) * 10000 / a
    temperature = 1 / (math.log(resistance / 10000) / B + 1 / 298.15) - 273.15
    return temperature

def updateme(f1, f2):
    global key
    conn = httplib.HTTPConnection("api.thingspeak.com")
    query = "/update?api_key=" + key + ("&field1=%d&field2=%d" % (f1, f2))
    conn.request("GET", query)
    r1 = conn.getresponse()
    preresult = r1.read()
    observation("temp", f2)
    observation("beat", f1)
    return preresult

def observation(var, val):
    UDP_PORT = 41234  # UDP port iotkit-agent listens on; defined in /etc/iotkit-agent/config.json
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto('{"n":"' + str(var) + '","v":"' + str(val) + '"}', ('localhost', UDP_PORT))
    return

while 1:
    lcd.setCursor(0, 0)
    lcd.write("Temperature:%d" % (gettemp()))
    lcd.setCursor(1, 0)
    lcd.write("Heartbeat:%d" % (beatrate))
    updateme(beatrate, gettemp())
Step 6: Creating Interactive Web Platform
The embed HTML of a specific channel can be placed on a website. Here's how the HTML of a channel looks:
<iframe width="300" height="250" style="border: 1px solid #cccccc;" src=""></iframe>
To show the data, we need a website.
To access the data of a specific channel, we take the channel ID as input from the user and then use JavaScript to modify the HTML of the ThingSpeak embed accordingly.
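A sketch of that idea in JavaScript (the URL pattern below follows ThingSpeak's standard chart embeds, and the element id is hypothetical):

```javascript
// Build the embed URL for a user-supplied channel ID and field number.
function chartUrl(channelId, fieldNumber) {
  return "https://thingspeak.com/channels/" + channelId +
         "/charts/" + fieldNumber + "?width=300&height=250";
}

// In the page, something like:
//   document.getElementById("chart").src = chartUrl(userInput, 1);
console.log(chartUrl(123456, 1));
```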
You can see our website for reference: rmaps.blogspot.in.
Step 7: Creating Android App
Creating the Android app is optional.
Using MIT App Inventor 2 you can create an Android app without coding skills. App Inventor uses block programming. You can put the same HTML code into the app just by creating a web view, and you're done.
Step 8: Done!
Participated in the Epilog Contest VII
Participated in the First Time Author Contest
2 Discussions
3 years ago
Good prototype. But temperature and heart rate sensors alone can't help a doctor help a patient.
4 years ago on Introduction
Cool monitoring system. | https://www.instructables.com/id/Cloud-Health-Monitoring-Intel-IoT/ | CC-MAIN-2019-47 | refinedweb | 1,202 | 59.19 |
VCL Controls:
The Database Radio Group Control
Introduction
When creating a database object such as a table, you may
provide a column that allows a user to set a single value from a small list of
values. For example, you can create a column that would be used to specify the
gender of an applicant. Instead of letting the user enter either Female or Male,
you can create the options yourself and then let the user only select the
desired one. To implement this scenario in a Windows application,
you can use radio buttons.
To support radio buttons in a database, the VCL provides the
DBRadioGroup control. The DBRadioGroup control primarily behaves like a
RadioGroup control. In fact, both descend from the TCustomRadioGroup
class. To use a DBRadioGroup in your application, you can click it from the Data Controls
section of the Component Palette and click the desired container.
Like the RadioGroup control, the radio buttons of a
DBRadioGroup object are represented by the Items property that is a list
based on the TStrings class. This also means that, when creating a table
for your application, you can create a string-based column and you would later
on associate it to a DBRadioGroup object. You probably already know that if you
create a text-based column, it can have just any value. If you intend to
associate such a column with a DBRadioGroup control, it is your responsibility to make
sure that the list of possible strings for this column is limited.
In fact, when planning your table, you should make sure that the user will not
be allowed to create new options. For example, if you create a column that you
anticipate to have a fixed list of strings, such as Female and Male, keep in
mind that you may not allow the user to add new strings such as Unknown or
Hermaphrodite.
Practical Learning: Introducing the DB Radio Group Control
Characteristics of the DB Radio Group Control
If you add a DBRadioGroup control to your application, at
design time, you should create a list of the available options yourself. To do
this, display the String List Editor from double-clicking the Strings field of
the Items property in the Object Inspector.
As described above, each item of a DBRadioGroup control's radio buttons is a
member of a TStrings collection and is therefore of type AnsiString. The
selected item is exposed through the Value property of the control. This means
that, when the user selects a radio button, its Value, not its
TStrings::ItemIndex property, is what gets stored in the corresponding column
of the table. This also implies that the value of the record is saved as a
string: the caption of the radio button, as you created it through the Items
property. Although you can use the properties of the TStrings class to
identify the radio button that the user clicked, when the record is saved it
is the Value, a string of type AnsiString holding the button's caption, that
goes into the field, not the ItemIndex.
While each radio button holds its own Value string, the group of
values of the radio buttons is stored in the Values property of the DBRadioGroup
control.
Practical Learning: Using a DB Radio Group Control
//---------------------------------------------------------------------------
#include <vcl.h>
#include <math.h>
#pragma hdrstop
#include "Exercise.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TfrmMain *frmMain;
//---------------------------------------------------------------------------
__fastcall TfrmMain::TfrmMain(TComponent* Owner)
: TForm(Owner)
{
}
//---------------------------------------------------------------------------
void __fastcall TfrmMain::btnCalculateClick(TObject *Sender)
{
double Principal, InterestRate, InterestEarned,
AmountPaid;
int Periods, CompoundType;
Principal = this->dbePrincipal->Text.ToDouble();
InterestRate = this->dbeInterestRate->Text.ToDouble() / 100;
switch( this->grpFrequency->ItemIndex )
{
case 0:
CompoundType = 12;
break;
case 1:
CompoundType = 4;
break;
case 2:
CompoundType = 2;
break;
case 3:
CompoundType = 1;
break;
default:
CompoundType = 1; // no button selected; fall back to annual compounding
break;
}
Periods = this->dbePeriods->Text.ToInt();
double i = InterestRate / CompoundType;
int n = CompoundType * Periods;
AmountPaid = Principal * pow(1 + i, n);
InterestEarned = AmountPaid - Principal;
this->dbeInterestEarned->Text = FloatToStrF(InterestEarned, ffFixed, 8, 2);
this->dbeAmountPaid->Text = FloatToStrF(AmountPaid, ffFixed, 8, 2);
}
//--------------------------------------------------------------------------- | http://www.functionx.com/bcb/controls/dbradiogroup.htm | CC-MAIN-2016-07 | refinedweb | 677 | 50.36 |
pyaudio (Part 1)
00:00
Congrats! You've made it to the last library we'll cover for playing audio. In this video, you'll learn how to use pyaudio, which also provides bindings for PortAudio. PortAudio keeps it cross-platform, so it will work on many different operating systems. pyaudio is a bit different from what we've talked about so far in that the audio you hear is played by writing to a stream. It might be easier to see this in action, so let's get to installing it and head to the text editor.

00:28
I'm going to do pipenv install pyaudio,

00:33
and while that's going, in the text editor I'm going to import pyaudio and then also import wave.

00:42
So, like before, set a filename, which in my case is 'hello.wav'. And then I'm going to need to set a chunk size as well, so let's just do 1024. And a chunk is just how many samples are put into each data frame. Now it's time to open the sound file, so we'll call that wf and set that equal to wave.open(), and then pass in the filename, and you'll want to read this as a binary.

01:12
Next, create an interface to PortAudio. So, with pyaudio, you can then call PyAudio(). Okay. Now it's time to create that stream object, so say stream and set this equal to p.open(), so it'll open up that interface. And then you'll want to set your format, which is going to be p.get_format_from_width(), and then with the WAV file, you'll get that sample width.

01:50
Make a channels, which will be the wf.getnchannels(),

02:00
the rate, which will be wf.getframerate(), and then output, set that equal to True. Okay, so there's quite a bit going on here.

02:13
I'm actually just going to close this EXPLORER so we can see a little bit better and bring the terminal down.

02:20
There's two main objects here. There's the actual WAV file and then there's the connection to the audio device. So, stream is going to be through this audio device, and it's going to need a number of attributes that come from the WAV file.

02:36
So you'll see that most of the data here, like the sample width, the number of channels, and the frame rate, are all coming from the WAV file and being passed into this interface. Setting output equal to True means that the sound is going to be played instead of recorded.

02:51
Now it's time to read that data in chunks, so set data equal to, from your WAV file, you'll want .readframes(), and use that chunk size that you created earlier,

03:07
and then start writing that audio data to the stream. So, while data does not equal an empty string (''),

03:17
you'll want to do stream.write(data), and then set data equal to wf.readframes(), and grab that next chunk. So that'll keep running until the file's done, and after that, you can close the stream, and then terminate the interface. Okay.
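Assembled from the walkthrough, the playback loop follows a simple pattern: read a chunk, write a chunk, stop when readframes() comes back empty. Here is that loop in isolation, using only the standard-library wave module so it runs (and terminates) without an audio device; the line where pyaudio's stream.write() would go is marked. Note that on Python 3, readframes() returns bytes, so while data: is a safer test than the transcript's while data != '' (see the comments below).

```python
import wave

# Write a tiny one-second silent WAV file so the loop has something to read
# (8000 frames, mono, 16-bit); 'tiny.wav' is just a scratch filename.
with wave.open('tiny.wav', 'wb') as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(8000)
    out.writeframes(b'\x00\x00' * 8000)

chunk = 1024
wf = wave.open('tiny.wav', 'rb')

chunks_played = 0
data = wf.readframes(chunk)
while data:                  # b'' is falsy, so this ends on Python 2 and 3
    # stream.write(data)     # <- this is where pyaudio would play the chunk
    chunks_played += 1
    data = wf.readframes(chunk)

wf.close()
print(chunks_played)         # 8000 frames / 1024 per chunk -> 8 chunks
```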
03:39
So to recap that section there, you're going to start chunking your WAV file into these little data frames, and as long as there's another data frame, you're going to write that data to the stream and then select the next chunk for the next frame. Once everything's completed, you're going to want to close that stream and then terminate your interface to the audio device. Okay! Let's save this and see if it works.

04:08
“Hey there, this is a WAV file.” And there you go! Now, you might have picked up that this is pretty complex compared to what we've seen before, and it might seem strange to have to use, what, 24 lines of code when other libraries only needed a single line.

04:26
And there is a reason for this. pyaudio is going to give you a lot more low-level control over how you play your sounds, which can be helpful depending on the demands of your application. Now, let's say you have a little desktop application and you want some sound effects in there to notify a user of something.

04:43
Those might be better handled using a higher-level library where you just need a couple lines of code. Another case might be a more audio-centric program that would require the level of control that pyaudio provides.

04:55
pyaudio is going to give you a lot more control over input and output devices and can actually check the CPU load for latency. So if you're making something like a digital audio workstation used in music production, that's the kind of project that would benefit greatly from the level of control that pyaudio will give you. And that's it for playing audio!

05:14
Great job finishing this section of the course. You've learned how to use quite a few audio libraries in Python. Now that you can play that audio, it's time to learn how to record it.

05:24
You're going to see some familiar libraries in the next couple of videos, but now from a different angle. Thanks for watching.
Thanks Brendan, for confirming the infinite loop I experienced. I also came to suggest while data: instead of while data != ‘’:
And thanks Joe Tatusko for the tutorials on audio.
Brendan Leber on Feb. 12, 2020

On my system with Python 3.8.1, wf.readframes() doesn't return a string; it returns a bytes object. So the test in the while loop always passes and the program continues with no sound playing.

Changing the while loop to use while data: fixed the infinite loop on my system, and I think it should work for Python 2 and 3.
The idea is to encode a unit-length quaternion in a manner analogous to cube mapping, but with one extra dimension and taking advantage of the property that a sign flip doesn't change which rotation is being represented.
One can think of cube mapping as encoding points on a sphere by first indicating which coordinate has the largest absolute value and what sign it has (i.e., which of the 6 faces of the axis-aligned cube the point projects to), and then giving the remaining coordinates divided by the largest one (i.e., what point of the face the point projects to).
In our case, we don't need to encode the sign of the largest component, so we only need to use 2 bits to encode what the largest component is, and we can use the remaining bits to encode the other three components.
I think 32 bits is probably good enough for animation data in a game, and it's convenient that 30 is a multiple of 3, so it's easy to encode the other components. Actually, even if we didn't have that convenience, it wouldn't be a big deal to use a resolution that is not a power of 2, but some integer divisions would be involved in the unpacking code.
Here's the code, together with a main program that generates random rotations and measures how bad the dot product between the original and the packed and unpacked gets (the dot product seems to be > 0.999993, although I haven't made a theorem out of it):
#include <iostream>
#include <cstdlib>
#include <cmath>
#include <boost/math/quaternion.hpp>

typedef boost::math::quaternion<double> quaternion;

int double_to_int(double x) {
  return static_cast<int>(std::floor(0.5 * (x + 1.0) * 1023.0 + 0.5));
}

double int_to_double(int x) {
  return (x - 512) * (1.0 / 1023.0) * 2.0;
}

struct PackedQuaternion {
  // 2 bits to indicate which component was largest
  // 10 bits for each of the other components
  unsigned u;

  PackedQuaternion(quaternion q) {
    int largest_index = 0;
    double largest_component = q.R_component_1();
    if (std::abs(q.R_component_2()) > std::abs(largest_component)) {
      largest_index = 1;
      largest_component = q.R_component_2();
    }
    if (std::abs(q.R_component_3()) > std::abs(largest_component)) {
      largest_index = 2;
      largest_component = q.R_component_3();
    }
    if (std::abs(q.R_component_4()) > std::abs(largest_component)) {
      largest_index = 3;
      largest_component = q.R_component_4();
    }
    q *= 1.0 / largest_component;

    int a = double_to_int(q.R_component_1());
    int b = double_to_int(q.R_component_2());
    int c = double_to_int(q.R_component_3());
    int d = double_to_int(q.R_component_4());

    u = largest_index;
    if (largest_index != 0) u = (u << 10) + a;
    if (largest_index != 1) u = (u << 10) + b;
    if (largest_index != 2) u = (u << 10) + c;
    if (largest_index != 3) u = (u << 10) + d;
  }

  quaternion get() const {
    int largest_index = u >> 30;
    double x = int_to_double((u >> 20) & 1023);
    double y = int_to_double((u >> 10) & 1023);
    double z = int_to_double(u & 1023);
    quaternion result;
    switch (largest_index) {
      case 0: result = quaternion(1.0, x, y, z); break;
      case 1: result = quaternion(x, 1.0, y, z); break;
      case 2: result = quaternion(x, y, 1.0, z); break;
      case 3: result = quaternion(x, y, z, 1.0); break;
    }
    return result * (1.0 / abs(result));
  }
};

double rand_U_0_1() {
  return std::rand() / (RAND_MAX + 1.0);
}

quaternion random_rotation() {
  quaternion result;
  do {
    result = quaternion(rand_U_0_1() * 2.0 - 1.0,
                        rand_U_0_1() * 2.0 - 1.0,
                        rand_U_0_1() * 2.0 - 1.0,
                        rand_U_0_1() * 2.0 - 1.0);
  } while (norm(result) > 1.0);
  return result * (1.0 / abs(result));
}

double dot_product(quaternion q, quaternion p) {
  return q.R_component_1() * p.R_component_1()
       + q.R_component_2() * p.R_component_2()
       + q.R_component_3() * p.R_component_3()
       + q.R_component_4() * p.R_component_4();
}

int main() {
  double worst_dot_product = 1.0;
  for (int i = 0; i < 1000000000; ++i) {
    quaternion q = random_rotation();
    PackedQuaternion pq(q);
    quaternion p = pq.get();
    if (dot_product(p, q) < 0)
      p *= -1.0;
    if (dot_product(p, q) < worst_dot_product) {
      worst_dot_product = dot_product(p, q);
      std::cout << i << ' ' << q << ' ' << p << ' ' << worst_dot_product << '\n';
    }
  }
}
Any comments are welcome, and feel free to use the idea or the code if you find them useful.
Edited by alvaro, 05 July 2012 - 09:01 AM. | http://www.gamedev.net/topic/627484-packing-a-3d-rotation-into-32-bits/ | CC-MAIN-2015-14 | refinedweb | 655 | 50.43 |
Followup question to Big XML File:
First thanks a lot for yours answers.
After… what I do wrong?
This is my class which uses SAX:
public class SAXParserXML extends DefaultHandler {
public ...
I'm getting the error in the title occasionally from a process the parses lots of XML files.
The files themselves seem OK, and running the process again on the same files that ...
I have a large XML file that consists of relatively fixed size items i.e.
<rootElem>
<item>...</item>
<item>...</item>
<item>...</item>
<rootElem>
I am facing problem with xml parsing. I am using SAXParsing to parse the xmlfile and override the methods startElement,endElement,characters of DefaultHandler class.I took ByteArrayInputStream to read the file and give ...
I am creating a tool that analyzes some XML files (XHTML files to be precise). The purpose of this tool is not only to validate the XML structure, but also to ...
XML
XHTML
I have 6 XML files containing the following tag
the first XML file is
<root>
<firstName> Smith</firstName>
<lastname>Joe</lastname>
<Age>60</age>
</root>
<root>
<firstName> John</firstName>
<lastname>Andrew</lastname>
<Age>55</age>
</root>
this is my xml file
<?xml version="1.0" encoding="utf-8"?>
<settings></settings>
public void load( String fileName ) {
...
Document xmlDocument = null;
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = null;
try
{
builder ...
So right now I am using the SAX parser in Java to parse the "document.xml" file located within a .docx file's archive. Below is a sample of what I am trying ...
I'm using Java 6, trying to parse this namespace-less XML ...
<gen type='section' name='Gen Info'>
<pmult type='input' name='Price Multiplier' nwidth='200' vlength='10'>
...
I want to open a local xml file to parse it.
So i've this line : saxReader.parse("file.xml");
And i've this error open failed: ENOENT (No such file or directory)
So I try to resolve ...
saxReader.parse("file.xml");
open failed: ENOENT (No such file or directory)
I am using SAX to parse some large XML files and I want to ask the following: The XML files have a complex structure. Something like the following:
<library>
...
Hello, I have a xsd schema file by which I could create a sample empty XML (without any attributes and text values). I wish to populate this XML by values which i get from database. I wish to use SAX (since at some point before I have implemented a SAX parser to read in values). Any suggestions on the same would ...
BufferedWriter out = new BufferedWriter(new FileWriter("Testing1.txt",true)); if (conceptui) { out.write("ConceptUI: "); out.write(new String(ch, start, length)); out.write(" , "); conceptui = false; } else if (conceptname) { out.write("ConceptName: "); out.write(new String(ch, start, length)); out.newLine(); conceptname = false; } out.close(); } catch (IOException e) { System.out.println("IOException:"); e.printStackTrace(); } } } } I run this code to parse an XML file (500MB) on a ... | http://www.java2s.com/Questions_And_Answers/Java-File/XML-file/sax.htm | CC-MAIN-2013-48 | refinedweb | 487 | 68.57 |
In my previous post, I described how I used React and the Amplify CLI to implement an initial front-end for the Bearcam Companion. This time I will write about
- UI improvements (especially the bounding boxes)
- Adding authentication, sign-up and sign-in
- Implementing a method for users to identify bears
UI Improvements
Last time I mentioned I was not happy with using
<canvas> elements for drawing bounding boxes around the bears. I set out to use
<div> and CSS instead, as inspired by the Amazon Rekognition demo interface:
I wrapped my
<img> element with a relatively positioned
<div>. I created a
Boxes component, and used the map() function to instantiate each box in the boxList:
<div style={{position:'relative', margin:'auto', display: 'block'}}> <img id="refImage" ref={inputEl} src={imagePath} { boxList.map( (box) => <Boxes key={box.id} box={box} /> )} </div>
In
Boxes.js, I get the box information: top, left, height and width from the respective
box fields. I use these to set the location of an absolutely positioned
<div>. I add the label text in another
<div> along with the confidence (converted to a percentage by multiplying by 100 and truncating). The code snippet looks like this:
const boxTop = `${box.top*100}%` const boxLeft = `${box.left*100}%` const boxHeight = `${box.height*100}%` const boxWidth = `${box.width*100}%` return( <div className="bbox tooltip" key={box.id} style={{top: boxTop, left: boxLeft, height: boxHeight, width: boxWidth }} > <div className="identname">{box.label} ({Math.trunc(box.confidence*100)})</div> </div> )
Using CSS, I control the
bbox and
identname styles and locations. I use the
:hover properties to control the color of the
bbox and the visibility of the text. With this implementation, I have a much better bounding box experience (note the blue, default box on the left and the red, hover box on the right):
Authentication
Before allowing the user to identify the bears, I want to set up authentication. My main motivation is to associate identifications with users. This will ensure I only get one identification per user and may also come in handy for future functionality.
I used Amplify Studio to enable authentication, select a Username based login mechanism and configure the sign up options. Back on my developer machine, I performed an
amplify pull to get the authentication changes. Enabling the built in sign in and sign up flow is as simple as wrapping
App in
withAuthenticator. I can now access the user information from
user:
import { withAuthenticator } from '@aws-amplify/ui-react'; function App({ signOut, user }) { return ( <div className="App"> <header className="App-header"> <div className="headerImage"> <img width={200} height={65} </div> <Heading level={5}Hello, {user.username} </Heading> <Button onClick={signOut}Sign out</Button> </header> <Heading level={4}>Bearcam Companion</Heading> <FrameView user={user} /> <footer className="App-footer"> <h2>©2022 BearID Project</h2> </footer> </div> ); } export default withAuthenticator(App);
The default sign in screen looks like this:
Identifications
Now that the user is logged in, I want them to be able to identify the bears in the images. I created a new data model, Identifications. This model includes the name of the bear, name, and username of the user that made the identification, user. Since each bear can be identified by multiple users, I need to create a 1:n relationship between Objects and Identifications. I called this field objectsID. The model in Amplify Studio looks like this:
After an
amplify pull I can start using the new data model in my front end. Now I can get all the Identifications for the current box with a call like this:
const idents = await DataStore.query(Identifications, c => c.objectsID("eq", box.id));
This gives me all the individual Identifications for the box. What I really want is a tabulation of votes for each bear name. Then I can show the top voted name (and percentage) in the default box view, like this:
DataStore doesn't provide this sort of aggregation (nor does DynamoDB behind it). I found a bit of code using
.reduce to group my
idents from above by a key, and a count for each key:
function groupIdents(list, key) { return list.reduce(function(rv, x) { rv[x[key]] = rv[x[key]] ? ++rv[x[key]] : 1; return rv; }, {}); };
I call
groupIdents with
idents and a key of
name, which is the bear name. I then sort the results by the count.
const gIdents = groupIdents(idents,"name"); pairIdents = Object.entries(gIdents).sort((a,b) => b[1]-a[1]);
I want to use
idents in a new component, BoxIDs, which will render the sorted list of bear names and counts/percentages. I want this content to to show for each box and update when new identifications are added. To manage this, I made use of useState() and useEffect() hooks. I created a
useState() hooks for my sorted list of names/counts (identAgg) and total count (identCount):
const [identAgg, setIdentAgg] = useState([["Unknown", 1]]); const [identCount, setIdentCount] = useState(1);
As you can see, I set the default
identAgg to have the name "Unknown" with a count of 1. I also set the default identCount to 1. I will use these values when no identifications have been made.
The
useEffect() hook lets me run code on certain lifecycle events or when things change. I wrapped the previous code in the
useEffect() so that it runs when
box.id changes:
useEffect(() => { async function getIdents() { var idents = await DataStore.query(Identifications, c => c.objectsID("eq", box.id)); var pairIdents = [["Unknown", 1]]; var count = 1; if (idents.length) { const gIdents = groupIdents(idents,"name"); pairIdents = Object.entries(gIdents).sort((a,b) => b[1]-a[1]); count = idents.length; } setIdentList(idents); setIdentCount(count); setIdentAgg(pairIdents); } getIdents(); DataStore.observe(Identifications).subscribe(getIdents); }, [box.id]);
I can display the top identification and count/percent information by adding the following to my render:
<div className="identname">{identAgg[0][0]} ({identAgg[0][1]}/{identCount} = {Math.trunc(identAgg[0][1]*100/identCount)}%)
That takes care of the default view I showed previously. When the user hovers over the box, I want to show more details like this:
In this case I choose to show the sorted list of top identifications and their respective counts. The new
BoxIDs component renders the name and count for each aggregated identification:
import React from 'react' export default function BoxIDs({ ident }) { return( <div >{ident[0]} ({ident[1]})</div> ) }
I added it to
Boxes by inserting the following into the render:
<div className="identdetails"> { identAgg.map( (ident) => <BoxIDs key={box.id + "-" + ident[0]} ident={ident} /> ) } <SetID boxID={box.id} curList={identList} username={username} /> </div>
You may have noticed
SetID above. This component shows the user's current selection and implements a drop down list of all the possible identifications. The user's current selection is found by searching the list of identifications for one where the
user matches the current user. When the user selects an identification from the drop down, it creates a new Identification for the user. If the user has previously made an identification, it modifies the existing one instead. The UI looks like this:
Conclusion
That wraps up the latest round of changes. This is getting close to something users can test. I still need to implement a way to pull in new images and automatically find the bears and there are always UI improvements to be made. It's also about time to put everything in a code repository.
I'll cover these topics next time...
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/aws-builders/bearcam-companion-ui-improvements-authentication-and-identifications-3h4i | CC-MAIN-2022-33 | refinedweb | 1,239 | 55.84 |
go: tell me about slow channels
Go channels are a powerful construct for passing information between different go routines, each with distinct responsibilities. Writing to an unbuffered channel is a blocking operation which does not complete until a reader is available at the other end.
The blocked write is not guaranteed to complete – if the channel responsible for the read is somehow blocked itself, you may have deadlock.
Here is a simple wrapper around a channel write that helps detect long-blocked operations.
Some things you might want to change for own application:
- Writes taking longer than one second are identified as a stall. This is a constant in the code and not configurable.
- The
time.Aftertriggers each second of the stall but only does something the first time through. Maybe you’ll want to log every so often that the send is still stalled.
- You might want to do something other than log a message when a stall is detected. Some ideas:
- Print a stack trace.
- Update a counter (maybe external to this function) so a health check fails.
import ( "log" "time" ) func Emit(out chan<- interface{}, data interface{}) (ok bool) { var stallStart time.Time // If we stalled, print the duration of the stall as we exit. defer func() { if !stallStart.IsZero() { diff := time.Now().Sub(stallStart) log.Printf("Stall sending data %+v recovered after %0.2f seconds", data, diff.Seconds()) } }() ok = true sendStart := time.Now() for { select { case out <- data: return case <-time.After(1 * time.Second): if stallStart.IsZero() { log.Printf("Stall sending data %+v", data) stallStart = sendStart ok = false } } } }
Here is a simple
main that shows this wrapper being used:
func main() { out := make(chan interface{}) go func() { <-time.After(2 * time.Second) <-out }() Emit(out, "slow-data") }
When run, it outputs:
$ go run stall.go 2015/11/15 19:38:40 Stall sending data slow-data 2015/11/15 19:38:41 Stall sending data slow-data recovered after 2.00 seconds | http://aronatkins.github.io/2015/11/15/go-tell-me-about-slow-channels.html | CC-MAIN-2017-13 | refinedweb | 327 | 67.15 |
Most software developers share the same pattern in their professional career of having to deal with projects of similar nature, while few developers manage to jump from one project to a completely different one.
Personally for me, I found one particular area that has always been taking great deal of my time from one project to another, and that’s coding around retrieving and changing information about the system/OS, the current process, threads, various hardware and software configuration of the system, security context, etc... I do hope this all sounds very familiar to many developers.
Not only has one is either impossible, or breaks the integrity of the system, like using unmanaged code in .NET, or lots of external procedure imports for VB6, etc.
Out of systematic practice in various development environments came the idea to summarise all my knowledge in this area and offer software developers a simple and unified way in which all such information can be accessed easily, in the same way in any development environment, and with very minimum effort.
This whole article is an introduction to the initiative of writing a library that would allow easy access to the most frequently used information in the system, client’s process, and the program's environment.
It is a very recently started project (July 2008). I am trying to organize all additional information about this project as well as the development efforts for it on this website.
There are four ways in which an application can retrieve information from the system:
When it comes to choosing which method is best to use, our choice depends mostly on the following criteria (given in the order in which the average developer looks at these things):
The Professional System Library (ProSysLib) is a project that offers unification in accessing information about process/system/environment where the developer would no longer have to make a tough choice of looking at these criteria, trying to decide which one is most important or which can be sacrificed.
ProSysLib presents all the information using the concept of a root namespace, very much similar to that in .NET, where System is the root namespace for everything. Again, much like it, ProSysLib has its own System root namespace that defines the entry point to all the sub-namespaces and functionality of the library.
System
The picture in the beginning of this article shows the top hierarchy of namespaces below System. These define the basis for further classification of all the information that can be accessed.
The ProSysLib DLL is a Unicode COM in-process server that uses the Neutral memory model. It is thread-safe, and implements automation interfaces only. The protocol (type library) declarations of the ProSysLib SDK for details).
Implementation is done entirely in VC++ 2008, using only COM, ATL, STL, and the Windows API.
The entire ProSysLib framework is based on Just-On-Time Activation, which means that each and every namespace and object is instantiated and initialized only when used by the client application for the first time, and a COM exception "NOT IMPLEMENTED", to tell you that you are trying to use something in the library that has been declared but not yet implemented.
Since the library concept is built upon a root namespace, it is the only interface that needs to be created by the client application to have access to everything else, much like the System namespace in .NET. In fact, the fool-proof implementation of the library won’t let you create any other interface of the library even if you try.
Declaring and instantiating a variable in different development environments can look different from each other, while using it will look pretty much the same. In this article, we simplify all our code examples for C# clients only. Any developer should be able to work it out how this would look in his environment of choice.
PSLSystem sys = new PSLSystem();
// Declare and instantiate ProSysLib root namespace;
Now, using this variable, we can access anything we want underneath the root namespace.
While ProSysLib is targeted to implement access to many kinds of information, the project started only recently, and there are not that many features implemented so far. However, I did not want to draw abstractions with finger in the air, one can figure them out by looking at the ProSysLib Documentation, so all the examples provided here below are real ones, i.e., fully functional already.
So, let’s consider a few examples of what we can do with ProSysLib as of today, which is less than one month from the beginning of the project.
Many applications need to know and control privileges available to a process. For instance, Debug privilege can be important when accessing some advanced information in the system that’s otherwise unavailable. ProSysLib provides a collection of all the available privileges under the namespace PSLSystem.Security.Privileges. If one needs to enable Debug privilege in a process, the code would be as shown here:
PSLSystem.Security.Privileges
sys.Security.Privileges.Find(“SeDebugPrivilege”).Enabled = true;
I.e., we access the collection of privileges, locate the privilege of interest, and then enable it. We would normally need to verify that the Find method successfully located the privilege in the list, but since the Debug privilege is always available, we can simplify it here.
Find
One of the very popular subjects that can be found on CodeProject is about enumerating all the available processes in the system, or finding a particular process, or how to kill a process by name or ID, etc. ProSysLib enumerates all processes running in the system, under the namespace PSLSystem.Software.Processes. This collection is very flexible to allow any kind of operation one needs to do with processes in the system.
PSLSystem.Software.Processes
Attached to this article is a simple C# application that shows just one example of how ProSysLib can be used. The example enumerates either all the processes or the ones that were launched under the current user account. It displays just some of the available information about each process, and allows killing any process with the press of a button.
Here is just a small code snippet from the example, where we populate a list view object with information about processes:
foreach (PSLProcess p in sys.Software.Processes) // Go through all processes;
{
ListViewItem item = ProcessList.Items.Add(p.ProcessID.ToString());
string sProcessName = "";
if (p.ProcessID == 0) // System Idle Process
sProcessName = "System Idle Process";
else
{
sProcessName = p.FileName;
if (sys.Software.OS.Is64Bit && p.Is64Bit == false)
// If OS is 64-bit, and the found process
sProcessName += " *32";
// is not 64-bit, we add " *32" like in TaskManager;
}
//());
item.Tag = p;
// Associate each list item with the process object;
}
The demo application binary is provided both in 32-bit and 64-bit versions.
One of the quite interesting things about ProSysLib that you might notice from the demo application is that it doesn't require anything to register on your PC in order to run successfully. If somebody assumes that this is COM Isolation for .NET, he would be wrong. This is implementation of Stealth Deployment for COM, which I came up with in my long practice of distributing COM projects. A full description of the idea is given in the ProSysLib SDK, chapter Deployment.
Another typical task for many applications is to find out what access rights the current process has to a particular object in the system. Usually, it is either a file or folder that we want to know what we can do with. I know for a fact that getting this information isn't that straightforward in C++, and can be even more cumbersome in other environments.
The namespace PSLSystem.Security offers the function GetNamedObjectAccess that allows getting Access Mask for any Named Object in the system (file, folder, printer, service, reg-key, network share) in just one line of code:
PSLSystem.Security
GetNamedObjectAccess
long AccessMask = 0;
long lErrorCode = sys.Security.GetNamedObjectAccess(ntFileOrFolder,
"my file or folder path", ref AccessMask);
Just one other small feature that I had a chance to implement in the library by now is how to control Process Affinity. The namespace PSLSystem.Process contains all the properties about the current process (or will, eventually). One of them is AffinityMask, which can be changed very easily. For instance, if you have a Dual-Core system, and would like to execute your process on the second core only, your code for that would be:
PSLSystem.Process
AffinityMask
sys.Process.AffinityMask = 2;
// 2 in binary corresponds to the second core;
In the same way, for each process in the collection PSLSystem.Software.Processes, we have an AffinityMask to get/set the Affinity Mask for any other process in the system like the TaskManager can do. I didn't use this in the demo, because I can't show all at once anyway.
Windows Management Interface is one of those hereditary technologies that sometimes could have been better dead than alive. In the case of WMI, we are talking about a number of problems caused by, though very necessary, yet poorly thought out technology. This was one more reason for writing ProSysLib, to be an alternative to getting information from the system in a much easier and faster way.
Below is a list of perhaps the main problems found in WMI:
Unfortunately, regardless of all the flaws that WMI carries with it, some details about the system just seem impossible to acquire in any other way, or require too much effort. My personal suggestion - only use WMI when you really have to.
For those situations, ProSysLib offers a much simplified access to WMI functionality via the namespace PSLSystem.Tools.WMI. It has a few methods to get information from WMI in the simplest possible way.
PSLSystem.Tools.WMI
Let’s consider an example of getting the property Caption from the WMI class Win32_OperatingSystem, which is the title of the current operational system.
Caption
Win32_OperatingSystem
string OSCaption = sys.Tools.WMI.GetValue(null, “Win32_OperatingSystem”, “Caption”);
In this example, we passed null for the namespace because the class Win32_OperatingSystem typically resides in the default namespace of "root\\cimv2" (also the property DefaultNamespace of the interface).
null
DefaultNamespace
The WMI namespace offers a few methods to get information from it in the simplest possible way. And, while using it simplifies WMI coding under .NET by twice at best, for C++ developers, this simplifies WMI usage by 90%.
WMI
If, for instance, you wanted to get information for the properties "BuildNumber", "CountryCode", and "Locale" from the same class, you could use the following method:
BuildNumber
CountryCode
Locale
Array a = sys.Tools.WMI.GetColValues("root\\cimv2",
"Win32_OperatingSystem", "BuildNumber, CountryCode, Locale");
The WMI namespace has a method that takes a comma-separated list of property names and returns their values as an array of variants. This is for use with single-record WMI classes.
Similarly, if you wanted to get values for all records but just one column (multi-record WMI classes), you could make a call like this:
Array a = sys.Tools.WMI.GetRowValues("root\\cimv2", "Win32_Product", "Caption");
This method returns values for all rows and the selected column, also as an array of variants.
And, if you want to use WMI in full, i.e., to issue a WQL query to get a whole table of data, your code would look like this:
// Selecting Caption and DeviceID for all queued printers:
PSLTable t = sys.Tools.WMI.GetData("root\\cimv2",
"SELECT Caption, DeviceID FROM Win32_Printer WHERE Queued=True");
// Adding our table into the list of values:
for(int i = 0;i < t.nRows;i ++)
{
MyList.Add(t.GetValue(i, 0).ToString()); // Adding Caption value to the list;
MyList.Add(t.GetValue(i, 1).ToString()); // Adding DeviceID value to the list;
}
This method returns an object PSLTable that simplifies access to the data using an array type of addressing as {row, column}. Plus, it has what a recordset object has, the ability to always get the list of columns, using:
PSLTable
{row, column}
// We would get "Caption" for our example here;
string FirstColumnName = t.GetColName(0);
// "DeviceID"
string SecondColumnName = t.GetColName(1);
This is particularly useful when you issue a WQL query using SELECT *, i.e., selecting all columns, so you don't know which column is where.
ProSysLib lives up to its name in the area of handling errors and exceptions as well. Any error/exception is handled gracefully, and exposed to the client via COM Exceptions, providing both numerical and verbal interpretation of any problem in functionality.
COM Exceptions are easy to handle, and supported automatically in any environment. For instance, in .NET, COM exceptions are handled by the object System.Runtime.InteropServices.COMException, while in C++, they are handled via the type _com_error that's generated by the type library import mechanism.
System.Runtime.InteropServices.COMException
_com_error
The ProSysLib root namespace contains a method DecodeException that allows easy interpretation of numerical COM errors as an enumerated type. Let's consider an example of exception handling in which we try to access a collection object with an index beyond what's reasonable, expecting an Index-Out-Of-Range type of exception:
DecodeException
try
{
string sName = sys.Software.Processes[100000].FileName;
}
catch(System.Runtime.InteropServices.COMException ex)
{
if(sys.DecodeException(ex.ErrorCode) ==
PSLException.exceptionIndexOutOfRange)
{
// Yes, this is the exception that we indeed expected;
}
MessageBox.Show(ex.Message); // Show the exception message;
}
These were just a few examples of what has been already implemented within the ProSysLib architecture, and it has much more currently in progress. If you look at the tree of namespaces and the ProSysLib SDK Documentation, you can find how far this project is meant to stretch. This article, again, just scratches the surface of the project. And, I do hope it finds developers who would like to join in to work on this project, providing their unique C++ experience to make their knowledge available to developers in all software platforms.
As I continue development of the project, I will be publishing more articles here with focus on particular features as they become available, without considering the whole thing again.
I always enjoy writing professional COM servers that make intense use of COM collections, internal COM instantiations, automated event marshallers, and many other neat tricks with COM, coverage for which in the Internet is typically poor. There were, in fact, plenty that I learnt with this project.
For example, a great deal of code for the project is based on either undocumented Windows API or poorly documented API, learning of which is a good challenge. It was fun digging out the truth about 64-bit implementations of the API function ZwQuerySystemInformation, which required code debugging to see what the reality was. All information in the Internet about ZwQuerySystemInformation classes is for 32-bit only, but seems like nobody knows that, even Microsoft published misleading information about it in MSDN :)
ZwQuerySystemInformation
Anyways, I am planning to publish these tricks and many others in the implementation of ProSysLib when I get around it, for I believe, this is all a separate subject, and for now, just trying to keep it simple for this article.
P.S.: I appreciate your comments and fair rating for the article.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
sys.Security.Privileges.Find("SeDebugPrivilege").Enabled = true;
PSLPrivilege p = sys.Security.Privileges.Find("SeDebugPrivilege");
if(p != null)
p.Enabled = true;
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/28231/Professional-System-Library-Introduction?msg=3580132 | CC-MAIN-2017-09 | refinedweb | 2,606 | 50.97 |
DLL Hijacking: The Code
Continuing a look at DLL hijacking, we'll turn to the code, with a simple example written with GCC on Kali Linux.
In my previous article on DLL hijacking, I abstractly discussed how exactly DLL hijacking works on different operating systems and development frameworks, and ways you can protect your code from this exploit. Today I'm going to show you some code. This is heavily based on the dlopen man page; feel free to reference it.
This is written on Kali Linux, using GCC and make. I have a simple makefile, a driving program that side-loads a library, and two different libraries. We'll build all of these, with a manual copy step where you actually hijack the DLL. So first, let's look at the makefile:
all: main libs

main:
	gcc -rdynamic -ldl -o main main.c

libs: goodlib badlib

goodlib:
	gcc -Wall -fPIC -c printer.c
	gcc -shared -Wl,-soname,libprtr.so.1 -o libprtr.so.1.0 printer.o

badlib:
	gcc -Wall -fPIC -c bad_printer.c
	gcc -shared -Wl,-soname,libbadprtr.so.1 -o libbadprtr.so.1.0 bad_printer.o

clean:
	rm *.o
	rm lib*
This is a pretty straightforward makefile. We define a couple of targets: the all target, which has dependencies on main and libs. The main target builds the main driver, and the libs target has dependencies on goodlib and badlib, which build the good library and the hijacking library respectively. We wrap up this makefile fun with a clean target, 'cause we're OCD like that.
So let’s look at the main program first:
#include <dlfcn.h>
#include <stdio.h>

#define OK 0
#define FAIL 1

#define LIBNAME "./libprtr.so.1.0"
#define FUNAME "print"

int main(int argc, char *argv[])
{
    void *handle = NULL;
    void (*printer)(void) = NULL;

    handle = dlopen(LIBNAME, RTLD_LAZY);
    if (handle == NULL) {
        printf("Error opening library: %s\n", dlerror());
        return FAIL;
    }

    *(void **) (&printer) = dlsym(handle, FUNAME);
    if (printer == NULL) {
        printf("Error opening function: %s\n", dlerror());
        return FAIL;
    }

    (*printer)();

    if (dlclose(handle)) {
        printf("Error closing library: %s\n", dlerror());
        return FAIL;
    }

    return OK;
}
Here, we’re loading a specific SO file, one we’re building and distributing side-by-side with this nifty application. Which prints stuff. But the printing function is in the library we distribute because we’re likely to change the printed message based on the nearest holiday (we are such great planners!). Basically, we load the library via dlopen(.), then grab the function pointer from the library via dlsym(.), print the message from the library function, then close the library with dlclose(.). Then we exit the program.
The libraries are as complex as you’d think, filled with functionality for printing a single message:
#include <stdio.h> void print(void) { printf("do some stuff.\n"); }
...and our evil, hijacking library:
#include <stdio.h> void do_other_stuff(void) { printf("do lots of other stuff.\n"); } void print(void) { do_other_stuff(); printf("do some stuff.\n"); }
Note the hook into do_other_stuff(.) in the hijacking library, where we execute our nefarious evilness.
Now, we have looked through all the code. Let’s build and run.
samhain@durga:~/Work/loading# make gcc -Wall -fPIC -c printer.c gcc -shared -Wl,-soname,libprtr.so.1 -o libprtr.so.1.0 printer.o gcc -Wall -fPIC -c bad_printer.c gcc -shared -Wl,-soname,libbadprtr.so.1 -o libbadprtr.so.1.0 bad_printer.o samhain@durga:~/Work/loading# ./main do some stuff.
We’ve run our application, which has done some stuff on our behalf. Excellent! That’s why this app was so highly rated. Now, say, we navigate our browser to some of the more shady sites on the internet, and at one of them, we inadvertently download a new library (feel free to copy libbadprtr.so.1.0 over libprtr.so.1.0 instead of going to those dark corners of the internet yourself).
The next time we run our printing app, we see this:
samhain@durga:~/Work/loading# ./main do lots of other stuff. do some stuff.
Oh noes. We have been hacked!
So you see, as long as the new DLL has the appropriate entry points defined, applications are happy to load the library and execute the named function. So how to get around this? Well, first, look at the size of the library:
samhain@durga:~/Work/loading# ls -al lib* -rwxr-xr-x 1 root root 4792 Jan 1 09:56 libbadprtr.so.1.0 -rwxr-xr-x 1 root root 4580 Jan 1 09:56 libprtr.so.1.0
You’ll notice libbadprtr.so is noticeably larger than libprtr.so. Let’s take a look at the hashes of the files. We’ll whip up a quick python script to extract the sha256 hashes:
import hashlib as h GOOD_FILENAME = 'libprtr.so.1.0' BAD_FILENAME = 'libbadprtr.so.1.0' def extract_signature(file): file_r = file.read() hash = h.sha256(file_r) print hash.hexdigest() with open(GOOD_FILENAME, 'rb') as file: print 'Processing good file (%s):' % GOOD_FILENAME extract_signature(file) with open(BAD_FILENAME, 'rb') as file: print 'Processing bad file (%s):' % BAD_FILENAME extract_signature(file)
Which gives us, when run:
samhain@durga:~/Work/loading# python hasher.py Processing good file (libprtr.so.1.0): 933ada85c40c1a1897f2f15448f1410597c0ac60ca3b9c702e1d74227fa91984 Processing bad file (libbadprtr.so.1.0): 88143a87fa810d81e636d0a659cfb85887bfda0b70ebada9cd1c5d107fd205a1
The hashes are remarkably different as well, as you’d expect. But why the file size? Isn’t the hash enough? Well, yes, for sha2 series hashes, you’re probably okay using just the hash. In the past, though, malicious actors have been able to generate hash collisions in MD5, for example, but measuring the expected file size can give you an additional data point to use to determine the authenticity of a library. Quick and easy, using a combination of hashes and file sizes can give you a remarkable amount of protection against DLL hijacking.
Learn about the Five Steps to API Monitoring Success with Runscope
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/dll-hijacking-the-code | CC-MAIN-2018-17 | refinedweb | 1,037 | 68.36 |
lp:~larsu/indicator-sound/add-title
- Get this branch:
- bzr branch lp:~larsu/indicator-sound/add-title
Branch merges
- Ted Gould (community): Approve on 2013-09-12
- PS Jenkins bot (community): Approve (continuous-integration) on 2013-09-12
- Diff: 11 lines (+1/-0)1 file modifiedsrc/service.vala (+1/-0)
Related bugs
Related blueprints
Branch information
- Owner:
- Lars Karlitski
- Project:
- The Sound Menu
- Status:
- Merged
Recent revisions
- 375. By Lars Karlitski on 2013-09-12
Add "title" to the root action state dictionary
- 374. By Lars Karlitski on 2013-09-11
Update POTFILES.in and mark remaining strings as translatable. Fixes: https:/
/bugs.launchpad .net/bugs/ 1223500.
Approved by Sebastien Bacher, PS Jenkins bot.
- 373. By Lars Karlitski on 2013-09-09
Fixes bug #1221242 and #1204036 (make scrolling and middle clicking work on the sound indicator)
It soft-depends on lp:~larsu/libindicator/ng-add-scrolling. That means, this branch can be merged without problems, as it only adds an action and a few attributes on the root item. The bugs won't be fixed until both branches land, though.
Please the other merge request for a description of the new attributes. Fixes: https:/
/bugs.launchpad .net/bugs/ 1204036.
Approved by Charles Kerr, PS Jenkins bot.
- 372. By PS Jenkins bot on 2013-08-29
Releasing 12.10.2+
13.10.20130829- 0ubuntu1 (revision 371 from lp:indicator-sound).
Approved by PS Jenkins bot.
- 371. By Lars Karlitski on 2013-08-28
Use bus_watch_
namespace( ) for more robust monitoring of mpris players appearing or disappearing on the bus.
Approved by Charles Kerr, PS Jenkins bot.
- 370. By PS Jenkins bot on 2013-08-22
Releasing 12.10.2+
13.10.20130822- 0ubuntu1 (revision 369 from lp:indicator-sound).
Approved by PS Jenkins bot.
- 369. By Charles Kerr on 2013-08-22
Don't use deprecated GSimpleActionGroup APIs.
Approved by Ted Gould, PS Jenkins bot.
- 368. By PS Jenkins bot on 2013-08-20
Releasing 12.10.2+
13.10.20130820- 0ubuntu1 (revision 367 from lp:indicator-sound).
Approved by PS Jenkins bot.
- 367. By Pete Woods on 2013-08-20
Re-write build scripts using cmake.
Approved by PS Jenkins bot, Ted Gould.
- 366. By PS Jenkins bot on 2013-08-12
Releasing 12.10.2+
13.10.20130812. 1-0ubuntu1 (revision 365 from lp:indicator-sound).
Approved by PS Jenkins bot.
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on:
- lp:indicator-sound/13.10 | https://code.launchpad.net/~larsu/indicator-sound/add-title | CC-MAIN-2021-04 | refinedweb | 419 | 61.12 |
Important: Please read the Qt Code of Conduct -
directoryOf function QT4 -> QT5
I came across the following code snippet written with QT4:
void HelpBrowser::showPage(const QString &page) { auto path = directoryOf("doc").absolutePath(); auto browser = new HelpBrowser(path, page); browser->resize(500, 400); browser->show(); }
However in Qt5 i cant find the function
directoryOf(). Does it still exist?
- Christian Ehrlicher Qt Champions 2019 last edited by
@sandro4912 said in directoryOf function QT4 -> QT5:
directoryOf
In which class was it in Qt4? I'm not aware that such a function exists in Qt4 at all.
- SGaist Lifetime Qt Champion last edited by SGaist
Hi,
What is HelpBrowser ?
@sandro4912 said in directoryOf function QT4 -> QT5:
HelpBrowser
directoryOf
It's from here
C++ GUI Programming with Qt4: Providing Online Help
:)
And to the OP: all the code is written by the author in the book, it doesn't come with Qt.
- Christian Ehrlicher Qt Champions 2019 last edited by
HelpBrowser::directoryOf() is a custom function written by someone but not Qt. So you will likely not find it in any Qt class but in the HelpBrowser implementation which you will find in the book I guess.
Yeah i got it now. I overread it in the mentioned book. If youre currious. The function does this:
QDir MainWindow::directoryOf(const QString &subdir) { QDir dir(QApplication::applicationDirPath()); #if defined(Q_OS_WIN) if (dir.dirName().toLower() == "debug" || dir.dirName().toLower == "release") dir.cdUp(); #elif defined(Q_OS_MAC) if (dir.dirName() == "MacOS") { dir.cdUp(); dir.cdUp(); dir.cdUp(); } #endif dir.cd(subdir); return dir; }
- aha_1980 Lifetime Qt Champion last edited by
Hi @sandro4912,
so you can mark this topic as SOLVED now. Thanks!
Yeah i already marked it. I Gues ś questions like this happen after coding to many hours... | https://forum.qt.io/topic/106882/directoryof-function-qt4-qt5 | CC-MAIN-2020-50 | refinedweb | 291 | 58.79 |
In this short tutorial we’ll look at several ways we can use GIO to launch (or “open”) files.
Note: This article is part of my ongoing GIO tutorial series, which currently consists of: (1) File and path operations; (2) File IO; and (3) Launching files (this document).
Quick recap: GIO is GLib’s IO library. File and path operations are mostly done through the GFile class, while IO operations are done through the GInputStream and GOutputStream interfaces.
This tutorial shows, in essence, GIO’s cross-platform version of os.startfile or xdg-open.
Note that for this tutorial I’m using Vala instead of Python, partly because of the rather unfortunate state of PyGObject in Windows at the time of writing, and partly because I can. (Note: the Vala documentation has no future-proof URI; by now these links have gone to rot, so just use the C documentation.)
Nice way (buggy)
The simplest way to launch a file is with GAppInfo’s
launch_default_for_uri static/class method (C reference,
Vala reference), which is currently not available in PyGObject. To make matters worse, it doesn’t seem to work in Windows because “URIs not supported”; I have no idea what’s up with that.
// Currently broken. var result = AppInfo.launch_default_for_uri ("", null);
Short but strange way (buggy)
Oddly enough, GTK+ (not GLib) has a
show_uri method (C reference,
Vala reference) polluting its root namespace. In Windows, calling this method results in the same “URIs not supported” error as the previous way.
// Currently broken. var result = Gtk.show_uri (null, "", Gdk.CURRENT_TIME);
“Please just do it” way
The safest way to launch a file is to obtain the default handler of the file, and then launch it with the file as argument. We do this by calling GFile’s
query_default_handler (C reference,
Vala reference), which returns a GAppInfo referring to the default handler. We then call its
launch method, passing the GFile wrapped in a GList.
var file = File.new_for_path ("C:\\Windows\\explorer.exe"); var handler = file.query_default_handler (null); var list = new List<File> (); list.append (file); var result = handler.launch (list, null);
Pointless way
The last way is to find out the file’s content type, get the default handler for that content type, and launch it. There’s not much point in using this one, I’m just including it as another alternative.
var file = File.new_for_path ("C:\\Windows\\explorer.exe"); var info = file.query_info ("standard::content-type", FileQueryInfoFlags.NONE, null); var content_type = info.get_content_type (); var handler = AppInfo.get_default_for_type (content_type, false); var list = new List<File> (); list.append (file); var result = handler.launch (list, null);
Still…
There’s another GIO bug in Windows: it doesn’t support launching directories. I’ve filed GNOME bug 606337 accordingly. | https://sjohannes.wordpress.com/2010/01/07/gio-tutorial-launching-files/ | CC-MAIN-2018-43 | refinedweb | 455 | 58.79 |
Hello! This is my first post and I am a beginner in Java programming-so don't come down on me too hard :) I have a problem that I need to solve over the weekend and I was hoping some of you could help me out. The problem at hand is that I need to compute n! where n is an integer entered by the user. the catch is that I need to store the answer in an array (up to 50 places) and print back the array such like (12! = ) 479001600 and being able to omit all leading zeros from the printout. I know I need to divide by 10 and mod by 10 to get the remainder so I can store the value in the next element of the array, but I don't know how. Our teacher told us to go ahead and load the array with zeroes and initialize the last element (x.length - 1) to 1 because 0! and 1! are 1 so that made sense to me, and I believe I have the code correct to compute the factorial, I just need some ideas/help as to storing those values in the array and printing that array...so here goes
import java.util.*; public class Prog3 { public static void main(String[] args) { double n; double factorial = 1; Scanner input = new Scanner(System.in); System.out.println("Enter a non-negative integer and I will compute its factorial"); n = input.nextDouble(); while (n<0) { System.out.println("Please enter a NON-negative number!"); n = input.nextDouble(); } while (n >0) { factorial = factorial * n; n--; } int[] x = new int[50]; for(int i = 0; i <x.length; i++) x[i] = 0; x[x.length - 1] = 1; for(int } }
*EDIT* of course I wrote computer in the title instead of *compute* sorry! | https://www.daniweb.com/programming/software-development/threads/227099/program-to-computer-n-using-an-array-to-store-the-answer | CC-MAIN-2018-30 | refinedweb | 304 | 71.95 |
There are exceptions to the types of SOAP web services Flash can consume. Flash only supports SOAP-based web services when they are transported over the HTTP protocol. Flash does not support SOAP web services over SMTP or FTP as of this release.
Flash also does not support web services with non-SOAP ports, such as MIME. The name attribute within the port element must point to a SOAP port. In the excerpt from the service definition of a WSDL document below, the port that bears the name SamplePort must be a SOAP port.
<service name="SampleService"> <port name="SamplePort"> <soap:address </port> </service>
Flash also does not support the
import tag. You can use
the
import tag to keep parts of a WSDL description in separate
files. You can reuse those parts, such as schemas and other definitions.
Flash MX Professional 2004 does not support web services that use the
import tag.
You may also run into problems with web services that require complex data, such as objects containing arrays, as an input parameter. If you try to pass complex data to a web service using the WebService API, you will receive an error saying that the endpoint URL could not be opened. To send complex data, use Flash Remoting. Complex data in output parameters, on the other hand, does not cause any problems.
This is probably pretty obvious, but I'll point it out anyway: Flash does not support web services that return data it can't handle, such as highly formatted HTML. If a web service returns an interactive HTML map, for example, Flash will not be able to render it correctly due to the limitations of its HTML capabilities. On the other hand, if the web service returns simply the building blocks for an interactive map, such as an XML structure describing label strings, the URLs of image files, and so on, Flash, with some effort, can recreate the map.
Finally, there are some types of web services that the WebService API supports that will fail if you use the WebServiceConnector component. The WebServiceConnector does not support web services with more than one SOAP port defined in the WSDL. The WSDL excerpt below defines a service with two SOAP ports.
<service name="SampleService"> <port binding="tns:Service1" name="Service1"> <soap:address </port> <port binding="tns:Service1" name="Service2"> <soap:address </port> </service>
If you try to invoke this type of web service with the WebServiceConnector, the compiler will throw an error: "There are multiple possible ports in the WSDL file; please specify a service name and port name!" The call fails because the WebServiceConnector API doesn't allow the developer to specify the port. When there is only one port, this does not become an issue.
Even though this is undocumented, the WebService API provides a workaround for handling multiple ports.
// instantiate the WebService object var ws:WebService = new WebService(''); // specify the port name ws._portName = 'Service1'; // call an operation on the service ws.getInfo('94103');
The example consumes a fictitious web service named Sample Service,
which defines an operation named
getInfo.
If you use the code above, the web service call will no longer fail because
it specifies the port. Note that this code requires the WebServiceClasses to
be in your movie's library. Select Other Panels > Common Libraries > Classes
and drag the WebServiceClasses compiled clip into the library of your Flash
file.
The WebServiceConnector component also does not support web services with more than one service defined in the WSDL. Web services with more than one service often have more than one SOAP port as well, since each service element in the WSDL contains a port element. Here is an excerpt from a WSDL description that describes a web service with more than one service:
<service name="SampleService"> <port binding="tns:SampleService" name="SampleService"> <soap:address </port> </service> <service name="AnotherService"> <port binding="tns:AnotherService" name="AnotherService"> <soap:address </port> </service>
Each service usually encompasses several operations. When you add the URL of the WSDL description of a web service with more than one service to the Web Services panel, Flash only parses the first service and its operations and displays them in the panel. This is because the Web Services panel only supports web services with exactly one service. The Web Services panel doesn't display more than one service per WSDL.
If you call one of the web services of a multiservice definition using the WebServiceConnector component, the compiler reports the same multiple ports error mentioned above. Again, there is an undocumented workaround through the WebService API. If you use this API to specify the service name and port name (as shown in the following code), the web service call will succeed.
import mx.services.* ; // instantiate the WebService object var ws:WebService = new WebService(''); // specify the service name ws._name = 'SampleService'; // specify the port name ws._portName = 'Service'; // call an operation on the service ws.invokeMethod('94103');
Note: This code requires the WebServiceClasses to be in the library of your Flash file. Select Other Panels > Common Libraries > Classes and drag the WebServiceClasses compiled clip into the Library panel. | http://www.adobe.com/devnet/flash/articles/flmxpro_webservices_03.html | crawl-002 | refinedweb | 856 | 52.6 |
It's important to know the life cycle of the request so you can understand what is going on under the hood in order to write better software. Whenever you have a better understanding of how your development tools work, you will feel more confident as you'll understand what exactly is going on. This documentation will try to explain in as much detail as needed the flow of the request from initiation to execution. To read about how to use the
Request class, read the Requests documentation.
Masonite is bootstrapped via Service Providers, these providers load features, hooks, drivers and other objects into the Service Container which can then be pulled out by you, the developer, and used in your views, middleware and drivers.
With that being said, not all Service Providers need to be ran on every request and there are good times to load objects into the container. For example, loading routes into the container does not need to be ran on every request. Mainly because they won't change before the server is restarted again.
Now the entry point when the server is first ran (with something like
craft serve) is the
wsgi.py file in the root of the directory. In this directory, all Service Providers are registered. This means that objects are loaded into the container first that typically need to be used by any or all Service Providers later. All Service Providers are registered regardless on whether they require the server to be running (more on this later).
Now it's important to note that the server is not yet running we are still in the
wsgi.py file but we have only hit this file, created the container, and registered our Service Providers.
Right after we register all of our Service Providers, we break apart the provider list into two separate lists. The first list is called the
WSGIProviders list which are providers where
wsgi=True (which they are by default). We will use this list of a smaller amount of providers in order to speed up the application since now we won't need to run through all providers and see which ones need to run.
While we are in the loop we also create a list of providers where
wsgi=False and boot those providers. These boot methods may contain things like Manager classes creating drivers which require all drivers to be registered first but doesn't require the WSGI server to be running.
Also, more importantly, the WSGI key is binded into the container at this time. The default behavior is to wrap the WSGI application in a Whitenoise container to assist in the straight forwardness of static files.
This behavior can be changed by swapping that Service Provider with a different one if you do not want to use Whitenoise.
Once all the register methods are ran and all the boot methods of Service Providers where wsgi is false, and we have a WSGI key in the container, we can startup the server by using the value of the WSGI key.
We then make an instance of the WSGI key from the container and set it to an application variable in order to better show what is happening. Then this is where the WSGI server is started.
Now that we have the server running, we have a new entry point for our requests. This entry point is the app function inside bootstrap/start.py.
Now all wsgi servers set a variable called environ. In order for our Service Providers to handle this, we bind it into the container to the Environ key.
Next we run all of our Service Providers where wsgi is true now (because the WSGI server is running).
The Request Life Cycle is now going to hit all of these providers. Although you can obviously add any Service Providers you at any point in the request, Masonite comes with 5 providers that should remain in the order they are in. These providers have been commented as
# Framework Providers. Because the request needs to hit each of these in succession, they should be in order although you may put any amount of any kind of Service Providers in between them.
We enter into a loop with these 5 Service Providers and they are the:
This Service Provider registered objects like the routes, request class, response, status codes etc into the container and then loads the environment into these classes on every request so that they can change with the environment. Remember that we registered these classes when the server first started booting up so they remain the same class object as essentially act as singletons Although they aren't being reinstantiated with the already existing object, they are instantiated once and die when the server is killed.
If one of the middleware has instructed the request object to redirect, the view that is ready to execute, will not execute.
For example, if the user is planned on going to the dashboard view, but middleware has told the request to redirect to the login page instead then the dashboard view, and therefore the controller, will not execute at all. It will be skipped over. Masonite checks if the request is redirecting before executing a view.
Also, the request object can be injected into middleware by passing the
Request parameter into the constructor like so:
from masonite.request import Requestclass SomeMiddleware:def __init__(self, request: Request):self.request = request...
This will inject the
Request class into the constructor when that middleware is executed. Read more about how middleware works in the Middleware documentation.
This provider loads the ability to use sessions, adds a session helper to all views and even attaches a session attribute to the request class.
This provider takes the routes that are loaded in and makes the response object, static codes, runs middleware and other route displaying specific logic. This is the largest Service Provider with the most logic. This provider also searches through the routes, finds which one to hit and exectues the controller and controller method.
This provider is responsible for showing the nice HTTP status codes you see during development and production. This Service Provider also allows custom HTTP error pages by putting them into the
resources/templates/errors directory.
Nothing too special about this Service Provide. You can remove this if you want it to show the default WSGI server error.
This Service Provider collects a few classes that have been manipulated by the above Service Providers and constructs a few headers such as the content type, status code, setting cookies and location if redirecting.
Once these 5 providers have been hit (and any you add), we have enough information to show the output. We leave the Service Provider loop and set the response and output which are specific to WSGI. The output is then sent to the browser with any cookies to set, any new headers, the response, status code, and everything else you need to display html (or json) to the browser. | https://docs.masoniteproject.com/architectural-concepts/request-lifecycle | CC-MAIN-2020-34 | refinedweb | 1,173 | 67.89 |
Wicket 1.3.3 Support for NetBeans IDE 6.1
Today we uploaded the latest NBMs of our Wicket support into the Plugin Portal:
There are several changes, some mentioned recently in this blog. Basically, the names of the generated templates are friendlier, the JARs are Wicket 1.3.3 instead of 1.3.0, some useless options have been removed from the Frameworks panel, the Wicket Stylesheet support is part of the generated code, the Wicket filter is used in web.xml instead of the Wicket servlet, a header panel is always created... so mostly quite simple enhancements that, I believe, will make the user experience a lot better. Click the above link, then the Download button on the Plugin Portal page, unzip the ZIP that you then get and install the three NBMs. There is no need to download Wicket JARs from the Wicket site, because one of the three NBMs provides these and registers them in the IDE when you install the NBM.
I've installed them into 6.1, though they should probably also work in 6.0. Here's a quick scenario that should show you most of the Wicket support provided by this plugin, together with some nice Wicket/Ajax integration:
- Create a new web application and choose Wicket in the Frameworks panel:
- When you click Finish you have a nice simple source structure to begin your adventures with Wicket:
(From a NetBeans API point of view, the coolest thing about the above screenshot is that you see exactly that when you complete the wizard, i.e., the package opens automatically and the HomePage.java is also opened automatically, so that you can begin coding there right away. It's a small thing, but pretty cool.)
- Right-click on the package (not on the project node, else you'll come across a known bug in this plugin) and choose New | Other and then choose the Panel template from the New File dialog:
- Name your panel:
- Click Finish and you have the skeleton of a new panel, i.e., both the Java side and the HTML side:
- Add a text field to the HTML side of the new panel, with a Wicket ID that will connect the HTML to the Java side of the panel:
- On the Java side of the Country Panel, use Wicket's AutoCompleteTextField class, as follows, making sure to pass the 'countries' ID, which connects the Java side with the HTML side defined in the previous step:(); } };
Above, the bits in bold is Wicket, the rest is just standard JDK code for getting the country names for the available locales. Here we're just building up a collection that will be displayed in the auto complete text field that we are creating here. The collection could contain anything at all, but Wicket provides the class that will make the text field behave in a way that we've come to expect from Ajax.
- Now add the field to the panel, on the Java side, by adding the one line below that is in bold, in the constructor:
CountryPanel(String id) { super(id); add(field); }
- Hurray. You've just defined your first reusable panel. Now let's actually make use of it. In the HomePage.html, add a new tag below the existing tag, i.e., add the tag that is in bold below:
<html> <head> <title></title> <link wicket: </head> <body> <span wicket: <span wicket: </body> </html>
The Wicket ID you specify here could be anything, so long as it is matched by the Wicket ID we add to the Java side, in the next step.
- In the HomePage.java, instantiate the Country Panel, by simply adding the line in bold (all the rest was generated by the Frameworks panel in the Web Application wizard):
package com.myapp.wicket; import org.apache.wicket.model.CompoundPropertyModel; public class HomePage extends BasePage { public HomePage() { setModel(new CompoundPropertyModel(this)); add(new CountryPanel("countryPanel")); } }
In the same way that you've instantiated the Country Panel above, you could do so anywhere else, such as in the generated Header Panel. You just need to make sure that the Wicket ID is the same on both sides, i.e., in the HTML file and in the Java file.
- Hurray, you're done. Deploy the application to the server of your choice. Notice that you now have an auto complete text field in your browser:
What have you learned? Firstly, you've learned that NetBeans IDE has cool support for Wicket (and you haven't seen everything yet, for example, when you refactor a Java class, the related HTML side will be refactored at the same time and you can cause a hyperlink to be created on the Wicket ID on the HTML side, which will let you open the Java side from inside the HTML page, plus the Navigator shows the Wicket tags in the page, plus there's Wicket samples in the New Project wizard). Secondly, you've learned about one of Wicket's Ajax classes (go here for more). Thirdly... how much JavaScript have you used in order to create a very typical Ajax component? Well... ummm... none. So, you can use Ajax without leaving the comfortable world of Java. Fourthly, in the debug mode, which is Wicket's default mode, there's a cool debug console right inside the HTML page, which provides a lot of useful information about the current session. Fifthly (but something you can't see here), if the browser doesn't support JavaScript, Wicket provides fallback behavior to handle this for you. Finally, isn't it cool that you can wrap your Ajax behavior in your own Wicket components and then reuse them, so easily?
May 01 2008, 09:57:16 AM PDT Permalink | http://blogs.sun.com/geertjan/date/20080501 | crawl-001 | refinedweb | 964 | 66.47 |
ARIN IPv6 Allocation Policy 121
possible writes: "ARIN has announced the last call for public comments on its proposed IPv6 address allocation policy. This last call for public comments will expire on 23:59 EDT August 03, 2001."
Have you tried a dynamic IP service? (Score:1)
Um wow (Score:1)
Quoted... (Score:1)
Allocate by region based on population. Leave room (Score:4)
Re:arin lookups (Score:1)
Re:Seems pretty reasonable...? (Score:2)
Re:Finally an IP addr for my Coffee Machine (Score:2)
Re:Everyone can have huge networks (Score:1)
Pricing may get more complex since there is that distributed DHCP replacement. I'm suspecting that there might be service fees for resolving blocks or something.

Any way you cut it though, you should be able to get enough IPs cheaply for a good number of the atoms that make up your possessions.
Re:Hmm, time to propose *.os (Score:2)
Divide your namespace properly, man. Major space-borne bodies should have their own TLDs. Maybe group the asteroid belt all under one, the way the .us domain is chopped up now. Vehicles and space stations to be registered under their controlling entities...
Re:Allocate by region based on population. Leave r (Score:2)
Re:Everyone can have huge networks (Score:2)
What's more interesting is to speculate on when most ISPs will offer IPv6 - UMTS Release 5 (the future 3G mobile phone standard for GSM operators) specifies IPv6 for all multimedia services, so if 3G takes off this could be a big driver for IPv6 adoption.
Re:Finally an IP addr for my Coffee Machine (Score:1)
Someone PLEASE MOD THIS UP!
Phew! (Score:1)
It's a pity that ipv6 routers are so rare, otherwise everyone would probably start using it tomorrow...
Re:not good. (Score:2)
Permanent IPv6 addresses that can roam screw up the routing tables. Right now the big problem on the backbone isn't the IPv4 address space, it's the sheer number of routing entries needed. If they force everyone connecting through a given provider to use the provider's network number, they drastically simplify the routing. And with 16 bits for the provider to subnet, and 64 bits that the end user can play with and subnet if they want (none of the policies preclude dividing the 'host' portion up into sections by the end user), handling dynamic network numbers isn't nearly the problem it is under IPv4.
Re:IPv6 is fundamentally flawed. (Score:2)
Portable addresses are why the routing tables are so big. Compact routing tables require that everything down a given branch of the routing tree have the same address prefix. The larger the number of prefixes down a given branch, the larger the routing tables need to be. IPv6 tries to deal with this by a) ensuring that there's enough room in the 'host' portion that customers can subnet their networks completely within it and b) giving the provider a large enough address space to assign a single subnet to each customer. That's also why they've kept alive the idea of a subnet hierarchy within the rightmost 64 bits.
And don't try invoking a different addressing method. All of them eventually boil down to the address being a string of bits, and while the terms for each field in that string change the basic problem of the routing tree doesn't.
Re:What /48 means (Score:2)
Actually pppp.pppp.pppp will be assigned to the provider, ssss will be assigned to the user, and hhhh.hhhh.hhhh.hhhh can be assigned however the user wants. The RFCs specify that the host part should be derived from the Ethernet MAC address on Ethernet-based networks, but all they can really require is that the host part be unique within the subnet (think about PPP, which doesn't have anything like a MAC address).
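The provider/site/host split described above can be checked with Python's `ipaddress` module; the prefix below is documentation space (2001:db8::/32) standing in for a real provider assignment:

```python
import ipaddress

# A hypothetical /48 delegated to one site (documentation space,
# standing in for a real provider assignment).
site = ipaddress.ip_network("2001:db8:1234::/48")

# The site owner carves the 16-bit ssss field into /64 subnets...
subnets = list(site.subnets(new_prefix=64))
print(len(subnets))                # 65536 possible subnets per site

# ...and each /64 leaves 64 bits for hosts on that subnet.
print(subnets[0].num_addresses)    # 2**64 addresses per subnet
```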
Re:IPv6 is fundamentally flawed. (Score:2)
Apparently you failed to read RFC2462, which addresses this. Hosts do not configure the high 64 bits of their address, they are told what it is by their router(s) during configuration of the interface. A site's local address topology is completely independent of the 48-bit prefix assigned to them by their provider. Creative abuse of the relevant RFC lets you do this even if your provider gives you a
/64.
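The MAC-derived host part the RFCs describe (the modified EUI-64 rule from the IPv6 addressing architecture) can be sketched roughly like this; the MAC address is a made-up example:

```python
# Modified EUI-64 sketch: flip the universal/local bit of the first
# MAC octet and splice 0xff:0xfe into the middle to get the 64-bit
# interface identifier used as the host part of the address.
def eui64_interface_id(mac: str) -> str:
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                       # toggle the u/l bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Render as the four 16-bit groups forming the low 64 bits.
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

# Hypothetical MAC address, purely for illustration.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))   # 21a:2bff:fe3c:4d5e
```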
Re:I can just see it... (Score:2)
Third generation beer, yuck!
Re:So, in summary you're saying: (Score:2)
I doubt you will have that many programs running on your systems, even at a company level.
With each
/64 subnet having a full 64 bits for specific machine identification, you could easily assign machine addresses randomly and not really worry about collisions. You're talking about a huge address space. So what if you assign machines a few billion addresses so they can assign one to each program? Current process tables are only measured in the thousands. It's a non-issue at this point. I can see a scenario where it could be an issue, but then having more than 2^64 objects is rather unlikely.
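The collision claim can be made concrete with the standard birthday approximation; this is back-of-the-envelope math, not anything from the RFCs, and the function name is just for illustration:

```python
import math

def collision_prob(n: int, space: int = 2**64) -> float:
    """Approximate probability that n randomly chosen IDs drawn from
    `space` contain at least one collision (birthday approximation)."""
    return 1 - math.exp(-n * (n - 1) / (2 * space))

print(collision_prob(10**6))   # ~2.7e-8: a million random hosts, negligible
print(collision_prob(10**9))   # ~2.7e-2: collisions start mattering near a billion
```

So random assignment is effectively collision-free at realistic subnet sizes, though "a few billion" random IDs in one /64 would begin to carry measurable collision risk.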
IPv6 is fundamentally flawed. (Score:2)
IPv6 is fundamentally flawed. It has the same fundamental flaw that IPv4 has. That flaw is that it does not support universally portable IP space. Just like IPv4, IPv6 requires a massive routing table space to be able to route to different address spaces. The only advantage of IPv6 over IPv4 is more addresses. It is NOT going to provide you with your own portable address block.
The Internet is going to end up splitting into a commercial version and a free (as in speech) version, anyway, so who cares. The latter will never need more than the IPv4 space, so IPv6 just isn't needed.
Re:Some real info. (Score:2)
The problem of multi-homing is integrated into the design of both IPv4 and IPv6. The flaw is in the address concept itself. To fix this, you cannot just retrofit something on top of the existing IPv6. I do have an idea I call "layered addressing". It pretty much eliminates the core routing tables (it would most likely be way fewer than 1000 entries, perhaps just 200). But it also requires a whole new way to think about addresses. It has some similarities to "loose source routing", but works on the basis of autonomous secured zones. And that just isn't part of the IPv6 design. I highly doubt the multi6 working group has the authority to scrap the whole IPv6 addressing scheme and start over, so there would be no point in trying to do anything in that group.
So do you know where to reach the IPv7 working group?
Re:IPv6 is fundamentally flawed. (Score:2)
Not in IPv4 or IPv6. I believe that to handle truly portable addressing requires a whole new way to think about addressing that IPv6 simply didn't try to do.
Re:IPv6 is fundamentally flawed. (Score:2)
You're still making some assumptions about this string of bits called the address. For one thing, you assume that the address has to remain constant throughout the travels. That's part of the flaw in the design. IPv6 spent too much time thinking about how to divvy up a constant address and didn't look at the big picture where the real requirements are the ability to configure a machine/network once, and be able to rapidly find the path to it wherever, and whenever, it moves. It's a convergence problem in time/space, and the addressing concept has to be a part of it. A constant (not static) addressing is just part of the problem.
Re:IPv6 is fundamentally flawed. (Score:2)
I'm not going to write up an RFC unless there is some reason to believe people will take it seriously. There is one reason to believe they won't, and that is because the solution means scrapping the whole design of IPv6 and starting over (call it IPv7, maybe). The requirement "where every device could get a one-time fixed address and then you could plug that device into any network jack in the world and have it instantly work" is not achievable with IPv6 (I can't exactly prove it because there are ways to sort of make it kind of work). It would require a new design to replace IPv6 and its way of doing fixed addressing.
Re:IPv6 is fundamentally flawed. (Score:2)
Writing up such a document would be very time consuming. This is simply not worth it for one person. This issue isn't about getting you to listen to me; I couldn't care less if you listen to me. The issue is about whether IPv6 will be scrapped. First ask yourself: if there was indeed a way to do multi-homing right, without massive routing tables, that would only work on a new addressing scheme and not IPv6, would the powers that be be willing to scrap the last 8 years of design work to have this feature? If you think the answer is yes, then go ask the various IPv6 working groups the same question. Is having lightweight multi-homed routing worth all that? I highly suspect that such a feature is "way down" in priority and would not justify scrapping all the work they have done on IPv6 and delaying the rollout for a few more years.
Re:IPv6 is fundamentally flawed. (Score:2)
That's only one layer.
Re:what? (Score:3)
Thank you for contributing this error. It will help maintain
--
Re: BGP and IPv6 (Score:1)
IPv6 (Score:1)
static addressing (Score:5)
That being said, routing protocols will need to be developed further, and some of the new routing protocols, as well as the IPv6 versions of old standbys (like BGP, OSPF, etc.), are pretty slick. Think about the amount of route summarization you'd need to do for BGP so you don't kill yourself! We're talking massive exponential expansions in potential routes. Ouch. I think that's why most of the IPv6 space is going to be kept close together: to save us all the hassle of watching our older equipment die under the load. Thinking of all those little ISPs loading up IPv6 BGP on a Cisco 3640 or something equivalent just makes me want to cry.
Here's a good link on the routing issues moving to IPv6:
Umm, actually it's a joint IAB/IESG recommendation (Score:1)
Re:Have you tried a dynamic IP service? (Score:2)
Sure, you can do dynamic DNS for vanity names and domains through any service now. I want to be my own DNS, I want legal mail services, I want to do vanity domains or virtual hosts based on having a constant IP.
Just an idea... dunno if it is even feasible.
Re:IPv6 is DOA... (Score:2)
Standardization is why we have area codes. You know that 281, 713, and 409 are Houston; you know that 610 is Philadelphia; you know that 215 is a place you don't want to call.
Cell phones are just mobile phones, and believe it or not, cell phone users have a home market. Much like the area codes, this helps identify and localize the user.
Extensions suck. It is nice to have my own phone number at work, at home, and on my cell phone. You want to try and remember extensions and numbers? Your extension idea is simply adding incomprehensible and unplanned numbers BEHIND the normal 7-digit number, adding only to the confusion. 10-digit dialing is a lot easier than 7-digit numbers plus 4-digit extensions that don't mean squat unless you work within the company. Most businesses use the suffix of the number as the extension anyway, only adding to the EASE OF USE.
Private networks wouldn't be needed, and all the computing resources being utilized for managing private networks could be a thing of the past.
Service levels would increase, productivity would increase, and network management would get simpler.
Lets just hope Verizon follows through.. (Score:4)
But then again, I may be dreaming.
On the other hand, is it possible for someone to do virtual IPs in some fashion? Like a VPN connection that authenticates the client and then does shortest-path routing? Something like provider X assigns me 222.222.222.222 through the VPN and then BGPs the routes to the dynamic IP address by weights (so that your traffic still goes through your local provider and doesn't need to be tunneled through the VPN).
Just wondering. Too many big companies screwing over the little guys and customers. "It is our policy to not assign static IPs." That's like selling me a 100% dedicated DSL circuit and saying I need dynamic IPs because it saves space on your IP subnets... that's BS, since the same customers are going to be on anyway; save yourself a DHCP server and assign IPs. If you're all about spam and email filtering with your new no-SMTP/POP-outside-of-Verizon email policy, then why not implement static IPs so you can CATCH the people doing it, instead of chasing them elsewhere and ruining services for people who don't do anything bad?
Re:Finally an IP addr for my Coffee Machine (Score:1)
Oh great.. now somebody is gonna r00t my coffee maker and make it brew nothing but decaf...
Re:Quoted... (Score:3)
Although this COULD become a problem when we get into nanotechnology and every nanite needs its own IP address. A body full of these suckers COULD potentially run out of IP addresses.
"No, but you don't understand. I need an extra block of addresses because it is vitally important that I can access nanite #38273749590627
directly from a computer on the other side of the world. A double hop is simply NOT an option guys!"
Enough for anyone. Humph!
-Restil
Re:Everyone can have huge networks (Score:1)
Re:IPv6 is DOA... (Score:2)
Inside going out, NAT is fine. However outside coming in it is a mess. IPv6 will fix this.
--
Charles E. Hill
Re:IPv6 is DOA... (Score:2)
Yes, there are benefits from a security standpoint but I prefer my security solution to be more flexible. My coffee pot doesn't need the same protection that my home alarm system does. NAT with PF forces this to a good degree.
It also causes problems with things like redundant links. Multiple connections to the 'net would be a good thing. A full-mesh config on your internal LAN with a couple of redundant egress points could help. Not to mention the possibility of different speed connections.
Simple devices can be controlled/monitored with simple commands (SNMP-like) and slow/small-bandwidth links. Again, my coffee pot doesn't need a DS-3, but my porn-scouring spider would like one!
Having to reconfig multiple similar devices (like clocks and/or TVs that naturally use the same ports) to use different ports will be a pain -- though I suppose some form of DHCP for port assignment could be created.
IPv6 also has better support for QoS and a few other additions that make it desirable. No, it isn't perfect but it is a step in the right direction.
--
Charles E. Hill
Same problem with hard to use addresses (Score:2)
This and every other IP address scheme is based on the concept that the end user is a leaf node with one upstream, and that is the root of the problem, since the "Internet" is about having multi-homed hosts with one or more upstream connections.
The current mess with IPv4 could be fixed by telling every ISP that they will have to return 10% of their address space per year and then only allocate
Everyone can have huge networks (Score:5)
And still as they state, they can easily give up to 178 billion of these
Now the real trick as the article alludes to but doesn't really address is the complexity of handling the routing for multihomed sites. Someone still has to figure out how to make multihomed routing easy, fast, and efficient.
Finally an IP addr for my Coffee Machine (Score:3)
- Home network subscribers, connecting through on-demand or always-on connections should receive a
/48.
This means that every home will have enough IP addresses for about everything in the home. Finally I will be able to telnet into my coffee machine from downstairs and brew a new pot of joe! The possibilities for us caffeine soaked programmers are endless!!!
I can just see it... (Score:3)
Welcome to the FreezyFridge 2010
Running Linux 2.4.15
Login:root
# mv
# exit
> _
Then no more beer!!!!
Re:Finally an IP addr for my Coffee Machine (Score:5)
Welcome to the BrewMatic 4000
Running Linux 2.4.14
Login:root
# cd
# mv
# cp
# mv
# brew --cups 12
# exit
> _
Re:Allocate by region based on population. Leave r (Score:4)
Where is your forward thinking?
:)
On the other hand, I do agree with you regarding the hierarchical designation; however, it appears that ARIN wants to give everyone a
/48 address by default (that is, 2^80 addresses per person). Only 1/8th of the IPv6 address space will be available (the 001 designation) by default, allowing 2^45 entities to have up to 2^80 addresses.
The paper says that there will be 10 billion people on the Earth by 2050. I bet IPv6 will last until at least 2100, though - and you shouldn't design planned upgrades into the system anyway, so assume that it will last forever...
In 3000, the Interplanetary Confederation will have 10 trillion people under its finger, and 100 billion companies (imagine giving each of those a unique name to avoid
.com naming problems!). 2^45 is more than the sum of these (about 2^44), so even then IPv6 will be fine. I assume that the average person will not have more than 2^80 IPv6-addressable elements on or within their body, though. I think this is reasonable... !
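As a sanity check on the arithmetic (the population figures are the parent's speculation, not data):

```python
import math

people = 10 * 10**12        # 10 trillion people, per the scenario above
companies = 100 * 10**9     # 100 billion companies
entities = people + companies

# ~43.2 bits needed, comfortably under the 45 bits available.
print(math.log2(entities))
assert entities < 2**45
```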
Re:IPv6 is DOA... (Score:1)
IPv6 is DOA. And we don't need more IP address space. BUT the telephone number issue is a lot more complex.
IPv6 has been around, more or less, for about a decade. It was SIP and PIP merged; neither Steve nor Paul were terribly good protocol designers, and neither understood addressing. ISO CLNP was a far better protocol; it was almost adopted as the standard under the name TUBA (TCP and UDP with Bigger Addressing). But at the last minute, Vint Cerf (the Chauncey Gardner of the Internet) reneged on a deal with the TUBA advocates and changed his position. Thus we've had no progress for pretty much the entire life of the commercial (post-1993) Internet.
And because IPv6 is such a botch, IPv4 workarounds like NAT will keep it going, and ARIN is sitting on heaps of spare v4 space, like all of the old Class As from 67 to 126! With CIDR, that'll last quite a while, and indeed Disney does not need a full Class A. But they could use it more than many other Class A occupants!
Telephone numbers are a different story. Every LEC (ILEC, CLEC) needs its own prefix code in every rate center it does business in. There are too many rate centers (in order to keep local calling areas small) and most CLECs don't need as many numbers as they have. But they got full prefix codes because that was the only choice. Now they get 1000 numbers at a time in most areas, or will soon, slowing down area code growth. That, and not PBX extensions or cell phones or even fax servers, is the main waste of phone numbers. And direct-inward-dialing PBX extensions (a feature bundled with Centrex but also used without it) is very beneficial; extension numbers are not a valid substitute.
Still, there will be some need for new phone number space in North America one of these years, not too far out. This has been a recent discussion on comp.dcom.telecom (Telecom Digest) and shouldn't really be a tangent here. But yes, there is some analogy.
Re:IPv6 is DOA... (Score:1)
--
Re:IPv6 is DOA... (Score:1)
We should just give up on IPv6 then, huh? Not needed? What about the built-in security (packet encryption and source authentication)? What about policy route specification? What about the combination of IPX and NSAP addresses into IP? What about priority routing for "real-time" or "critical" services? What about "local-use" addresses that allow companies to avoid renumbering their IP addresses if they start out not connected to the Internet, but later connect and need to request an address prefix from the global Internet address space?
Anything else that is useless and a waste of time with IPv6?
--
Re:Fridge (Score:1)
Re:no, no and no again (Score:1)
"3) The last 64 bits of an IPv6 address are often used to store the MAC address of the sending host. This is going to make things like Mobile IP and automatic IP allocation (think DHCP) a breeze."
It's not OK for anyone to know where I am, but it's fine if they can identify me with a unique MAC address?
Re:not good. (Score:1)
Re:Some real info. (Score:1)
Study up on IPsec/IKE, and you'll find that having live, unique IP addresses at each end is essential to the security model. NAT breaks this.
As for the server stuff - yes, it probably can be done, but how do you handle several web servers, all through a single IP address and the same public port 80? You have to choose another public port, which then starts to break other things (e.g. routing filters), or decode the TCP data to work out where the stream is destined. You can probably do it by inspecting the data streams and directing traffic as appropriate, but this won't scale to large networks... too much CPU required.
As for FTP or any other protocol that passes IP addresses in the TCP stream, the NAT box has to decode the data and modify it in a protocol-dependent manner. OK for a small NAT network, but for similar reasons it won't scale to a large one.
Re:Some real info. (Score:1)
The whole point is that it's impossible to route a very large network by meshing large numbers of nodes or networks in one location. Eventually the thing won't scale if you allow indiscriminate DFZ explosion. Even labeling (MPLS) will die when the network gets too big.
Hierarchical table management is the way to deal with it. The IPv6 solution to the multihoming problem is to assign multiple network addresses. Packet delivery for at least one of those networks can be guaranteed under that scenario, so the network infrastructure is certainly doing its job. It's up to the higher-layer protocol designers to figure out how they should deal with hosts having multiple addresses, and then the problem is licked.
If you consider that multihoming information is much longer-lived than, say, mobile IP state, the solutions for dealing with the multiple addresses start to fall into place.
Perhaps it's time I got back into the multi6 debate again and got the issue resolved - I have to admit it's the bugbear of IPv6 deployment at the moment.
Some real info. (Score:3)
Wrong. There are large chunks of the world that can't get address space to do what they want - especially Asia, which is only now starting to get into the Internet. It is also estimated that giving every mobile phone an IP address over the next 10 years or so will make us run out of addresses.
2. NAT is the answer? No; for a truly secure Internet you need end-to-end connectivity. This means live IP addresses, not hiding behind NAT. Also, NAT can't pass everything through - e.g. try to pass ESP for several devices through NAT. Also try to run several independent servers of the same service type (e.g. web sites) behind a NAT. It gets very difficult.
3. Routing for IPv6 will fall apart because of the large routing tables?
Wrong. The way strong aggregation is defined in IPv6 results in the Default Free Zone (DFZ) of the core Internet being very small (designed to be < 8000 or so entries). That same aggregation policy applies to TLAs (top-level aggregates), NLAs (next-level aggregates), and SLAs (site-level aggregates). If people adhere to the rules, there will be no routers blowing up any time soon. Router lookups will be faster than they have ever been because of the strict aggregation boundaries.
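The aggregation effect described here can be illustrated with Python's `ipaddress` module: thousands of customer prefixes under one provider block collapse to a single route, which is roughly what keeps the DFZ small. The prefixes are documentation-space examples, not real allocations:

```python
import ipaddress

# One provider aggregate block (documentation space as a stand-in).
provider = ipaddress.ip_network("2001:db8::/32")

# Every customer /48 carved out of it...
customer_routes = provider.subnets(new_prefix=48)   # 65,536 prefixes

# ...aggregates back to a single entry for the rest of the world.
collapsed = list(ipaddress.collapse_addresses(customer_routes))
print(collapsed)   # [IPv6Network('2001:db8::/32')]: one route, not 65,536
```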
As an aside, IPv6 does not have a header checksum, so routers will no longer need to checksum all headers as they pass through. This will also reduce router processing overhead.
To qualify (3), I must add that multihoming is done differently in IPv6. No site will ever "own" its address space, so it can never be advertised into the DFZ. This is the mistake that we learnt from IPv4. To multihome, you will be required to have an address space from each provider (SLA/NLA or TLA) that you are multihoming to. This means that nodes in a multihomed site will potentially have more than one visible address on the Internet to maintain connectivity. The details of how to deal with the multiple-address issue are in the process of being sorted out, but I can assure you there are several solutions to the issue of multihoming in IPv6.
4. Privacy is gone in IPv6 (in case anyone wants to raise the point).
This has been debated before with regard to your NIC address being publicized. It is a simple matter to anonymize the address, and an I-D has already been written to deal with this.
So IPv6 is not DOA as some would suggest. It's only a matter of time before people realize that it's absolutely required for the Internet to move forward.
Do your research and you'll find that IPv6 is needed and will make life on the Internet much saner. The availability of reasonable address space is the fundamental point, and I'm sure the IAB/IETF can bring enough pressure to bear on providers to make sure everyone gets a fair share of this address space. Don't forget that it's a free market - giving adequate address space can be a selling point for a competitive ISP.
Re:Everyone can have huge networks (Score:2)
Re:IPv6 is DOA... (Score:2)
You're now limited to 65,535 possible things you can address through that firewall (TCP ports are a 16-bit field).
So you've got 10/8 behind the firewall (2^24 devices) and you can only address a tiny fraction of them--assuming each one "only" needs one TCP port. Oops!
Admittedly, you could as much as double the "address space" by using UDP for some things...but since most of your embedded gear is probably going to want to use HTTP, that won't work too well.
If your control software is smart enough, I suppose you could use an HTTP proxy on the gateway...but does the Linksys box provide one? Didn't think so.
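The arithmetic behind this point, as a rough sketch (one forwarded TCP port per device, which is the worst case for dumb port-forwarding NAT):

```python
# Usable TCP ports on one public address (16-bit field, port 0 reserved).
forwardable_ports = 2**16 - 1

# Host addresses available in the 10.0.0.0/8 private range (ignoring
# network/broadcast details for round numbers).
private_hosts = 2**24

print(forwardable_ports)                    # 65535
print(private_hosts)                        # 16777216
print(forwardable_ports / private_hosts)    # ~0.0039: under 0.4% reachable
```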
Re:Finally an IP addr for my Coffee Machine (Score:2)
Haven't you ever heard of HTCPCP [faqs.org]?
RFCs in HTML (Score:2)
Not the last call (Score:2)
Re:Why? (Score:1)
Re:Allocate by region based on population. Leave r (Score:1)
3 bits for continent, that's good for 8 continents, fair enough.
16 bits for nation, that's 65,536 nations per continent (we've already specified the continent). I'm not sure what continent has the most nations, but it's nowhere near that many.
24 for city, that's 16 million cities per nation - not likely. If applied to China, that would imply an average city population of 60 people.
48 bits for the individuals/companies?? I don't know how many that is, but it's millions of times larger than the largest city. 27 bits would handle cities with populations of 128 million; make it 30 and you've got enough for cities of a billion.
Re:Allocate by region based on population. Leave r (Score:1)
oops..
There will still be a shortage of IP numbers (Score:1)
The FACT is that IP numbers are artificially scarce; that is, people "own" big chunks of them and see them as a source of power, so they won't let anyone else use them.
Creating more IP addresses and giving the new addresses to people who already have them is useless, as they will be stockpiled just like in the current situation.
We need to escape these overlords who control us so.
Re:Some real info. (Score:1)
With respect to the very small default free zone, that was exactly the plan for IPv4. Entities like UUnet and Genuity get huge IPv4 blocks of address space. But if you look in the core routing table, you will see tons of more-specific routes that companies multi-homing are announcing both through the provider that allocated them the space as well as the other provider[s] they multi-home to.
The problem has always been multi-homing. And guess what? Nobody has figured out (yet) how to do multi-homing with IPv6 in such a way that these problems will not re-surface again in the future.
If you have suggestions on how to deal with the IPv6 multi-homing issue, I suggest you join up and participate in the multi6 working group. There are some real issues that need to be addressed before IPv6 can be deployed.
Alec
I know her (Score:1)
I don't understand... (Score:1)
A) increasing the subnet to 999.999.999.0 from 255.255.255.0 or
B) adding on another decimal or two (ex. xxx.xxx.xxx.xxx.xxx) or
C) doing both
While the actual mechanics of the protocol itself, in terms of getting the data where it needs to go, seem very good, the new hex addressing model is completely idiotic. When this goes into effect, we will be taking a giant step backwards, back into the '70s and '80s when no normal person could find their way around a network. And sure, there will still be domain names, but sometimes you need to use an IP. I'd much rather type an 11- or 12-digit number than a big huge alphanumeric hex number. And just think how beautifully slow DNS servers will be in the future.
I'd like to end this posting with a question:
Why did they decide on a big nasty hex setup, rather than expanding the current system and maintaining some form of compatibility? And, if it moves forward, when can I expect the displeasure of seeing my IP change from 24.5.164.0/255 to something like 2001:200:800:6000::/56?
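On the compatibility question: IPv6 does define an IPv4-mapped notation (::ffff:a.b.c.d), and Python's `ipaddress` module shows the correspondence; the address below is just the poster's example network reused for illustration:

```python
import ipaddress

# An IPv4 address embedded in IPv6's IPv4-mapped range (::ffff:0:0/96).
mapped = ipaddress.ip_address("::ffff:24.5.164.1")

# The original IPv4 address is recoverable from the low 32 bits.
print(mapped.ipv4_mapped)   # 24.5.164.1
```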
------------------------
Re:Let me get this straight... (Score:1)
I always thought that was strange myself, but eventually came to the conclusion that it must mean that at one time it had been submitted as a "request for comments" and then, once the comment period has ended it gets tagged with a number and then it's frozen. Once frozen it's only changed by the creation of a new RFC (i.e., no revision possible.) Once you have a copy of an RFC, you'll always have the latest version of it. You just need to keep an eye out for new RFCs that replace it.
Re:Everyone can have huge networks (Score:1)
Should make DNS pretty interesting. I sense a great need on the part of ISPs to provide an interface to let customers handle their own DNS.
Just write a filter to do that (Score:1)
Maybe someday we'll see RFCs in HTML - that way there can be links instead of footnotes. Now that would be progress.
I'm sure that it would be straightforward to write a filter program in a text processing language such as Ruby, Python, or Perl to translate the plain text format of RFCs into HTML markup. However, it would be a little tougher to resolve bibliographic references to a printed work into links to the book's BN Fatbrain [fatbrain.com] page.
Precious Moments figurines strike again (Score:1)
You took up 2 precious minutes
Every minute we waste is another minute we spend evolving (or not). It is predicted that by the year 802,701 [everything2.com], the human race will have evolved into something resembling Precious Moments figurines [everything2.com]. We only have 800,700 years to make sure that the carnivorous ant people [everything2.com] don't come down from space, enslave us, and eventually farm us for food.
What /48 means (Score:2)
What does
/48 mean?
Re:There will still be a shortage of IP numbers (Score:1)
-- Fester
Re:Finally an IP addr for my Coffee Machine (Score:2)
> telnet coffee.appliance.myhome.org
Welcome to the BrewMatic 4000
Running Linux 2.4.14
Login:root
# rm
# rm
# echo "Owned!" >
# exit
> _
arin lookups (Score:1)
Re:IPv6 is fundamentally flawed. (Score:2)
Re:Finally an IP addr for my Coffee Machine (Score:1)
Re:IPv6 is DOA... (Score:2)
Re:Allocate by region based on population. Leave r (Score:1)
It's a high-latency design anyway.
no, no and no again (Score:2)
There are a number of reasons why this is a bad idea -
1) Privacy. Maybe I don't want people (read companies) to know what city I'm currently in.
2) Speed. Most IP traffic is routed between major network providers which do not operate within set geographic boundaries. Knowing that a packet at a major peering point needs to go to Cambridge, England is nowhere near as helpful as knowing the transit provider is PSInet.
3) The last 64 bits of an IPv6 address are often used to store the MAC address of the sending host. This is going to make things like Mobile IP and automatic IP allocation (think DHCP) a breeze.
All these reasons and more are why the (substantially more knowledgeable than you and I) members of the IETF working group chose the current system 8-)
Si
ps. go visit to learn more. Get yourself a free
wrong (rta) (Score:2)
actually it says quite explicitly that each entity will get a
/48 address, and can assign all the subnet ranges as it sees fit.
The whole idea behind this is so that an ISP will not have to distinguish address assignment between an occasional dialup user and a major multinational corporation - they both get as many routable addresses as they could ever use.
The RFC also makes a long argument about why this is desirable. Go read it
;)
Re:Finally an IP addr for my Coffee Machine (Score:1)
Just think of all the other possibilities: we'll have enough IPs for our video game systems (gamecube, ps2, and *shudder* xbox), all of our other internet thingies (internet radios that nobody ever buys, networked mp3 players, and such), and other things that I can't remember right now.
Re:I can just see it... (Score:2)
Re:Finally an IP addr for my Coffee Machine (Score:5)
Running Linux 2.4.14
Login:root
Brewing as root? With all the coffee buffer overflow exploits around?
Re:RFCs in HTML (Score:2)
Try FAQs.org. [faqs.org] Looks like they have everything HTMLized, much better than the plain text docs.
Enigma
Re:There will still be a shortage of IP numbers (Score:2)
Hooray! No manufactured address shortage! (Score:3)
I've heard a lot of FUD lately about how ARIN was going to limit the amount of IPv6 space given out so that it could lease the addresses and make money. The proposed policy, if adopted, appears to mitigate that fear. As the document says:(Now let's get IPv6 fielded! I'm ready...)
what? (Score:5)
this can't be slashdot.. if it is.. i feel kind of betrayed..
Re:Allocate by region based on population. Leave r (Score:2)
Until we discover a means of FTL communication, interplanetary networks will have to use something other than TCP/IP [slashdot.org].
Fridge (Score:2)
I remember the days when getting 100s of IPs was cheap and no problem whatsoever. These days I still wonder why some companies that I visit still have a full range of IPs when they only use one or two.
I have been told that it is hard even to get a small range today, but I see many private people with their xDSL lines getting 8 IPs. Hmm.
Most people forget that they can host many servers on one IP using layer 4 switching. I just love to configure those Foundry boxes [foundrynetworks.com]
But I can't help but wonder whether we might have missed something; I'll bet that real soon someone comes up with something that will make the amount of IPs available with IPv6 too small.
Just like when you got that 4GB harddrive, "Now I will never need another drive", then came 37GB "now I will truly never need a bigger drive", deal if you know what I mean.
--------
For sale: Rhesus-Monkey-Torture-Kit 40$
Renumbering (Score:2)
Re:Allocate by region based on population. Leave r (Score:2)
No worries, by that time we'll either be telepathic or we will have invented something that will probably be called 'NAT'
Was it not Bill Gates who said "640Kb RAM should be enough for everyone"?
:-)
ping sweeping (or security through obscurity) (Score:2)
Now if only they would introduce 128 bit port numbers
Re:Allocate by region based on population. Leave r (Score:2)
It's called a subspace channel. I've been trying to tunnel TCP/IP over one, but I keep getting problems with timeouts and dropped packets that are associated with non-causality paradoxes.
I did manage to use my setup to chat with a hot black chick who seemed to be on some kind of space mission, though...
Why not to hold your breath (Score:2)
- That only by having a provider-independent boundary can we guarantee that a change of ISP will not require a costly internal restructuring or consolidation of subnets."
It is not in the larger ISPs' (AOL, Baby Bells, etc.) interest to allow customers to easily change providers.
"- To allow easy growth of the subscribers' networks without need to go back to ISPs for more space (except for that relatively small number of subscribers for which a
/48 is not enough)."
The more devices you have in your network, the more bandwidth the ISP will be expected to provide. Bandwidth costs ISPs money, and many home broadband providers don't like you using all your allotted bandwidth for any period of time.
"- To remove the burden from the ISPs and registries of judging sites' needs for address space, unless the site requests more space than a
/48."
If they maintain control over those decisions, they can keep a cap on the bandwidth they need to provide. Besides, everybody likes hanging on to power.
"- To allow the site to maintain a single reverse-DNS zone covering all prefixes."
Then how will the ISPs charge you for using their DNS servers?
From where I sit, the big ISPs/telecoms stand to make more money in maintaining the current IPv4 structure of the internet than moving to this implementation of IPv6. I mean, come on: Charge $40/month for a
/48 or for a /128? You do the math.
Let me get this straight... (Score:4)
So they're requesting for comments before it gets publicized as a Request For Comments? No wonder the internet is so fucked up!
Re:Everyone can have huge networks (Score:4)
"and
/128 when it is absolutely known that one and only one device is connecting."
Unless they want to dish out huge amounts of money upgrading their hardware and increasing their bandwidth, your ISP is going to give you one and only one IP. For us home users, pricing and distribution won't be much different from IPv4.
Hmm, time to propose *.os (Score:2)
Re:IPv6 is DOA... (Score:2)
Of course, unfortunately, I would only be able to use 65535 or so port numbers... Hums hums.. BUMMER! 'Mom, you cannot put more than 60k devices on our home network, damnit! How many times do I have to repeat this? Your hairdryer just CAN'T be controlled through the internet; you will REALLY have to control it through the local LAN.'
Of course, as another poster suggested, a name-based HTTP proxy on the 'router' would also be cool. So let the proxy server decide which IP address on the local LAN to forward requests to! That'd kick ass; I bet Linksys could fairly easily put such functionality in their little box. Of course, when you have a Linux internet gateway, that's trivial to set up anyway.
However, wasting 'numbers' is not so harmful as the more typical American waste, so why the hell not implement IPv6 and have a plethora of numbers available. Hmmm. Hey, it would probably be good for the IT work opportunities in the future, all those routers that will have to be replaced 'n all, all those computers that'll need to be reconfigured. Hmm. I like it. Let's do it!
Why? (Score:2)
Re:If /48 MAC, what about DNS? (Score:2)
Answers to Review Questions
- Carmella Wood
1. The real rate of interest is the rate that creates an equilibrium between the supply of savings and demand for investment funds. The nominal rate of interest is the actual rate of interest charged by the supplier and paid by the demander. The nominal rate of interest differs from the real rate of interest due to two factors: (1) a premium due to inflationary expectations (IP) and (2) a premium due to issuer and issue characteristic risks (RP). The nominal rate of interest for a security can be defined as r1 = r* + IP + RP. For a 3-month U.S. Treasury bill, the nominal rate of interest can be stated as r1 = r* + IP; the default risk premium, RP, is assumed to be zero.
3. a. Downward sloping: Long-term borrowing costs are lower than short-term borrowing costs.
b. Upward sloping: Short-term borrowing costs are lower than long-term borrowing costs.
c. Flat: Borrowing costs are relatively similar for short- and long-term loans.
The upward-sloping yield curve has been the most prevalent historically.
4. a. According to the expectations theory, the yield curve reflects investor expectations about future interest rates, with the differences based on inflation expectations. The curve can take any of the three forms. An upward-sloping curve is the result of increasing inflationary expectations, and vice versa.
b. The liquidity preference theory is an explanation for the upward-sloping yield curve. This theory states that long-term rates are generally higher than short-term rates due to the desire of investors for greater liquidity, and thus a premium must be offered to attract adequate long-term investment.
c. The market segmentation theory is another theory that can explain any of the three curve shapes. Since the market for loans can be segmented based on maturity, sources of supply and demand for loans within each segment determine the prevailing interest rate. If supply is greater than demand for short-term funds at a time when demand for long-term loans is higher than the supply of funding, the yield curve would be upward sloping. Obviously, the reverse also holds true.
5.
In the Fisher equation, r = r* + IP + RP, the risk premium, RP, consists of the following issuer- and issue-related components:
Default risk: The possibility that the issuer will not pay the contractual interest or principal as scheduled.
Maturity (interest rate) risk: The possibility that changes in the interest rates on similar securities will cause the value of the security to change by a greater amount the longer its maturity, and vice versa.
Liquidity risk: The ease with which securities can be converted to cash without a loss in value.
Contractual provisions: Covenants included in a debt agreement or stock issue defining the rights and restrictions of the issuer and the purchaser. These can increase or reduce the risk of a security.
Tax risk: Certain securities issued by agencies of state and local governments are exempt from federal, and in some cases state and local, taxes, thereby reducing the nominal rate of interest by an amount that brings the return into line with the after-tax return on a taxable issue of similar risk.
The risks that are debt specific are default, maturity, and contractual provisions.
6. Most corporate bonds are issued in denominations of $1,000 with maturities of 10 to 30 years. The stated interest rate on a bond represents the percentage of the bond's par value that will be paid out annually, although the actual payments may be divided up and made quarterly or semiannually. Both bond indentures and trustees are means of protecting the bondholders. The bond indenture is a complex and lengthy legal document stating the conditions under which a bond is issued. The trustee may be a paid individual, corporation, or commercial bank trust department that acts as a third-party watchdog on behalf of the bondholders to ensure that the issuer does not default on its contractual commitment to the bondholders.
7. Long-term lenders include restrictive covenants in loan agreements in order to place certain operating and/or financial constraints on the borrower. These constraints are intended to assure the lender that the borrowing firm will maintain a specified financial condition and managerial structure during the term of the loan. Since the lender is committing funds for a long period of time, he seeks to protect himself against adverse financial developments that may affect the borrower. The restrictive provisions (also called negative covenants) differ from the so-called standard debt provisions in that they place certain constraints on the firm's operations, whereas the standard provisions (also called affirmative covenants) require the firm to operate in a respectable and businesslike manner. Standard provisions include such requirements as providing audited financial statements on a regular schedule, paying taxes and liabilities when due, maintaining all facilities in good working order, and keeping accounting records in accordance with generally accepted accounting procedures (GAAP).
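The additive Fisher relationship from answer 5 (r = r* + IP + RP) can be sanity-checked with a short Python sketch. The function name and the premium values below are illustrative assumptions, not figures from the text:

```python
# Sanity check of the additive Fisher relationship from answer 5.
# The premium values below are illustrative assumptions, not figures
# taken from the text.

def nominal_rate(real_rate, inflation_premium, risk_premium=0.0):
    """r = r* + IP + RP, with all rates expressed as decimals."""
    return real_rate + inflation_premium + risk_premium

# A Treasury bill carries no default risk premium (RP = 0):
t_bill = nominal_rate(0.025, 0.05)
# A corporate issue adds an issuer/issue risk premium:
corporate = nominal_rate(0.025, 0.05, 0.03)

print(f"T-bill:    {t_bill:.1%}")     # T-bill:    7.5%
print(f"Corporate: {corporate:.1%}")  # Corporate: 10.5%
```

The point of the sketch is simply that the risk premium is the only term separating a Treasury rate from a corporate rate on otherwise comparable issues.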
Violation of any of the standard or restrictive loan provisions gives the lender the right to demand immediate repayment of both accrued interest and principal of the loan. However, the lender does not normally demand immediate repayment but instead evaluates the situation in order to determine if the violation is serious enough to jeopardize the loan. The lender's options are: waive the violation, waive the violation and renegotiate terms of the original agreement, or demand repayment.
8. Short-term borrowing is normally less expensive than long-term borrowing due to the greater uncertainty associated with longer maturity loans. The major factors affecting the cost of long-term debt (or the interest rate), in addition to loan maturity, are loan size, borrower risk, and the basic cost of money.
9. If a bond has a conversion feature, the bondholders have the option of converting the bond into a certain number of shares of stock within a certain period of time. A call feature gives the issuer the opportunity to repurchase, or call, bonds at a stated price prior to maturity. It provides extra compensation to bondholders for the potential opportunity losses that would result if the bond were called due to declining interest rates. This feature allows the issuer to retire outstanding debt prior to maturity and, in the case of convertibles, to force conversion. Stock purchase warrants, which are sometimes included as part of a bond issue, give the holder the right to purchase a certain number of shares of common stock at a specified price.
10. Current yields are calculated by dividing the annual interest payment by the current price. Bonds are quoted in percentage-of-par terms, to the thousandths place. Hence, corporate bond prices are effectively quoted in dollars and cents. A quote of means the bond is priced at % of par, or $
Bonds are rated by independent rating agencies such as Moody's and Standard & Poor's with respect to their overall quality, as measured by the safety of repayment of principal and interest. Ratings are the result of detailed financial ratio and cash flow analyses of the issuing firm. The bond rating affects the rate of return on the bond. The higher the rating, the less risk and the lower the yield.
11. Eurobonds are bonds issued by an international borrower and sold to investors in countries with currencies other than that in which the bond is denominated. For example, a dollar-denominated Eurobond issued by an American corporation can be sold to French, German, Swiss, or Japanese investors. A foreign bond, on the other hand, is issued by a foreign borrower in a host country's capital market and denominated in the host currency. An example is a French-franc denominated bond issued in France by an English company.
12. A financial manager must understand the valuation process in order to judge the value of benefits received from stocks, bonds, and other assets in view of their risk, return, and combined impact on share value.
13. Three key inputs to the valuation process are:
a. Cash flows: the cash generated from ownership of the asset;
b. Timing: the time period(s) in which cash flows are received; and
c. Required return: the interest rate used to discount the future cash flows to a PV. The selection of the required return allows the level of risk to be adjusted; the higher the risk, the higher the required return (discount rate).
14. The valuation process applies to assets that provide an intermittent cash flow or even a single cash flow over any time period.
15. The value of any asset is the PV of future cash flows expected from the asset over the relevant time period. The three key inputs in the valuation process are cash flows, the required rate of return, and the timing of cash flows.
The equation for value is:

V0 = CF1 / (1 + r)^1 + CF2 / (1 + r)^2 + ... + CFn / (1 + r)^n

where:
V0 = value of the asset at time zero
CFt = cash flow expected at the end of year t
r = appropriate required return (discount rate)
n = relevant time period

16. The basic bond valuation equation for a bond that pays annual interest is:

V0 = I × [sum of 1 / (1 + rd)^t for t = 1 to n] + M × [1 / (1 + rd)^n]

where:
V0 = value of a bond that pays annual interest
I = annual interest in dollars
n = years to maturity
M = dollar par value
rd = required return on the bond

To find the value of bonds paying interest semiannually, the basic bond valuation equation is adjusted as follows to account for the more frequent payment of interest:
a. The annual interest must be converted to semiannual interest by dividing by two.
b. The number of years to maturity must be multiplied by two.
c. The required return must be converted to a semiannual rate by dividing it by two.
17. A bond sells at a discount when the required return exceeds the coupon rate. A bond sells at a premium when the required return is less than the coupon rate. A bond sells at par value when the required return equals the coupon rate. The coupon rate is generally a fixed rate of interest, whereas the required return fluctuates with shifts in the cost of long-term funds due to economic conditions and/or risk of the issuing firm. The disparity between the required rate and the coupon rate will cause the bond to be sold at a discount or premium.
18. If the required return on a bond is constant until maturity and different from the coupon interest rate, the bond's value approaches its $1,000 par value as the time to maturity declines.
19. To protect against the impact of rising interest rates, a risk-averse investor would prefer bonds with short periods until maturity. The responsiveness of the bond's market value to interest rate fluctuations is an increasing function of the time to maturity.
20. The yield-to-maturity (YTM) on a bond is the rate investors earn if they buy the bond at a specific price and hold it until maturity. The YTM can be found precisely by using a hand-held financial calculator and its time value functions. Enter B0 as the PV, I as the annual payment, and n as the number of periods until maturity, and have the calculator solve for the interest rate. That interest value is the YTM. Many calculators are already programmed to solve for the internal rate of return (IRR); using this feature will also obtain the YTM, since the YTM and IRR are determined the same way. Spreadsheets include a formula for computing the yield to maturity.
Answers to Warm-Up Exercises
E6-1.
Finding the real rate of interest
Answer: r* = RF − IP
0.8% = 1.23% − IP
IP = 1.23% − 0.8% = 0.43%
E6-2. Yield curve
b. {(4.51% × 10) − (3.7% × 5)} / 5 = {45.1% − 18.5%} / 5 = 26.6% / 5 = 5.32%
c. (3.01% × 3) − (2.68% × 2) = 9.03% − 5.36% = 3.67%
d. Yield curves may slope up for many reasons beyond expectations of rising interest rates. According to liquidity preference theory, long-term interest rates tend to be higher than short-term rates because longer-term debt has lower liquidity, higher responsiveness to general interest rate movements, and borrower willingness to pay a higher interest rate to lock in money for a longer period of time. In addition to expectations theory and liquidity preference theory, market segmentation theory allows for additional interest rate increases arising from either limited availability of funds or greater demand for funds at longer maturities.
E6-3. Calculating inflation expectation
Answer: The inflation expectation for a specific maturity is the difference between the yield and the real interest rate at that maturity.
Maturity Yield Real Rate of Interest Inflation Expectation
3 months 1.41% 0.80% 0.61%
6 months years years years years years
E6-4. Real returns
Answer: A T-bill can experience a negative real return if its interest rate is less than the inflation rate as measured by the CPI. The real return would be zero if the T-bill rate was 3.3%, exactly matching the CPI rate. To obtain a minimum 2% real return, the T-bill rate would have to be at least 5.3%.
E6-5. Calculating risk premium
Answer: We calculate the risk premium of other securities by subtracting the risk-free rate, 4.51%, from each nominal interest rate.
Security Nominal Interest Rate Risk Premium
AAA 5.12% 5.12% − 4.51% = 0.61%
BBB 5.78% 5.78% − 4.51% = 1.27%
B 7.82% 7.82% − 4.51% = 3.31%
E6-6. The basic valuation model
Answer: Find the PV of the cash flow stream for each asset by discounting the expected cash flows using the respective required return.
Asset 1: PV $ $3,
Asset 2: PV = $1,200 / (1.10)^2 + $1,500 / (1.10)^3 = $2,
E6-7. Calculating the PV of a bond when the required return exceeds the coupon rate
Answer: The PV of a bond is the PV of its future cash flows. In the case of the 5-year bond, the expected cash flows are $1,200 at the end of each year for 5 years, plus the face value of the bond that will be received at the maturity of the bond (end of year 5). You may use the bond valuation formula found in your text or you may use a financial calculator. The solution presented below is derived using a financial calculator. Set the calculator on 1 period/year.
PV of interest: PMT = 1,200, I = 8%/year, N = 5 periods; solve for PV = $4,791.25
PV of the bond's face value: FV = $20,000, N = 5 periods, I = 8%/year; solve for PV = $13,611.66
The PV of this bond is $4,791.25 + $13,611.66 = $18,402.91
This answer is consistent with the knowledge that when interest rates rise, the values of previously issued bonds fall. The present value is a cash outflow, or cost to the investor.
E6-8. Bond valuations using required rates of return
Answer: a. Student answers will vary, but any required rate of return above the coupon rate will cause the bond to sell at a discount, while at a required return of 4.5% the bond will sell at par. Any required rate of return below the coupon rate will cause the bond to sell at a premium.
b. Student answers will vary but should be consistent with their answers to part a.
Solutions to Problems
P6-1. Interest rate fundamentals: The real rate of return
LG 1; Basic
Real rate of return = 5.5% − 3.0% = 2.5%
P6-2. Real rate of interest
LG 1; Intermediate
a.
b. The real rate of interest creates an equilibrium between the supply of savings and the demand for funds, which is shown on the graph as the intersection of lines for current suppliers and current demanders; r = 4%.
c. See graph.
d. A change in the tax law causes an upward shift in the demand curve, causing the equilibrium point between the supply curve and the demand curve (the real rate of interest) to rise from r0 = 4% to r0 = 6% (intersection of lines for current suppliers and demanders after new law).
P6-3. Personal finance: Real and nominal rates of interest
LG 1; Intermediate
a. 4 shirts
b. $100 × (1 + 0.09) = $109
c. $25 × (1 + 0.05) = $26.25
d. The number of polo shirts in one year = $109 / $26.25 = 4.1524. He can buy 3.8% more shirts (4.1524 / 4 = 1.038).
e. The real rate of return is 9% − 5% = 4%. The change in the number of shirts that can be purchased is determined by the real rate of return since the portion of the nominal return for expected inflation (5%) is available just to maintain the ability to purchase the same number of shirts.
P6-4. Yield curve
LG 1; Intermediate
a.
b. The yield curve is slightly downward sloping, reflecting lower expected future rates of interest. The curve may reflect a general expectation for an economic recovery due to inflation coming under control and a stimulating impact on the economy from the lower rates. However, a slowing economy may diminish the perceived need for funds and the resulting interest rate being paid for cash. Obviously, the second scenario is not good for business and highlights the challenge of forecasting the future based on the term structure of interest rates.
P6-5. Nominal interest rates and yield curves
LG 1; Challenge
a. r1 = r* + IP + RP
For U.S. Treasury issues, RP = 0, so RF = r* + IP
20-year bond: RF = 2.5% + 9% = 11.5%
3-month bill: RF = 2.5% + 5% = 7.5%
2-year note: RF = 2.5% + 6% = 8.5%
5-year bond: RF = 2.5% + 8% = 10.5%
b. If the real rate of interest (r*) drops to 2.0%, the nominal interest rate in each case would decrease by 0.5 percentage point.
c. The yield curve for U.S. Treasury issues is upward sloping, reflecting the prevailing expectation of higher future inflation rates.
d. Followers of the liquidity preference theory would state that the upward-sloping shape of the curve is due to the desire by lenders to lend short term and the desire by business to borrow long term. The dashed line in the part c graph shows what the curve would look like without the existence of liquidity preference, ignoring the other yield curve theories.
e. Market segmentation theorists would argue that the upward slope is due to the fact that under current economic conditions there is greater demand for long-term loans for items such as real estate than for short-term loans such as seasonal needs.
P6-6. Nominal and real rates and yield curves
LG 1; Challenge
Real rate of interest (r*): ri = r* + IP + RP; RP = 0 for Treasury issues, so r* = ri − IP
a.
Security Nominal Rate (rj) IP Real Rate of Interest (r*)
A 12.6% 9.5% 3.1%
B 11.2% 8.2% 3.0%
C 13.0% 10.0% 3.0%
D 11.0% 8.1% 2.9%
E 11.4% 8.3% 3.1%
b. The real rate of interest decreased from January to March, remained stable from March through August, and finally increased in December. Forces that may be responsible for a change in the real rate of interest include changing economic conditions such as the international trade balance, a federal government budget deficit, or changes in tax legislation.
c.
d. The yield curve is slightly downward sloping, reflecting lower expected future rates of interest. The curve may reflect a current, general expectation for an economic recovery due to inflation coming under control and a stimulating impact on the economy from the lower rates.
P6-7. Term structure of interest rates
LG 1; Intermediate
a.
b. and c. Five years ago, the yield curve was relatively flat, reflecting expectations of stable interest rates. Two years ago, the yield curve was downward sloping, reflecting lower expected interest rates, which could be due to a decline in the expected level of inflation. Today, the yield curve is upward sloping, reflecting higher expected future rates of interest.
d. Five years ago, the 10-year bond was paying 9.5%, which would result in approximately 95% in interest over the coming decade. At the same time, the 5-year bond was paying just 9.3%, or a total of 46.5% over the five years. According to the expectations theory, investors must have expected the current 5-year rate to be 9.7% because at that rate, the total return over ten years would have been the same on a 10-year bond and on two consecutive 5-year bonds. The numbers are given below.
{(9.5% × 10) − (9.3% × 5)} / 5 = {95% − 46.5%} / 5 = 48.5% / 5 = 9.7%
P6-8. Risk-free rate and risk premiums
LG 1; Basic
a. Risk-free rate: RF = r* + IP
Security r* IP RF
A 3% 6% 9%
B 3% 9% 12%
C 3% 8% 11%
D 3% 5% 8%
E 3% 11% 14%
b. Since the expected inflation rates differ, it is probable that the maturity of each security differs.
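The expectations-theory arithmetic in P6-7(d) can be checked with a few lines of Python. The function name is my own; the simple (non-compounded) averaging matches the solution's approach:

```python
# Expectations-theory check for P6-7(d): the implied rate on the second
# five years makes two consecutive 5-year bonds match one 10-year bond
# (simple, non-compounded arithmetic, as in the solution).

def implied_rate(long_rate, long_years, short_rate, short_years):
    """Average annual rate over the remaining (long - short) years."""
    remaining = long_years - short_years
    return (long_rate * long_years - short_rate * short_years) / remaining

rate = implied_rate(0.095, 10, 0.093, 5)
print(f"Implied 5-year rate: {rate:.1%}")  # 9.7%
```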
c. Nominal rate: r = r* + IP + RP
Security r* IP RP r
A 3% 6% 3% 12%
B 3% 9% 2% 14%
C 3% 8% 2% 13%
D 3% 5% 4% 12%
E 3% 11% 1% 15%
P6-9. Risk premiums
LG 1; Intermediate
a. RFt = r* + IPt
Security A: RF3 = 2% + 9% = 11%
Security B: RF15 = 2% + 7% = 9%
b. Risk premium: RP = default risk + maturity risk + liquidity risk + other risk
Security A: RP = 1% + 0.5% + 1% + 0.5% = 3%
Security B: RP = 2% + 1.5% + 1% + 1.5% = 6%
c. ri = r* + IP + RP, or r1 = RF + risk premium
Security A: r1 = 11% + 3% = 14%
Security B: r1 = 9% + 6% = 15%
Security A has a higher risk-free rate of return than Security B due to expectations of higher near-term inflation rates. The issue characteristics of Security A in comparison to Security B indicate that Security A is less risky.
P6-10. Bond interest payments before and after taxes
LG 2; Intermediate
a. Yearly interest = [($2,500,000 / 2,500) × 0.07] = ($1,000 × 0.07) = $70.00
b. Total interest expense = $70.00 per bond × 2,500 bonds = $175,000
c. Total before-tax interest $175,000
Interest expense tax savings (0.35 × $175,000) 61,250
Net after-tax interest expense $113,750
P6-11. Bond prices and yields
LG 4; Basic
a. $1,000 $
b. ( $1,000) $ $ $ %
c. The bond is selling at a discount to its $1,000 par value.
d. The yield to maturity is higher than the current yield, because the former includes $22.92 in price appreciation between today and the May 15, 2017 bond maturity.
P6-12. Personal finance: Valuation fundamentals
LG 4; Basic
a. Cash flows: CF1-5 = $1,200; CF5 also includes the $5,000 sale price
Required return: 6%
b. V0 = CF1 / (1 + r)^1 + CF2 / (1 + r)^2 + CF3 / (1 + r)^3 + CF4 / (1 + r)^4 + CF5 / (1 + r)^5
V0 = $1,200 / (1.06)^1 + $1,200 / (1.06)^2 + $1,200 / (1.06)^3 + $1,200 / (1.06)^4 + $6,200 / (1.06)^5
V0 = $8,791
Using calculator: N = 5, I = 6, PMT = $1,200, FV = $5,000; solve for PV = $8,791
The maximum price you should be willing to pay for the car is $8,791, since if you paid more than that amount, you would be receiving less than your required 6% return.
P6-13. Valuation of assets
LG 4; Basic
Present Value of Asset End of Year Amount Cash Flows A 1 $ 5,000 N 3, I 18 $10, $ 5,000 PMT $5,000 3 $ 5,000 B 1 $ $2,000 C 1 0 N 5, I 16 $16, FV $35, $35,000 D 1 5 $ 1,500 N 6, I 12, $9, ,500 PMT $1,500 FV $7,000 E 1 $ 2,000 Use Cash Flow Worksheet 2 3, , , , ,000 $14,115.27
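The discounting in P6-12(b) is a direct application of the valuation model from answer 15, V0 = sum of CFt / (1 + r)^t. A minimal Python sketch (assuming end-of-year cash flows; the helper is my own, not from the solutions manual) reproduces the calculator result:

```python
# Valuation model from answer 15 applied to P6-12: $1,200 at the end of
# years 1-4 and $6,200 in year 5 (the last $1,200 payment plus the
# $5,000 sale price), discounted at the 6% required return.

def present_value(cash_flows, rate):
    """cash_flows[t-1] is the cash flow received at the end of year t."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

v0 = present_value([1200, 1200, 1200, 1200, 6200], 0.06)
print(f"V0 = ${v0:,.0f}")  # V0 = $8,791
```

The same routine handles the mixed streams of P6-13 by passing each asset's year-by-year cash flow list.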
P6-14. Personal finance: Asset valuation and risk
LG 4; Intermediate
a.
N 10% (Low Risk) 15% (Average Risk) 22% (High Risk)
CF1-4 = $3,000: $9,510 $8,565 $7,481
CF5 = $15,000: $9,314 $7,455 $5,550
Calculator solutions: $18,824 $16,020 $13,031
b. The maximum price Laura should pay is $13,031. Unable to assess the risk, Laura would use the most conservative price, therefore assuming the highest risk.
c. By increasing the risk of receiving cash flow from an asset, the required rate of return increases, which reduces the value of the asset.
P6-15. Basic bond valuation
LG 5; Intermediate
a. I = 10%, N = 16, PMT = $120, FV = $1,000; solve for PV = $1,156.47
b. Since Complex Systems bonds were issued, there may have been a shift in the supply-demand relationship for money or a change in the risk of the firm.
c. I = 12%, N = 16, PMT = $120, FV = $1,000; solve for PV = $1,000
When the required return is equal to the coupon rate, the bond value is equal to the par value. In contrast to part a above, if the required return is less than the coupon rate, the bond will sell at a premium (its value will be greater than par).
P6-16. Bond valuation: annual interest
LG 5; Basic
Bond Calculator Inputs Calculator Solution
A N = 20, I = 12, PMT = 0.14 × $1,000 = $140, FV = $1,000: $1,149.39
B N = 16, I = 8, PMT = 0.08 × $1,000 = $80, FV = $1,000: $1,000.00
C N = 8, I = 13, PMT = 0.10 × $100 = $10, FV = $100: $85.60
D N = 13, I = 18, PMT = 0.16 × $500 = $80, FV = $500: $450.90
E N = 10, I = 10, PMT = 0.12 × $1,000 = $120, FV = $1,000: $1,122.89
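The calculator solutions in P6-15 and P6-16 follow the bond valuation equation from answer 16. A small Python helper (my own sketch, not code from the solutions manual) reproduces the P6-15 numbers; the `freq` parameter applies the semiannual adjustment described earlier:

```python
# Bond-pricing sketch implementing the valuation equation of answer 16:
# value = PMT x PVIFA(i, n) + par x PVIF(i, n).

def bond_value(par, coupon_rate, years, required, freq=1):
    """Present value of the coupons plus the par value.

    freq=1 gives annual compounding; freq=2 applies the semiannual
    adjustment from the review-question answers (halve the rate and
    payment, double the periods).
    """
    n = years * freq                    # number of periods
    i = required / freq                 # rate per period
    pmt = par * coupon_rate / freq      # coupon per period
    annuity = (1 - (1 + i) ** -n) / i   # PVIFA
    return pmt * annuity + par * (1 + i) ** -n

# P6-15: Complex Systems bond, $1,000 par, 12% coupon, 16 years.
print(f"{bond_value(1000, 0.12, 16, 0.10):.2f}")  # part a: 1156.47
print(f"{bond_value(1000, 0.12, 16, 0.12):.2f}")  # part c: 1000.00
```

Passing the P6-16 inputs (for example `bond_value(1000, 0.14, 20, 0.12)` for bond A) reproduces the rest of the table.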
P6-17. Bond value and changing required returns
LG 5; Intermediate
a.
b.
Bond Calculator Inputs Calculator Solution
(1) N = 12, I = 11%, PMT = $110, FV = $1,000: $1,000.00
(2) N = 12, I = 15%, PMT = $110, FV = $1,000: $783.18
(3) N = 12, I = 8%, PMT = $110, FV = $1,000: $1,226.08
c. When the required return is less than the coupon rate, the market value is greater than the par value and the bond sells at a premium. When the required return is greater than the coupon rate, the market value is less than the par value; the bond therefore sells at a discount.
d. The required return on the bond is likely to differ from the coupon interest rate because either (1) economic conditions have changed, causing a shift in the basic cost of long-term funds, or (2) the firm's risk has changed.
P6-19. Personal finance: Bond value and time, changing required returns
LG 5; Challenge
a.
Bond Calculator Inputs Calculator Solution
(1) N = 5, I = 8%, PMT = $110, FV = $1,000: $1,119.78
(2) N = 5, I = 11%, PMT = $110, FV = $1,000: $1,000.00
(3) N = 5, I = 14%, PMT = $110, FV = $1,000: $897.01
b.
Bond Calculator Inputs Calculator Solution
(1) N = 15, I = 8%, PMT = $110, FV = $1,000: $1,256.78
(2) N = 15, I = 11%, PMT = $110, FV = $1,000: $1,000.00
(3) N = 15, I = 14%, PMT = $110, FV = $1,000: $815.73
c.
Required Return Bond A Bond B
8% $1,119.78 $1,256.78
11% 1,000.00 1,000.00
14% 897.01 815.73
The greater the length of time to maturity, the more responsive the market value of the bond to changing required returns, and vice versa.
d. If Lynn wants to minimize interest rate risk in the future, she would choose Bond A with the shorter maturity. Any change in interest rates will impact the market value of Bond A less than if she held Bond B.
P6-20. Yield to maturity
LG 6; Basic
Bond A is selling at a discount to par. Bond B is selling at par value. Bond C is selling at a premium to par. Bond D is selling at a discount to par. Bond E is selling at a premium to par.
P6-21. Yield to maturity
LG 6; Intermediate
a. Using a financial calculator, the YTM is approximately 12.69%. The correctness of this number is proven by putting the YTM in the bond valuation model, as follows:
N = 15, I = 12.69%, PMT = $120, FV = $1,000; solve for PV: approximately $955
Since this PV matches the $955 market value of the bond, the YTM is equal to the rate derived on the financial calculator.
b. The market value of the bond approaches its par value as the time to maturity declines. The yield to maturity approaches the coupon interest rate as the time to maturity declines.
P6-22. Yield to maturity
LG 6; Intermediate
a.
Bond Approximate YTM Calculator Solution
A {$90 + [($1,000 − $820) / 8]} / [($1,000 + $820) / 2] = 12.36% 12.71%
B 12.00% 12.00%
C {$60 + [($500 − $560) / 12]} / [($500 + $560) / 2] = 10.38% 10.22%
D {$150 + [($1,000 − $1,120) / 10]} / [($1,000 + $1,120) / 2] = 13.02% 12.81%
E {$50 + [($1,000 − $900) / 3]} / [($1,000 + $900) / 2] = 8.77% 8.95%
b. The market value of the bond approaches its par value as the time to maturity declines. The yield to maturity approaches the coupon interest rate as the time to maturity declines. Case B highlights the fact that if the current price equals the par value, the coupon interest rate equals the yield to maturity (regardless of the number of years to maturity).
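The trial-and-error search a financial calculator performs for the YTM in P6-21 can be mimicked with a bisection sketch (a hypothetical helper, not the text's method):

```python
# Bisection search for yield to maturity, mirroring the calculator's
# trial-and-error in P6-21: $955 price, $120 annual coupon, 15 years,
# $1,000 par.

def bond_price(ytm, coupon, years, par=1000.0):
    """Price = PV of coupons + PV of par at rate `ytm`."""
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    return pv_coupons + par / (1 + ytm) ** years

def yield_to_maturity(price, coupon, years, par=1000.0, lo=1e-6, hi=1.0):
    # Price falls as the rate rises, so bisect on the sign of the gap.
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(mid, coupon, years, par) > price:
            lo = mid   # computed price too high -> rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

ytm = yield_to_maturity(955, 120, 15)
print(f"YTM = {ytm:.1%}")  # 12.7%
```

Plugging the result back into `bond_price` returns $955, which is the same self-check the solution performs with the calculator.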
P6-24. Bond valuation: semiannual interest
LG 6; Intermediate
N = years to maturity × 2, I = required return / 2, PMT = 0.10 × $1,000 / 2 = $50, FV = $1,000
Solve for PV = $
P6-25. Bond valuation: semiannual interest
LG 6; Intermediate
Bond Calculator Inputs Calculator Solution
A N = 24, I = 4%, PMT = $50, FV = $1,000: $1,152.47
B N = 40, I = 6%, PMT = $60, FV = $1,000: $1,000.00
C N = 10, I = 7%, PMT = $30, FV = $500: $464.88
D N = 20, I = 5%, PMT = $70, FV = $1,000: $1,249.24
E N = 8, I = 7%, PMT = $3, FV = $100: $76.11
18 P6-26. Bond valuation quarterly interest LG 6; Challenge N , I 12% 4 3.0%, PMT 3, PMT 0.10 $5,000 4 $125, FV $5,000 Solve for PV $4, P6-27. Ethics problem LG 6; Intermediate Student answers will vary. Some students may argue that such a policy decreases the reliability of the rating agency s bond ratings since the rating is not purely based on the quantitative and nonquantitative factors that should be considered. One of the goals of the new law is to discourage such a practice. Other students may argue that, like a loss leader, ratings are a way to generate additional business for the rating firm. 6. Interest Rates And Bond Valuation. Learning Goals. Learning Goals (cont.)
Chapter 6 Interest Rates And Bond Valuation Learning Goals 1. Describe interest rate fundamentals, the term structure of interest rates, and risk premiums. 2. Review the legal aspects of bond financing
Interest Rates and Bond Valuation
Interest Rates and Bond Valuation Chapter 6 Key Concepts and Skills Know the important bond features and bond types Understand bond values and why they fluctuate Understand bond ratings and what they mean
Topics in Chapter. Key features of bonds Bond valuation Measuring yield Assessing risk
Bond Valuation 1 Topics in Chapter Key features of bonds Bond valuation Measuring yield Assessing risk 2 Determinants of Intrinsic Value: The Cost of Debt Net operating profit after taxes Free cash 5. Interest Rates. Chapter Synopsis
CHAPTER 5 Interest Rates Chapter Synopsis 5.1 Interest Rate Quotes and Adjustments Interest rates can compound more than once per year, such as monthly or semiannually. An annual percentage rate (APR)
Chapter 5: Valuing Bonds
FIN 302 Class Notes Chapter 5: Valuing Bonds What is a bond? A long-term debt instrument A contract where a borrower agrees to make interest and principal payments on specific dates Corporate Bond Quotations
CHAPTER 14: BOND PRICES AND YIELDS
CHAPTER 14: BOND PRICES AND YIELDS PROBLEM SETS 1. The bond callable at 105 should sell at a lower price because the call provision is more valuable to the firm. Therefore, its yield to maturity should
Interest Rates and Bond Valuation
and Bond Valuation 1 Bonds Debt Instrument Bondholders are lending the corporation money for some stated period of time. Liquid Asset Corporate Bonds can be traded in the secondary market. Price at)
FIN 534 Week 4 Quiz 3 (Str) Click Here to Buy the Tutorial- str/ For more course tutorials visit Which of the following
CHAPTER 15: THE TERM STRUCTURE OF INTEREST RATES
CHAPTER 15: THE TERM STRUCTURE OF INTEREST RATES 1. Expectations hypothesis. The yields on long-term bonds are geometric averages of present and expected future short rates. An upward sloping curve
Understanding Fixed Income
Understanding Fixed Income 2014 AMP Capital Investors Limited ABN 59 001 777 591 AFSL 232497 Understanding Fixed Income About fixed income at AMP Capital Our global presence helps us deliver outstanding
FIN 472 Fixed-Income Securities Debt Instruments
FIN 472 Fixed-Income Securities Debt Instruments Professor Robert B.H. Hauswald Kogod School of Business, AU The Most Famous Bond? Bond finance raises the most money fixed income instruments types of:
Bonds and Yield to Maturity
Bonds and Yield to Maturity Bonds A bond is a debt instrument requiring the issuer to repay to the lender/investor the amount borrowed (par or face value) plus interest over a specified period of time.
- 15: THE TERM STRUCTURE OF INTEREST RATES
Chapter - The Term Structure of Interest Rates CHAPTER : THE TERM STRUCTURE OF INTEREST RATES PROBLEM SETS.. In general, the forward rate can be viewed as the sum of the market s expectation of the future
Exam 1 Morning Session
91. A high yield bond fund states that through active management, the fund s return has outperformed an index of Treasury securities by 4% on average over the past five years. As a performance benchmark
CHAPTER 22: FUTURES MARKETS
CHAPTER 22: FUTURES MARKETS PROBLEM SETS 1. There is little hedging or speculative demand for cement futures, since cement prices are fairly stable and predictable. The trading activity necessary to support
ANALYSIS OF FIXED INCOME SECURITIES
ANALYSIS OF FIXED INCOME SECURITIES Valuation of Fixed Income Securities Page 1 VALUATION Valuation is the process of determining the fair value of a financial asset. The fair value of an asset is
How credit analysts view and use the financial statements
How credit analysts view and use the financial statements Introduction Traditionally it is viewed that equity investment is high risk and bond investment low risk. Bondholders look at companies for creditworthiness,
PRESENT DISCOUNTED VALUE
THE BOND MARKET Bond a fixed (nominal) income asset which has a: -face value (stated value of the bond) - coupon interest rate (stated interest rate) - maturity date (length of time for fixed income payments) 4 Valuing Bonds
Chapter 4 Valuing Bonds MULTIPLE CHOICE 1. A 15 year, 8%, $1000 face value bond is currently trading at $958. The yield to maturity of this bond must be a. less than 8%. b. equal to 8%. c. greater than 3 Fixed Income Securities
Chapter 3 Fixed Income Securities Road Map Part A Introduction to finance. Part B Valuation of assets, given discount rates. Fixed-income securities. Stocks. Real assets (capital budgeting). Part C Determination
BOND - Security that obligates the issuer to make specified payments to the bondholder.
Bond Valuation BOND - Security that obligates the issuer to make specified payments to the bondholder. COUPON - The interest payments paid to the bondholder. FACE VALUE - Payment at the maturity of the
Alliance Consulting BOND YIELDS & DURATION ANALYSIS. Bond Yields & Duration Analysis Page 1
BOND YIELDS & DURATION ANALYSIS Bond Yields & Duration Analysis Page 1 COMPUTING BOND YIELDS Sources of returns on bond investments The returns from investment in bonds come from the following: 1. Periodic
Bond Valuation. What is a bond?
Lecture: III 1 What is a bond? Bond Valuation When a corporation wishes to borrow money from the public on a long-term basis, it usually does so by issuing or selling debt securities called bonds. A bond;
Module 1: Corporate Finance and the Role of Venture Capital Financing TABLE OF CONTENTS
1.0 ALTERNATIVE SOURCES OF FINANCE Module 1: Corporate Finance and the Role of Venture Capital Financing Alternative Sources of Finance TABLE OF CONTENTS 1.1 Short-Term Debt (Short-Term Loans, Line of
Term Structure of Interest Rates
Appendix 8B Term Structure of Interest Rates To explain the process of estimating the impact of an unexpected shock in short-term interest rates on the entire term structure of interest rates, FIs
Yield Curve September 2004
Yield Curve Basics The yield curve, a graph that depicts the relationship between bond yields and maturities, is an important tool in fixed-income investing. Investors use the yield curve as a reference
Goals. Bonds: Fixed Income Securities. Two Parts. Bond Returns
Goals Bonds: Fixed Income Securities History Features and structure Bond ratings Economics 71a: Spring 2007 Mayo chapter 12 Lecture notes 4.3 Bond Returns Two Parts Interest and capital gains Stock comparison:
CHAPTER 10 BOND PRICES AND YIELDS
CHAPTER 10 BOND PRICES AND YIELDS 1. a. Catastrophe bond. Typically issued by an insurance company. They are similar to an insurance policy in that the investor receives coupons and par value, but takes
Review for Exam 1. Instructions: Please read carefully
Review for Exam 1 Instructions: Please read carefully The exam will have 21 multiple choice questions and 5 work problems. Questions in the multiple choice section will be either concept or calculation
The Time Value of Money
The following is a review of the Quantitative Methods: Basic Concepts principles designed to address the learning outcome statements set forth by CFA Institute. This topic is also covered in: The Time
DFA INVESTMENT DIMENSIONS GROUP INC.
PROSPECTUS February 28, 2015 Please carefully read the important information it contains before investing. DFA INVESTMENT DIMENSIONS GROUP INC. DFA ONE-YEAR FIXED INCOME PORTFOLIO Ticker: DFIHX DFA TWO-YEAR
CALCULATOR TUTORIAL. Because most students that use Understanding Healthcare Financial Management will be conducting time
CALCULATOR TUTORIAL INTRODUCTION Because most students that use Understanding Healthcare Financial Management will be conducting time value analyses on spreadsheets, most of the text discussion focuses
Save and Invest Bonds
Lesson 6 Save and Invest Bonds Lesson Description In this lesson, students will learn that bonds are financial assets used to build wealth. Using the more familiar concept of bank loans, bonds are introduced
CHAPTER 8 INTEREST RATES AND BOND VALUATION
CHAPTER 8 INTEREST RATES AND BOND VALUATION Solutions to Questions and Problems 1. The price of a pure discount (zero coupon) bond is the present value of the par value. Remember, even though there are
Bond Valuation. Capital Budgeting and Corporate Objectives
Bond Valuation Capital Budgeting and Corporate Objectives Professor Ron Kaniel Simon School of Business University of Rochester 1 Bond Valuation An Overview Introduction to bonds and bond markets» What
Web. Chapter FINANCIAL INSTITUTIONS AND MARKETS
FINANCIAL INSTITUTIONS AND MARKETS T Chapter Summary Chapter Web he Web Chapter provides an overview of the various financial institutions and markets that serve managers of firms and investors who
6. Debt Valuation and the Cost of Capital
6. Debt Valuation and the Cost of Capital Introduction Firms rarely finance capital projects by equity alone. They utilise long and short term funds from a variety of sources at a variety of costs. No
Chapter 11. Stocks and Bonds. How does this distribution work? An example. What form do the distributions to common shareholders take?
Chapter 11. Stocks and Bonds Chapter Objectives To identify basic shareholder rights and the means by which corporations make distributions to shareholders To recognize the investment opportunities in
20. Investments 4: Bond Basics
20. Investments 4: Bond Basics Introduction The purpose of an investment portfolio is to help individuals and families meet their financial goals. These goals differ from person to person and change over
Global Financial Management
Global Financial Management Bond Valuation Copyright 999 by Alon Brav, Campbell R. Harvey, Stephen Gray and Ernst Maug. All rights reserved. No part of this lecture may be reproduced without the permission 5 HOW TO VALUE STOCKS AND BONDS
CHAPTER 5 HOW TO VALUE STOCKS AND BONDS Answers to Concepts Review and Critical Thinking Questions 1. Bond issuers look at outstanding bonds of similar maturity and risk. The yields on such bonds are used
CHAPTER 22: FUTURES MARKETS
CHAPTER 22: FUTURES MARKETS 1. a. The closing price for the spot index was 1329.78. The dollar value of stocks is thus $250 1329.78 = $332,445. The closing futures price for the March contract was 1364.00,
Prepared by: Dalia A. Marafi Version 2.0
Kuwait University College of Business Administration Department of Finance and Financial Institutions Using )Casio FC-200V( for Fundamentals of Financial Management (220) Prepared by: Dalia A. Marafi Version
Chapter 10. Fixed Income Markets. Fixed-Income Securities
Chapter 10 Fixed-Income Securities Bond: Tradable security that promises to make a pre-specified series of payments over time. Straight bond makes fixed coupon and principal payment. Bonds are traded mainly.
Interest Rate Options
Interest Rate Options A discussion of how investors can help control interest rate exposure and make the most of the interest rate market. The Chicago Board Options Exchange (CBOE) is the world s largest
Lesson 6 Save and Invest: Bonds Lending Your Money
Lesson 6 Save and Invest: Bonds Lending Your Money Lesson Description This lesson introduces bonds as an investment option. Using a series of classroom visuals, students will identify the three main parts
CHAPTER 5 INTRODUCTION TO VALUATION: THE TIME VALUE OF MONEY
CHAPTER 5 INTRODUCTION TO VALUATION: THE TIME VALUE OF MONEY Answers to Concepts Review and Critical Thinking Questions 1. The four parts are the present value (PV), the future value (FV), the discount
Investing in Bonds - An Introduction
Investing in Bonds - An Introduction By: Scott A. Bishop, CPA, CFP, and Director of Financial Planning What are bonds? Bonds, sometimes called debt instruments or fixed-income securities, are essentially
CHAPTER 7 INTEREST RATES AND BOND VALUATION
CHAPTER 7 INTEREST RATES AND BOND VALUATION Answers to Concepts Review and Critical Thinking Questions 1. No. As interest rates fluctuate, the value of a Treasury security will fluctuate. Long-term Treasury INTRODUCTION TO SECURITY VALUATION TRUE/FALSE QUESTIONS
1 CHAPTER 11 INTRODUCTION TO SECURITY VALUATION TRUE/FALSE QUESTIONS (f) 1 The three step valuation process consists of 1) analysis of alternative economies and markets, 2) analysis of alternative industries
INTERACTIVE BROKERS DISCLOSURE STATEMENT FOR BOND TRADING
INTERACTIVE BROKERS DISCLOSURE STATEMENT FOR BOND TRADING THIS DISCLOSURE STATEMENT DISCUSSES THE CHARACTERISTICS AND RISKS OF TRADING BONDS THROUGH INTERACTIVE BROKERS (IB). BEFORE TRADING BONDS YOU SHOULD
Madison Investment Advisors LLC
Madison Investment Advisors LLC Intermediate Fixed Income SELECT ROSTER Firm Information: Location: Year Founded: Total Employees: Assets ($mil): Accounts: Key Personnel: Matt Hayner, CFA Vice President
The consequence of failing to adjust the discount rate for the risk implicit in projects is that the firm will accept high-risk projects, which usually have higher IRR due to their high-risk nature, and
Practice Set #1 and Solutions.
Bo Sjö 14-05-03 Practice Set #1 and Solutions. What to do with this practice set? Practice sets are handed out to help students master the material of the course and prepare for the final exam. These sets
American Options and Callable Bonds
American Options and Callable Bonds American Options Valuing an American Call on a Coupon Bond Valuing a Callable Bond Concepts and Buzzwords Interest Rate Sensitivity of a Callable Bond exercise policy | http://docplayer.net/11251370-Answers-to-review-questions.html | CC-MAIN-2018-05 | refinedweb | 7,289 | 61.26 |
<<value-type>> A string convertible to std::string and with similar functionality More...
#include <dds/core/String.hpp>
<<value-type>> A string convertible to std::string and with similar functionality
In many cases, for performance reasons and other implementation requirements, the RTI Connext API uses dds::core::string instead of std::string. The most significant case is the C++ types that rtiddsgen generates from IDL.
A dds::core::string provides a subset of the functionality of a
std::string. It also provides automatic conversion to and from
std::string.
Creates an empty string.
Copy constructor.
Constructor from C string.
Constructor from
std::basic_string.
Creates a string of a given size, initialized to '\0'.
Gets the underlying C string.
Gets the size.
Assignment operator.
Returns if two strings are equal.
Returns if two strings are different.
Creates a
std::basic_string from this
dds::core::basic_string.
Creates a
std::string from this
dds::core::string.
Prints the string. | https://community.rti.com/static/documentation/connext-dds/5.2.0/doc/api/connext_dds/api_cpp2/classdds_1_1core_1_1basic__string.html | CC-MAIN-2022-33 | refinedweb | 156 | 54.49 |
Enumeration
Enumeration is a way of defining your own type that can accept a predefined set of values identified by names. For example, you want a variable that can only store directions such as east, west, north, and south. You can define an enumeration named Direction and declare all the possible values it can have inside the enumeration body. Let’s look at the syntax for an enumeration.
enum enumName { value1, value2, value3, . . . valueN }
We used the enum keyword then the name of the enumeration. As a practice in the C# world, enumeration names uses Pascal Casing. Following is the body of the enum. Inside it are the values identified by names. Here is how our Direction enumeration would look like.
enum Direction { North, East, South, West }
By default, the values that an enumeration can store are of type int. For example, North is mapped to the value of 0 and each subsequent values is 1 greater than the last value. So East has a value of 1, South has 2, and West has 3. You can modify this flow by specifying a specific value like this:
enum Direction { North = 3, East = 5, South = 7, West = 9 }
This time, the North has a value of 3, East has a value of 5, South has a value of 7 and West contains a value of 9. If for example you don’t assign a value for a value, then an underlying value is given to it automatically.
enum Direction { North = 3, East = 5, South, West }
After East, we didn’t assign a value to South, therefore, it’s value will be 1 greater than the value of 5 which is 6 and West will have a value of 1 greater than South, which is 7. You can also assign identical values for the enumeration items.
enum Direction { North = 3, East, South = North, West }
Can you guess their values now? The values for North, East, South, and West are 3, 4, 3, 4 respectively. We assign 3 to North so Eastwill have a value of 4. Then we assign the value of South with the value of North which is 3. If South has now a value of 3, then the next one which is West will have a value of 4.
Be cautious though when using this technique. If you assign the value of North to South, then these two directions would be equal and you can’t walk to North and South at the same time. But in some occasions, you might find using this technique if it is appropriate.
If you don’t want the values of your enumeration items to be int (which is the default). For example, you can use byte as the type of the enumeration items.
enum Direction : byte { North, East, South, West }
The byte data type can only hold a value of up to 255 so the number of values you can add to the enumeration is quite limited. Let’s see how to use an enumeration inside a C# program.
using System; namespace EnumerationDemo { enum Direction { North = 1, East, South, West } public class Program { public static void Main() { Direction myDirection; myDirection = Direction.North; Console.WriteLine("Direction: {0}", myDirection.ToString()); } } }
Example 1 – Enumeration Demo
Direction: North
First, we created our enumeration (lines 5-11). Note that we placed our enumeration outside class Program. Doing so will make our enumeration available throughout the program. You can also place the declaration of the enumeration inside a class to make it only available inside that class.
class Program { enum Direction { //Code omitted } static void Main(string[] args) { //Code omitted } }
Continuing with our program in Example 1. Inside the enumeration are the four possible values and each of them is assigned with value 1 to 4. Line 17 declared our variable that will store a Direction value. We follow this syntax:
enumType variableName;
where the enum type is the Enumeration Type such as Direction and the name of a particular enumeration value. After that, we assign a value to myDirection variable (line 19). We used this syntax:
variable = enumType.value;
We wrote the Enumeration type then a period and then the value from that particular enumeration type such as North. You can initialize a variable right away in its declaration like this:
Direction myDirection = Direction.North;
We then print the value of myDirection using Console.WriteLine() (line 21). Notice that we use the method ToString() to convert the value into its string equivalent.
Enumeration is such a handy tool that it appears ubiquitously in the .NET Framework. Imagine if there is no enumeration, you have to memorize numbers instead of words since enumeration values are actually numbers being aliased with names defined by you or by other people. You can also perform bitwise operations on numerous .NET enumerations to produce some exciting results as you will see in some of the tutorials in this site. Enumeration variables can also be converted to other types such as int or string or convert a stringvalue to an enumeration equivalent. | https://compitionpoint.com/enumeration/ | CC-MAIN-2021-31 | refinedweb | 839 | 63.59 |
I'm struggling on how to do this.
For example.
for i in range (1 ,100): if i % 3 == 2: print i, " mod ", 3, "= 2"
i think i need to understand what this does more.
Firstly what does % mean?
if i is the square route of 3?
i've attempted the question but i think i'm a while off because the example i worked to help me dont seem to be anything like this one.
i came up with:
def int(start,finish): while i in range(1,100): i % 3 == 2; print i, "mod", 3, "=2";
but as you can see i'm really just changing little bits and i suspect i'll be needing to change most of it.
Thanks in advance, and try to keep things simple for me please ^^ | https://www.daniweb.com/programming/software-development/threads/186130/changing-for-loops-to-while-loops-and-vice-versa | CC-MAIN-2018-51 | refinedweb | 134 | 87.15 |
W3C has morphed HTML into XHTML, but they still splash around in the same gene pool.
XHTML 1.0, a reformulation of HTML, appeared in 2000 as a W3C recommendation (). The simple difference between HTML and XHTML is that the tags in an HTML document are based on an SGML DTD, but the tags in an XHTML document are based on an XML DTD, and, as such, XHTML is an XML vocabulary.
An XHTML document must be well-formed XML, and may be validated against one of three official DTDs: transitional (), strict (), and frameset (). The transitional DTD permits some older HTML elements that have been eliminated in the strict DTD. The frameset DTD has elements for creating frames.
Here is an example of a strict XHTML document (Example 4-4).
Example 4-4. time.html
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns=""> <head> <title>Time</title> </head> <body style="font-family:sans-serif"> <h1>Time</h1> <table style="font-size:14pt" cellpadding="10"> <tbody align="center"> <tr> <th>Timezone</th> <th>Hour</th> <th>Minute</th> <th>Second</th> <th>Meridiem</th> <th>Atomic</th> </tr> <tr> <td>PST</td> <td>11</td> <td>59</td> <td>59</td> <td>p.m.</td> <td>true</td> </tr> </tbody> </table> </body> </html>
This document looks remarkably similar to an HTML document in text
form and in a browser (see Figure 4-2). It begins
with an optional XML declaration, followed by a document type
declaration for the string XHTML 1.0 DTD. The XHTML 1.0 namespace
() is declared on the
html element. The remaining markup is classic HTML
and CSS; however, the elements are all lowercase, as mandated by
XHTML 1.0.
You can validate this document using the W3C MarkUp Validation Service at (see Figure 4-3). You can submit a document on the Web or upload a local document, as shown. The outcome of successfully validating this document against the strict XHTML DTD is shown in Figure 4-4.
After XHTML 1.0, XHTML started trending toward modularization. This means that the DTDs were modularized—divided into smaller files—to facilitate use on smaller devices that don’t need or want an entire XHTML DTD. Work is now underway at the W3C on XHTML 2.0 (), but there are several other XHTML-related specs sandwiched in between 1.0 and 2.0 ().
XHTML Basic:
Modularization of XHTML:
XHTML 1.1—Module-based XHTML:
XHTML Events:
XHTML-Print:
No credit card required | https://www.oreilly.com/library/view/xml-hacks/0596007116/ch04s04.html | CC-MAIN-2019-18 | refinedweb | 425 | 65.42 |
Introducing Protocol-Oriented BDD in Swift for iOS Apps: Part 1
Introducing Protocol-Oriented BDD in Swift for iOS Apps: Part 1
In this step-by-step tutorial, see how to use Swift protocols, extensions, and enumerations with Behavior Driven Development, or BDD, in iOS mobile apps.
Join the DZone community and get the full member experience.Join For Free
Swift is truly a protocol-oriented language. Since it was introduced in WWDC 2014 and open sourced in December 2015, it has become one of the most popular languages for developing iOS applications. You can watch this WWDC video to know about protocol-oriented programming with Swift. In the recent days, the protocol-oriented approach is definitely dominating the object oriented approach, at least for Swift. It means we have to gradually forget about writing classes and start writing protocols instead. In this detailed and step by step blog post, we will see how to drive development from behavior, a.k.a BDD, using Swift protocols, extensions, and enumerations.
BDD+Swift+XCTest
Behaviour Driven Development a.k.a BDD in the Swift or iOS development world is always challenging because of lack of proper BDD tools like Cucumber for Ruby, SpecFlow for .NET or Behat for PHP. There is a Swift library XCTest-Gherkin from Net-A-Porter group and objective-C library Cucumberish from Ahmed Ali, which are available for iOS development to achieve BDD but they are not as powerful as from other languages. It’s still possible to make great use of Swift features like protocols, extensions and enumerations while writing acceptance tests or doing Behaviour Driven Development a.k.a BDD. We will explore that option in detail in this post.
BDD Concepts
Just in case you are new to BDD a.k.a Behaviour Driven Development is a process of bridging the communication gap between technology and business people using the same language so that business people can contribute to specification and programmers can use those specifications to write the code.
BDD is a term coined by Dan North to overcome the pitfalls in Test Driven Development and bridge the communication gap between business and technology. In a summary, it works like this:
- Write some user stories collaboratively with the team.
- Capture the intended behavior in the form of features before writing any code. Refer to the story format here.
- Think of all possible scenarios that cover the intended functionality, reducing the risk of creating bugs. Write a scenario title and write steps in the domain specific language (DSL) or human-readable format like Gherkin.
- Implement the behavior in the form of step definitions. You can refer to the step definition sample here.
- Passing scenarios verify that the implementation of the behaviour has been successful.
I assume that you probably aware of BDD process and we will jump straight into the topic.
Protocol Oriented BDD
At this point, we have basic BDD concepts; let’s apply those principles in a Protocol Oriented way.
- Write requirements collaboratively with the team anywhere in JIRA, Spreadsheet, or something similar.
- Capture intended behavior in the form of Swift Protocol which is similar to the Features format in Gherkin.
- Think of all possible scenarios in the form of XCTest test methods which is similar to scenario titles in the Gherkin.
- Write an XCTest class conforming to protocol; we need to implement all the requirements in our tests.
- Write steps in the form of methods inGiven/When/Then, a.k. a GWT-like format e.g
givenIAmOnTheHome Screen().
- Implement steps as an extension to the protocol defined for the particular feature.
- Abstract UI elements in the form of enumeration for the particular feature.
Now that we have an idea how to implement BDD steps in a protocol-oriented way, let’s dive into the code.
Protocol Oriented BDD in Action
Let’s build an app which greets users when the press the Greet button, using a protocol-oriented BDD approach. The app has following main requirements:
- The app should have a home screen with a Greet button.
- When the user presses the Greet button, they should see the welcome message. ‘Welcome to POP.’
That’s a very simple application. It’s time to dive into Xcode to build this app.
- Fire up Xcode and create new project -> iOS -> Single View Application.
- Name the application as ‘Greeter.’
- Select the box ‘Include UI Tests.
- Open the
GreeterUITest.swiftfile and delete the comments to make it bit cleaner.
Write a Protocol
Now we have template code for our new app. We also have our requirements ready. Let’s write a protocol in the UI test target so that we can list all the requirements for the Greeter feature. Create a new file called
Greeter+Protocol.swift and add our requirements.
protocol Greetable { func testHomeScreenHasGreetButton() func testUserShouldGetWelcomeMessageOnceEntered() }
Let’s make our test GreeterUITest.swift to confirm to Greetable protocol; that means we must have those methods in the XCUITest class in order to compile the test target. Let’s add them so that our test file will look like this:
import XCTest class GreeterUITests: XCTestCase, Greetable { override func setUp() { super.setUp() continueAfterFailure = false } override func tearDown() { super.tearDown() } func testHomeScreenHasGreetButton() { } func testUserShouldGetsWelcomeMessageOnceEntered() { } }
What we have done so far can be seen in the GIF below:
Write Given/When/Then Steps in The Extension
At this point, we wrote scenario titles in terms of XCTest methods. Now, it’s time to write Given/When/Then, a.k. a GWT and start implementing it. Remember, we don’t necessarily have to follow Gherkin syntax here, so feel free to use any format similar to GWT. We will write some GWT in the test methods which looks like this:
func testHomeScreenHasGreetButton() { givenTheAppIsLaunched() thenIShouldSeeGreetButton() } func testUserShouldGetWelcomeMessageOnceEntered() { givenTheAppIsLaunched() whenITapGreetButton() thenIShouldSeeWelcomeMessage() }
Now that, we have our GWT are ready but our test target will still not compile as we have to implement these steps. As discussed earlier, we will be using Swift Extensions to implement step definitions. It’s a good time to make use of them to implement steps as an extension to the Greetable protocol. Let’s create a
Greeter+Extension.swift file to add the extension to the Greetable protocol with empty methods for GWT like this:
import XCTest extension Greetable { func givenTheAppLaunched() { } func thenIShouldSeeGreetButton() { } func whenIPressGreetButton() { } func thenIShouldSeeWeocomeMessage() { } }
Use XCUI API to Drive Behaviour From GWT
At this stage, our target should compile but it’s not doing anything at the moment. We will drive the behavior for the first scenario using XCUITest API to launch the app and check if the button exists. The sample code for these will look like this:
func givenTheAppLaunched() { XCUIApplication().launch() } func thenIShouldSeeGreetButton() { XCTAssertTrue(XCUIApplication().buttons["Greet"].exists) }
Let’s try to execute the test
testHomeScreenHasGreetButton() from the Xcode Test navigator. We will see that first step for launching an app will pass but the second one for the Greet button will fail. It’s true because a button with accessibility identifier ‘Greet’ doesn’t exist yet.
Implement The Behaviour to Make Steps Pass
Let’s implement a button in the app in order to make this test pass. Follow the steps below:
- In the
Main.Storyboard, drag a button, and add accessibility identifier as 'Greet.'
- Click on the Assistance Editor to bring the
ViewController.swift.
- From the storyboard, select the button and press CTL + Drag it to view controller class.
- Select ‘Action’ as connection and name the function as ‘GreetUser.’
Now we have a button on the home screen with accessibility identifier ‘Greet,’ which isn’t doing anything, but our first scenario checks existence of the button. Let’s run the first scenario from Xcode and watch that the test is passing!
Congratulations ! You have implemented your first scenario using protocol-oriented BDD approach. Let’s carry on and implement another scenario too.
In the second scenario, we have to tap the button, and once button is tapped, the user should see the message ‘Welcome to POP.’ In our extension, there is a tep to tap the button and display the message. Add the following code to this step:
func whenITapGreetButton() { XCUIApplication().buttons["Greet"].tap() } func whenITapGreetButton() { XCUIApplication().buttons["Greet"].tap() } func thenIShouldSeeWelcomeMessage() { XCTAssertTrue(XCUIApplication().staticTexts["Welcome to POP"].exists) }
Now try to execute the second scenario; you will observe that first step will pass as we have a button with accessibility identifier ‘Greet.’ It will tap on the button and look for the message text ‘Welcome to POP’ but it will fail. It’s true because we haven’t implemented that welcome message yet.
In order to make the second scenario pass, we have to implement the welcome message. Follow these steps to do that:
- In the
Main.Storyboardadd the label.
- Bring up view controller using Assistance Editor.
- CTL + drag the label to View Controller, selection connection as Outlet and name it as WelcomeText.
Now that we have our label in place, we have to tell the button that when button is pressed then label should change to text ‘Welcome to POP.’ Add the following code the function associated with the button:
@IBOutlet weak var welcomeText: UILabel! @IBAction func greetUser(_ sender: Any) { welcomeText.text = "Welcome to POP" }
That’s it! Now execute the second scenario from Xcode and you will see it’s passing.
Watch it in action:
This tutorial will be continued in Part 2, coming }} | https://dzone.com/articles/introducing-protocol-oriented-bdd-in-swift-for-ios?fromrel=true | CC-MAIN-2020-24 | refinedweb | 1,559 | 56.96 |
Opened 8 years ago
Closed 7 years ago
#3171 closed merge (fixed)
threadDelay causes Ctrl-C to be ignored when running interpreted code
Description
the following program:
import Control.Concurrent import Control.Concurrent.MVar main = threadDelay 0 >> newEmptyMVar >>= takeMVar
will not respond to Ctrl-C when run via runghc, but does respond to Ctrl-C when compiled and executed.
If the threadDelay is removed, it does respond to Ctrl-C both compiled and interpreted.
In 6.10.1, Ctrl-C has the normal effect whether the program is run compiled or interpreted.
The editline segmentation fault bug prevented us from testing the behavior in ghci.
Change History (3)
comment:1 Changed 8 years ago by simonmar
- difficulty set to Unknown
- Owner set to simonmar
comment:2 Changed 7 years ago by simonmar
- Milestone set to 6.10 branch
- Owner changed from simonmar to igloo
- Type changed from bug to merge
comment:3 Changed 7 years ago by igloo
- Resolution set to fixed
- Status changed from new to closed
All merged.
Note: See TracTickets for help on using tickets.
The new signal handling code in 6.10.2 broke Ctrl-C in GHCi.
The following patches have been pushed to fix it:
and in libraries/base: | https://ghc.haskell.org/trac/ghc/ticket/3171 | CC-MAIN-2016-44 | refinedweb | 205 | 64.1 |
Hello. I am working on this and it asks for me to print a series of names in ascending order then descending order. This is what the projects asks:
Call both your project and class AscendDescend. The first column should be in ascending order and the second in
descending order. The output should appear as below (Be sure to include the headers):
Ascend Descend
Agnes Thomas
Alfred Mary
Alvin Lee
Bernard Herman
Bill Ezra
Ezra Bill
Herman Bernard
Lee Alvin
Mary Alfred
Thomas Agnes
This is my code so far:
Code java:
import java.util.*; public class twoforone { public static void main(String args[]) { String theArray[] = {"Bill", "Mary", "Lee", "Agnes", "Alfred", "Thomas", "Alvin", "Bernard", "Ezra", "Herman"}; for(int j = 0; j < theArray.length; j++) { Arrays.sort(theArray); System.out.println(theArray[j] + " "); } System.out.println(" "); { for(int m = 0; m < theArray.length; m++) { Arrays.sort(theArray); System.out.println(theArray[m] + " "); } } } }
I cannot get it to print in descending order. Please send me feed back on this. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/36022-two-one-printingthethread.html | CC-MAIN-2017-30 | refinedweb | 167 | 65.01 |
Agenda
See also: IRC log
RESOLUTION: minutes of 2006-11-27 accepted
bob: a couple of new issues
... Jonathan noted capitalization problems
... David Illsley's comment on namespaces
DavidI: not sure if my namespace comment needs a new issue
RESOLUTION: close new issue on capitalization with Jonathan's proposal
bob: Paul Knight sent his review
to the mailing list a couple of minutes ago
... people should read and review and be prepared to discuss this on next weeks call
Paul: I'd be happy to discuss at this point . . .
Bob: given the late arrival, let's postpone for next meeting
RESOLUTION: discuss Paul's comments next meeting
<bob> scribe: bob
Gil: I wrote what I understood to
be out agreement based on the lsat call
... specifically we didn't want the absence of the assertion to be its negation since that would lead to the cr33 trap
... a number of points made on the list have called into question some of what I thought we had alredy decided
Anish: I thought that the agreement ws to make the assertions both policy and wsdl markers
Gil: We had decided against that
in the prior concall.
... wsdl would be a coarse grain marker, but policy would fine tune them
Anish: I was explicitly pushing that the new assertions be usable both in wsdl and policy
Gil: Does not make sense to
discuss anon / nonanon unless addressing is already inficated
as supported
... WS-Policy is pretty mechanical. it would need domain specific smarts
MarcG: my recollection matches what Gil said
MarcG: I don't dispute the notion
of using matching assertions
... The new proposal nests the Anon/NonAnon under UsingAddressing
... I'm having a hard time with how we would use UsingAddressing both as a WSDL marker and a policy assertion if you can nest Anon/NonAnon underneath
... I thought we had agreed to leave UsingAddressing alone
Bob: so your point is that you don't think the WSDL marker and the policy assertion can have the same QName?
MarcG: It's okay as long as
UsingAddressing is a simple policy assertion
... but when you start nesting other policy assertions beneath it, I can't see it
DavidI: I haven't thought that through
Glen: It shouldn't be a problem
Anish: There are two issues: do
we want to provide the same capabilities in both WSDL
extensions and Policy
... and do we want to nest assertions
... my understanding is that we were going to provide all the capabilities as both WSDL extensions and in WS-Policy
... Gil, I don't see the use case
Gil: server advertises "NonAnonymousResponses" client looks for "UsingAddressing" . . they don't intersect
Katy: Anish, two sub-issues of
your first issue
... providing UsingAddressing functionality in WSDL
... and providing "AnonymousResponses" functionality in WSDL
... the group determined that nobody needed "Anon" functionality in WSDL
Anish: the idea was that existing impls that don't understand the new WS-Policy assertions need "anon" in WSDL
Katy: concerned because we asked if anyone needed anon in WSDL and everyone said they didn't
Anish: we certainly need an "anon" marker in WSDL
Katy: as long as someone needs it
and someone will implement it, we should include it
... but it makes the spec a lot more complicated
Anish: You think UsingAddressing is complicated?
Katy: No, its not but the "anon" and "nonAnon" markers make things a lot more complicated
Anish: I think it doesn't.
DavidI: There are two different
ways of looking at these separate assertions.
... if we have "AnonSupported" as implying "UsingAddressing", but this breaks the intersection rules
<anish> btw, i don't care as much about whether it is a nested assertion or not, i care more about allowing it to be used in WSDL
DavidI: if we have separate
assertions so "AnonSupported" doesn't imply "UsingAddressin"
you end up with combinations that are invalid
... these invalid alternatives require domain-specific knowledge to detect and exclude
Phillpe: Katy commented that if one person needs something we have to do it. I don't agree.
Bob: Who needs "AnonResponses" in WSDL? Just Anish or anyone else ?
<silence>
Bob: Anish can you make the case that "AnonResponse" is generally useful as a WSDL extension?
Anish: I thought I did.
... It will take a while for people to get on the WS-Policy band-wagon
... Its simpler to add support for a WSDL extension
... That it is to make everyone to go to WS-Policy to learn abou Anon/NonAnon
Phillpe: Can we do a quick straw-poll?
Bob: <asks for anynone else who thinks AnonResponses need to be a WSDL extension>
<silence>
Bob: <asks counter-question>
<GlenD> ah, blessed apathy :)
RESULTS: 1 need, 5 don't need (no), 1 maybe
<Zakim> plh, you wanted to try to answer the charter question
<plh> "use of Web Services Addressing components in WSDL 1.1 and WSDL 2.0 message exchange patterns and providing the mechanisms for defining Web Services Addressing property values in WSDL 1.1 and WSDL 2.0 service descriptions"
Philippe: DavidH's question is a good one. Depending upon how you read the charter it could be construed that we need to do it in both places.
Anish: The charter talks about WSDL and not WS-Policy. It seems odd to discuss things that have nothing to do with WSDL in the "WSDL Binding Document"
Bob: We have an open issue to discuss the title of the document.
Anish: Our charter doesn't talk
about WS-Policy at all.
... Some people talked about the complexity. Whichever way we go (nested or not) making the assertions dual-purpose won't complicate anything.
... I'm willing to put together a proposal to show people how to use any assertions in both WSDL and WS-Policy
Bob: I was beginning to hope that
we were making progress. I don't want to derail that progress.
I would like to proceed along the lines
... of doing solely policy assertions then revisit the WSDL extension issue once we get those hammered out.
... Would that be acceptable?
Katy: I'm against the idea of
trying to express "AnonResponses" and "NonAnonresponses" in
WSDL
... If we do it in both places we have the problem of what to do when the WSDL and Policy disagree
... sticking to just UsingAddressing minimizes the potential for disagreements
Anish: Is anyone suggesting getting rid of dual-use for UsingAddressing?
Bob & Katy: No
Anish: The problems of WSDL and
Policy disagreeing are already there for UsingAddressing (cites
example)
... Those conflicts need to be resolved in any case so its no big deal to apply those solutions to AnonymousResponses and NonAnonymousResponses
Katy: Yeah same pattern, but more
work in each case.
... Fine, if we need both. But I don't think we need both.
... A lot of extra processing for something that nobody seems to need very much.
Bob: Our straw poll (if turned
into a formal vote) shows abstain ruling, followed by
'no'
... We could settle this and move forward by a formal vote.
... On the other hand, we could just defer this discussion until we figure out how to express what we want using WS-Policy
... I disagree that the shortest path is to try and address the WSDL extension issue now.
Katy: OK
Bob: I note that Chris Ferris has thrown a log on the fire with regards to wsdl:required="true"
<Zakim> gpilz, you wanted to discuss Chris' point
DavidO: some people are objecting
to the description of UsingAddressing based on what is need by
the WS-Policy intersection rules
... Some people say we can't have WS-Addr specific policy handling. I'm not sure if I agree with that.
... WS-Policy is in last call and WS-Addr seems to have a fairly simple use of WS-Policy. If we need WS-Policy to do something different we should tell them now.
... Its interesting that one of the first WG's to use WS-Policy is having such problems.
MarcG: If there are issues found
with WS-Policy we should let them know.
... I think the problems with the intersection algorithm are overblown. We should just focus on the assertions that we need.
... WS-SX has a large set of complex assertions and they don't seem to be having problems.
Tom: With nested assertions we don't have problems with the intersection algorithm. I agree with ChrisF . . .
Bob: Want to get back to prior call. Someone made a statement that nesting policy assertions is just too complicated. Has that changed?
Tom: We were also talking about nesting parameters which is pretty complicated.
MarcG: I was on the previous
calls. I agree that nesting policy assertions is not that
hard.
... Though I question using "UsingAddressing" as the top-level container.
Bob: Are people ok with using UsingAddressing as a container and putting AnonResponses and NonAnonResponses as child policies?
Tony: The reason that we split UsingAddressing (the WSDL marker) and AddressingRequired (the policy assertion) was because of this split in semantics
Anish: use framework attribute to define required or optional
TonyR: That won't work because there would be different meanings in different environments
Anish: It is a question of what is the default, the defaults may be different, but the semantics are the same
<dorchard> +1 to TonyR's points.
TonyR: If the defaults are different then they have different meanings
DavidI: When I think about this
stuff, I try to think about it in WS-Policy-normal form (no
"optional")
... If we don't have text saying that the normal form means something different, it doesn't.
Anish: Are you saying that "wsp:optional=true" means that the non-missing case means that addressing is required?
DavidI: Yes.
Bob: Do we have specific changes to Gil's proposal that we would like to make?
Marc: What about David's proposal on Friday?
Bob: Those are commments against Gil's
MarcG: But it changed nesting?
Bob: True
Gil: But I think Bob wants to see a proposal with David's changes to Gil's proposal.
Bob: Right
... Do folks agree that this is the direction we want to go in?
Tony: I remain concerned with one
thing about the nesting.
... If the outer container can't have "wsp:optional=true" how do you get the inner assertions to support the required ???
Paco: The point is that "optional" applies to the whole thing
Tony: How can you express that you want an inner assertion but not an outer assertion.
Gil: How/why would I say "I support non-anonymous responses" without saying "I support WS-Addr"?
Bob: Do we agree on the direction?
MarcG: I want to know if we are going to make UsingAddressing act as a container.
DavidI: Let's rename the UsingAddressing policy assertiong to AddressingRequired
MarcG: I would like that.
Anish: Would that be a replacement for UsingAdressing or in addition to?
DavidI: That would be a point for further disucssion.
Bob: I think the point is to distinguish the policy assertion from the WSDL marker.
Anish: So "UsingAddressing" is a WSDL marker and "AddressingRequired" is a policy assertion?
Bob: Yes
MarcG: I think we whould leave "UsingAddressing" alone.
Bob: Leave "UsingAddressing" alone as a WSDL marker.
MarcG: I thought we had agreed to leave UsingAddressing completely alone.
Bob: I thought we were.
MarcG: No, right now you can use UsingAddressing as a policy assertion.
Anish: How can we have both a
UsingAddressing policy assertion and a AddressingRequired
policy assertion?
... They compete and one is a superset of the other (provides examples)
MarcG: I'd like to see how this
develops. But when you start doing nested assertions you have
to express that assertion (it can't be defaulted).
... That is, if UsingAddressing has child assertions, you have to express the values of those assertions, you can't leave it empty.
... Whereas, today, you could have a policy with UsingAddressing and no child elements.
... There are implemenations that currently rely on the use of UsingAddressing as a policy assertion.
Tony: I'm reluctant to be bound
to someone's early implementation of a draft spec. They knew
the risks when they did this.
... We don't have to be bound by their decisions.
Bob: Let's figure out what we need to do then figure out how to minimize impact on existing implementations.
MarcG: I agree with Bob. We can't radically change the marker and keep the same namespace. There are implemations out there the use the current namespace.
<plh> "This namespace URI will be updated only if changes are made to
<plh> the document are significant and impact the implementation of the
<plh> specifications.This namespace URI will be updated only if changes are made to
<plh> the document are significant and impact the implementation of the
<plh> specifications."
<plh>
Tony: I'm sorry, but we were still in draft stage.
(someone): We promised that we would change the namespace if we changed the semantics?
Bob: DavidI agree to take an AI to update the current proposal to include nested assertions?
DavidI: Yes.
<scribe> ACTION: ITEM to David Illsley to update Gil's proposal to nest AnonResponse/NonAnonResponses [recorded in]
Tony: What about "WSDL Binding and Related Matters"?
Bob: "WSDL Binding, Anti-Poverty, and Peace Document"
Anish: Have we ruled out a separate document?
Bob: Phillipe do you have a position on that?
Phillipe: No
Bob: If there are conflicts between WSDL and POlicy it would be handy to have them together.
Anish: If they are not dual-use there is no conflict.
Tony: One could say one thing and one could say another.
Tony & Anish: (back and forth on possible conflicts between WSDL markers and Policy assertions)
Bob: We are well overdue on our
mandatory heartbeat requirement. We need to publish a new
version soon. Any addition of WS-Policy stuff needs
... a corresponding change to the title.
)?
Bob: We can decide to split the doc once we have figured out the content
Gil: New name should best be figured out on the mailing list
Bob: True
Anish: If everything is dual-use then it should all be in the same doc
<dorchard> Description Document
Anish: Separate use would seem to require separate documents.
<dorchard> Metadata Document
Bob: WS-Addressing Metadata
Document
... I'd like to get to a heartbeat document very shortly.
<dorchard> I take full credit for the brilliant suggestion.
Bob: I'm hopeful we'll have a section on WS-Policy assertions to add on next weeks call.
Bob: If people in this group has
a set of comments it might be helpful to combine those comments
as a group response.
... There are only three calls between now and the end of the review period for WS-Policy
... Would like to make sure we can do what we need using WS-Policy and I would like to do that in parallel with the review period for WS-Polcy.
MarcG: The WS-Policy primer has examples that show the use of the UsingAddressing assertion. Good place to start . . .
Tony: You are suggesting that if we change UsingAddressing, they might not be happy?
MarcG: No.
Anish: Is the Primer in last call?
(all): No
Bob: But we should look at the Primer as a good place to start.
MarcG: I put the link in IRC
Bob: For next weeks call we have the review of Paul Kight's AI, a review of the propsal from David Illsley.
ADJORNED | https://www.w3.org/2002/ws/addr/6/12/04-ws-addr-minutes.html | CC-MAIN-2016-22 | refinedweb | 2,577 | 63.29 |
Injecting Spring Beans with data from Config
In one of the projects, we had to externalize the Config file to be in a properties file. All configurations related to the application were to be stored in that file. We had a few spring beans, like the JMS connection factory, which needs the brokerURL which should ideally be an external configuration as it is will be environment specific. An excellent approach to this issue could be seen in the mail plugin, wherein the bean is injected from config values.
An example resource.groovy similar to that would be
import org.codehaus.groovy.grails.commons.ConfigurationHolder as CH beans={ sampleBean(com.intelligrape.Sample){ //Sample is a bean with a property value value = CH.config.app.sample.value?:'defaultValue' //app.sample.value is a property in the config.groovy file //If no config property is specified, set to 'defaultValue' } }
Hope this helps
–Vivek
That would be perfect! Thanks for the tip Tim.
Nice, but you should be able to use Springs property override configuration. Set the default value in your resource.groovy as you have above. But when you want to override the property you can do this in you Config.groovy file.
beans {
sampleBean {
value = “Whatever….”
}
}
or beans.sampleBean.value = “Whatever….”
This way you can also set that value to be environment specific.
See Property Override Configuration for more information. | https://www.tothenew.com/blog/injecting-spring-beans-with-data-from-config/ | CC-MAIN-2020-34 | refinedweb | 228 | 51.34 |
0
from PIL import Image, ImageDraw from random import randint picture = Image.new("RGB", (600, 600)) artist = ImageDraw.Draw(picture) for i in range(100): x1, y1 = randint(0, 600), randint(0, 600) x2, y2 = randint(0,600), randint(0,600) color = (randint(0, 255), randint(0, 255), randint(0, 255)) width = randint(2, 20) artist.line([x1, y1, x2, y2], color, width) #picture.convert("RGB") picture.show()
i've made this program just to demonstrate my problem. The thing is that when i run this program Windows Photo Viewer opens up but theres no image just this message:
WPM can't open this picture because either the picture is deleted, or it's in a location that isnt available
Now i'm using python 2.7 and W7 and i would really apreciate help because i cannot program further because i cannot see the result
Edited 6 Years Ago by bomko: n/a | https://www.daniweb.com/programming/software-development/threads/308081/problem-with-pil | CC-MAIN-2017-04 | refinedweb | 154 | 61.36 |
So, I am learning ruby outside of my job, which is not programming
related. Eventually want to move into programming though for a job,
because I do enjoy this stuff.
However, I am currently a little confused on the current puzzle I’m
working on and want to make sure I’m going down the right path.
Basically, I will give a snippet of the rspec puzzle that I’m working on
right now (obivously there is more, but doing this to save space and
because I feel if I can put down the correct path, I can try to figure
the rest out):
describe Name do
before do
@name = Name.new
end
describe ‘title’ do
it ‘should capitalize’ do
@name.title = “joe”
@name.title.should == “Joe”
end
it 'should capitalize every word' do @name.title = "stuart smith" @name.title.should == "Stuart Smith" end
end
My solution so far has been this:
class Name
attr_accessor :title
def initialize(title=nil)
@title=title
end
def title
@title.capitalize
end
end
I have always been used to setting up classes like this. Where I first
make an initialize method and then “accessor” methods (I think that is
the name for these. The methods that actually do stuff for classes).
Am I on the correct path here, or am I setting up this class all wrong
for these tests? The above passes the first test. I feel I may need to
do a loop for the second test.
Can someone let me know if I am doing this all wrong? Or give me some
guidance on how to write good code for this situation?
Thanks for any help. | https://www.ruby-forum.com/t/trying-to-learn-ruby-through-rspec-puzzles-help-with-this-one-class-related/238605 | CC-MAIN-2021-25 | refinedweb | 275 | 73.68 |
Hello everybody,
I have a script called "HouseProperties" and I want all the objects with this script to be in my dictionary of string and gameobjects Dictionary<string, gameObject> houseProp = new Dictionary<string, gameObject>();
Dictionary<string, gameObject> houseProp = new Dictionary<string, gameObject>();
Is it possible to write somthing like GameObject houseValue = HouseProperties.gameObject because this script is located on a gameobject?
GameObject houseValue = HouseProperties.gameObject
I got an error of course, is there a way to write it correctly?
Thank you!
An error? I can't see one because you're not posting the code, the error and a way for us, out here, to read that.
That's said, it will also be better if we understand your goals, not just what you're doing. The reason is that your goal is a larger subject and what you're doing might not be the best way to reach that goal, and we might be able to recognize that.
Only if we know what the goal is, though.
So, what are you using the dictionary for, and have you considered if the transform might be a better hook (maybe, maybe not, but then I have no real idea what you're doing).
Answer by UnityCoach
·
Jul 28, 2018 at 10:15 PM
So, you want to collect all script instances, and store their game objects and names in a dictionary ?
You can use Linq for that, it makes it easy to read.
using System.Linq;
Dictionary<string, GameObject> housesDictionary;
HouseProperties [] allHouses = FindObjectsOfType<HouseProperties>();
housesDictionary = allHouses.ToDictionary(v => v.gameObject.name, v => v.gameObject);
Thank you @UnityCoach thats exactly what I've been looking for I'll test it and update the accepted49 People are following this question.
Can't add Orbital script - "The script needs to derive from MonoBehaviour!"
0
Answers
How do you make a random 2D shape in Unity?
0
Answers
Objects, GameObjects, Lists, and lists of GameObjects with Objects, oh my!
2
Answers
How To Find GameObject From List And Add To Player List
0
Answers
How to get the Gameobject a script is attached to
2
Answers
EnterpriseSocial Q&A | https://answers.unity.com/questions/1535444/assigning-a-script-as-a-game-object-in-dictionary.html | CC-MAIN-2021-17 | refinedweb | 354 | 61.36 |
This article describes in detail the steps I took in setting up Elasticsearch as the search provider for Pony Foo. I start by explaining what Elasticsearch is, how you can set it up to make useful searches through the Node.js API client, and how to deploy the solution onto a Debian or Ubuntu environment.
A while back I started working at Elastic – the company behind Elasticsearch, a search engine & realtime analytics service powered by Lucene indexes. It's an extremely exciting open-source company and I'm super happy here – and we're hiring, drop me a note!
Thrilled to announce I’ve started working at @elastic !
Working on Kibana (ES graphs)
Great fun/team! Hiring!
— Nicolás Bevacqua (@nzgb) March 29, 2016
Possible use cases for Elasticsearch range from indexing millions of HTTP log entries, analyzing public traffic incidents in real-time, streaming tweets, all the way to tracking and predicting earthquakes and back to providing search for a lowly blog like Pony Foo.
We also build Kibana, a dashboard that sits in front of Elasticsearch and lets you perform and graph the most complex queries you can possibly imagine. Many use Kibana across those cool service status flat screens in hip offices across San Francisco.
But enough about me and the cool things you can do with Elastic’s products. Let’s start by talking about Elasticsearch in more meaningful, technical terms.
What is Elasticsearch, even?
Elasticsearch is a REST HTTP service that wraps around Apache Lucene, a Java-based indexing and search technology that also features spellchecking, hit highlighting and advanced analysis/tokenization capabilities. On top of what Lucene already provides, Elasticsearch adds an HTTP interface, meaning you don't need to build your application using Java anymore; and it is distributed by default, meaning you won't have any trouble scaling your operations to thousands of queries per second.
Elasticsearch is great for setting up blog search because you could basically dump all your content into an index and have it deal with users' queries, with very little effort or configuration.
Here’s how I did it.
Initial Setup
I'm on a Mac, so – for development purposes – I just installed elasticsearch using Homebrew.
brew install elasticsearch
If you're not on a Mac, just go to the download page and get the latest version, unzip it, run it in a shell, and you're good to go.
Once you have the elasticsearch executable, you can run it on your terminal. Make sure to leave the process running while you're working with it.
elasticsearch
Querying the index is a matter of using curl, which is a great diagnostics tool to have a handle on; a web browser, pointed at the REST endpoint on port 9200 (the port Elasticsearch listens on by default); the Sense Chrome extension, which provides a simple interface into the Elasticsearch REST service; or the Console plugin for Kibana, which is similar to Sense.
There are client libraries that consume the HTTP REST API available for several different languages. In our case, we'll use the Node.js client: elasticsearch.
npm install --save elasticsearch
The elasticsearch API client is quite pleasant to work with: it provides both a Promise-based and a callback-based API through the same methods. First off, we'll create a client. This will be used to talk to the REST service for our Elasticsearch instance.
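The dual interface follows a common Node.js convention: pass a callback as the last argument and you get callback semantics; omit it and a Promise comes back. Here is a minimal sketch of that pattern – a stand-in function, not the client's actual implementation:

```javascript
// Sketch of the callback-or-Promise pattern the client follows.
// `search` is a stand-in function, not the real client method.
function search (params, callback) {
  const result = Promise.resolve({ hits: { total: 0, hits: [] } });
  if (typeof callback === 'function') {
    // Callback style: error-first, result second.
    result.then(res => callback(null, res), err => callback(err));
    return;
  }
  // Promise style: hand the promise straight back.
  return result;
}

// Promise consumer
search({ q: 'pony' }).then(res => console.log(res.hits.total)); // → 0

// Callback consumer
search({ q: 'pony' }, (err, res) => console.log(err, res.hits.total)); // → null 0
```

Either style can be mixed and matched per call site, which is handy when gluing the client into codebases that haven't fully moved to Promises yet.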
Creating an Elasticsearch Index
We'll start by importing the elasticsearch package and instantiating a REST client configured to print all logging statements.
import elasticsearch from 'elasticsearch';

const client = new elasticsearch.Client({
  host: 'localhost:9200',
  log: 'debug'
});
Now that we have a client, we can start interacting with our Elasticsearch instance. We'll need an index where we can store our data. You can think of an Elasticsearch index as the rough equivalent of a database instance. A huge difference, though, is that you can very easily query multiple Elasticsearch indices at once – something that's not trivial with other database systems.
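As an illustration of that last point, here is a hypothetical helper – not part of the client API – that builds the parameters for a search spanning several indices, relying on Elasticsearch's comma-separated multi-index request syntax:

```javascript
// Hypothetical helper: build params for a search spanning several indices.
// Elasticsearch accepts a comma-separated list of index names per request.
function multiIndexSearch (indices, query) {
  return {
    index: indices.join(','),
    body: {
      query: { match: { body: query } }
    }
  };
}

const params = multiIndexSearch(['ponyfoo', 'ponyfoo-archive'], 'elasticsearch');
console.log(params.index); // → ponyfoo,ponyfoo-archive

// client.search(params) would then query both indices in a single request.
```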
I'll create an index named 'ponyfoo'. Since client.indices.create returns a Promise, we can await on it for our code to stay easy to follow. If you need to brush up on async/await you may want to read "Understanding JavaScript's async await" and the article on Promises as well.
await client.indices.create({ index: 'ponyfoo' });
That's all the setup that is required.
Creating an Elasticsearch Mapping
In addition to creating an index, you can optionally create an explicit type mapping. Type mappings aid Elasticsearch's querying capabilities for your documents – avoiding issues when you are storing dates using their timestamps, among other things.
If you don’t create an explicit mapping for a type, Elasticsearch will infer field types based on inserted documents and create a dynamic mapping.
A timestamp is often represented in JSON as a long, but Elasticsearch will be unable to detect the field as a date field, preventing date filters and facets such as the date histogram facet from working properly.
— Elasticsearch Documentation
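To make the quoted concern concrete, compare how a JavaScript Date serializes as a raw timestamp versus as an ISO 8601 string – the former is just an anonymous number, while the latter is recognizable as a date:

```javascript
const created = new Date('2016-03-29T00:00:00Z');

// Serializing the timestamp loses the fact that this is a date:
// it's indistinguishable from any other long.
const asTimestamp = JSON.stringify({ created: created.getTime() });
console.log(asTimestamp); // → {"created":1459209600000}

// An ISO string – JSON.stringify's default for Date values – is
// recognizable by Elasticsearch's date detection and by humans alike.
const asIso = JSON.stringify({ created });
console.log(asIso); // → {"created":"2016-03-29T00:00:00.000Z"}
```

An explicit mapping sidesteps the ambiguity entirely: the field is a date because we said so, regardless of how it happens to be serialized.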
Let's create a mapping for the type 'article', which is the document type we'll use when storing blog articles in our Elasticsearch index. Note how even though the tags property will be stored as an array, Elasticsearch takes care of that internally and we only need to specify that each tag is of type string. The created property will be a date, as hinted by the mapping, and everything else is stored as strings.
await client.indices.putMapping({
  index: 'ponyfoo',
  type: 'article',
  body: {
    properties: {
      created: { type: 'date' },
      title: { type: 'string' },
      slug: { type: 'string' },
      teaser: { type: 'string' },
      introduction: { type: 'string' },
      body: { type: 'string' },
      tags: { type: 'string' }
    }
  }
});
The remainder of our initial setup involves two steps – both of them involving keeping the Elasticsearch index up to date, so that querying it yields meaningful results.
- Importing all of the current articles into our Elasticsearch index
- Updating the Elasticsearch index whenever an article is updated or a new article is created
Keeping Elasticsearch Up-to-date
These steps vary slightly depending on the storage engine you're using for blog articles. For Pony Foo, I'm using MongoDB and the mongoose driver. The following piece of code will trigger a post-save hook whenever an article is saved – regardless of whether we're dealing with an insert or an update.
mongoose.model('Article').schema.post('save', updateIndex);
The updateIndex method is largely independent of the storage engine: our goal is to update the Elasticsearch index with the updated document. We'll be using the client.update method for an article of id equal to the _id we had in our MongoDB database, although that's entirely up to you – I chose to reuse the MongoDB _id, as I found it most convenient. The provided doc should match the type mapping we created earlier, and as you can see I'm just forwarding part of my MongoDB document to the Elasticsearch index.

Given that we are using the doc_as_upsert flag, a new document will be inserted if no document with the provided id exists, and otherwise the existing id document will be modified with the updated fields, again in a single HTTP request to the index. I could've done doc: article, but I prefer a whitelist approach where I explicitly name the fields that I want to copy over to the Elasticsearch index, which explains the toIndex function.
const id = article._id.toString();

await client.update({
  index: 'ponyfoo',
  type: 'article',
  id,
  body: {
    doc: toIndex(article),
    doc_as_upsert: true
  }
});

function toIndex (article) {
  return {
    created: article.created,
    title: article.title,
    slug: article.slug,
    teaser: article.teaser,
    introduction: article.introduction,
    body: article.body,
    tags: article.tags
  };
}
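Deletions can be mirrored the same way. The following is a hedged sketch that's not in the original setup: with mongoose, a post-remove hook works just like the post-save hook used for upserts, and the helper names here are mine.

```javascript
// Hypothetical sketch: keep the index in sync when articles are removed.
// Registration would look like the post-save hook from earlier:
//   mongoose.model('Article').schema.post('remove', removeFromIndex);
function toDeleteRequest (article) {
  // Reuse the same index/type/id convention as the upsert path.
  return { index: 'ponyfoo', type: 'article', id: article._id.toString() };
}

async function removeFromIndex (article) {
  await client.delete(toDeleteRequest(article));
}
```

Keeping the request-building logic in a small pure function like `toDeleteRequest` makes it trivial to verify without talking to a live cluster.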
Whenever an article gets updated in our MongoDB database, the changes will be mirrored onto Elasticsearch. That’s great for new articles or changes to existing articles, but what about articles that existed before I started using Elasticsearch? Those wouldn’t be in the index unless I changed each of them and the post-save hook picks up the changes and forwards them to Elasticsearch.
Wonders of the Bulk API, or Bootstrapping an Elasticsearch Index
To bring your Elasticsearch index up to date with your blog articles, you will want to use the bulk operations API, which allows you to perform several operations against the Elasticsearch index in one fell swoop. The bulk API consumes operations from an array under the
[cmd_1, data_1?, cmd_2, data_2?, ..., cmd_n, data_n?] format. The question marks indicate that the data component of an operation is optional. Such is the case for
delete commands, which don’t require any additional data beyond an object
id.
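For instance, a delete-only bulk body is just a flat list of command objects with no data entries. The following helper is a hypothetical sketch (the function name is mine; the index and type match the ones used throughout this article):

```javascript
// Build a bulk body that only deletes documents. Because delete commands
// carry no data component, the body contains one entry per operation.
function toBulkDelete (ids) {
  return ids.map(id => ({
    delete: { _index: 'ponyfoo', _type: 'article', _id: id }
  }));
}

// Submitting it is still a single HTTP request, e.g.:
// await client.bulk({ body: toBulkDelete(['abc123', 'def456']) });
```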
Provided an array of
articles pulled from MongoDB or elsewhere, the following piece of code reduces
articles into command/data pairs on a single array, and submits all of that to Elasticsearch as a single HTTP request through its bulk API.
await client.bulk({ body: articles.reduce(toBulk, []) });

function toBulk (body, article) {
  body.push({
    update: {
      _index: 'ponyfoo',
      _type: 'article',
      _id: article._id.toString()
    }
  });
  body.push({ doc: toIndex(article), doc_as_upsert: true }); // toIndex from previous code block
  return body;
}
If JavaScript had
.flatMap we could do away with
.reduce and
.push , but we’re not quite there yet.
await client.bulk({
  body: articles.flatMap(article => [{
    update: {
      _index: 'ponyfoo',
      _type: 'article',
      _id: article._id.toString()
    }
  }, {
    doc: toIndex(article),
    doc_as_upsert: true
  }])
});
Great stuff!
Up to this point we have:
- Installed Elasticsearch and the elasticsearch npm package
- Created an Elasticsearch index for our blog
- Created an Elasticsearch mapping for articles
- Set up a hook that upserts articles when they’re inserted or updated in our source store
- Used the bulk API to pull all articles that weren’t synchronized into Elasticsearch yet
We’re still missing the awesome parts, though!
- Set up a query function that takes some options and returns the articles matching the user’s query
- Set up a related function that takes an article and returns similar articles
- Create an automated deployment script for Elasticsearch
Shall we?
Querying the Elasticsearch Index
While rendering the results of an Elasticsearch query is out of scope for this article, you probably still want to know how to write a function that can query the engine you so carefully set up with your blog’s amazing contents.
A simple
query(options) function looks like below. It returns a
Promise and it uses
async /
await . The resulting search hits are mapped through a function that only exposes the fields we want. Again, we take a whitelisting approach as favored earlier when we inserted documents into the index. Elasticsearch offers a querying DSL you can leverage to build complex queries. For now, we’ll only use the
match query to find articles whose
title match the provided
options.input .
async function query (options) {
  const result = await client.search({
    index: 'ponyfoo',
    type: 'article',
    body: {
      query: {
        match: { title: options.input }
      }
    }
  });
  return result.hits.hits.map(searchHitToResult);
}
The
searchHitToResult function receives the raw search hits from the REST Elasticsearch API and maps them to simple objects that contain only the
_id ,
title , and
slug fields. In addition, we’ll include the
_score field, Elasticsearch’s way of telling us how confident we should be that the search hit reliably matches the human’s query. These fields are typically more than enough for dealing with search results.
function searchHitToResult (hit) {
  return {
    _score: hit._score,
    _id: hit._id,
    title: hit._source.title,
    slug: hit._source.slug
  };
}
You could always query the MongoDB database for
_id to pull in more data, such as the contents of an article.
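A hedged sketch of that hydration step follows. The helper names are mine, not from the article, and the mongoose query assumes the Article model used earlier:

```javascript
// Collect the MongoDB ids we kept in each search result.
function hitsToIds (results) {
  return results.map(result => result._id);
}

// Pull the full documents back for a page of search results.
// Assumes the mongoose Article model registered earlier.
async function hydrate (results) {
  const ids = hitsToIds(results);
  return mongoose.model('Article').find({ _id: { $in: ids } });
}
```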
Even in the case of a simple blog, you wouldn’t consider a search solution sufficient if users could only find articles by matching their titles. You’d want to be able to filter by tags, and even though article titles should be valued higher than their contents (due to their prominence), you’d still want users to be able to search articles by querying their contents directly. You probably also want to be able to specify date ranges, and then expect to see results only within the provided date range.
What’s more, you’d expect to be able to fit all of this in a single querying function.
Building Complex Elasticsearch Queries
As it turns out, we don’t have to drastically modify our
query function to this end. Thanks to the rich querying DSL, our problem becomes finding out which types of queries we need to use, and figuring out how to stack the different parts of our query.
To begin, we’ll add the ability to query several fields, and not just the
title . To do that, we’ll use the
multi_match query, adding
'teaser', 'introduction', 'content' to the
title we were already querying about.
async function query (options) {
  const result = await client.search({
    index: 'ponyfoo',
    type: 'article',
    body: {
      query: {
        multi_match: {
          query: options.input,
          fields: ['title', 'teaser', 'introduction', 'content']
        }
      }
    }
  });
  return result.hits.hits.map(searchHitToResult);
}
Earlier, I brought up the fact that I want to rate the
title field higher. In the context of search, this is usually referred to as giving a term more “weight”. To do this through the Elasticsearch DSL, we can use the
^ field modifier to boost the
title field three times.
{
  query: {
    multi_match: {
      query: options.input,
      fields: ['title^3', 'teaser', 'introduction', 'content']
    }
  }
}
If we have additional filters to constrain a query, I’ve found that the most effective way to express that is using a
bool query, moving the
filter options into a function and placing our existing
multi_match query under a
must clause, within our
bool query. Bool queries are a powerful part of the querying DSL, allowing for a recursive yet declarative and simple interface for defining complex queries.
{
  query: {
    bool: {
      filter: filters(options),
      must: {
        multi_match: {
          query: options.input,
          fields: ['title^3', 'teaser', 'introduction', 'content']
        }
      }
    }
  }
}
In the simplest case, the applied
filter does nothing at all, leaving the original query unmodified. Here we return an empty
filter object.
function filters (options) {
  return {};
}
When the user-provided
options object contains a
since date, we can use that to define a
range for our filter. For the
range filter we can specify fields and a condition. In this case we specify that the
created field must be
gte (greater than or equal to) the provided
since date. Since we moved this logic to a
filters function, we don’t clutter the original
query function with our (albeit simple) filter-building algorithm. We place our filters in a
must clause within a
bool query, so that we can filter on as many concerns as we have to.
function filters (options) {
  const clauses = [];
  if (options.since) {
    clauses.unshift(since(options.since));
  }
  return all(clauses);
}

function all (clauses) {
  return { bool: { must: clauses } };
}

function since (date) {
  return { range: { created: { gte: date } } };
}
When it comes to constraining a query to a set of user-provided tags, we can add a
bool filter once again. Using the
must clause, we can provide an array of
term queries for the
tags field, so that articles without one of the provided tags are filtered out. That’s because we’re specifying that the query must match each user-provided
tag against the
tags field in the article.
function filters (options) {
  const tags = Array.isArray(options.tags) ? options.tags : [];
  const clauses = tags.map(tagToFilter);
  if (options.since) {
    clauses.unshift(since(options.since));
  }
  return all(clauses);
}

function all (clauses) {
  return { bool: { must: clauses } };
}

function since (date) {
  return { range: { created: { gte: date } } };
}

function tagToFilter (tag) {
  return { term: { tags: tag } };
}
We could keep on piling condition clauses on top of our
query function, but the bottom line is that we can easily construct a query using the Elasticsearch querying DSL, and it’s most likely going to be able to perform the query we want within a single request to the index.
Finding Similar Documents
The API to find related documents is quite simple as well. Using the
more_like_this query, we could specify the
like parameter to look for articles related to a user-provided document – by default, a full text search is performed. We could reuse the
filters function we just built, for extra customization. You could also specify that you want at most
6 articles in the response, by using the
size property.
{
  query: {
    bool: {
      filter: filters(options),
      must: {
        more_like_this: {
          like: { _id: options.article._id.toString() }
        }
      }
    }
  },
  size: 6
}
Using the
more_like_this query we can quickly set up those coveted “related articles” that spring up on some blogging engines but feel so very hard to get working properly in your homebrew blogging enterprise.
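To tie it together, here is a hedged sketch of a related(options) function that mirrors the shape of the query(options) function from before. For self-containment, filters below is the no-op version shown earlier; in the real setup you'd plug in the fuller tag-and-date version, and searchHitToResult is reused as-is:

```javascript
// Simplest filters, as shown earlier: no constraints.
function filters (options) {
  return {};
}

// Build the search body for related articles around a source document.
function relatedBody (options) {
  return {
    query: {
      bool: {
        filter: filters(options),
        must: {
          more_like_this: {
            like: { _id: options.article._id.toString() }
          }
        }
      }
    },
    size: 6
  };
}

async function related (options) {
  const result = await client.search({
    index: 'ponyfoo',
    type: 'article',
    body: relatedBody(options)
  });
  return result.hits.hits.map(searchHitToResult);
}
```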
The best part is that Elasticsearch took care of all the details for you. I’ve barely had to explain any search concepts at all in this blog post, and you came out with a powerful
query function that’s easily augmented, as well as the
body of a search query for related articles – nothing too shabby!
To round things out, I’ll detail the steps I took in making sure that my deployments went smoothly with my recently added Elasticsearch toys.
Rigging for Deployment
After figuring out the indexing and querying parts (even though I now work at Elastic, I’m pretty far from becoming a search demigod), and setting up the existing parts of the blog so that search and related articles leverage the new Elasticsearch services I wrote for
ponyfoo/ponyfoo , came deploying to production.
It took a bit of research to get the deployment right for Pony Foo’s Debian Jessie production environment. Interestingly, my biggest issue was figuring out how to install Java 8. The following chunk of code installs Java 8 in Debian Jessie and sets it as the default
java runtime. Note that we’ll need the cookie in
wget so that Oracle validates the download.
echo "install java"
JAVA_PACK=jdk-8u92-linux-x64.tar.gz
JAVA_VERSION=jdk1.8.0_92
wget -nv --header "Cookie: oraclelicense=accept-securebackup-cookie"
sudo mkdir /opt/jdk
sudo tar -zxf $JAVA_PACK -C /opt/jdk
sudo update-alternatives --install /usr/bin/java java /opt/jdk/$JAVA_VERSION/bin/java 100
sudo update-alternatives --install /usr/bin/javac javac /opt/jdk/$JAVA_VERSION/bin/javac 100
Before coming to this piece of code, I tried using
apt-get but nothing I did seemed to work. The
oracle-java8-installer package that some suggest you should install was nowhere to be found, and the
default-jre package isn’t all that well supported by
elasticsearch .
After installing Java 8, we have to install Elasticsearch. This step involved copying and pasting Elastic’s installation instructions, for the most part.
echo "install elasticsearch"
wget -qO - | sudo apt-key add -
echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update
sudo apt-get -y install elasticsearch
Next up came setting up
elasticsearch as a service that also relaunches itself across reboots.
echo "elasticsearch as a service"
sudo update-rc.d elasticsearch defaults 95 10
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
I deploy Pony Foo through a series of immutable deployments (that article had two parts!), building disk images along the way using Packer. For the most part, unless I’m setting up something like Elasticsearch, the deployment consists of installing the latest
npm dependencies and updating the server to the latest version of the Node.js code base. More fundamental changes take longer, such as when I need to reinstall parts of the system dependencies, but those don’t occur as often. This leaves me with a decently automated deployment process while retaining tight control over the server infrastructure to use
cron and friends as I see fit.
When I’m ready to fire up the
elasticsearch service, I just run the following. The last command prints useful diagnostic information that comes in handy while debugging your setup.
echo "firing up elasticsearch"
sudo service elasticsearch restart || sudo service elasticsearch start || (sudo cat /var/log/elasticsearch/error.log && exit 1)
sudo service elasticsearch status
That’s about it.
If the whole deployment process feels too daunting, Elastic offers Elastic Cloud, although at $45/mo it’s mostly aimed at companies. If you’re flying solo, you might just have to strap on your keyboard and start fiercely smashing those hot keys.
There is one more step in my setup, which is that I hooked my application server up in such a way that the first search request creates the Elasticsearch index, type mapping, and bulk-inserts documents into the index. This could alternatively be done before the Node.js application starts listening for requests, but since it’s not a crucial component of Pony Foo, that’ll do for now!
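The lazy bootstrap described above can be expressed with a tiny memoized promise. This is a hedged sketch with names of my own choosing; the bootstrap argument stands in for whatever creates the index, puts the mapping, and bulk-inserts the documents:

```javascript
// Cache the in-flight (or completed) bootstrap so it only runs once,
// no matter how many search requests race in at startup.
let ready = null;

function ensureIndex (bootstrap) {
  if (!ready) {
    ready = bootstrap(); // a promise: create index, put mapping, bulk import
  }
  return ready;
}

// A search handler would then await ensureIndex(bootstrap) before querying.
```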
Conclusions
I had a ton of fun setting up Elasticsearch for the blog. Even though I already had a homebrew search solution, it performed very poorly and the results weren’t anywhere close to accurate. With Elasticsearch the search results are much more on point, and hopefully will be more useful to my readers. Similarly, related articles should be more relevant now as well!
I can’t wait to hook Elasticsearch up with Logstash and start feeding
nginx logs into my ES instance so that I can see some realtime HTTP request data – besides what Google Analytics has been telling me – for the first time since I started blogging back in late 2012. I might do this next, when I have some free time. Afterwards, I might set up some sort of public Kibana dashboard displaying realtime metrics for Pony Foo servers. That should be fun!
Audio Volume CLI
By brendan on Mar 05, 2007
If you have installed fvwm2 from the companion CD and would like to try it, the easiest way is to enter a fail safe session from the login screen, then run the binary - /opt/sfw/bin/fvwm2. The proper way is to create config files under /etc/dt/config, so that the login screen provides FVWM as an option.
After getting fvwm2 running, I found my volume up/down/mute keys on this Sun type 7 keyboard didn't work. An internet search didn't find any solutions. To get these keys to work, I wrote a short C program to ioctl /dev/audioctl, and added some lines to the .fvwmrc file. I'm writing this quick blog entry to help the next person doing the same Internet search. If there is a better way to do this in Solaris already (like a shipped binary), I missed it!
This is the C program,
/* volumeset.c - set Sun's /dev/audio play volume */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/audioio.h>

void usage(char *name)
{
	(void) printf("USAGE: %s [+|-]volume_percent\n", name);
	(void) printf(" eg,\n");
	(void) printf(" %s 100 # maximum volume\n", name);
	(void) printf(" %s +5 # plus 5 percent\n", name);
	exit(1);
}

int main(int argc, char *argv[])
{
	audio_info_t ai;
	int fd, vol, mod, gain;

	if (argc < 2)
		usage(argv[0]);

	switch (argv[1][0]) {
	case '+':
		mod = 1;
		vol = atoi(&argv[1][1]);
		break;
	case '-':
		mod = -1;
		vol = atoi(&argv[1][1]);
		break;
	case '0' ... '9':
		mod = 0;
		vol = atoi(argv[1]);
		if (vol > 100 || vol < 0) {
			(void) printf("ERROR: volume must be "
			    "between 0 and 100.\n");
			exit(4);
		}
		break;
	default:
		usage(argv[0]);
	}

	if (mod != 0 && vol == 0)
		usage(argv[0]);

	if ((fd = open("/dev/audioctl", O_RDONLY)) == -1) {
		(void) perror("can't open /dev/audioctl");
		exit(2);
	}

	if (ioctl(fd, AUDIO_GETINFO, &ai) == -1) {
		(void) perror("fetching audio state failed");
		exit(3);
	}

	if (mod == 0)
		gain = (vol * 255) / 100;
	else
		gain = ai.play.gain + (mod * vol * 255) / 100;

	if (gain < 0)
		gain = 0;
	if (gain > 255)
		gain = 255;

	ai.play.gain = gain;
	ai.output_muted = gain == 0 ? 1 : 0;

	if (ioctl(fd, AUDIO_SETINFO, &ai) == -1) {
		(void) perror("setting audio state failed");
		exit(4);
	}

	(void) close(fd);
	return (0);
}

If you don't have Sun's C compiler installed, you can compile it using /usr/sfw/bin/gcc -o volumeset volumeset.c.
The following are the lines I added to ~/.fvwm/.fvwm2rc to bind the audio keys on the top left to the volumeset program (copied to /usr/local/bin); these bindings probably work for type 6 keyboards as well (haven't tried),
Key SunAudioMute A A Exec /usr/local/bin/volumeset 0
Key SunAudioLowerVolume A A Exec /usr/local/bin/volumeset -15
Key SunAudioRaiseVolume A A Exec /usr/local/bin/volumeset +15

The above lines bind the keys to mute the volume, decrease it by 15%, or increase it by 15% (it may be better to make the mute behave as a toggle, rather than always mute). After restarting fvwm, my audio keys now work fine.
Release and Upgrade Information for Caché 5.1
This chapter provides the following information for Caché 5.1:
New and Enhanced Features for Caché 5.1
Welcome and thank you for using Caché!
This chapter provides an overview of the new and improved features in Caché 5.1.
See the Major New Features section for a description of the important new features and enhancements included in this release.
If you are new to Caché, you can refer to the Getting Started page which contains a variety of links to documentation organized by topic.
Upgrading and Installation
If you are upgrading existing applications and databases from prior versions, please read the Caché 5.1 Upgrade Checklist and the Upgrading chapter of the Caché Installation Guide.
For information on installing Caché, refer to the following sources:
the Caché Installation Guide for Windows, OpenVMS, and UNIX®
the list of Supported Platforms for this release
Major New Features
Caché 5.1 introduces a significant number of new features as well as enhancements to existing features. These features are focused on:
Advanced Security
Maximizing Scalability
Maximizing Development Speed
Minimizing Support Load
The new features are summarized here. For additional details, refer to the cited chapters or guides.
A host of new capabilities have been added to give Caché the most advanced security of any mainstream database.
A new integrated system management interface, built with CSP, replaces Control Panel, Explorer, and SQL Manager. This removes the requirement for a Windows PC in order to manage Caché and, because no Caché client software is required, eliminates potential client/server version mismatch issues and simplifies management of multiple versions of Caché from a single device.
Caché system improvements include many new or enhanced classes and methods, plus major enhancements such as nested rollback and the ability to map class packages to namespaces.
Nested Rollback — When nested TSTARTs are used, this enhancement enables the innermost TSTART to be rolled back, without rolling back the entire open transaction.
Namespace Mapping for Class Packages — Namespace mapping has been extended with the ability to map class packages by name, just as routines and globals are mapped.
The ObjectScript language now provides significantly improved runtime error reporting. Many other enhancements have been introduced, including the following items:
New $FACTOR Function
New $LISTNEXT, $LISTTOSTRING, and $LISTFROMSTRING Functions
New $ROLES and $USERNAME Special Variables
New Error Trapping Syntax
More Efficient Code Generation
Pattern-Match “E” Adapted For Unicode
Faster MERGE Command
With this release, Caché introduces new Perl and Python bindings, as well as an improved version of the Caché ActiveX binding.
The Caché 5.1 Class Library provides many new features and major enhancements.
Index on Computed Fields — An index definition can now reference properties defined as CALCULATED and SQLCOMPUTED.
Object Synchronization — Caché can now track records of all object filing events (insert, update and delete) for journaled classes, export the journaled object data, and synchronize it with other databases. Applications with no access to the original database can then resolve references to the synchronized objects.
Studio Enhancements — New %Studio.Extension classes provide mechanisms for custom menus and user defined data entry. %Studio.SourceControl classes now provide enhanced source control hooks, allowing customized checkout and checkin to a source control system.
Performance Improvements — Significant improvements have been made to the in-memory performance of relationships.
Syntax for defining stream and collection properties has been improved, and enhancements have been made to the behavior of streams and collections.
Caché SQL support includes many new or enhanced features, including the following items:
New SQL/XML Support Functions
JDBC 3.0 Support
SAVEPOINT: New Transaction Processing Feature
CREATE TABLE: New IDENTITY Keyword
DROP VIEW: New CASCADE Keyword
INSERT: New DEFAULT VALUES Clause
New RowId Counter Validation Option
New Query Optimizer Plan Verification
Subquery Flattening
Enhanced Locking Behavior for Foreign Key References
READONLY Tables and Fields
Support for %%CLASSNAMEQ and %%TABLENAME
CREATE BITMAP INDEX Support for Oracle Import Compatibility
Caché 5.1 introduces many new options for network connectivity.
ECP Enhancements — A number of enhancements have been made to the Caché Enterprise Cache Protocol. It is now supported in shared disk cluster configurations with OpenVMS and Tru64 UNIX®.
SNMP Support — Support for the Simple Network Management Protocol (SNMP) has been added to enable monitoring of Caché by a variety of systems management tools and frameworks.
LDAP Client — Programmatic access to LDAP servers has been added.
Mac OS X Server Support — Support has been added for Mac OS X as a server plus the following client components: ODBC, JDBC, Objects, CSP Gateway for Apache.
Caché Advanced Security
With version 5.1, InterSystems introduces Caché Advanced Security. This release of Caché contains a host of new capabilities that provide the most advanced security of any mainstream database. Caché Advanced Security provides a simple, unified security architecture that offers the following advantages:
It offers a strong, consistent, and high-performance security infrastructure for applications.
It meets certification standards.
It makes it easy for developers to build security features into applications.
There is a minimal burden on performance and operations.
It ensures that Caché can operate effectively as part of a secure environment and that other applications and Caché can work together well.
See the Caché Security Administration Guide for detailed information on Caché Advanced Security.
Key Features
Here are a few of the more important new security features offered in Cache 5.1:
Kerberos based Security Infrastructure
Two Authentication models are now available. In addition to Caché Authentication (Username/Password), Cache now provides Kerberos based Security Infrastructure. Kerberos libraries are available on all supported platforms (Windows Single Sign-on for Win32/64 platforms in an Active Directory Domain = Kerberos Realm, since Microsoft uses Kerberos at the heart of their Authentication model).
Security Management Interface
The Management Portal's web-based Security Management facility allows complete access to Users, Roles, Services, Resources (including Schemas), Auditing, and all other aspects of Caché security management.
Security Advisor Utility
The new Security Advisor utility makes recommendations for securing a Caché DBMS (Security settings, Applications and Auditing).
Authentication in ODBC/JDBC
ODBC and JDBC drivers now offer both Caché and Kerberos Authentication. Kerberos mode provides three levels of Encryption: Clear, Integrity (Source and Content Validation), and Encrypted (complete, end-to-end AES Encryption).
Auditing Facilities
Caché provides detailed auditing facilities that store audit information in a specially protected Audit Database. Auditing capabilities are available from an Automated/Management and Programmatic/API point of view.
Encrypted Database Management Facility
The new Encrypted Database facility allows you to create fully encrypted (AES, up to 256 bit) CACHE.DAT files that stay Encrypted on Disk at all times. I/O is encrypted and decrypted on the fly, with minimal performance impact. The database is encrypted with a Special Key file that is stored on removable devices (like USB Flash Drives) and must be present to mount the DB for use.
To assist system managers in securing a Caché system, Caché includes a Security Advisor. This is a Web page that reviews the current configuration and makes recommendations for securing services, applications, and auditing settings.
System administrators can exercise low-level control over the security of Caché systems through two character-oriented interfaces:
^SECURITY allows examination and editing of security data related to users, roles, domains, services, applications, and auditing. An overview of ^SECURITY can be found in The CHUI-Based Management Routines.
^DATABASE provides low-level management capabilities related to Caché databases. An overview of ^DATABASE can be found in The CHUI-Based Management Routines.
Security certification is becoming an increasingly frequent requirement for government purchases, and is more and more requested for private sector purchases. Because of this, InterSystems has had Caché certified according to the Common Criteria standard. Specifically, effective February 15, 2007, Caché received certification according to the Common Criteria standard (EAL 3).
The Common Criteria provides a set of common security standards for a wide number of nations in North America, Europe, and the Far East. It provides an assurance scale from 1 to 4, where a product's rating indicates the rigor of testing to which it has been subjected; commercially available products are rated from 1 (least rigorous testing) to 4 (most rigorous). Caché is currently under consideration for a level-3 rating. Such a rating indicates that Caché can effectively serve as part of a highly secure operational environment.
Caché Advanced Security Concepts
Caché Advanced Security is based on authentication, authorization, and auditing:
Authentication ensures the verification of the identity of all users.
Authorization ensures that users can access the resources that they need, and no others.
Auditing keeps a log of predefined system and application-specific events, to provide forensic information about the database activities.
Authentication is how you prove to Caché that you are who you say you are. Without trustworthy authentication, authorization mechanisms are of little value. Caché supports several authentication mechanisms:

Kerberos — The Kerberos system, developed at MIT, provides mathematically proven strong authentication over an unsecured network.
Operating-system–based — Available for UNIX® and OpenVMS, OS-based authentication uses the operating system’s user identity to identify the user for Caché purposes.
Caché login — With Caché login, Caché maintains a table of hashed password values for each user account; at login, Caché confirms user identity by comparing the value in the table with a hash of the password provided by the user.
Once a user is authenticated, the next security-related question to answer is what that person is allowed to use, view, or alter. Authorization manages the relationships of users and assets such as databases, Caché services like ODBC access, and user-created applications.
In the most basic authorization model, there are all possible assets, a list of users, and all the relationships between the first group and the second.
Auditing provides a verifiable and trustworthy trail of actions related to the system. Auditing serves multiple security functions:
It provides proof — the proverbial “paper trail” — of security-relevant events. Caché automatically logs certain system events; it also allows you to enable logging for other system events, as well as site-defined application events. All audited events are placed in a tamper-resistant log file. Authorized users can then create reports based on this audit log, using tools that are part of Caché. Because the audit log can contain sensitive information (such as positive or negative values for medical tests), running an audit report itself generates an entry in the audit log. The included Caché tools support report creation, archiving the audit log, and other tasks.
System Management Portal
Caché 5.1 now uses a browser-based interface, the System Management Portal, for system management. This new interface subsumes the functions previously distributed among Explorer, SQL Manager, Configuration Manager, and Control Panel functions of the Windows Caché Cube. In 5.1, these have been removed from the Cube.
An advantage of this approach is that it is no longer a requirement that Caché be installed on the system you use to manage an installation. Remote management of systems over the web, subject to access control established for the site, is now much easier. No Caché client software is required, simplifying management of multiple versions of Caché from a single device. Cross-release compatibility issues are minimized because both the data and its formatting information come directly from the system being managed.
See Using the System Management Portal for a detailed description of the new interface.
System Improvements
New Caché 5.1 system features and enhancements:
New Features:
Nested Rollbacks
Namespace Mapping for Class Packages
New Method $SYSTEM.Util.CleanDeadJobs()
New Class $SYSTEM.Monitor.Line
New Method $System.Device.GetNullDevice()
New Optional Argument for $ZF(-2)
Enhanced Features:
Option to Filter Records before Dejournaling on a Shadow
Callin Enhancements
64K Routine Buffer Support
CVENDIAN Enhancements
Nested Rollbacks
This version of Caché introduces multiple transaction levels , which make it possible to roll back part of a transaction without losing all work completed to that point. When nested TSTARTs are used, this enhancement enables the innermost TSTART to be rolled back, without rolling back the entire open transaction. When two TSTARTs are issued without an intervening COMMIT or TROLLBACK, the transaction level ($TLEVEL) is incremented by 1 (limited to a maximum of 255). When a TCOMMIT or TROLLBACK 1 is issued, the transaction level is decremented by 1. When an unqualified TROLLBACK is issued, the transaction level is decremented to 0, and the entire transaction is rolled back.
Transaction commands now work as follows:
The argumentless TROLLBACK command works as usual, rolling back to the very top level transaction and closing the transaction.
The TROLLBACK 1 command rolls the current open transaction back one level. All the globals changed within this transaction will be restored, and $TLEVEL is decremented by 1. If there is no open transaction ($TLEVEL is zero), no action is taken. TROLLBACK 1 will not roll back globals mapped to a remote system that does not support nested transactions unless $TLEVEL is 1.
The TCOMMIT command works as usual. In nested transactions, it decrements $TLEVEL and writes a 'PTL' (pending commit with transaction level) journal record to the journal file.
The TSTART command also works as usual. In nested transactions, it increments $TLEVEL and writes a 'BT' (begin transaction) record in the journal file. If the new $TLEVEL is greater than 1, it writes a 'BTL' (begin transaction with level) record instead of 'BT'.
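As an illustrative sketch (the global names here are hypothetical), a nested transaction might proceed as follows:

```objectscript
TSTART                  ; $TLEVEL = 1
Set ^Acct(1) = 100
TSTART                  ; $TLEVEL = 2
Set ^Acct(2) = 200
TROLLBACK 1             ; undoes only the Set of ^Acct(2); $TLEVEL = 1
TCOMMIT                 ; commits ^Acct(1); $TLEVEL = 0
```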
Caché SQL now includes standard SQL commands that take advantage of nested rollbacks (see New SAVEPOINT Features).
Namespace Mapping for Class Packages
Namespace mapping has been extended with the ability to map class packages from a database to one or more namespaces, just as routines and globals are mapped. Automatic namespace mapping is provided for system classes. All the schemas that begin with '%' from %sys are mapped to all namespaces automatically. These mappings allow the user to access SQL Table, View, Procedures, and classes across multiple namespaces. For example, assume a class %Test that has the following query:
Select field1 From %Test
Without mapping, attempting to inherit from this class in a user namespace would result in the error "Table %Test not found". With mapping, the class will compile successfully in any namespace.
For detailed information, see Configuring Data in the Caché System Administration Guide.
New Method $SYSTEM.Util.CleanDeadJobs()
New class method $SYSTEM.Util.CleanDeadJobs() is used to roll back a dead job's open transaction (if any) and clean up the dead job's Process Table (pidtab) slot so it can be re-used.
New Class $SYSTEM.Monitor.Line
New class $SYSTEM.Monitor.Line is a programmer API for the line-by-line monitor (^%MONLBL). It allows integration with Studio, and is also generally useful as a programmable alternative to ^%MONLBL. For details, see the Programming Interface section in the MONLBL chapter of the Caché Monitoring Guide.
New Method $System.Device.GetNullDevice()
New class method $System.Device.GetNullDevice() returns the name of the null device appropriate for the current operating system type (/dev/null for UNIX®, _NLA0 for OpenVMS, //./nul for Windows). It facilitates development of applications that reference the Null device, and provides an OS-independent method for obtaining the name of the Null Device.
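A brief sketch of its use, directing output to the null device without hard-coding a platform-specific name:

```objectscript
Set nulldev = $System.Device.GetNullDevice()
Open nulldev
Use nulldev
Write "this output is discarded"
Close nulldev
```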
New Optional Argument for $ZF(-2)
Function $ZF(-2) now has an optional fifth argument that specifies whether or not the spawned process ID should be stored in $ZCHILD. For example:
s rc=$zf(-2,"program","","",1)
s childpid=$ZCHILD
If the new argument is zero or not specified, $ZCHILD is unchanged; otherwise, $ZCHILD is set to the spawned process ID when the process is successfully spawned.
Option to Filter Records before Dejournaling on a Shadow
To filter journal records before they get dejournaled on a shadow, set the global node ^SYS("shdwcli",shdw_id,"filter") to the name of the filter routine (without the leading "^"). The input parameters of the filter routine are:
pid: process ID of the record
dir: SOURCE (not SHADOW) database directory
glo: global reference in the form of global(subscripts) (without leading "^")
addr: offset of the record in the journal file
type: type of the record: "S" = SET, "s" = BITSET, "K" = KILL, "k" = ZKILL
time: timestamp of the record
In compatible mode shadowing, the pid and timestamp parameters passed to the filter routine always have the value "". The filter routine should return 0 if the record should be skipped; otherwise the record will be dejournaled by the shadow. For example:
^SYS("shdwcli","MyShadow","filter")="MyShadowFilter"

MyShadowFilter(pid, dir, glo, type, addr, time) ;
    ;skip X* globals
    If $EXTRACT($qs(glo,0))="X" q 0
    Set Msg = pid
    Set Msg = Msg _ "," _ dir
    Set Msg = Msg _ "," _ glo
    Set Msg = Msg _ "," _ type
    Set Msg = Msg _ "," _ addr
    Set Msg = Msg _ "," _ time
    Do ##class(%Library.Device).Broadcast("",Msg)
    q 1
Callin Enhancements
The Callin include files ccallin.h and mcallin.h have been enhanced to merge common functionality and provide greater flexibility for building user-defined C and C++ Callin modules. Defines have been added to make building user Callin modules as independent of interface details as possible. Two features control the selection of interfaces:
#define ZF_DLL
If ZF_DLL is not defined, the Callin module is built for linking with the Caché engine. If it is defined, the module is built as a dynamic shared library using Callback and invoked through the Callout facility. This is the same define employed by cdzf.h.
#define CACHE_UNICODE
If CACHE_UNICODE is not defined, string handling functions and arguments are treated as 8-bit characters. If defined, strings are treated as 16-bit Unicode. String handling functions are available with the "A" suffix, meaning 8-bit (or ASCII), the "W" suffix, meaning 16-bit Unicode (or wide), and no suffix. In the last case the function resolves to either the "A" or "W" suffix according to the definition of CACHE_UNICODE.
New functionality has been implemented to permit NLS translation using the CacheCvtInW() and CacheCvtOutW() functions for Unicode Callin to 8-bit Caché. They will now convert data within the 8-bit character set range of the Caché engine, instead of reporting an "unimplemented" error. CacheCvtInA() and CacheCvtOutA() functions for 8-bit Callin to Unicode Caché are not currently implemented.
You can further refine 8-bit argument prototypes with the new macro USE_CALLIN_CHAR, which declares them as (char *) rather than (unsigned char *).
64K Routine Buffer Support
It is now possible to run with routine sizes up to 64K by changing the Memory/RoutineBufSize value on the Home, Configuration, Advanced Settings page of the Management Portal from 32 to 64. The default and minimum value is still 32 (32K), but values can now be specified from 33 to 64 (rounded to the nearest 2K increment). Routines or class descriptors greater than 32K will be stored as two global values, the first chunk in ^rOBJ(<routine name>) as currently, and the second chunk in ^rOBJ(<routine name>,0).
CVENDIAN Enhancements
The cvendian database endian conversion utility has been enhanced to allow for positive identification of the desired endian orientation, or to optionally just inform the current endian orientation with no conversion. The command syntax is:
cvendian [-option] file1 [file2 ... file8]
where option is one of the following:
-big — convert the database to big-endian
-little — convert the database to little-endian
-report — report the endian orientation of the database
The options may be shortened to their initial letter. If this is a conversion request and the database is already of the specified endian orientation, a warning message is displayed and no further processing is done. Prior cvendian call formats remain supported.
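For example (the file name below is hypothetical; cvendian is run from the operating system command line):

```
cvendian -report cache.dat      (report the current orientation; no conversion)
cvendian -b cache.dat           (convert to big-endian, using the shortened option)
```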
Object Improvements
New Caché 5.1 object features and enhancements:
Object Enhancements
New Option to Index on Computed Fields
New Object Synchronization
New Studio Extension Classes and Source Control Hooks
New Stream Syntax
New %SwizzleObject Class
Extended POPSPEC Syntax
Performance Improvements for Relationships
Enhanced VisM OCX
New Option to Index on Computed Fields
An index definition can now reference properties defined as CALCULATED and SQLCOMPUTED. The property value calculation must be deterministic, always returning the same value for a given set of parameters. For example, it would be a mistake to use a function such as $Horolog, which returns different values depending on when it is called. Indexing on a property whose computation is nondeterministic will result in an index that is not properly maintained.
To support this option, properties defined as SQLCOMPUTED are now computed in Caché Objects. A new class method, Compute, is called by the property's Get method. The Compute method generates a return value by scanning SQLCOMPUTECODE for field references and converting those references to property or literal values. If the property also has SQLCOMPUTEONCHANGE, the Compute method is called whenever the property is changed.
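As a sketch (the class and property names are hypothetical), a deterministic computed property that can now be indexed might be defined as:

```objectscript
Class Sample.Person Extends %Persistent
{
Property First As %String;
Property Last As %String;

/// Deterministic: depends only on First and Last
Property FullName As %String [ Calculated, SqlComputed,
    SqlComputeCode = { Set {FullName} = {First}_" "_{Last} },
    SqlComputeOnChange = (First, Last) ];

Index FullNameIdx On FullName;
}
```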
New Object Synchronization
This new feature enables Caché to synchronize objects between databases. All object filing events (insert, update and delete) for journaled classes are automatically tracked. Object synchronization utilities provide methods to export the journaled object data and synchronize it with other databases. Applications with no access to the original database can then resolve references to the synchronized objects.
A new class, %SYNC.SyncSetObject, supplies methods to externalize an object and apply it to the target database. All references to persistent objects from the object being externalized are converted to GUID (Globally Unique Identifier) values. The GUID values are used to look up the corresponding object on import.
Another class, %SYNC.SyncSet, implements methods to manage the set of objects being synchronized. A 'synchronization set' is a set of externalized object values which guarantee that all object references can be resolved, either because the referenced object is in the same sync set, or because it already exists in the target database.
New Studio Extension Classes and Source Control Hooks
This release enhances the flexibility of Studio by introducing the %Studio.Extension classes, which provide mechanisms for custom menus and user defined data entry. The %Studio.SourceControl classes now provide enhanced source control hooks, allowing customized checkout and checkin to a source control system.
When the user performs an action in Studio that may require user interaction with the server (for example, attempting to edit a document that is in source control but is not checked out), Studio now calls the UserAction method.
UserAction (Type, Name, InternalName, SelectedText, .Action, .Target, .Msg)
Type options are:
Server defined menu item selected
Other Studio action
Name is the menu item name if Type is a menu item, otherwise Name indicates one of the following options:
User has tried to change a document that is locked in source control
User has created a new document
User has deleted a document
InternalName is the name of the document this action is concerned with.
SelectedText contains any selected text in the document that has focus.
Action returns an action that Studio should perform:
Do nothing (this method can still perform some action, such as checking an item out of source control, but Studio will not ask for user input).
Display the default Studio dialog with a yes/no/cancel button. The text for this dialog is provided in the Target return argument.
Run a CSP Template. Target is the start page name for the template. The template will be passed the current document name, any selected text, the project name, and the namespace.
Run an EXE on the client. Target is the name of an executable file on the client machine.
Insert the text in Target in the current document at the current selection point
Studio will open the documents listed in Target
You can define custom menus for Studio to display. Studio obtains the menus when it first connects to a namespace by running two queries, MainMenus and MenuItems. MainMenus returns the list of top level menu names. After this top level menu is selected, MenuItems is used to return the list of items on a specific menu. MainMenus can be either a regular menu or a context submenu that is added to all the context menus. The MenuItems query is passed the current document name and any selected text in case you wish to vary the menu based on these arguments.
By default, the source control class inherits these queries from %Studio.Extension.Base, where they are defined as SQL queries against prebuilt tables. To load data into these tables, define an XData block called Menu in your source control class. When the source control class is compiled, this data is loaded and used automatically. Queries defined in the source control subclass can be changed or completely customized. When data is being returned from the MenuItems query, each menu name will generate a call to an OnMenuItem method in the source control class, where you may disable/enable this menu item. This allows simple modification of the menus without having to write a custom query.
New Stream Syntax
The class hierarchy for current stream classes has been changed so that %Stream.Object is the top class. This change does not alter stream runtime behavior.
In prior versions of Caché, it was necessary to define a stream property as type = %Stream, with a collection value of binarystream or characterstream. Now a stream property is defined by specifying the actual stream class as the type, and the collection keyword values of binarystream and characterstream are no longer used. A stream class is declared with a classtype = stream. This declaration is automatic for any class that extends a new class, %Stream.Object. For backward compatibility, the classes %Library.GlobalCharacterStream, %Library.GlobalBinaryStream, %Library.FileCharacterStream, and %Library.FileBinaryStream have been converted to use the new representation, and are to be used for all existing stream data.
For more detailed information, see the Streams chapter in Using Caché Objects.
New %SwizzleObject Class
A new class, %SwizzleObject, is now the primary (and only) superclass of both %Persistent and %SerialObject. The purpose of the new class is to define the swizzling interface and implement the parts of that interface that are common to both %Persistent and %SerialObject.
See the %Library.SwizzleObject class documentation for more detailed information.
Extended POPSPEC Syntax
The syntax of POPSPEC has been extended to allow an SQL table name and an SQL column name to be specified. When they are specified, the Populate() method constructs a dynamic query to return the distinct column values from the table. The requested number of values will then be randomly selected from the distinct column values and placed in a value set. The property will then be assigned values randomly from the resulting value set.
See The Caché Data Population Utility for more detailed information.
Performance Improvements for Relationships
The in-memory performance of relationships has been significantly improved by using additional in-memory indices to keep track of the orefs and OIDs of items already in the relationship. Previously, when a new item was inserted into the relationship (either using the Insert method, or indirectly via the Relate method), it would scan the entire relationship to avoid inserting a duplicate item. By keeping an index of the orefs and OIDs in the relationship, the cost of checking for duplicate items is kept very low even for large numbers of items.
Partition memory use is lower, speed is significantly faster (94x in the second insert of 1000 items), and %Save time is faster. When measured with a small number of items in the relationship, there was no measurable slowdown in performance associated with the upkeep of the additional in-memory indices.
Enhanced VisM OCX
This release contains a new version of the Caché Direct control (VISM.OCX) that features enhancements such as security upgrades, support for multithreading, and improved error handling.
Language Improvements
New Caché 5.1 ObjectScript features and enhancements:
Improved Runtime Error Reporting
New $FACTOR Function
New $LISTNEXT, $LISTTOSTRING, and $LISTFROMSTRING Functions
New $ROLES and $USERNAME Special Variables
New $ZUTIL(62,1) Function
New $ZUTIL(69) Configuration Functions
New $ZUTIL(158) Function
New $ZUTIL(186) Function
New $ZUTIL(193) Function
New Error Trapping Syntax
More Efficient Code Generation
Pattern-Match “E” Adapted For Unicode
Faster MERGE Command
New Language Bindings:
New Perl Binding
New Python Binding
New ActiveX Bindings
Improved Runtime Error Reporting
Many runtime errors now report additional information. For instance, an "<UNDEFINED>" error will now report the name of the undefined variable.
Error information is stored in the $ZERROR special variable, which now returns more information than before by appending " *someinfo" to the error string:
<ERRCODE>Tag^Routine+line *someinfo
A consequence of this change is that error handling routines that made assumptions about the format of the string in $ZERROR may now require redesign to work as before. For further information, see the Caché Conversion Guide, and the $ZERROR special variable in the Caché ObjectScript Reference.
New $FACTOR Function
$FACTOR is a new ObjectScript function for 5.1 that converts a numeric value to a bitstring. Its primary use is for the creation of bitslice indices. For further information, see the $FACTOR function in the Caché ObjectScript Reference.
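As a sketch (assuming bit 1 of the resulting bitstring corresponds to the least-significant bit of the value):

```objectscript
Set bits = $FACTOR(5)     ; 5 = binary 101
Write $BIT(bits, 1)       ; bit 1 is set
Write $BIT(bits, 2)       ; bit 2 is not set
Write $BIT(bits, 3)       ; bit 3 is set
```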
New $LISTNEXT, $LISTTOSTRING, and $LISTFROMSTRING Functions
Caché 5.1 adds three new functions for processing list structures: $ListNext, $ListToString and $ListFromString.
$ListNext(list, ptr, val) allows extremely rapid traversing of a list structure (up to 400x faster than doing a loop with $LIST).
Before the first call to $ListNext, ptr should be initialized to 0. After each call, ptr will contain the position of the next element in list (0 if the end of the list was reached), and val will contain the value of the element at that position (undefined if there was no value at that position). $ListNext will return 1 if it found another list element, or 0 if it is at the end of the list.
$ListToString(list[,delim]) takes list, and returns the elements as a string separated by delim (default ",").
$ListFromString(string[,delim]) takes string, delimited by delim (default ","), and returns the pieces as a list.
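A short sketch showing the three functions together:

```objectscript
Set list = $ListBuild("red","green","blue")
Set ptr = 0
While $ListNext(list, ptr, val) {
    Write val, !                      ; prints each element in turn
}
Write $ListToString(list, "|")        ; red|green|blue
Set list2 = $ListFromString("a,b,c")  ; a three-element list
```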
For further information, see the $LISTNEXT, $LISTTOSTRING, or $LISTFROMSTRING function in the Caché ObjectScript Reference.
New $ROLES and $USERNAME Special Variables
At Caché 5.1, the $ROLES special variable lists the security roles currently assigned to the user. The $USERNAME special variable lists the user name for the current process. For further information, see the $ROLES and $USERNAME special variables in the Caché ObjectScript Reference.
New $ZUTIL(62,1) Function
The $ZUTIL(62,1) function performs syntax checking on a line of ObjectScript code. It returns the character position of the error and the text of an error message. For further information, see the $ZUTIL(62,1) function in the Caché ObjectScript Reference.
New $ZUTIL(69) System Configuration Functions
Caché 5.1 documents the following additional system-wide configuration functions: $ZUTIL(69,19), $ZUTIL(69,21), $ZUTIL(69,31), $ZUTIL(69,35), $ZUTIL(69,37), $ZUTIL(69,44), $ZUTIL(69,49), and $ZUTIL(69,60).
Caché 5.1 also supports the new $ZUTIL(69,63) and $ZUTIL(68,63) functions that control whether a lowercase “e” should be interpreted as an exponent symbol.
For further information, see the $ZUTIL(69) functions in the Caché ObjectScript Reference.
New $ZUTIL(158) Function
The $ZUTIL(158) function can be used to return the number of installed printers and the pathname of a specified printer. For further information, see the $ZUTIL(158) function in the Caché ObjectScript Reference.
New $ZUTIL(186) Function
The $ZUTIL(186) function can be used to specify the information displayed as part of the Terminal prompt. For further information, see the $ZUTIL(186) function in the Caché ObjectScript Reference.
New $ZUTIL(193) Function
The $ZUTIL(193) function inter-converts Coordinated Universal Time and local time values. For further information, see the $ZUTIL(193) function in the Caché ObjectScript Reference.
New Error Trapping Syntax
This version of Caché implements a special syntax that allows an error trap to pass control up the program stack to a previously established error trap. The syntax is ZTRAP $ZERROR. This command will pop entries off the program stack until a level is found with an error trap. Then that error trap will be executed with $ZERROR and $ECODE unchanged.
This command replaces the two-command sequence ZQUIT 1 GOTO @$ZTRAP, which did not work in new-style procedures. The new command syntax can be used in both procedures and old-style subroutines. The old style of passing control up to a previous error trap will continue to work in old-style subroutines. If a ZQUIT command is issued in a procedure, it will now result in a <COMMAND> error.
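As an illustrative sketch (the labels and routine structure here are hypothetical), an inner error trap can defer handling to the caller's trap:

```objectscript
Outer ; establish an error trap at this level
    Set $ZTRAP = "OuterErr"
    Do Inner
    Quit
OuterErr
    Write "Handled at outer level: ", $ZERROR, !
    Quit
Inner ; the inner trap passes the error up the stack
    Set $ZTRAP = "InnerErr"
    Set x = 1/0          ; raises <DIVIDE>
    Quit
InnerErr
    ZTRAP $ZERROR        ; pop stack levels until OuterErr handles it
```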
The ZQUIT command is obsolete as of 5.1, and should not be used for new programming.
More Efficient Code Generation
The CacheBasic compiler now uses an improved algorithm that generates significantly smaller and faster code.
Pattern-Match “E” Adapted For Unicode
In prior versions of Caché, the options used with the pattern-match operator assumed 8-bit characters. This caused the "E" pattern (match every character) to fail when Unicode characters above $CHAR(255) were present in the string.
In Caché 5.1, the “E” pattern matches all characters.
Faster MERGE Command
The MERGE command is now much faster and more efficient when merging two local variables.
New Perl and Python Bindings
The Caché Perl and Python bindings provide a simple, direct way to manipulate Caché objects from within Perl or Python applications. They allow binding applications to establish a connection to a database on Caché, create and open objects in the database, manipulate object properties, save objects, run methods on objects, and run queries. All Caché datatypes are supported.
See Using Perl with Caché and Using Python with Caché for more detailed information.
Improved ActiveX Bindings
Caché 5.1 includes a new version of the Caché ActiveX binding, CacheActiveX.dll. Internally this new version uses the Caché C++ binding to get object-level access to a Caché server. Using this new binding provides the following benefits:
access to the client/server security model available within Caché 5.1 (for example, the ability to use Kerberos authentication)
better performance in some cases due to more sophisticated object caching.
While every attempt has been made to make this new DLL functionally compatible with the older CacheObject.dll, it is not 100% binary compatible.
To preserve complete compatibility with existing applications, Caché installs two ActiveX bindings: the newer CacheActiveX.dll as well as the original CacheObject.dll. By default, existing applications will continue to use the original CacheObject.dll. If you wish to use the newer binding, you must modify your existing application to reference the new DLL and test that your application performs as expected.
SQL Improvements
New Caché 5.1 SQL features and enhancements:
New Features
New SQL/XML Support Functions
SAVEPOINT: New Transaction Processing Feature
CREATE TABLE: New IDENTITY Keyword
DROP VIEW: New CASCADE Keyword
INSERT: New DEFAULT VALUES Clause
New RowId Counter Validation Option
New Query Optimizer Plan Verification
SQL Enhancements
JDBC 3.0 Support
GRANT and REVOKE Command Changes
CREATE USER Command Changes
Subquery Flattening
Enhanced Locking Behavior for Foreign Key References
READONLY Tables and Fields
SQLCODE Changes
Support for %%CLASSNAMEQ and %%TABLENAME
CREATE BITMAP INDEX Support for Oracle Import Compatibility
Extended Support for Milliseconds
Date and Time Function Enhancements
New SQL/XML Support Functions
5.1 implements a collection of new built-in SQL functions for transforming “flat” relational queries into hierarchical XML documents. Application programs that need to generate HTML, or that need to export data in XML format, now have a general and portable interface that has wide industry support (ANSI/ISO SQL-2003 standard).
The following SQL/XML functions are available:
XmlElement – Creates an XML element of the form: <tagName>body</tagName>, with optional attributes. XmlElement creates one tagged element that can contain multiple concatenated values.
XmlAttributes – Specifies attributes for an XML element. XmlAttributes can only be used within an XmlElement function.
XmlConcat – Concatenates two or more XML elements.
XmlAgg – Aggregate function that concatenates the data values from a column.
XmlForest – Creates a separate XML element for each item specified. XmlForest provides a convenient shorthand for specifying multiple elements nested within another element, where element instances that are NULL are omitted.
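As a sketch (the table Library.Book and its columns ISBN, Title, and Author are hypothetical), a query combining these functions might look like:

```sql
SELECT XMLELEMENT("Book",
         XMLATTRIBUTES(ISBN AS "isbn"),
         XMLFOREST(Title AS "title", Author AS "author"))
FROM Library.Book
```

Each row would produce an element of the form <Book isbn="...."><title>...</title><author>...</author></Book>, with NULL Title or Author elements omitted.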
For more detailed information see XMLELEMENT, XMLAGG, XMLCONCAT and XMLFOREST in the Caché SQL Reference.
New SAVEPOINT Features
With version 5.1, Caché introduces multiple transaction levels (see Nested Rollbacks), which make it possible to roll back part of a transaction without losing all work completed to that point. Caché SQL now offers the following standard SQL commands that take advantage of this ability:
SAVEPOINT <savepointName> — establishes a savepoint within a transaction.
ROLLBACK TO SAVEPOINT — rolls back to the most recent savepoint.
ROLLBACK TO SAVEPOINT <savepointName> — rolls back to the specified savepoint.
COMMIT — commits only the current sub-transaction when $TLEVEL > 1.
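As a sketch (the Accounts table is hypothetical), a savepoint allows a partial rollback within one transaction:

```sql
START TRANSACTION
INSERT INTO Accounts (Name) VALUES ('Alice')
SAVEPOINT sp1
INSERT INTO Accounts (Name) VALUES ('Bob')
ROLLBACK TO SAVEPOINT sp1   -- undoes only the second INSERT
COMMIT                      -- the row for 'Alice' is committed
```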
For more detailed information see SAVEPOINT in the Caché SQL Reference.
CREATE TABLE: New IDENTITY Keyword
Caché SQL now supports the ability to define a column with a system-generated numeric value in a CREATE TABLE statement. An IDENTITY column is an exact non-negative integer column whose values are system-generated, and may not be assigned by the user in either INSERT or UPDATE statements. It may, however, be viewed using SELECT *. The syntax is:
CREATE TABLE <tablename> ( [ other-table-elements , ] <columnname> [ <datatype> ] IDENTITY [ UNIQUE | NULL | NOT NULL | DEFAULT [(]<default-spec>[)] | [COLLATE] <sqlcollation> | %DESCRIPTION <literal> ] [ , other-table-elements ] )
An IDENTITY column is always data type INTEGER with unique non-null values. You can specify a datatype and constraints, but these are ignored by Caché.
This syntax is consistent with Microsoft SQL Server and Sybase syntax.
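For example (the table and column names are hypothetical):

```sql
CREATE TABLE Orders (
    CustName VARCHAR(50) NOT NULL,
    OrderNum IDENTITY
)
```

OrderNum receives system-generated integer values; attempting to assign it in an INSERT or UPDATE statement is not permitted, but the column can be read with SELECT.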
For more detailed information, see CREATE TABLE in the Caché SQL Reference.
DROP VIEW: New CASCADE Keyword
Caché SQL now supports the ability to cascade the deletion of a view to also delete any view that references that view. The new keywords are CASCADE and RESTRICT. The RESTRICT keyword is the default and is the same as prior DROP VIEW behavior.
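For example (the view names are hypothetical):

```sql
CREATE VIEW V1 AS SELECT Name FROM Sample.Person
CREATE VIEW V2 AS SELECT Name FROM V1
DROP VIEW V1 CASCADE   -- also drops V2, which references V1
```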
For more detailed information, see DROP VIEW in the Caché SQL Reference.
INSERT: New DEFAULT VALUES Clause
Caché SQL now supports the ability to use default field values when inserting a row into a table. The syntax is:
INSERT INTO <tablename> DEFAULT VALUES
The statement inserts a single row into the table. Each field that has a default value is assigned that value; fields without default values are set to NULL for the row.
For more detailed information, see INSERT in the Caché SQL Reference.
New RowId Counter Validation Option
A new configuration option now makes it possible to validate new system-assigned ID values. The option is activated by setting ^%SYS("dbms","validate system-assigned id") to 1. Although such validation is not normally necessary, it is possible that the ID could be invalid if the user has modified the value manually, or if objects are inserted into the table without using the object or SQL filer. Other system recovery errors could also allow this condition to exist (bad recovery of a journal file, disk failure, etc.).
When this option is enabled, the table compiler generates a uniqueness check on insert for the ID value. If validation fails, SQLCODE=-119 is returned to the caller and a message is written to the console log. After writing the message to the Console.log file and before returning from the filer, the user-defined routine ^%ZOIDERROR is called. It is important to review the console log when this error is reported.
When this error is reported, it will be necessary to bring the ID counter back into sync with the data. Each failure will cause the system ID counter to be incremented, so it is possible that the problem will correct itself over time. At the point the error is reported it is not necessarily true that the counter is wrong, since the data itself may be incorrect. It is the responsibility of the user to determine how the counter became invalid.
New Query Optimizer Plan Verification
Regression tests based on TestSQLScript now have an easy way to verify query plan stability. Defining the class parameter SHOWPLAN=1 in %UnitTest.TestSQLScript will cause the query optimizer plan to be written to an output file.
JDBC 3.0 Support
Caché 5.1 supports JDK 1.4 and JDBC 3.0. All required features and most optional features are supported.
GRANT and REVOKE Command Changes
Due to the extensive improvements to Caché security at 5.1, the SQL GRANT and REVOKE commands no longer support the following syntactical forms:
GRANT ACCESS ON namespace
GRANT %THRESHOLD number
The %GRANT_ANY_PRIVILEGE, %CREATE_USER, %ALTER_USER, %DROP_USER, %CREATE_ROLE, %GRANT_ANY_ROLE, and %DROP_ANY_ROLE privileges
The GRANT and REVOKE commands support the following additional options:
Granting a role to a role, creating a hierarchy of roles
The EXECUTE object privilege
The granting of object privileges to stored procedures, as well as tables and views
The use of the asterisk (*) to grant EXECUTE object privileges to all stored procedures
For more detailed information, see GRANT and REVOKE in the Caché SQL Reference.
CREATE USER Command Changes
At 5.1, issuing a CREATE USER does not automatically assign any roles or privileges to the user, regardless of the privileges held by the creator. Privileges and roles must be assigned to a new user using the GRANT command.
For more detailed information, see CREATE USER in the Caché SQL Reference.
Subquery Flattening
In many cases the SQL engine will now attempt to “flatten” certain types of SQL queries. That is, a query will be internally converted into an equivalent form that does not contain a subquery. In many cases, it is easier for the SQL optimizer to recognize this equivalent form, and a better execution plan is generated.
Enhanced Locking Behavior for Foreign Key References
Locking behavior during table filing has been changed in the following ways:
During SQL DELETE, for every foreign key reference a long-term shared lock will be acquired on the row in the referenced table. This row will be locked until the end of the transaction. This ensures that the referenced row is not changed before a potential rollback of the SQL DELETE.
During SQL INSERT, for every foreign key reference a long-term shared lock will be acquired on the referenced row in the referenced table. This row will be locked until the end of the transaction. This ensures that the referenced row is not changed between the checking of the referential integrity and the end of the INSERT's transaction.
During SQL UPDATE, for every foreign key reference which has a field value being updated, a long-term shared lock will be acquired on the old referenced row in the referenced table. This row will be locked until the end of the transaction. This ensures that the referenced row is not changed before a potential rollback of the SQL UPDATE.
During SQL UPDATE, for every foreign key reference that is being changed, a long-term shared lock will be acquired on the new referenced row in the referenced table. This row will be locked until the end of the transaction. This ensures that the referenced row is not changed between the checking of the referential integrity and the end of the UPDATE's transaction.
READONLY Tables and Fields
Prior to this version of Caché, trying to INSERT, UPDATE, or DELETE into a ReadOnly table would not result in an error until the statement was executed. In this version, an SQLCODE=-115 error will be raised during compilation.
When a property is defined as ReadOnly, the field in the corresponding SQL table is also now defined as ReadOnly. READONLY fields may only be defined via an initialexpression or SQL Compute code; they may never be explicitly inserted or updated via SQL statements. Any attempt to INSERT or UPDATE a value for the field (even a NULL value) will result in an SQLCODE=-138 error ("Cannot INSERT/UPDATE a value for a ReadOnly field").
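For example, assuming a hypothetical table MyApp.Invoice whose CreatedDate field is projected from a ReadOnly property, the following statement fails at compile time with SQLCODE=-138:

```sql
-- CreatedDate is ReadOnly (its value comes from an
-- initialexpression or SQL Compute code)
UPDATE MyApp.Invoice
SET CreatedDate = '2005-01-01'
WHERE InvoiceID = 1
-- SQLCODE = -138: Cannot INSERT/UPDATE a value for a ReadOnly field
```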
SQLCODE Changes
The following SQLCODE error codes have been added for 5.1:
-129: This error is raised when you attempt to set a Caché Locale setting to an invalid value. See SET OPTION in the Caché SQL Reference for further details.
SQLCODE = -129: Illegal value for SET OPTION locale property
-138: This error is raised when you attempt to compile an INSERT or UPDATE that references a read-only field. See INSERT in the Caché SQL Reference for further details.
SQLCODE = -138: Cannot INSERT/UPDATE a value for a ReadOnly field
-142: This error is raised when the CREATE VIEW command contains a mismatch between the number of columns in the view definition and number of columns in the query. See CREATE VIEW in the Caché SQL Reference for further details.
SQLCODE = -142: Cardinality mismatch between the View-Column-list and View Query's SELECT clause
-308: This error is raised when you attempt to define more than one IDENTITY field for a table. See CREATE TABLE in the Caché SQL Reference for further details.
SQLCODE = -308 Identity column already defined for this table
-316: This error is raised when a Foreign key references a non-existent column.
SQLCODE = -316 Foreign key references non-existent key/column collection
-321: This error is raised when you attempt to drop a view when another view references that view. See DROP VIEW in the Caché SQL Reference for further details.
SQLCODE = -321 Cannot DROP view - One or more views reference this view
-356 and -357: These two errors may be raised by an attempt to use a user-defined SQL function.
SQLCODE = -356: SQL Function (function Stored Procedure) is not defined to return a value
SQLCODE = -357: SQL Function (function Stored Procedure) is not defined as a function procedure
-375: This error is raised when you attempt to roll back to a savepoint that was either never established or has already been rolled back.
SQLCODE = -375 Cannot ROLLBACK to unestablished savepoint
-417: This error is raised when login fails. Usually this is due to username and password checking failure. It can also occur if the username is not privileged.
SQLCODE = -417 Cache Security Error
-431: This error is raised when you attempt to pass a literal as a stored procedure parameter when the underlying argument type is an object type.
SQLCODE = -431 Stored procedure parameter type mismatch
-459: This error is raised when you try to connect using Kerberos and security authentication fails. Possible reasons include: the Kerberos security executable cconnect.dll is missing or fails to load; your connection is rejected because of the Kerberos credentials you supplied.
SQLCODE = -459 Kerberos authentication failure
The following obsolete SQLCODE values have been removed:
SQLCODE -340, -341, -342, -343, -344, -345, -346, -347
For a complete list of SQLCODE values, refer to the “SQLCODE Values and Error Messages” chapter of the Caché Error Reference.
Support for %%CLASSNAMEQ and %%TABLENAME
Caché SQL now supports {%%CLASSNAMEQ} and {%%TABLENAME} references in the SQL-specific COS code of class definitions, in the following locations:
SQL Computed field code
SQL Trigger code
%CacheSQLStorage conditional map condition expression
{%%CLASSNAMEQ} (not case-sensitive) will translate to the quoted string for the name of the class which projected the SQL table definition.
{%%TABLENAME} (not case-sensitive) will translate to the quoted string for the qualified name of the table.
For example, assume the following trigger in the class User.Person:
Trigger AfterInsert1 [ Event = INSERT, Order = 1, Time = AFTER ]
{
    Set ^Audit("table",{%%TABLENAME},$j,"AFTER INSERT TRIGGER")=1
    Set ^Audit("class",{%%CLASSNAMEQ},$j,"AFTER INSERT TRIGGER")=1
}
If User.Employee extends User.Person, the following SQL trigger code will be generated as an AFTER INSERT trigger in the SQLUSER.EMPLOYEE table:
Set ^Audit("table","SQLUser.Employee",$j,"AFTER INSERT TRIGGER")=1
Set ^Audit("class","User.Employee",$j,"AFTER INSERT TRIGGER")=1
CREATE BITMAP INDEX Support for Oracle Import Compatibility
When loading an Oracle SQL script file through $SYSTEM.SQL.DDLImport() or $SYSTEM.SQL.Oracle(), Caché SQL now recognizes the CREATE BITMAP INDEX statement.
Extended Support for Milliseconds
Caché SQL now supports fractional seconds in all date/time functions. The DATEADD, DATEDIFF, DATENAME, and DATEPART functions now support a datepart of "ms" or "milliseconds". The ODBC Scalar functions {fn TIMESTAMPADD()} and {fn TIMESTAMPDIFF()} now support the SQL_TSI_FRAC_SECOND parameter.
See DATEPART in the Caché SQL Reference for more detailed information.
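As a sketch (the timestamp literals are hypothetical), a millisecond difference can be computed in either form:

```sql
-- Difference of 250 milliseconds between two timestamps
SELECT DATEDIFF('ms',
    '2005-01-01 10:00:00.000', '2005-01-01 10:00:00.250')

-- Equivalent ODBC scalar function form
SELECT {fn TIMESTAMPDIFF(SQL_TSI_FRAC_SECOND,
    '2005-01-01 10:00:00.000', '2005-01-01 10:00:00.250')}
```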
Date and Time Function Enhancements
The SQL Scalar functions TO_DATE and TO_CHAR now accept %Library.TimeStamp logical values as input. In addition, the following format codes have been added for support of TimeStamp values:
HH – hour of day (1-12)
HH12 – hour of day (1-12)
HH24 – hour of day (0-23)
MI – minute (0-59)
SS – second (0-59)
SSSSS – seconds past midnight (0-86399)
AM – meridian indicator
PM – meridian indicator
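For example, the new format codes can be combined to render a timestamp value (a sketch only; the rendered output depends on the current date and time):

```sql
-- 24-hour clock
SELECT TO_CHAR(CURRENT_TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS')

-- 12-hour clock with meridian indicator
SELECT TO_CHAR(CURRENT_TIMESTAMP, 'HH12:MI:SS PM')
```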
There is a new configuration setting for the default format value for the TO_DATE() function. The default format is still "DD MON YYYY", but it can be changed using the following commands:
Do $SYSTEM.SQL.SetToDateDefaultFormat(<value>)
or
Do SetToDateDefaultFormat^%apiSQL(<value>)
For example:
Do $SYSTEM.SQL.SetToDateDefaultFormat("YYYY-MM-DD HH24:MI:SS")
The current setting for the TO_DATE() default format can be displayed with:
Do CurrentSettings^%apiSQL
or
Do $SYSTEM.SQL.CurrentSettings()
The following CAST and CONVERT operations are now supported for %FilemanDate and %FilemanTimestamp:
CAST (<%FilemanDate value> AS CHAR)
CAST (<%FilemanDate value> as DATE)
CAST (<%FilemanDate value> as TIMESTAMP)
CAST (<%FilemanDate value> as VARCHAR)
{fn CONVERT(<%FilemanDate value>, SQL_DATE)}
{fn CONVERT(<%FilemanDate value>, SQL_TIMESTAMP)}
{fn CONVERT(<%FilemanDate value>, SQL_VARCHAR)}
CAST (<%FilemanTimeStamp value> AS CHAR)
CAST (<%FilemanTimeStamp value> as DATE)
CAST (<%FilemanTimeStamp value> as TIME)
CAST (<%FilemanTimeStamp value> as TIMESTAMP)
CAST (<%FilemanTimeStamp value> as VARCHAR)
{fn CONVERT(<%FilemanTimeStamp value>, SQL_DATE)}
{fn CONVERT(<%FilemanTimeStamp value>, SQL_TIME)}
{fn CONVERT(<%FilemanTimeStamp value>, SQL_TIMESTAMP)}
{fn CONVERT(<%FilemanTimeStamp value>, SQL_VARCHAR)}
Connectivity Improvements
New Caché 5.1 connectivity features and enhancements:
New ECP Cluster Support
New SNMP Support
New LDAP Client
New Mac OS X server support
New ECP Cluster Support
Enterprise Cache Protocol is now supported in shared disk cluster configurations with OpenVMS and Tru64 UNIX®.
Differences between ECP cluster and failover cluster:
Faster failover
Active shared disk(s)
No network reconfiguration
Cluster members can be rolled in and out for repair, upgrade, and maintenance
All cluster members are live
Features
An ECP cluster server provides higher availability.
Locks and transactions are preserved during failover.
Only the cluster master serves the ECP clients.
The cluster members can be used for other applications.
InterSystems strongly recommends the use of ECP for clustered systems. ECP represents a significant advance over predecessor networking approaches such as DCP. Customers currently using DCP for communications among members of a cluster will see improvements in performance, reliability, availability, and error recovery by converting to ECP.
New SNMP Support
To enable monitoring of Caché by a variety of systems management tools and frameworks, support for the Simple Network Management Protocol (SNMP) has been added. The %SYSTEM.MonitorTools.SNMP class allows for control of SNMP agents and functions. This class contains methods to start and stop the Caché SNMP agent, as well as the CreateMIB() method which generates a custom MIB file based on an application description in the Monitor Framework.
For details, see Using SNMP to Monitor Caché in the Caché Monitoring Guide.
New LDAP Client
Programmatic access to LDAP (Lightweight Directory Access Protocol) servers has been added. See the %Net.LDAP.Client.Session class documentation for details.
New Mac OS X server support
This version of Caché now installs and executes natively on Mac OS X 10.3. The installation kit is a standard ".dmg" distribution produced by PackageMaker.
Support has been added for Mac OS X as a server plus the following client components:
ODBC
JDBC
Objects
CSP Gateway for Apache
A native Objective-C binding is also available.
Caché 5.1 Upgrade Checklist
The purpose of this chapter is to highlight those features of Caché 5.1 that, because of their difference in this version, affect the administration, operation, or development activities of existing systems.
Caché version 5.1 is a significant improvement in functionality and security over its predecessors. In making this advance, InterSystems' goal was to provide a compatible evolutionary path forward whenever possible. However, many of the features, such as the System Management Portal, replace functions in previous releases with new mechanisms. Furthermore, the addition of the new security features required a partial redesign and reorganization of the underlying system. These changes introduced incompatibilities with previous versions of Caché.
Other InterSystems documents describe the features of Caché 5.1 in more depth and breadth. For example,
The Getting Started With Caché section of the documentation Home page provides the Release Notes for this release, information on Installing Caché, and a list of the target Supported Platforms.
The Caché System Administration section contains information on Administering Caché including using the new System Management Portal, and also Administering Caché Advanced Security.
A new section called Caché System References provides two new books. One describes the format of the Caché Parameter File. A second, the Caché Advanced Configuration Settings Reference, explains the parameters in the Home, Configuration, Advanced Settings page. It also shows where those settings were found in the Caché version 5.0 user interface.
Administrators
This section contains information of interest to those who are familiar with administering prior versions of Caché and wish to learn what is new or different in this area for version 5.1. The items listed here are brief descriptions. In most cases, more complete descriptions are available elsewhere in the documentation.
New License Keys Required
Caché version 5.1 introduces new capabilities and a new key format. The license servers from prior releases and the license server for Caché 5.1 do not recognize each other's key formats. Existing users MUST obtain new licenses from InterSystems in order to run Caché version 5.1. Please contact your local sales representative to obtain the new keys corresponding to your existing license or to discuss new licensing options.
If a site wishes to run a 5.1 installation on a system where 5.0.x instances will run concurrently, each system must obtain the correct license key from its respective license server. This is done by individually setting the license server port on each system. Caché 5.0.x systems default the license server port to 4001; the version 5.1 system(s) should use a different port number for accessing their license server.
Multiple Caché instances that share a key must all be upgraded to 5.1 together.
Recompilation After Upgrade
As noted in the Release Notes, and elsewhere in this document, the changes made to Caché for this version are extensive and pervasive.
All user application classes must be recompiled after upgrading to version 5.1, and all user routines that contain embedded SQL statements must be recompiled as well.
Failure to recompile after upgrade may result in unexplained failures during application execution, and possible data loss.
An advantage of the new browser-based System Management Portal is that (with few exceptions) it is no longer a requirement that any Caché component be installed on the system you use to manage an installation. Remote management of systems over the web, subject to access control established for the site, is now possible and easy. Cross-release compatibility issues are eliminated because both the data and its formatting information come directly from the system being managed.
This new interface subsumes the functions previously distributed among Explorer, SQL Manager, Configuration Manager, and Control Panel functions of the Windows Caché Cube. Because it combines these functions, operators and some developers will also use the portal to accomplish their tasks as well.
The version 5.1 management portal cannot be used to manage earlier versions of Caché. The opposite is also true; the management functions of earlier versions cannot be used to manage Caché configurations running version 5.1.
More information on the System Management Portal can be found in the System Administrator documentation.
Portal and Application Name Conflicts
In Caché 5.1, the instance name chosen at installation time is used to construct the name of the CSP application that runs the System Management Portal. For example, assume a Caché system had a CSP application called “/appserver”. If the installation name chosen for this system was “APPSERVER”, the upgrade procedures would construct a CSP application to run the System Management Portal called “/appserver/csp/sys”. After the upgrade, this would effectively block access to the previously available CSP application.
When upgrading from an earlier version, care must be taken to ensure that there is not already a CSP application with the same name as the installation (ignoring differences in case).
Security Advisor
To assist system managers in securing a Caché system, version 5.1 includes a Security Advisor. This utility examines the security settings of the running system and recommends changes where the current settings may leave the system exposed.
Defaults for Security Settings at Installation Time
The use of Caché security begins with the installation (or upgrade) of version 5.1. During Caché installation, the person doing the installation is prompted to select one of three initial security settings:
Minimal
Normal
Locked Down
The selection determines the initial configuration settings for Caché services as follows:
The following table shows which services are enabled by default:
Emergency Access
As a contingency, Caché provides a special emergency access mode that can be used under certain dire circumstances, such as severe damage to security configuration information or “unavailability” of any users with the %Admin_Manage:U or %Admin_Security:U privileges. (Although Caché attempts to prevent this situation by ensuring that there is always at least one user with the %All role, that user may not be available or may have forgotten his or her password.)
When Caché is running in emergency access mode, only a single user (the “emergency user”) is permitted. Caché is started in emergency access mode through a command-line switch, which passes a user name and password for the emergency user. This user name does not have to be previously defined within Caché. (In fact, even if the user name is defined in Caché, the emergency user is conceptually a different user.) The emergency user name and password are only valid for a single invocation of emergency mode.
The user starting Caché in emergency access mode must have operating-system level system management privileges. (On Windows systems, the user must be a member of the Administrators group. On UNIX® systems, the user must be root. On OpenVMS systems, the user must have a system UIC.) Caché authenticates this user by checking his or her operating system level characteristics. When Caché is started in emergency access mode:
The emergency user is the only permitted user. Any attempt by another user to log in will fail.
The emergency user automatically has the %All role.
The Console and CSP services are enabled. All other services are disabled. This does not affect the enabled or disabled status of services in the configuration; only the current “in memory” information about services is affected.
Caché password authentication is used and unauthenticated access is forbidden for all services.
If possible, auditing is enabled for all events. Caché startup proceeds even if this is not possible.
Configuration File Changes
One consequence of the new capabilities added in version 5.1 is that major changes have been made to the form and content of the configuration file that provides many of the initialization values when Caché starts up. This section does not detail every change in the configuration file, only the more apparent ones.
As of this version, the parameter file MUST be named cache.cpf.
New Parameter Reference
If a site controls operation by editing the .cpf, each of these controls must be examined to make sure they are still applicable. Administrators are strongly urged to review the Caché Parameter File Reference book for the current organization, and the valid parameters and their allowed settings.
Startup Check for Configuration Changes
Caché configuration information is stored outside of Caché and (by design) can be modified when Caché is not running. Therefore, a special protective option has been added in this version. Rather than protecting the contents of the configuration file, Caché controls the ability to start the system or modify the configuration of a running system. This protection is enabled by turning Configuration Security on. (Of course, the configuration file can and should be protected outside of Caché by strictly limiting, at the operating system level, the ability of users to modify that file.)
During startup, if Caché detects that the configuration (.cpf) file has changed since the last time the Caché instance was started, the user running startup will be asked to enter a username and password. This data will be used to verify that the user is authorized to start Caché with altered configuration parameters. If the user is successfully authenticated, and the user has %Admin_Manage:Use, Caché will be started with the new configuration parameters.
Otherwise, Caché will start with the values of the last known configuration. When this happens, the configuration file supplied will be copied to cache.cpf_rejected (overwriting any file by that name), and the configuration parameters actually used to start Caché will be written to the file specified as the configuration file.
Low-level Management Interfaces
In addition to the new Management Portal, system administrators can exercise low-level control over the security of Caché systems through character-oriented utilities. The available routines are described in The CHUI-Based Management Routines.
Web Server Changes
The evolution of security capability in Caché 5.1 has affected the way hypertext information is served to browsers.
The Caché Private Web Server is Now Apache
Each installation of Caché 5.1 also installs an instance of Apache as its private web server. Sites may change the configuration to use a different one, but Caché will always install its private server regardless.
Default Port Changes
By default, Caché now chooses as the superserver port number the first unused port at or after 1972. Applications that depended on the value 1972 may fail to contact the superserver. In addition, Caché now uses a separate port number for its private web server; the value chosen is the first unused port at or after 8972. Despite both ending in “72”, the two port numbers are not correlated; the superserver port might be 1973 while the web server port is 8977.
During a custom installation, the user may explicitly set both the superserver port and the private web server port numbers.
CSPGateway Changes
Caché version 5.1 has changed the CSPGateway implementation on two of the platforms.
Support Removed for OpenVMS
The CSP Gateway is no longer supported on OpenVMS. The material referencing it has been removed from the OpenVMS installation script and the code is no longer part of the product for OpenVMS.
Note:
Any existing CSP Gateway files will be removed during an upgrade from a previous version.
WebServer Parameter for OpenVMS
The OpenVMS .cpf files now contain the WebServer parameter again in the format
WebServer=[ON|OFF],<server>:<port>
If no such parameter is specified in the .cpf file, the defaults chosen are “OFF”, “”, and the first available port on or after 8972, respectively.
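For example, to explicitly enable the private web server on port 8972 of the local host, the .cpf file could contain a line such as the following (the server name and port shown are hypothetical values, following the format above):

```
WebServer=ON,localhost:8972
```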
Default Username and Password
When the CSP Gateway connects to Caché, the first message it sends is a login message that can contain a username and hashed password defined in the CSP Gateway management pages. If the server-side $username="" (which means the CSP Gateway did not connect with Kerberos), it will use the default username and password to attempt to log in to this service.
If this fails, the CSP Gateway halts after recording an entry in the audit log (if auditing is enabled). If it succeeds, $username will not be null and the CSP server is allowed to call other functions, such as those that display CSP pages. While $username="", the CSP server will only call the login method.
Caché Permissions on UNIX®/Linux are Those of the Installer
In prior versions of Caché, it was necessary to install Caché under the username, root. This was not a good practice because Caché does not require root privilege for normal operation. In version 5.1, this requirement has been eliminated.
When Caché starts, it now sets its userid to that of the user that installed it and its groupid to “cacheusr”. One consequence of this is that devices which are inaccessible to that installer may also be inaccessible to Caché. If you wish this installation to have access to devices available only to root, Caché must be installed by root.
For example, on many UNIX® systems root owns the Ethernet devices. A version of Caché installed by a non-root user would not (by default) be able to communicate using the Ethernet.
The “cacheusr” groupid must have write permission for the files needed to operate Caché, for example, the files in the Journal directory and the directory itself. Failure to meet this requirement will result in erratic operation possibly leading to system failure.
This warning extends to any file and/or directory used by Caché created by an administrator outside of Caché. For example, if the journal files are assigned to a separate disk to improve throughput and system reliability, it is not enough that they be created by a user with, for example, “root” access. They must be writable by the “cacheusr” group.
New Password Hash Function
Caché has always stored its password data for users as the result of applying a hash function to the characters of the password. When you attempt to login, a hash of the characters you enter is calculated and it is compared to the hashed value stored internally. If the two calculated values match, the passwords are assumed to match.
This version of Caché uses a computationally stronger function to compute the password hash; one that produces different hash values than before and is harder to “crack”.
Because the hash function is one-way, there is no way to compute the actual user's password starting from the hash value. This means there is no way to compute the new password hash value starting with the old hash value. Therefore, all userids on the system must be given new password hash values when upgrading to version 5.1.
User data exported by prior versions of Caché (for example, that produced by $SYSTEM.SQL.Export(...)) contains the password hash values used for that version. Care must be taken when importing such data into version 5.1. All such users will need new passwords assigned. Users whose ids are imported will have their password hashes reset, preventing them from logging in until a new password is assigned under 5.1.
An exception to this is users who have null (no assigned) passwords. These users will be processed automatically.
After an upgrade, Caché 5.1 will assist in the computation of the new password hash values. When a user attempts to login for the first time, the password hash will be calculated using the previous algorithm. This value will be compared against the stored value. If they match, the password hash will be recalculated using the new algorithm and this new value will be stored in the database. Thus the conversion of passwords will be made as existing users login for the first time.
Read-Only Databases
Representation
To improve consistency in handling read-only databases, the way they are identified to Caché has changed in this version. Caché now recognizes read-only databases that are marked as such in the properties for that database, or that are declared read-only when mounted via ^MOUNT.
Write Daemon Access Determines Database Mode
In Caché version 5.1, when a database is mounted the write daemon checks whether it has sufficient permission to update the database. If it does not, it will force the database to be mounted in read-only mode.
Cluster Changes
Improved Cluster Join Logic
Caché 5.1 has been enhanced so that a cluster member is no longer allowed to fully join a cluster while the trio of switches 10, 13, and 14 (which disables database access) is set on the cluster master. In prior releases, the new system would be allowed to cluster mount and read/write from databases. If the cluster was in the process of performing a backup, this could cause problems.
Now the new cluster member will detect the switches which have been set cluster-wide and set those switches locally while it starts up. This may mean that the Caché startup process on the member attempting to join the cluster will hang if a switch is set which blocks global access. A console log message will be generated if this occurs.
This version can interoperate with older versions, but the new functionality will not be present unless both the master and the system joining the cluster have been upgraded to version 5.1.
Journaling Changes
As a result of experience with prior versions of Caché, journaling in version 5.1 has been significantly improved as one part of the underlying support for highly available systems. The goal of the changes in 5.1 has been to make the behavior of journaling safer and more consistent; for example, journaling is now a property of databases rather than individual globals. The operator interface has been changed to incorporate this new approach, and Caché 5.1 provides for common management and auditing of changes to journal settings via the System Management Portal.
Journaling is Now a Database Attribute
Caché 5.1 sets the specification of the journal state on a per-database basis. This greatly improves the reliability of the system because it addresses inconsistencies (after crash recovery) that could arise in earlier versions due to changes to globals that may or may not be journaled, and that may or may not be involved in transactions, explicitly or implicitly, via %Save or SQL UPDATE statements. The changes in more detail are:
The journaling state is a property of databases, not individual globals. All globals within a database are journaled or not, depending on this setting. There are only two states - YES and NO.
The default setting of the journal state for new databases is YES. When a database from an earlier version is first mounted, the value is set to YES, regardless of the previous setting for "default for new globals" and regardless of the settings of individual globals within that database.
In a transaction, Caché writes changes into the journal, regardless of the settings of the databases in which the affected globals reside. Rollback will work as before.
Nothing mapped to CACHETEMP is ever journaled; its journaling behavior is unchanged.
Journal restore respects the current settings of the database. Nothing is stored in the journal about the state of the database when the journal is written. The state of the database at time of restore determines what action is taken. This means that changes to databases with JOURNAL=YES will be durable, but changes to other databases may not be. Caché will ensure physical consistency, but not necessarily application consistency if transactions involved databases with JOURNAL=NO.
Databases mounted on a cluster have their globals journaled or not, depending on the database setting.
The setting for a database can be changed on a running system. If this is done, the administrator will be warned of the potential consequences and the change in state audited.
In recognition of this change, Caché 5.1 also:
changed the default purging settings to somewhat mitigate the diskspace consequences of this change;
removed the routine, ^%JOURNAL, which in prior releases enabled or disabled journaling on a per-global basis;
modified ^%GCREATE and ^%SYS.GCREATE so they no longer ask whether to journal globals.
One aspect of the new journal design is that restores are performed only to databases marked to be journaled at the time of a journal restore. The ^JRNRESTO program now checks the database journal state the first time it encounters each database and records the journal state. Journal records for databases not so marked are skipped during restore.
If no databases are marked as being journaled, the ^JRNRESTO program will ask if the operator wishes to terminate the restore. Administrators can change the database status to journaled and restart ^JRNRESTO if desired.
Journaling Z-Globals
In prior releases, the JournalZGlob parameter was used to indicate whether z/Z* globals should be excluded from journaling (even inside transactions). In version 5.1, to make journaling more robust, it has been removed. When upgrading an earlier Caché system with the flag set, the existing individual z/Z* globals in every defined database are given the journal attribute of that database. (For CACHETEMP, the journal attribute defaults to off).
If a site needs to exclude new z/Z* globals from journaling, the administrator will have to map z/Z* globals to a database with the journal attribute turned off.
Since the globals in a namespace may be mapped into different databases, some may be journaled and some not. It is the journal setting for the database to which the global is mapped that determines how the global will be treated.
To replicate the behavior in prior versions, when the flag to exclude journaling of z/Z* globals is set, the z/Z* globals in every namespace must be mapped to the CACHETEMP database. The difference between CACHETEMP and a database with the journal attribute set to off is that nothing in CACHETEMP, not even transactional updates, gets journaled.
Changes to Journal Purge
Prior to this release, the default behavior was to purge the journal files after 7 days. In Caché version 5.1, the default has been changed.
You may have Caché purge journal files after either X days or Y successful backups have occurred. The normal recommended settings are:
1 <= X <= 100
1 <= Y <= 10
If both X and Y are > 0, files will be purged after X days or Y successful backups, whichever comes first. If either X or Y is zero, purging is done on the basis of the remaining criteria. Setting X and Y to zero prevents purging entirely.
Journal files are now purged after 2 consecutive successful Caché backups.
Those customers who do not use Caché backup facilities should consider scheduling the appropriate journal maintenance using, for example, the Caché Task Manager to manage the amount of journal information retained.
Shadowing Changes
This version of Caché has significantly improved facilities for system shadowing. It is better at shadowing to and from clusters. The latency reporting on both the sender and shadow systems has been improved, and there is better control over suspending/resuming and starting/stopping shadowing.
Journal Applied Transactions to Shadow Removed
The setting to choose whether or not to journal applied transactions on shadow databases no longer exists. In earlier Caché releases the default behavior was to not journal updates to shadow databases; but you could enable journaling on the shadow by selecting the Journal Applied Transactions check box. This maintained a separate journal file showing the activity on the shadow database.
With the option removed in Caché 5.1, journaling of the shadow databases is determined by the global journal state of the databases themselves. After an upgrade, journaling on the shadow databases is enabled. To mitigate the increased demand on the storage capacity of the shadow, Caché purges the destination shadow copy of a source journal file once it is dejournaled and does not contain any transactions open on the shadow.
InterSystems recommends you journal all databases that are the destination of shadowing. However, if you do decide not to journal the destination shadow databases, you must also disable journaling on the CACHESYS database. Caché stores the journal address and journal file name of the journal record last processed by shadowing in the ^SYS global in the CACHESYS database. This serves as a checkpoint from which shadowing will resume if shadowing fails.
On the shadow destination, if you journal the CACHESYS database, but not the destination shadow databases, there is the possibility that if the shadow crashes and restarts, the checkpoint in CACHESYS could be recovered to a point in time which is later in the journal stream than the last record committed to the shadow databases.
Compatible Mode (Record Mode) Shadowing Removed
There is no longer an option to choose the method of journal transmission. All shadowing uses the fast mode, apply changes method.
Prior to Caché version 5.1, there were four methods of journal transmission for shadowing:
Fast mode, apply changes
Fast mode, don’t apply changes
Compatible mode, apply changes
Compatible mode, scan changes
Compatible mode (previously called record mode) was most often used for compatibility among heterogeneous platforms, and sometimes to support different Caché releases. Fast mode (previously called block mode) now supports heterogeneous platforms since it automatically performs any necessary byte reordering for different endian systems.
If you wish to support multiple production servers running different Caché releases from a single shadow, then InterSystems recommends that you set up multiple Caché instances on the shadow server, one for each Caché version, and use fast mode rather than compatible mode on older versions. This provides the best performance and reliability.
A Caché upgrade converts existing compatible mode shadows to fast mode. The converted fast mode shadows may or may not work with the sources, depending on the source configuration. Caché 5.1 automatically performs endian conversion for fast mode shadowing.
Changes in Shadowing Defaults
In Caché 5.1, the following databases are not shadowed by default:
CACHEAUDIT
CACHELIB
DOCBOOK
SAMPLES
You can click Add next to the Database mapping for this shadow list on the Home, Configuration, Shadow Server Settings, Edit Shadow Server page of the System Management Portal if you wish to shadow them.
CACHETEMP
Caché 5.1 handles CACHETEMP differently from its predecessors. The changes are a result of security requirements and customer requests.
Expansion and Size Characteristics Preserved
Caché 5.1 preserves the expansion and size settings of CACHETEMP across restarts. After a restart, the size reported by Caché will be the smaller of 240MB and the allocated size of the file. If the size of the file allocated by the operating system is larger than 240MB, Caché initializes the map blocks to describe only the first 240MB and expands the map later as needed. It does not, however, shrink the physical size of the file.
Collation
After a restart, the collation of CACHETEMP is reset to Caché Standard regardless of its prior setting. Sites that want a different collation should add code to the “SYSTEM” callback of the ^%ZSTART routine to set the desired collation.
Conditions for CACHETEMP Deletion
Under the following circumstances:
CACHETEMP is a 2KB database
CACHETEMP is mounted when STU (system startup) runs
Caché attempts to delete and recreate CACHETEMP. The second condition occurs, for example, if Caché is started in “nostu” mode and the operator later runs STU manually.
When CACHETEMP is recreated, the initial size is set to 1MB, the expansion factor to 0 (indicating growth by the larger of 10% or 10MB), and the maximum size to 0 (no limit).
ShutDownTimeout Parameter Now Enforced
Beginning with version 5.1, the ShutDownTimeout parameter is enforced on all platforms. Shutdown will not spend more than the value of ShutDownTimeout (less about 10 seconds) in user-defined shutdown routines. Once the limit is reached, shutdown proceeds to completion (including final forced cleanup) even if user-defined shutdown routines have not completed.
Collation for Locales Now on by Default
When a national collation is available in a locale (for example: Spanish1, Portuguese2, German2), it is now set as the default collation for that locale instead of "Cache Standard". When a locale has more than one collation (such as German1 and German2), the one with the greatest suffix is selected.
Locales that do not have national collations (English, Hebrew, and so on) continue using "Cache Standard" as their default collation. The changes are summarized in the following tables:
This affects only the creation of local arrays, because new globals have their collation taken from the database's default (unless explicitly created by %GCREATE).
Accessing the Online Documentation
On Windows, when trying to access the documentation via the Cube, the userid assigned for the attempt is “UnknownUser”. When installing Caché with a security level of Normal or Locked Down, this username only has %DB_DocBook:R permission.
This is insufficient to read the Caché class reference documentation. Access to the class reference documentation requires that the user attempting to read the class documentation be authenticated.
Running program examples in the online documentation requires %DB_SAMPLES:W. If UnknownUser lacks this permission, then the button labeled Run It will not appear in any of the executable program examples.
Defining one or more roles that have the necessary permissions and assigning them to UnknownUser will restore the prior behavior. Alternatively, you may edit the application definition of “/csp/docbook” to add the role(s) whenever it is run.
Upgrading from a Prior Release
This section covers issues related to upgrading an existing Caché system to version 5.1.
No Upgrade from Field Test Versions
Customers running on any Caché 4.1.x or 5.0.x version may upgrade to Caché 5.1 at installation.
InterSystems does not support an upgrade from any of the versions used for field test of Caché 5.1. This includes the version of Caché 5.1 distributed to selected customers at DevCon 2005.
Use of DDP
If you were running DDP on an earlier version of Caché, you must edit your configuration file to allocate the proper number of network slots. They are no longer calculated by default.
In the [Net] section of the configuration file, set the value of maxdsmport to the number of ethernet cards used for DDP.
In the [config] section of the file, change the fourth parameter of LegacyNetConn from 0 to 1.
DDP will not start if these changes are not present.
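For example, on a system with two ethernet cards used for DDP, the edited sections of the configuration file might look like the following sketch. The first three LegacyNetConn values are placeholders standing in for whatever your existing configuration contains; only the fourth parameter, shown here as 1, is the change described above:

```
[Net]
maxdsmport=2

[config]
LegacyNetConn=0,0,0,1
```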
$SYSTEM.OBJ.UpgradeAll()
The change in the compiler version, the reorganization of globals and routines, and the changes in Caché classes may generate a bewildering swarm of errors if $SYSTEM.OBJ.UpgradeAll() is invoked without prior planning and preparation.
Synthesized Role: %LegacyUnknownUser
In order to mimic prior behavior, during upgrade to version 5.1 a default role is created. This role is named %LegacyUnknownUser. The idea is that after upgrade from 5.0 and earlier versions of Caché where advanced security was not implemented, it will be common for no users to be defined. In this case, all users will be logged in as UnknownUser. If UnknownUser has no access privileges, the customer's operations will not be accessible to existing users until the administrators configure the system.
The %LegacyUnknownUser role is granted Read/Write access to the resource created for each customer-defined database that exists at the time of the upgrade installation, as well as the resources shown below:
Name: %LegacyUnknownUser
Description: Legacy Unidentified Users
Roles granted by this role: <none>
Resources owned by this role:
    Resource              Permission
    --------              ----------
    %System_CallOut       U
    %Service_SQL          U
    %Service_Object       U
    %Service_Console      U
    %Service_CallIn       U
    %Service_CacheDirect  U
    %Development          U
    %DB_USER              RW
    %DB_SAMPLES           RW
    %DB_%DEFAULT          RW
Users owning this role: <none>
In addition, Use access to the following service resources is granted subject to the indicated conditions:
After the administrator has configured the system appropriately, the UnknownUser user can either be disabled, or the resources assigned to the role %LegacyUnknownUser can be gradually reduced via ^SECURITY or the System Management Portal as additional aspects of the application environment are brought under the control of Caché Advanced Security. This reduction of the privileges of the %LegacyUnknownUser role, or its removal, is a manual step in the transition; it is not done automatically by Caché.
%LegacyCD and %LegacySQL
These roles are applied automatically to existing users only during upgrades to ensure that those users continue to have the same level of access in 5.1 that they had previously. New users are not required to have these roles.
Allow %-Global Access as in Previous Versions
The value of Security.System.PercentGlobalWrite is set to true for upgrades. (For new installations it is set to false.) This makes access to %-globals consistent with earlier versions. The value can be changed via the ^SECURITY routine.
All Members of a Cluster Must Run the Same Caché Version
All members of an ECP cluster must be running the same version of Caché. If you upgrade one, you must upgrade all the rest.
Removal of CSP Gateway On OpenVMS Upgrade
The CSPGateway is no longer supported on OpenVMS. The material referencing it has been removed from the OpenVMS installation script and the code is no longer part of the product for OpenVMS.
Any existing CSP Gateway files will be removed during an upgrade from a previous version.
Removal of Global & Package Mappings
During an upgrade from an earlier version, the following mappings to globals will be removed:
all globals whose names begin with “^odd”
^rINDEXCLASS, ^rOBJ, ^ROUTINE
^mdd
Packages whose names start with “%Z”, “%z”, “Z” and “z” will have their definitions retained (“^oddDEF”), but will have their compiled class information removed. The “^odd” globals will be recreated when the classes are recompiled via $SYSTEM.OBJ.Upgrade().
In addition, ALL class mappings that were defined in the configuration file (.cpf) will be discarded.
If access to these globals is required, the administrator must manually construct the required mapping; they cannot be automatically converted. This will be the case if, for example, the system had defined global mappings so that multiple namespaces could share the same class definitions.
Trusted Application Definitions Removed
The Caché 5.1 security model does not support trusted applications. Whenever a user connects (or reconnects), he or she is prompted for a password if password authentication is turned on. If this is not what is desired, the administrator should turn authentication off for the Caché Direct service.
Any 5.0.x trusted application definitions are thrown away during a 5.1 upgrade.
Windows Network Server Username and Password Change
In previous versions, Caché would install and start its services on Windows with a default username of “_SYSTEM”, and a password of “_sys” unless configured otherwise. The values for the username and password were set via the Configuration Manager using the Advanced tab and the “Input/Output” option.
In version 5.1, during upgrade, these values (or the default values if none were set) will be stored in the Windows service definition created for the upgrade.
Administrators may change the values using the Windows Management Interface. From the Windows Start menu, navigate to Programs, then Administrative Tools, and finally Component Services. Select the appropriate Caché service and from the Action menu choose Properties. The username and password are accessed via the Log On tab.
Global Replication Removed
Global replication is no longer supported in Caché 5.1. If global replication is found, the upgrade process removes its use from the system and notes this fact in the console log. The capability formerly provided by replication can now be achieved through shadowing. Please consult the Shadowing chapter of the Caché Data Integrity Guide for details.
Java and Kerberos
Before using Java on Caché with Kerberos, you must edit certain configuration files, among them krb5.conf. Parameters in this file are set by running
java com.intersys.jgss.Configure
and responding to the prompts. On Windows, Solaris, and Linux, if krb5.conf is not found in the default location, Configure will search for it in the following locations:
Windows
c:\winnt\krb5.ini
Solaris
/etc/krb5/krb5.conf
Linux
/etc/krb5.conf
to obtain any template file information to be used when the file is created in the default location.
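For reference, a minimal krb5.conf created in the default location typically has the following shape; the realm and KDC host names below are site-specific placeholders, not values supplied by Caché:

```
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
```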
Recommendations
InterSystems has several recommendations to administrators setting up a Caché 5.1 system.
Enterprise Cache Protocol (ECP)
InterSystems strongly recommends the use of ECP for distributed systems. ECP represents a significant advance over predecessor networking approaches such as DCP. Customers currently using DCP will see improvements in performance, reliability, availability, and error recovery by converting to ECP.
Change Default Password Setting
When installing Caché with a security setting of “Minimal”, the default passwords of all users created are set to “SYS”. InterSystems strongly suggests that the passwords of these users be changed to different values as soon as possible so that, even though the security level of the system is low, control over access is established from the start of operation.
CACHELIB as Read-Only Database
In Caché version 5.1, for security reasons InterSystems has made CACHELIB a read-only database. This is a change from the previous practice. InterSystems strongly recommends that sites maintain CACHELIB as a read-only database. Those site- and application-defined globals, classes, tables and so on which might previously have been placed in CACHELIB should be moved elsewhere.
Limitations
The following limitations apply to Caché 5.1 or specific operating systems running Caché:
Maintaining Information Coherence Across Systems
On clustered Caché systems, it is highly desirable to maintain the same list of users, roles, applications, and so on across all the systems of the cluster. The initial release of Caché version 5.1 does not provide facilities for propagating changes in one system to others. This must be addressed by assuring that administrators manually make the same changes on each system.
Maintaining Coherence with Kerberos
Sites using Kerberos as the authentication mechanism must manually propagate changes to the list of valid users held by Kerberos to Caché. Future versions of Caché may provide mechanisms for doing this automatically but the initial release does not.
Consolidating Audit Logs
If a site wishes to run audit reports, or other analyses of their devising, on the audit data from several systems (for example, all the systems of a cluster), the individual audit logs must be consolidated manually.
Write Image Journal Files
The format of the WIJ (write-image journal) file has changed for Caché 5.1 to improve recovery in clustered systems. This has two consequences:
If an unresolved failure remains on any system to be upgraded, be sure to restart Caché and do a recovery before you upgrade to the new version.
If you do not, the journal purge utility will not recognize journal files in the old format and will complain that there are corrupt journal files. To avoid this error, move the old journal files to a backup directory using the appropriate operating system commands before beginning the upgrade.
All members of an ECP configuration must be running the same version of Caché. If you upgrade one, you must upgrade the rest as well.
If you need to restore an older journal file to Caché 5.1, you can use the JConvert and %JRead routines.
Shadowing
A Caché upgrade converts existing compatible mode shadows to fast mode. The converted fast mode shadows may or may not work with the sources, depending on the source configuration. Caché 5.1 automatically performs endian conversion for fast mode shadowing.
Compatible mode (previously called record mode) is not supported.
Shadowing in Caché 5.1 is not compatible with any prior release of Caché. Both the source server and destination shadow must be running on Caché 5.1.
Clusters
Caché 4.1, Caché 5.0, and Caché 5.1 clusters can coexist on the same hardware, but they cannot cluster together. If these clusters need to communicate with each other they need to use DCP, or preferably, ECP.
Management of Non-% Variables in Embedded SQL
Any non-% variables used by embedded SQL statements within an ObjectScript procedure need to be added to the procedure's public variable list and be NEWed within the procedure. While this is still a limitation in Caché, a change has been made to the macro preprocessor to make it easier to manually add these variables to the public and NEW lists.
When the Home, Configuration, Advanced Settings SQL setting Retain SQL Statement as Comments in .INT Code is “Yes”, the non-% variables used by the SQL statement are listed in the comment text along with the SQL statement. This listing makes it easier to identify the variables and paste them into the public list and NEW list of the MAC code.
Unicode in Global Names
Support for Unicode in global names is not yet fully operational and should be avoided.
Caché RPM kits
The Caché RPM kit installs into /usr/cachekit/5.1. Your /usr directory may be mounted read-only or may contain little free space, so you may want to change the location.
Database Interoperability
Databases created on earlier versions of Caché can be mounted on version 5.1 and, once they are upgraded, can be used there. But this process is not reversible. Upgraded databases cannot be moved back to earlier versions.
InterSystems advises users who wish to move data bi-directionally between systems running versions 5.0 and 5.1 to use the %GOF/%GIF routines.
Upgrade Only Processes Local Databases
In Caché 5.1, $SYSTEM.OBJ.UpgradeAll() scans only local databases for upgrading. It ignores remotely mounted databases; these must be upgraded by running UpgradeAll() on the remote systems.
Caché Versions for ECP
Because of the upgrade to the compiler, systems in an ECP configuration must either be:
all on version 5.1, or
the data server must be on version 5.1 and the application servers can be either on version 5.0 or version 5.1.
It is possible to run version 5.1 application servers with version 5.0 data servers, but this requires that the routines used by the application servers be mapped to databases local to the application servers. If you believe you need to do this, please contact the InterSystems Worldwide Response Center (WRC) for assistance.
Moving Applications from 5.1 to Earlier Versions
Porting an application from Caché 5.1 to an earlier release is problematic and depends on what features in this version the application depends on (compiler behavior, the new streams implementation, changes in exported XML for applications, and new class representations, to name a few). If you believe you need to do this, please contact the InterSystems Worldwide Response Center (WRC) for assistance.
ODBC and JDBC Compatibility
Due to a change in protocol, the ODBC and JDBC clients supplied with Caché 5.1 are compatible only with Caché servers from version 5.0.13 and later. Attempts to use connections to Caché servers in versions before 5.0.13 will result in errors.
RoseLink
RoseLink currently attempts to access Caché using only the standard SQL username and password. Therefore, it will not be supported on systems whose default installation security level is Normal or Locked Down. This restriction will be lifted in the next maintenance version of Caché 5.1.
In addition, the user must have %Development:Use permission in order to access classes for its use.
Dreamweaver
The connection that Dreamweaver MX uses to access Caché is not available with this version. This restriction will be lifted in a future maintenance release of Caché 5.1.
Perl and Python Language Bindings
The Perl and Python language bindings are supported only on 32-bit versions of Windows.
C++ Language Binding
The C++ language binding is supported only on the Windows platform using Visual Studio 7.1.
Platform-Specific Items
This appendix holds items of interest to users of specific platforms.
Windows
Help Format Change
The usage information for the commands css.exe and ccontrol.exe is now provided in HTML. Executing either command with a first argument of “help” will now invoke the default browser to display the help file.
New Japanese Locale
There is now a new Japanese locale for Windows (jpww). It is like the standard jpnw locale except that the default for Telnet Terminals is UTF8 instead of SJIS. This new locale is now installed by default for new Japanese installs on Windows. Upgrades to Caché maintain the previous locale.
Changes to %path% Environment Variable
To improve system security and the uniformity of locating Caché components, version 5.1 adds the file system directory name
\Program Files\Common Files\InterSystems\Cache
to the %path% system environment variable on Windows systems. It is added to the HKEY_LOCAL_MACHINE hive so that it applies to all users of this machine.
Visual Studio 7.1
On Microsoft Windows, Caché is now compiled with Visual Studio 7.1. User applications communicating with Caché (for example, those using the CALLIN or CALLOUT interfaces) must be upgraded to this version of Visual Studio.
Windows XP Professional
Mapped drives are not supported on Windows XP Professional — Due to security improvements in Windows XP Professional, Microsoft discourages users from using mapped drives; using them results in different behavior than in the past.
We recommend that XP Professional users follow these procedures to access mapped drives from the GUI tools or from telnet sessions:
For remote mapped drives, enter the user name and password in your configuration as before. In addition, edit the ZSTU startup routine and add this line for each drive you have mapped.
Set x=$zf(-1,"net use z: \\someshare")
For virtually mapped drives, add this line for each drive mapped with the subst command:
Set x=$zf(-1,"subst q: c:\somedir\someotherdir")
You cannot add more mappings after startup.
The above procedure is meant for development situations where only one user is expected to log on to Windows, and the user name entered in your configuration is the same user. In any other situation, such as a Terminal Server environment, the results are unpredictable.
The following notice from Microsoft refers to this problem:
[Redirected Drives on Windows XP Professional: On Windows XP Professional, drive letters are not global to the system. Each logon session receives its own set of drive letters A-Z. Thus, redirected drives cannot be shared between processes running under different user accounts. Moreover, a service (or any process running within its own logon session) cannot access the drive letters established within a different logon session.]
Another approach to using the mapped drives is to start Caché like this:
\cachesys\bin\ccontrol start configname
With this approach you do not have to add anything to the ZSTU routine, and you do not have to enter a user name and password. In addition, drives you map, or paths you assign with the subst command, after startup are available. The limitation of this approach is that Caché runs only as long as the user who starts Caché stays logged on.
Windows Enterprise Server 2003
The version of Internet Explorer distributed with this version of Windows has every security-related configuration option disabled. As a result, various pages displayed by the System Management Portal are affected; for example, information generated by scripts will not materialize because the scripts will not be run. The proper behavior can be restored by changing the Internet security level setting from “High” to “Medium”.
The first time a user accesses the System Management Portal on a particular system, Internet Explorer will prompt to ask if the site should be added to the “trusted” list. Answering in the affirmative will also change the Internet security level for that site to Medium.
Mac
Support for Xalan, an XSLT (Extensible Stylesheet Language Transformation) processor, is only available on OS X 10.4.
OpenVMS
ECO Required for Access Using Kerberos on Itanium
Applications attempting to access OpenVMS servers that use Kerberos authentication must install the patch HP-I64VMS-TCPIP-V0505-11ECO1-1, available at the ftp site. The ECO is for TCP/IP, not the operating system itself. Without this patch, the server will often transmit erroneous response packets back to clients using the C++ binding, ODBC, JDBC, and Studio.
Note: This ECO applies only to OpenVMS on Itanium hardware. It is not needed for OpenVMS on Alpha.
CSP Gateway Removed
Support for the CSP Gateway on OpenVMS has been removed.
Password Masking Limitation in GSS
When attempting to access Caché via JDBC on systems using Kerberos, if no credentials for the user are found, and the identity of the user is not supplied by the caller, JDBC will ask Kerberos to authenticate the caller. When this happens, due to the characteristics of terminal IO on OpenVMS, echoing of the password will neither be suppressed nor masked.
Using the SOAP Client Wizard from Studio
An attempt to start the SOAP wizard from Studio will fail unless the application, /isc/studio/template, is set up to point to the current web server used for OpenVMS.
Caché Processes and /SYSTEM
All processes that are part of Caché run with UIC=[1,4]. Therefore, all Caché-related logical devices used by these processes, for example, those mentioned in the .cpf file, must be defined in the system table (defined with /SYSTEM) to avoid access errors.
WebServerName and WebServerPort
In version 5.1, Studio is unable to access the Class Documentation unless both the WebServerName and the WebServerPort are defined. These are found in the Miscellaneous category of the System Management Portal page Home, Configuration, Advanced Settings.
AIX®
IBM Java Runtime and Kerberos
On systems using the IBM Java runtime environment (AIX® 32-bit, 64-bit, and SUSE Linux Enterprise Server), use of kinit is not compatible with Kerberos principal name and password prompting, or with the principal name and password API. To use kinit, change the file
${java.home}/lib/security/iscLogin.conf
so that the module, com.sun.security.jgss.initiate, has the option
useDefaultCcache=true
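In JAAS login-configuration syntax, the edited entry might look like the following sketch. The login module class name shown is the one commonly used by the IBM JGSS provider; verify it against the iscLogin.conf shipped with your runtime before editing:

```
com.sun.security.jgss.initiate {
    com.ibm.security.auth.module.Krb5LoginModule required
        useDefaultCcache=true;
};
```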
With this runtime, only the Java routine at
${java.home}/bin/kinit
works and not the native Kerberos routine at
/usr/krb5/bin/kinit
NFS-Mounted Filesystems And Exclusivity
Caché uses O_EXCL (exclusive access) when creating Caché database (.dat and .ext) and lock (.lck) files. However, it is a known limitation that NFS does not guarantee this exclusivity.
Linux
On Netapp NFS-mounted filesystems under Linux, a file created by a suid:sgid executable has different, non-UNIX® standard, owners than on standard filesystems. The sgid bit on the executable fails to take effect, while the suid bit succeeds in setting the owner of the file to the owner of the executable. This behavior has been observed only on Netapp systems.
Red Hat 3.0 / 4.0 And IBM WebSphere MQ
If you plan to use the MQ interface, IBM WebSphere MQ version 6.0 is required when running Caché 5.1 on Red Hat version 3.0 and 4.0.
Linux / AMD64
When upgrading Caché from a Linux implementation on an Intel processor to Linux on AMD64, a new Caché license is required. As noted on the InterSystems Web site:
“Because of the significant differences between 32-bit and 64-bit CPUs, InterSystems delivers different Caché software for them and, consequently, they are different platforms for licensing purposes. As a result, Platform Specific Caché licenses cannot be transferred from one to the other. (Normal trade-in policies apply.) Platform Independent licenses can, of course, be transferred at no charge.”
SUSE Linux Enterprise Server
IBM Java Runtime And Kerberos
On systems using the IBM Java runtime environment (AIX® 32-bit, 64-bit, and SUSE Linux Enterprise Server), a different Kerberos kinit is needed. See the description in the AIX® section.
Terminal With Kerberos Authentication
SUSE Linux Enterprise Server 9 running on AMD64 handles packages slightly differently from other versions of Linux. As a result, using Terminal on this system with Kerberos authentication may produce errors if the installer has not chosen to install the developer packages. In this case, the following packages must be installed to ensure proper operation:
heimdal-devel
heimdal-devel-32bit
The packages are most easily located by using the search facility to locate all the packages whose name begins with “heimdal”. In most installations (except “full”), the list will show the two packages named above as unselected. Select them and continue with the installation.
UNIX®
Users may install Caché on UNIX® so that cachesys is not the default directory. The directory path is assumed to be in an environment variable, CACHESYS. The ccontrol and csession commands use this environment variable. If it is defined at installation time, Caché is installed in that directory. If it is not defined, Caché is installed in the standard UNIX® location, /usr/local/etc/cachesys.
Both ccontrol and csession expect to find the registry in the same directory where their executable was found. For security reasons, ccontrol verifies that the protection on the registry is root as the owner and writable only by root.
Tru64 UNIX®
For Tru64 systems, unlike other UNIX® file systems, group ownership does not come from the group id of the creating process. Instead, the group ID of the file is set to the group ID of its parent directory.
However, when the vfs subsystem attribute “sys_v_mode” is set to 1, the group ID of the file is set either to the group ID of the process or, if the S_ISGID bit of the parent directory is set, to the group ID of the parent directory. If the group ID of the new file does not match the effective group of the process or one of its supplementary group IDs, the S_ISGID bit of the new file is cleared.
In general, this presents no problems, since the group ID of all directories created by Caché utilities is properly set to the correct group owner. But there are circumstances that can cause problems. For example, if an administrator uses ^DATABASE to create a database in a nonexistent directory, ^DATABASE creates the directory but does not adjust the group ID of the newly created directory, which is inherited from the parent directory. As a result, the database, with its group ID inherited from the directory, may be inaccessible to cacheusr. Other Caché utilities (for example, journal and shadow) that create directories have the same problem.
It is recommended that System Administrators set the sys_v_mode to 1 on all file systems and directories used by Caché to ensure smooth functioning of the system. For further information, please refer to the manpages for the open(2) system call.
HP-UX
The Caché cryptographic random number generator (used, for example, to encrypt and decrypt databases) requires a source of true randomness (entropy) in order to initialize its internal state. All supported UNIX® platforms except HP-UX 11i provide the special device file, /dev/urandom, which supplies true entropy based on kernel thread timings. On HP-UX, this functionality is part of the HP-UX Strong Random Number Generator, available as a free, optional component supplied and supported by HP.
If this component is not installed, Caché uses other sources of entropy available on the system. However, these have not been analyzed for randomness, and therefore the encrypted values generated by Caché are not as strong as they could be otherwise.
Solaris
Without the patches listed below, applications running on Solaris fail to obtain an initial set of credentials when using a password. This happens, for example, when trying to access a Caché instance requiring Kerberos authentication via TERMINAL. Sites intending to use Kerberos authentication with Caché require the following patches to Solaris:
For Solaris 10, 121239–01 and 120469–03 (or greater).
For Solaris 9, 112908–22 and 112907–06 (or greater).
Developers
This section contains information of interest to those who have designed, developed and maintained applications running on prior versions of Caché. Although InterSystems placed great importance on upward compatibility in version 5.1, the increased emphasis on security resulted in the redesign and re-implementation of some core parts of Caché. These changes necessarily affect existing applications.
The items listed here are brief descriptions. In most cases, more complete descriptions are available elsewhere in the documentation.
Although the System Management Portal is intended mainly for administrators and operators, developers may occasionally need to use some of its functions. A brief summary can be found in the Administrator section of this document, and more complete information on the System Management Portal can be found in the System Administration documentation.
Privileged Operation
Caché has always had the concept of “privileged” operations. In Caché 5.1, this concept has been more clearly defined, strengthened and made more granular. Commands, routines, functions, methods and so on that are privileged must meet one of two criteria before Caché will allow them to proceed:
They must be invoked by an unmodified routine that is loaded from the CACHESYS database.
They are invoked by a user who holds a role granting permission to perform the operation. In most cases, privileged operations require %DB_CACHESYS:W, but certain operations may deviate from this.
If either of these conditions is true, then the requested operation will proceed.
Recompile User Applications After Upgrade
User application classes and routines containing embedded SQL statements must be recompiled after upgrading to this version of Caché, as noted in the Administrator section of this document.
CACHESYS and CACHELIB Reorganized
Any robust security implementation shares a number of characteristics with other like systems. For example:
The number of functions implementing the security module should be kept as small as possible.
These should be collected together and isolated from other system functions so they can be protected.
They must be independently verified for correct operation and benign failure.
As part of the effort to increase security in Caché, InterSystems has reviewed the low-level routines present in the CACHESYS and CACHELIB databases in light of these requirements. As a result, the contents of these databases have been reorganized. CACHESYS (the manager's database) now contains only the low-level routines necessary for system management. Everything else has been moved to CACHELIB.
The following are brief guidelines to the changes.
For System classes:
Methods in classes of the %SYS package:
These reside in the manager's database (CACHESYS) because they invoke protected system routines.
Since their names start with “%”, they are mapped into all other namespaces.
Methods of classes in the %System package:
These reside in the CACHELIB database which is mounted read-only by default. They do not invoke protected functionality, but reside there to support legacy applications.
Since their names start with “%”, they are mapped into all other namespaces.
Methods in the Sys and System packages reside where the %Sys and %System packages reside, respectively. However, because their names do not start with “%” they are visible only within those databases.
For the system functions whose name is of the form, $SYSTEM.<name>:
If the name is one associated with a method in an internally known system class, it invokes that method.
Otherwise, it attempts to invoke the %System method by that <name>. So $SYSTEM.SomeClass.ThatMethod() is equivalent to ##class(%System.SomeClass).ThatMethod().
And finally, for globals:
All globals whose names start with “%q” are mapped to CACHELIB (default read-only).
All other globals map to CACHESYS (default read-write).
The mappings can be displayed in more detail using ^%SYS.GXLINFO.
CACHELIB Is Mounted As Read-Only
As part of the reorganization of CACHESYS and CACHELIB, all of the information that is modifiable during normal operation has been collected into CACHESYS. Therefore, CACHELIB is mounted as a read-only database by default.
Access To %-Globals Is More Restrictive
By default, routines do not have write permission on %-globals that reside in other databases. In version 5.1, these rules are now consistently enforced. This can be changed via the System Management Portal at Home,Security Management,System Security Settings by changing the setting for “Enable writing to %-globals” to “Yes”.
Permissions On CNLS
The CNLS application is used to change the locale of a Caché installation. Running it now requires %Admin_Manage:U.
Authenticated Namespace And Routine Override Command Line
In prior versions, when a namespace and routine were supplied as a parameter on a command line, the process created would always use that namespace, and would override any namespace or routine specified by the security mechanisms of that version.
In version 5.1, if Caché is installed with the MINIMAL setting, csession will work as before. If the user needs to be authenticated, the namespace and routine for that user will override any namespace or routine setting supplied on the command line.
Changes To Routines
Routines And Globals Moved
In Caché 5.1, all %-routines and %-globals were reorganized as noted above.
If there are user- or site-supplied routines whose names begin with “%”, they must obey these same rules. These changes require administrative privilege because, by default, the CACHELIB database is set read-only at installation time and cannot be altered.
Unless routines added at the site need to create globals in CACHELIB during normal operation, InterSystems recommends that, after installing these routines, CACHELIB be made read-only once again.
Routines Renamed By Removing “%”
The review of Caché system functions resulted in a number of routines being designated as system management functions whose use needed to be controlled. Therefore, the following routines have been renamed by removing the “%” from their name, thus placing them within the protection of the manager's database:
BUTTONS
COLLATE
DMREPAIR, DSET
LANG*
MATH
NLS, NLSCOMP, NLSLOAD, NOREP
ROLLBACK*
SS, SSVNJOB, SSVNLOCK, SSVNROUTINE, ST
UPDATECLASS
Wcomm, Wpfiles, Wr, Wr1, Wsback, Wsdb, Wsdba, Wsmkey, Wsnls, Wsnls2
This change means that these routines must be invoked from an unmodified routine in the CACHESYS database (and not via any indirection), or else by a process that holds WRITE permission on the CACHESYS database.
Routines Renamed
To further emphasize their relationship to system functions, some routines were renamed:
Routines Eliminated
During the review, some routines were identified as duplicating functionality provided elsewhere. These were removed:
%CLI — The same functionality is available from Caché through $zf(-1). On UNIX®, OpenVMS, and Mac, command line interpretation is done via !<command>. On Windows systems, the DOS command START performs this function.
%DKIOERROR — Calls to it should be replaced with $$$ERROR or $SYSTEM.Error usage.
%GED — Use %GCHANGE and %Library.Global methods instead.
%GDOLD — This routine has been removed.
%GROWTH — The functions of this routine have been moved to the SYS.Database class.
%GTARGET — This routine has been removed.
%LM — The functions of this routine have been included in the SYS.Lock class.
%LMFCLI — The functions of this routine have been included in the $SYSTEM.License class.
%qserver — The user accessible entrypoints have been moved into $SYSTEM.SQL.
%RMAC — This routine has been removed.
%START — This routine has been removed.
%USER — This routine has been replaced by $USERNAME.
%UTIL — This is an internal routine which has been removed. Its message logging function has been converted into a system macro, LOGMSG.
Stub Routines Added
Some frequently-invoked routines were moved to CACHESYS (%DM, %LICENSE, %GD, and %SS) and were renamed. Stub routines that call the new routines were left in their place as a compatibility aid. Applications are encouraged to move to using the new names.
In adding the stub routines, the tag, SYS, has been removed from the %SYS routine.
Routines Deleted
In addition to the changes noted above, internal and obsolete routines were removed from these libraries. If you suspect that this may be affecting your application, please contact the InterSystems Worldwide Response Center (WRC) for assistance.
No Mapping For %LANG Routines
Caché 5.1 ignores any routine mappings for the %LANG* routines that are used to provide language extensions. The routines executed will always be those in the %SYS namespace (CACHELIB).
Class Changes
During the development of version 5.1, a number of changes were made to improve development accuracy and reduce ambiguity when using classes. They are collected in this section.
Classes Replaced
The following classes have been removed from the system because they have been replaced with classes providing better functionality:
New Classes
The following classes are new in this version of Caché:
%Library.AbstractResultSet
%Library.CacheCollection, %Library.CacheLiteral, %Library.CacheObject, %Library.CachePopulate, %Library.CacheString, %Library.Collate
%Library.DataType, %Library.Device
%Library.Global, %Library.GlobalEdit, %Library.GTWConnection, %Library.GTWResultSet
%Library.JavaDoc, %Library.JournalRecordType, %Library.JournalState
%Library.ObjectJournal, %Library.ObjectJournalRecord, %Library.ObjectJournalTransaction
%Library.PersistentProperty, %Library.Prompt
%Library.RemoteResultSet, %Library.RowSQLQuery
%Library.ShadowState, %Library.ShadowType, %Library.SwizzleObject
Programmers who relied on an unqualified class name resolving to the correct location may discover that the new classes added to %Library now cause naming conflicts if the application defined classes with any of these names.
Name Case Conflict In Class Compilation
In version 5.0, Caché allowed a class to define a method that has the same spelling as a method of its superclass but differs in case. For example, a method called %save(), defined in a class that inherits from %Library.Persistent, would be considered a different method from the %Save() method of the superclass.
In version 5.1, this situation produces a compilation error. For example, CSP applications that had defined methods of include() or link() will find that these are now in conflict with %CSP.Page.Include() and %CSP.Page.Link() respectively.
Ambiguity Resolution When Simple Class Names Used
If an application makes a reference to a class whose name begins with a percent sign and does not specify its package name, the class compiler looks for the class in the %Library package. Thus,
Set BaseDir = ##CLASS(%File).ManagerDirectory()
is interpreted as if it had been written
Set BaseDir = ##CLASS(%Library.File).ManagerDirectory()
Programmers who relied on the unqualified class name resolving to the correct location will discover that the new classes added to %Library may now cause naming ambiguity if the application defined classes with the same name, for example, %Utility.
Class Member Naming Checks Made More Strict
In Caché version 4.1, you were not allowed to define two class members with the same name but different case. In version 5.0, however, a bug was introduced that failed to report these errors.
In Caché version 5.1, this bug is fixed. To allow classes that previously compiled on 5.0 to continue to compile, application developers can disable this check by setting the 'Strict Checking' flag to off. This is done by executing the command:
Set ^%qCacheObjectSys("strictchecking") = 0
In order to set this flag, you will have to change the permission on CACHELIB from Read-Only to Read/Write. This is done using the Management Portal, Home, Configuration, Local Databases.
New Methods Added To %Library.Persistent
Several new methods are implemented in %Persistent and are inherited by all persistent classes. These methods are:
%LockId: acquires a lock on an instance of a class
%UnlockId: releases a previously acquired instance lock
%LockExtent: acquires a lock on every instance in an extent
%UnlockExtent: releases a previously acquired extent lock
%GetLock: attempts to acquire a lock on an instance, escalating to a lock on the extent of which the instance is a member, if necessary
More detail on these methods can be found in the class documentation for %Library.Persistent.
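As an illustrative sketch (the class name Sample.Person and the variable id are hypothetical), an instance lock might be acquired and released like this:

```objectscript
 ; Hedged sketch: lock a single instance of a persistent class.
 Set sc = ##class(Sample.Person).%LockId(id)
 If $$$ISOK(sc) {
     ; ... examine or update the locked instance here ...
     Do ##class(Sample.Person).%UnlockId(id)
 }
```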
Conflicts With User-Written %-Methods
User applications with method names that start with “%” should check to make sure that there are no conflicts with methods supplied by InterSystems in “base” classes such as %Library.Persistent. This version of Caché has significantly increased the number of such names.
IdKeys Now Have <Indexname>Exists and <Indexname>Open Methods
This version of Caché now supplies <indexname>Exists and <indexname>Open methods for IdKeys.
All persistent classes have an IdKey. If one is not explicitly defined or inherited from a superclass, an index named IdKey is generated; if an index named IdKey already exists, an integer <n> is appended to the root "IdKey" to produce IdKey<n>. This index is defined as a system-generated index.
In prior versions, the index was generated during class compilation. No inheritance resolution was applied to the generated index, and no index methods were generated.
IsValidDT Changes
Method No Longer Generated By Default
In version 5.1, user-written datatype classes that extend the %Library.DataType class no longer have the IsValidDT() method automatically generated. The previous behavior can be restored by executing
Set ^SYS("ObjectCompiler","GenerateIsValidDT") = 1
in each namespace where the older behavior is required, and then recompiling all affected routines.
Method Return Type Changed
In previous versions, the class instance method, %IsValidDT(), returned a value of type %Integer. In Caché 5.1, it more correctly returns a %Boolean.
Methods Supporting SQLCOMPUTECODE
Caché allows classes to define SQL computed properties by declaring them with the attribute, SQLCOMPUTED, and providing SQL code to compute the desired value. The value can be transient, calculated, or storable.
For computed properties a <property>Get() method is generated that invokes <property>Compute() as needed. SQLCOMPUTECODE allows for other values to be referenced during computation. These references are to SQL columns (preserved for backward compatibility) and are converted to property names during method generation.
If the SQL column references a column projected from an embedded class, then <property>Compute() will generate an extended reference to the embedded property.
Using embedded properties in SQLCOMPUTE code breaks encapsulation. One problem with breaking encapsulation is with "triggered" computed fields, that is, when SQLCOMPUTEONCHANGE is declared. Embedded property references are not supported in SQLCOMPUTEONCHANGE.
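As a hedged sketch (the class and property names are hypothetical), a computed property might be declared as follows; the {column} references in the compute code are converted to property references when the <property>Compute() method is generated:

```objectscript
Class Sample.Invoice Extends %Persistent
{

Property Quantity As %Integer;

Property UnitPrice As %Numeric;

/// Hypothetical computed property: {Quantity} and {UnitPrice} are
/// SQL column references resolved during method generation.
Property Total As %Numeric [ Calculated, SqlComputed,
    SqlComputeCode = { Set {Total} = {Quantity} * {UnitPrice} } ];

}
```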
Changes To Inheritance Processing
In previous versions, the inheritance rules for classes were not always as expected. For example, if a user created a class, User.MyClass, as a subclass of %Library.Persistent, Caché would automatically inherit the default package name of the superclass, %Library, as a #import into User.MyClass. As a consequence, if User.MyClass contained a property declared as
property A as String;
Caché would try to resolve this by looking in both the User and %Library packages. If the User package had a class, User.String, Caché would report a class-name conflict even though the user probably intended to reference User.String. The workaround was to fully qualify the property type, as in
property A as User.String;
Caché 5.1 still inherits any explicit #import settings from the superclasses, but it does not automatically add the superclass package names to the #import list. So in the example given, the property A resolves to 'User.String' without any name-conflict errors.
Caché still uses the current class's package name in resolving names; User.MyClass still uses 'User' as a #import for its own names. But this is no longer true for subclasses.
More explicitly, Caché always resolves a name in the context of the class where it was first defined, not the current class. For example, suppose User.MyClass defines a method, X(). If a class MyPackage.MyClass inherits from User.MyClass, then when it is compiled, Caché compiles the inherited X() method in MyPackage.MyClass but resolves any unqualified class names used in this method in the context of User.MyClass, because that is where X() was defined.
Stream Implementation Has Been Modified
In version 5.1, cloning a class containing a stream member works differently from earlier releases. What happens now is:
If the stream member is a serial stream, the oref of the stream is copied to the clone.
If the stream is an instance of the “older” stream implementations:
%Library.FileBinaryStream
%Library.FileCharacterStream
%Library.GlobalBinaryStream
%Library.GlobalCharacterStream
the oref of the stream is copied to the clone.
In all other cases, Caché will make a copy of the stream contents in a new stream of the same type and place the oref of the new stream into the clone.
If an application wishes to retain the oref of the original stream, it can do so with
Set ..ThatStream = oref.%ConstructClone(0)
XML Export Replaces CDL
In this version, CDL is no longer available as an export format for classes. Users should export their classes in XML instead. CDL will still be accepted as an import format for this release.
Persistent Superclasses Must Reside Outside of CACHELIB
Subclasses of persistent classes currently store some of their extent information with the extent information of their superclass. Because CACHELIB in Caché version 5.1 is now a read-only database, it is no longer possible to subclass persistent classes residing in CACHELIB by default. Attempting to do so will result in a <PROTECT> error. This is true even if the persistent classes were created locally and stored in CACHELIB.
The only exception to this is classes which are marked as SERIAL. They do not have extent information since their instances are embedded in the class that references them.
TRUNCATE Default Changed For %Library.String
Strings have, among their other parameters, settings for MAXLEN and TRUNCATE. The value of MAXLEN specifies the maximum permissible length of the string. The value of TRUNCATE specifies how to enforce the maximum length limit.
If TRUNCATE is set to true, Caché will store only the first MAXLEN characters in a variable declared as type %Library.String ignoring the rest of the string.
If TRUNCATE is set to false, an attempt to assign more than MAXLEN characters to the variable will return an error status.
In Caché version 5.1, the default value of TRUNCATE for new instances of %Library.String will be false. In previous versions it had been true. Note that this applies only to new strings created in version 5.1. Older items of type string will still have the defaults from the time they were created.
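For example (a hypothetical class), either behavior can be selected explicitly per property rather than relying on the changed default:

```objectscript
Class Sample.Note Extends %Persistent
{

/// TRUNCATE = 1: values longer than MAXLEN are silently truncated.
Property Title As %String(MAXLEN = 20, TRUNCATE = 1);

/// TRUNCATE = 0 (the new default): overlong values fail validation
/// with an error status instead of being truncated.
Property Body As %String(MAXLEN = 100, TRUNCATE = 0);

}
```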
Support For Legacy %Close() Behavior Dropped
In version 5.0, Caché changed how it handled objects that were closed. The object was destroyed upon %Close if its reference count went to 0. The OREF associated with the object would be removed once it was marked “inactive”; that is, all references to it were gone.
When this behavior was introduced, it was possible to have Caché use “legacy support” for %Close instead (the method used in versions prior to 5.0) via the call
Do $ZU(68,56,1)
In this mode, Caché decrements an object's object-level reference count upon %Close() and removes it from memory when the count reaches 0. No provision was made to prevent re-use of the OREF.
In Caché 5.1, legacy mode has been removed. Calling this function will result in a <FUNCTION> error.
%DeleteExtent() Behavior Improved
In prior versions, the %DeleteExtent() method always returned $$$OK, even if not all instances in the extent were deleted. In version 5.1, its behavior now better matches expectations; it only returns $$$OK if all instances of the extent were successfully deleted.
Method Compilation And Return Values
In previous versions, if a method was declared to return a value, the method compiler would insert a
Quit ""
if the last line of the method did not begin with a Quit command. This approach, however, hid subtle programming bugs because the method the developer wrote did not, in fact, return a value when it was supposed to.
In version 5.1, this is no longer done. The method compiler will only insert a simple
Quit
instead if the last line of the method does not contain one. Thus, invoking as a function a method that is declared to return a value but does not actually do so will result in a <COMMAND> error.
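A hypothetical example of the kind of bug this change now exposes:

```objectscript
Method Half(n As %Integer) As %Integer
{
    If n # 2 = 0 Quit n / 2
    ; Odd values fall off the end of the method. The compiler now
    ; inserts a plain Quit here (with no value), so invoking Half()
    ; as a function with an odd argument raises a <COMMAND> error.
}
```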
Required Relationship Collections Cannot Be Empty
If an application specifies that the “child” or “many” side of a relationship is required, Caché now makes sure it contains at least one element. If the relationship is empty at the time the instance is saved, Caché reports an error on %Save, for example:
ERROR #5662: Relationship child/many property 'Sample.Company::Employees (1@Sample.Company,ID=)' is required so must have at least one member
Cycle Checking For XML Exports
In Caché 5.1 XML-enabled classes check their hierarchy before export to determine if there is a cycle present. This check is on by default, but may be disabled by appending “,nocyclecheck” to the Format property of %XML.Writer.
If this check is disabled, and a cycle is present, a <FRAMESTACK> error will result.
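A hedged sketch of disabling the check (obj is assumed to hold an already XML-enabled object):

```objectscript
 ; Append ",nocyclecheck" to the writer's Format property to
 ; suppress the hierarchy cycle check before export.
 Set writer = ##class(%XML.Writer).%New()
 Set writer.Format = writer.Format_",nocyclecheck"
 Set sc = writer.RootObject(obj)
```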
Task Changes
Task Manager Hierarchy Upgraded
The Task Manager now uses the Caché class hierarchy more completely. All user tasks must now subclass the class, %SYS.Task.Definition.
Subclasses can thereby introduce additional properties which will be available during the task execution. The user interface will then interrogate the definition to request the property values from the user. For example,
Class %SYS.Task.IntegrityCheck Extends %SYS.Task.Definition
{

Property Directory As %String [ InitialExpression = {$zu(12)} ];

Property Filename As %String [ InitialExpression = "INTEGRIT.LOG" ];

ClassMethod DirectoryIsValid(Directory As %String) As %Status
{
    If '##class(%Library.File).DirectoryExists(Directory) {
        Quit $$$ERROR($$$GeneralError,"Directory does not exist")
    }
    Quit $$$OK
}

/// This method is responsible for executing the task.
/// At the scheduled time, the Task Manager
/// - creates an instance of this object,
/// - sets any property values using the stored "settings" for the task,
/// - and invokes this method to execute the task.
/// In order to execute a real task, override this method in a subclass.
Method OnTask() As %Status
{
    Do Silent^Integrity(..Directory_..Filename)
    Quit $$$OK
}

}
ContinueAfterError Property Removed
The ContinueAfterError property in the class %SYS.TaskSuper has been removed because it is too general. Tasks which depended on it must be redesigned to handle their own error conditions.
Other Task Improvements
In addition, the following improvements have been made to task management in Caché 5.1:
There is a RunLegacyTask class that provides the ExecuteCode() method for compatibility with earlier versions.
If a running task encounters any kind of error, it is suspended.
The methods, RunOnce() and RunNow(), will now make a suspended task active.
Task startup is prevented if there is no license found for this instance of Caché.
CDL Support Dropped In Following Releases In Favor Of XML
In Caché 5.1, CDL was removed as an option for exporting classes (see XML Export Replaces CDL) in favor of the industry-standard XML. Caché 2008.1 will complete this transition: CDL will no longer be available in Caché, either as an import or an export format.
Furthermore, for new platforms added in 2007.1 that are not newer versions of existing platforms, InterSystems may decline to provide support for CDL at all. For the exact details on each Caché platform, please refer to the Supported Platforms documentation.
For those customers that have program archives in CDL format, InterSystems recommends importing them into Caché 2007.1, and exporting them as XML.
SQL Differences
In the transition to version 5.1, the following changes were made in SQL that may affect existing programs.
Caché And SQL Users Unified
In prior versions, the list of valid Caché user names and the list of valid SQL user names were unrelated and were governed by different security mechanisms. In version 5.1, this is no longer true. All SQL user names are Caché user names, and vice versa. What each is permitted to do is determined by the same security mechanism.
Atomic SQL statements
In Version 5.1, the SQL statements DELETE, UPDATE, and INSERT...SELECT have been made atomic. That is, the statement either completes successfully or no rows in the table are modified.
In previous versions, it was the responsibility of the application to detect an incomplete operation and roll back the update (if desired). Now, if any row fails to update, none of the rows in the table will be updated by the statement.
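A sketch using embedded SQL (the table and column names are assumed for illustration):

```objectscript
 ; In 5.1 this UPDATE is atomic: either every matching row
 ; receives the raise, or (on failure) none of them do.
 &sql(UPDATE Sample.Employee SET Salary = Salary * 1.05
      WHERE Salary < 50000)
 If SQLCODE < 0 {
     Write "Update failed; no rows were modified",!
 }
```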
SQL Passwords Are Case-Sensitive
In Version 5.1, for security reasons, SQL uses the same password mechanisms as Caché. One consequence of this is that SQL passwords are now case-sensitive. Previously, they were not.
Table Ownership Interaction With $USERNAME
Tables created through the use of DDL now have as owner the value of $USERNAME at the time they were created. When creating a class or table by any other means, the class's OWNER keyword is not defined unless the developer explicitly defines it. When a class that projects a table is compiled and the class's OWNER keyword is NULL, the table's owner is set to _SYSTEM.
This interpretation is the same as in previous versions. What has changed is that there is no default to an OWNER of _SYSTEM when creating tables through DDL in 5.1.
Delimited Identifiers Are The Default
Caché version 5.1 installs with the value for “Support Delimited Identifiers” as true. This means that a double-quoted string (“My String”) is considered a delimited identifier within an SQL statement. Prior versions of Caché had this parameter set to false: a double-quoted string was treated as a string constant or literal string. The value of this parameter can be changed via the System Management Portal at Home,Configuration,Advanced Settings.
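For example (hypothetical table), the two kinds of quoting are now interpreted differently within embedded SQL:

```objectscript
 ; With delimited identifiers enabled (the new default), the
 ; double-quoted "Name" is an identifier naming a column, while
 ; the single-quoted 'Smith' remains a string literal.
 &sql(SELECT "Name" INTO :name
      FROM Sample.Person WHERE "Name" <> 'Smith')
```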
%msql Eliminated
This variable was used in ObjectScript to specify a valid user name for SQL access from embedded SQL. A valid user name was one that was registered in the User Table.
In Caché 5.1, the SQL username is now extracted from the $USERNAME special variable which is set when the user is authenticated.
Cached Query Changes
Cached Query Interaction With Read-Only Databases
In Caché 5.1, SQL queries that require Cached Queries will not work against read-only databases unless the ^mcq global is mapped to a database mounted as read-write. An example of this interaction is attempting to create a table in a read-only database using SQL DDL.
Cached Query Changes Are Not Journaled
In version 5.1, when cached queries are modified, the changes are no longer journaled. This prevents changes to cached queries inside a transaction from being written to the journal. Thus, shadowing will not apply cached query changes across systems.
Purging Cached Queries Is Immediate
Caché 5.1 no longer supports the concept of purging cached queries after N days, where N is a number of days defined in the configuration setting. When an application calls Purge(), it will purge all cached queries.
Cached Queries On Read-Only Databases
This version of Caché permits applications to Prepare and Execute Dynamic SQL cached queries that perform SELECTs against the tables of a read-only database.
Note, however, that any attempt by the application to purge such queries will result in a <PROTECT> error; purging cached queries requires write access to the database with which they are associated.
Cached Query Purge Does Not Propagate To Other Systems
If a package is mapped to multiple namespaces within a single system, compiling or deleting a class in that package that has created cached queries purges all the cached queries that use that class in each of those namespaces. However, if the package mappings are to different machines via ECP and the class is recompiled or deleted, the purge of the cached queries occurs only on the system where the class is compiled. Cached queries from this class on other networked systems must be purged manually.
^mcq(“init code”) Execution Order Changed
The global node, ^mcq(“init code”), can be set to contain Caché commands to be executed when a connection is made via JDBC or ODBC. In Caché 5.0.12, the order of events during the connection was:
Read the connection info from the client
Run the code in ^mcq(“init code”)
Run the login code
In Caché 5.0.13, the sequence was changed to:
Run the code in ^mcq(“init code”)
Run the login code, including reading the connection info from the client
In Caché 5.1, this sequence is now
Run the login code, including reading the connection info from the client
Run the code in ^mcq(“init code”)
Ambiguous Names In SQL Queries
In FROM Clause
In previous versions of Caché, ambiguous field names in SQL queries were assumed to be associated with the earliest table mentioned in the FROM clause. This same situation in version 5.1 reports an error. The ambiguous names must be qualified to show their origin.
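For example (assumed schema), a join that previously compiled must now qualify the shared column name:

```objectscript
 ; Both tables have a Name column; 5.1 rejects a bare Name here,
 ; so each reference is qualified with its table alias.
 &sql(SELECT E.Name, C.Name INTO :emp, :co
      FROM Sample.Employee E, Sample.Company C
      WHERE E.Company = C.%ID)
```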
In ORDER BY Clause
Caché 5.1 reports an error if the field names in an ORDER BY clause are ambiguous, for example:
SELECT TOP 20 %ID as ID, ((ID * 2) # 20) as ID FROM Sample.Company ORDER BY ID
This corrects a previously undetected bug in the query processor. Previous versions of Caché would associate the ambiguous name (ID in this case) with the last occurrence of that column name.
Privileges Required To Set Certain Options
You must have %Admin_Security:Use permission to execute the following SQL SET OPTION statements:
SET OPTION SUPPORT_DELIMITED_IDENTIFIERS = {TRUE | FALSE} SET OPTION PKEY_IS_IDKEY = {TRUE | FALSE}
If you do not, the attempt to execute the statement will return an SQLCODE value of -99, a Privilege Violation error. The reason is that these statements modify Caché configuration settings, and you must be privileged to change them.
Changes To SQL GRANT And REVOKE
The SQL GRANT and REVOKE commands no longer support the following general administrative privileges:
%GRANT_ANY_PRIVILEGE
%CREATE_USER
%ALTER_USER
%DROP_USER
%CREATE_ROLE
%GRANT_ANY_ROLE
%DROP_ANY_ROLE
In previous versions, SQL permissions were separately maintained. In version 5.1, these privileges are managed by Caché. SQL code that attempts to grant one of these privileges will instead be interpreted as granting a role with that name.
CREATE ROLE Details
In order to create a role definition through the SQL CREATE ROLE statement, the user must hold the %Admin_Security:Use privilege. If this privilege is not held by the user, an error -99 will be returned with an appropriate message.
DROP ROLE Details
In order to drop a role definition through DROP ROLE, at least one of the following must be true:
The user has the %Admin_Security:Use privilege.
The user is the owner of the role.
The user was granted the role with the admin option.
SQL %THRESHOLD Removed
The SQL %THRESHOLD feature is no longer used in this version of Caché. Code that attempts to grant a threshold, for example,
GRANT %THRESHOLD ### TO SomeUser
will now receive an error at compile time. And code such as
REVOKE %THRESHOLD FROM SomeUser
will no longer revoke the threshold. The interpretation has changed; Caché 5.1 will attempt to revoke a role called %THRESHOLD from the user.
SQL Privileges On SAMPLES Granted To User _PUBLIC
At installation time, all SQL privileges for all tables, views, and procedures in the SAMPLES namespace are granted to the user named, _PUBLIC.
SQL Catalog Info For System Tables
The SQLTables() query of the %Library.SQLCatalog class returns a list of tables and views defined in the current namespace. In earlier versions, this list included System tables. In version 5.1, the System table information will only be included if the query is executed while in the %SYS namespace.
Collated Fields May Return Results In Different Order
Due to optimizations made in Caché SQL for 5.1, the results returned by queries on collated fields may be different. For example, consider
SELECT Dept, AVG(Salary) FROM Personnel GROUP BY Dept
where Dept is collated according to %SQLUPPER, and the values entered in various rows are indiscriminate about case: some are uppercase, some lowercase, some with capital letters beginning each word, and so on. Because of the GROUP BY clause, all departments are grouped according to their value when converted to uppercase.
In prior versions of Caché, when this query's results were returned, the value of Dept returned was the actual value of one of the selected rows. In Caché 5.1, the value returned for Dept will always be represented in its collated form, in this case, %SQLUPPER.
This means that two queries such as
SELECT IdNum, Dept FROM Personnel
and
SELECT Dept, COUNT(IdNum) FROM Personnel GROUP BY Dept
may not return the expected results. The first will return the actual values stored in the Dept column and the second will return those values converted to uppercase. This may not be what is desired by the application.
The prior behavior can be restored via the %exact qualification for Dept as in
SELECT %exact Dept, COUNT(IdNum) FROM Personnel GROUP BY Dept
Control Of Time Precision
There is a new SQL configuration setting which allows the specification of the precision of the time value returned by the GETDATE(), CURRENT_TIME(), and CURRENT_TIMESTAMP() SQL scalar functions. The default time precision can be set using the new API call:
PreviousValue = $SYSTEM.SQL.SetDefaultTimePrecision(value)
where value is the precision (the number of decimal places for the millisecond portion of the time value).
The default is 0; milliseconds are not returned in the values returned by these functions. The function returns the previous (or default) time precision setting. For example: After executing
Do $SYSTEM.SQL.SetDefaultTimePrecision(3)
GETDATE() will return a value in the format: 'YYYY-MM-DD HH:MM:SS.sss'. An application can still override this default by passing a specific time precision value to GETDATE(). For example: GETDATE(5) returns: 'YYYY-MM-DD HH:MM:SS.sssss'.
This setting is used during the code-generation phase of the SQL engine. If you change the default time precision setting, you must purge any cached queries and recompile any class queries, embedded SQL routines, etc. for the new setting to take effect for that SQL statement.
While CURRENT_TIME() will return the time with a precision as specified in the default time precision setting, the LogicalToOdbc conversion of this time value does not support milliseconds. So if you have a default precision defined and CURRENT_TIME() is returned in a query through ODBC or JDBC, the milliseconds will be dropped from the value.
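Putting the calls described above together, a minimal ObjectScript sketch (the restore step is optional good practice):

```objectscript
 ; Raise the default time precision to 3 decimal places,
 ; remembering the previous setting.
 Set prev = $SYSTEM.SQL.SetDefaultTimePrecision(3)
 ; SQL code generated from now on returns times like HH:MM:SS.sss;
 ; purge cached queries and recompile for the change to apply.
 ; ...
 ; Restore the previous default when done.
 Do $SYSTEM.SQL.SetDefaultTimePrecision(prev)
```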
Owner Checked On DDL Create And Drop
When a user executes DDL to create or drop a procedure, query, or method in an existing class, Caché will not allow the action if the class has an OWNER defined and the user is not the OWNER of the class.
ODBC & JDBC Permission Checking
Caché now checks the EXECUTE privilege for Stored Procedures invoked through ODBC and JDBC. A user may not call the procedure through ODBC or JDBC if the user has not been granted EXECUTE privilege on the procedure.
When looking at the list of procedures in the System Management Portal or from an ODBC or JDBC catalog query, the user will only see procedures that the user has privilege to call.
When creating a procedure through DDL, the creator user name is set as the default owner of the procedure. (This may be changed later by editing the class definition.) When the procedure is compiled, the owner of the procedure is granted EXECUTE privilege WITH GRANT OPTION if the owner does not have the %All role. If there is no owner specified in the class definition that projects the procedure, the owner is considered to be the user compiling the class.
When a procedure is dropped, or the class that contains the procedure definition is deleted, any execute privileges that had been granted on the procedure are dropped.
^mdd Information Moved to ^oddEXTR
The ^mdd global has been removed. In prior versions, this held SQL–related information. The information has been incorporated into the ^oddEXTR structures.
You can mount an earlier database in Caché 5.1, but to use it you must upgrade it with the commands
Do $SYSTEM.OBJ.UpgradeAll()
Do $SYSTEM.OBJ.CompileAll()
After you run UpgradeAll and CompileAll, you cannot use the database in anything earlier than Caché 5.1.
Comparisons Involving NULL
This release of Caché corrects previous improper behavior in some SQL predicates involving constants and host variables whose values were NULL.
Applications relying on the previous incorrect behavior of NULL testing for constants and host variables might have to be modified. This affects predicates of the form
field <> parameter
field > parameter
field >= parameter
where the value of “parameter” may be set to the NULL value. (Predicates involving the comparison operators “<”, “<=”, and “=” behaved correctly.) This means that existing predicates such as
field <> :hostVar
will have different behavior if :hostVar is bound to "" in COS. According to SQL three-state logic, this predicate should evaluate to NULL and fail rather than treating NULL as a value and succeeding for every field value other than NULL.
The previous behavior of a specific query could be restored, if necessary, by adding specific tests for NULL. An existing query such as:
field<>:hostVar
needs to be rewritten as
(field <> :hostVar OR (:hostVar IS NULL AND field IS NOT NULL))
to produce the same results as before.
System Error Code Changes
New Error Codes
This version of Caché adds new system error codes:
Error Codes Removed
This release of Caché no longer supports the system error, <DISCONNECT>.
Globals Reorganized
Caché version 5.1 has reordered the subscripts in the globals that store user and system messages: ^CacheMsg and ^%qCacheMsg. The new order is domain, language, and id, which allows subscript mapping of the message globals by domain.
The class dictionary version number is upgraded to 20, which will result in the user being asked to run $SYSTEM.OBJ.Upgrade(), which will reorder the subscripts of existing ^CacheMsg globals. All error macros, routines, and methods will keep the original arguments in the same order; therefore, no change in user code will be needed unless an application directly addresses the message global.
ObjectScript Changes
New System Variables
$USERNAME
In version 5.1, $USERNAME contains the name by which a user is known “inside” Caché for security purposes.
For example, suppose a user can successfully login to a Windows XP system with the username, “Smith”. If that user then attempts to access the Caché online documentation via the Cube, he or she is assigned the name, “UnknownUser”, for security purposes. If UnknownUser has no access to the online documentation, the user may be asked (depending on how Caché security is configured) to authenticate himself by supplying a userid and password known to Caché.
Caché 5.1 also retains the routine, ^%USER, that prints the name of the user running the current process as it is known to the operating system.
Note:
^%USER and $USERNAME are not required to be identical. The former results from the operating system login. The latter is based on a user successfully authenticating to Caché security.
For example, suppose a user logs on Windows XP as user "Smith". However, if that user selects Documentation from the Caché Cube, the DocBook CSP application starts with a $USERNAME of "UnknownUser".
$ROLES
This variable contains a comma-separated list of all the roles held by the current user at any point during execution.
ObjectScript Compiler Upgrades
The ObjectScript compiler has been improved in version 5.1. As a result, it now generates code that cannot be run on previous releases. An attempt to do so will result in a <RECOMPILE> error.
The converse is not true. Compiled code from Cache 5.0 systems will run unchanged on version 5.1.
The following table gives the relationship between a version of Caché and a version of the ObjectScript compiler. The version number is made up of a “major” number and a “minor” number separated by a decimal point. The major and minor version of the ObjectScript compiler are returned by the ObjectScript functions $ZUTIL(40,0,68) and $ZUTIL(40,0,69), respectively.
A routine compiled on a version of Caché can be run on another version of Caché without re-compilation if
the major version of the compiler for each Caché release is the same, and
the compiler version of the system on which the routine will be run is greater than or equal to the compiler version of the system where the routine was compiled.
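The two $ZUTIL functions mentioned above can be combined to display the compiler version of the running instance, for example:

```objectscript
 ; Print the major.minor ObjectScript compiler version.
 Write "Compiler version: ", $ZUTIL(40,0,68), ".", $ZUTIL(40,0,69), !
```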
The Caché Basic compiler uses the same version number as the ObjectScript compiler and is subject to the same compatibility rules.
This change means that ECP configurations are limited to having their servers on version 5.1 with clients on either version 5.0 or 5.1. ECP servers running Caché 5.0 cannot serve code compiled under version 5.1.
Permission Requirements For Some ObjectScript Commands
Because of their effect, some commands under certain circumstances now require the user to have specific permissions for them to succeed.
$ZTRAP Change
The reference material for $ZTRAP states that when the name of the error trap starts with an asterisk (*), it indicates that Caché should invoke the error handler at the context level where the error occurred. However, if the error trap is within the context of a procedure, then Caché cannot simultaneously establish the error context and the proper context for the local variables of the procedure. In Caché 5.1, the compiler has been changed to detect this usage and report it as an error at compile time.
Applications that wish to use this feature must ensure that either
the subroutine containing the error recovery code is not a procedure, or
the error handling logic is altered so it does not need to be run in the context of the error.
$ZERROR Contains Additional Information For Some Errors
In the event an error occurs, information about it is stored in the system variable, $ZERROR. In Caché 5.1, the string stored in $ZERROR includes more information than in previous versions:
<ERRCODE>Tag^Routine+line *someinfo
A consequence of this change is that error handling routines that made assumptions about the format of the string in $ZERROR may now require redesign to work as before. For example, the following will no longer work in version 5.1:
Write "Error line: ", $PIECE($ZERROR, ">", 2)
and should be changed to be something like
Write "Error line: ", $PIECE($PIECE($ZERROR, ">", 2), " ", 1)
The following table gives a list of errors that include additional info and the format of that information. The new info is separated from the previous text by a space.
The names of variables local to routines (or methods) as well as the names of class properties and methods are indicated with an asterisk preceding the name. Global variable names are prefixed with a caret as expected.
Examples:
<UNDEFINED> *x
<UNDEFINED> *abc(2)
<UNDEFINED> ^xyz(2,"abc")
<PROPERTY DOES NOT EXIST> *SomeProp,Package.Classname
<METHOD DOES NOT EXIST> *AMethod,SamePackage.DifferentClass
<PROTECT> ^%GlobalVar,c:\cx\mgr\notyours\
$ZUTIL
Permission Changes
The authority needed to execute certain $ZUTIL functions is more specific in version 5.1 than prior releases. Unless otherwise noted in the following table, published $ZUTIL options require no special permission.
$ZUTIL(4) Change
This function is used to stop processes in Caché. In version 5.1, it has been refined to protect system jobs from interference by unprivileged applications. For example, $ZUTIL(4, <pid>) will no longer terminate a daemon. Instead, it will return an error status of 0.
Moreover, a process which is exiting and running %HALT, or any of its subroutines such as %ROLLBACK, will not respond to this function. The process issuing the RESJOB will now receive an error status of -4 meaning that the target ignored it.
Shutting down the system with “ccontrol stop” will terminate these processes as it has in the past. This can also be done in version 5.1 with the function, $ZUTIL(4, <pid>, -65).
Caution:
When $ZUTIL(4, <pid>, -65) is used for this purpose, any open transactions will not be rolled back even though the locks which protected them will be released.
$ZUTIL(49) Change
The information it returns has been extended to better describe the database:
$ZU(49, <sfn>, 3) — Version 5.1 adds several fields to the end of the previously returned information:
<SysNumber>: the remote system number, or zero if the database is local
<DirPath>: the database path on the system
<ResourceName>: the resource associated with the database
<BlockSize>: the database block size in KB
<Collation>: the database collation
<DirectoryBlock>: the DB directory block number. For ECP clients this is the local cache directory block number of the database in CacheTemp. The ECP server does not send the database directory block number to the clients.
All the values returned are separated from each other by “^”.
$ZF
Permission checking is also applied to some operations of $ZF; the following table lists the permission needed for those whose execution is restricted.
The other $ZF functions remain unprivileged operations as in previous versions of Caché.
Storage Changes
%CacheStorage Changes
New Property Method: GetStored()
A new property method is now implemented for all storable properties of persistent classes that are using default storage (%CacheStorage). It is <propertyname>GetStored(). This method accepts an object ID value (not an OID) and returns the logical value of the property as stored on disk. If <property>StorageToLogical() is used then it is applied to convert the stored value to a logical value.
This method is not valid for collections stored in a subnode structure. If an object identified by the supplied ID does not exist, an error will be reported. This method is not implemented for properties that are not stored: transient, multidimensional, or calculated properties. In these cases, Caché will report a <METHOD DOES NOT EXIST> error.
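As a sketch, for a hypothetical persistent class MyApp.Person using default storage and having a Name property, the generated method would be called like this:

```objectscript
 ; Return the logical value of Name as stored on disk for ID 12,
 ; independent of any in-memory object state.
 Set storedName = ##class(MyApp.Person).NameGetStored(12)
```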
ID Counter Check Available With Default Storage
A new API function to check some system assigned object ID counters is now available in this version. An id check expression is generated by the compiler for each class using default storage with system assigned id values. (Child classes using default storage and system assigned IDs do not have this expression.)
The function, $$CheckIDCounters^%apiOBJ(.errorlog), will examine all extents in the current namespace. If an idcheckexpression is found, it will be invoked. The id check expression will fail if the last id in the extent has a value higher than the id counter location. In this case, an entry is placed in the errorlog array, subscripted by extent name. The id check expression is also included in errorlog so that the user can repair the problem.
An application should invoke this utility with:
Set sc = $$CheckIDCounters^%apiOBJ(.errarray)
After the utility returns, sc will be set to a standard status message. If errors were found, they will be stored in the multidimensional array, errarray.
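A caller might then walk the error array by extent name, for example (a sketch based on the structure described above):

```objectscript
 Set sc = $$CheckIDCounters^%apiOBJ(.errarray)
 ; Report each extent whose ID counter is behind its highest stored ID.
 Set extent = ""
 For  Set extent = $Order(errarray(extent))  Quit:extent=""  Write "ID counter problem in extent: ", extent, !
```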
%ExistsId() Is Now Generated For Default Storage
In Caché 5.1, this generated method will now validate the id value passed. If any components of the id are null, %ExistsId() will return zero (the object does not exist). If all components are not null, then the object reference will be checked for existence.
Note:
This method is meant to be called in the class that is an extension of %Library.Persistent. Passing in an ID value that is not constructed by the class of which it is an instance is not recommended, as it breaks encapsulation.
%CacheSQLStorage Changes
Valid Row References Required For $PIECE Access Types
Applications using %CacheSQLStorage that employ two or more subscript levels of Access Type, $Piece, and have specified Data Access expressions for those subscript levels, must supply a valid Row Reference in the map definition. This version of Caché will no longer automatically generate one under these circumstances.
New Dynamic Value Substitution
Caché now supports the use of
{%%CLASSNAME}: expands to the name of the class without quotes, for example,
Do ##class({%%CLASSNAME}).MyMethod()
{%%CLASSNAMEQ}: expands to the name of the class within quotes,
Set ThisClass = {%%CLASSNAMEQ}
{%%TABLENAME}: expands to the name of the table within quotes,
Set MyTable = {%%TABLENAME}
in the following locations within a %CacheSQLStorage map definition:
Map Subscript
Data Access expression
Invalid Conditions
Next Code
Access Variable Expressions
Map Data
Retrieval Code
Java
Package Names May Not Be SQL Reserved Words
If a Caché class is to be projected to Java, and any component of the package part of the projected class name matches an SQL reserved word (ignoring case), the attempt to project the class will report an error that the metadata for the Java class is missing its column names. This error can be avoided by using package names that are not the same as any SQL reserved word.
Terminal
Terminal Is Always Unicode Now
There is now only one version of TERMINAL which runs internally using Unicode characters. By default, it starts with the ISO network encodings "Local Encoding 2" and "Network Encoding 2". In order to display characters > 255 you must change the encoding to UTF8. As a result of this enhancement, the “Pass 8–bit Characters” setting has been removed.
When there are multiple instances of Caché, some Unicode and some 8 bit, it is good practice to set the encoding explicitly for each TERMINAL instance. Then the defaults no longer apply.
Argument Changes
Terminal no longer supports command arguments /size, /pos, and /ppos. It has been enhanced to handle characters internally in Unicode and provide for the proper translations to and from servers in different locales.
Password Echo
In previous releases, when the TERMINAL prompted for a password, it did not echo any characters to the output device. As of version 5.1, when Caché password login is enabled, each character of the password will be echoed as an asterisk (*). Any application that performs a login supplying a userid and password at the TERMINAL prompt must be made aware of the echoing behavior if it does pattern matching on the characters TERMINAL transmits.
Launching From The Windows Cube
In this version of Caché, the way the Cube determines whether to use Telnet or TRM for a remote TERMINAL session has changed. Servers are placed in the list displayed under the "Remote System Access" menu according to these rules:
Remote servers are always shown as enabled.
Local servers where the IP address of the server is not that of the local host, and the server name is not a local instance name will be treated like remote servers because the server is not associated with a local instance.
Local servers (where the IP address is the local host address, and the server name is the same as a local instance name) will be grayed if the configuration is down or telnet is disabled for that instance. Otherwise the server name will be enabled.
A telnet connection will always be available when using the Remote System Access menu to launch a terminal.
When the terminal is launched from the main cube menu:
If the active preferred server is associated with that instance, a private terminal connection will be made. The title bar for the TERMINAL window will contain “TRM” followed by the process id and the name of the Caché instance. If an instance is not running, the instance will be started before launching the terminal.
Otherwise, a telnet connection will be made. The terminal's title bar will contain the hostname followed by “NTI - Cache Telnet”. The Cube never starts an instance of Caché different from the one it was installed with.
Local And Network Encodings Are Now Distinct
The local and network translation settings for Terminal are now stored separately for 8-bit and Unicode installations to permit the user to choose different behavior for Unicode and 8-bit installations which may exist on the same host. In prior versions, they had been the same.
SOAP Parameter Location Changes
This version changes the location of the parameters that control SOAP logging behavior. In previous versions these were in ^%SYS. In 5.1, they reside in the namespace from which the SOAP request is made. The parameters at issue are:
^ISCSOAP(“Log”) — set to “1” when web services requests and client responses should be logged
^ISCSOAP(“LogFile”) — the full pathname of the file where the logged information is placed
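For example, to enable logging from a given namespace (the log file path is a hypothetical example):

```objectscript
 ; Set these in the namespace from which the SOAP requests are made.
 Set ^ISCSOAP("Log") = 1
 Set ^ISCSOAP("LogFile") = "C:\logs\soap.log"
```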
Callin And Callout
On The Windows Platform
On Microsoft Windows, Caché is now compiled with Visual Studio 7.1. User applications communicating with Caché using the CALLIN or CALLOUT interfaces must be upgraded to this version of Visual Studio.
CSP Changes
CSP Grace Period Changed
As part of the licensing changes introduced with Caché version 5.1, how CSP treats sessions has changed.
If a CSP session visits more than one page and the session is ended either from a session timeout or from the application setting %session.EndSession=1, CSP will release the license immediately rather than adding on an extra grace period.
If the session is just active for a single page, CSP will hold the session open for a five-minute grace period when the session is ended.
CSP Page Timing Statistics Default To Off
In Caché 5.1, the class parameter, PAGETIMING, has been changed to have a default value of zero. In earlier versions, its default value was 1. The zero value turns off the collection of page timing statistics for all classes that inherit from it. If an application relies on the page timing statistics being collected for CSP pages, then it will need to be modified to inherit from a superclass that has PAGETIMING set to 1.
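A minimal sketch of such a superclass (the class and package names are hypothetical):

```objectscript
/// Hypothetical superclass that re-enables page timing statistics
/// for every CSP page that inherits from it.
Class MyApp.TimedPage Extends %CSP.Page
{

Parameter PAGETIMING = 1;

}
```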
Collation For Locales Now On By Default
Please refer to the discussion in the Administrator section.
Caché Dreamweaver Extension Revised
The Dreamweaver extension has been extensively revised to improve its security in version 5.1. It now uses the C++ binding exclusively. Users who wish to use this extension must have the %Development privilege. The extension will continue to work for those users without this privilege but no data from Caché will be visible in accordance with our security rules.
As a result of this change, the following must be true:
The Dreamweaver extension now requires Caché 5.1 on the server for operation.
The following directory must be present in the %path% environment variable,
\Program Files\Common Files\Intersystems
Operators
A brief summary can be found in the Administrator section of this document and more complete information on the System Management Portal can be found in the System Administration documentation.
PERFMON And %SYS.MONLBL Coordination
These utilities each use some of the same Caché data structures for gathering data. So they should not execute at the same time; otherwise there is a risk that they may compromise each other's data. In version 5.1, program checks have been added to prevent their simultaneous execution.
Backup Information Changes
Beginning with version 5.1, the location where the backup database list is maintained has been changed. ^SYS("BACKUPCHUI") is no longer used. The list is maintained by the methods, Backup.General.AddDatabaseToList() and Backup.General.RemoveDatabaseFromList(). Moreover, InterSystems strongly recommends against setting it manually since this works at cross-purposes with the methods. Use the System Management Portal or the ^BACKUP utility instead.
New Question In Journal Restore
In prior versions, the journal restore routine, ^JRNRESTO, did not properly handle the restoration of journal files written on different operating systems. The error occurred in the handling of directory names specified by those systems.
Caché version 5.1 now accounts for this by asking whether the journal was produced on a different kind of operating system. However, if a site is using a script to drive the journal restore, the script will have to be modified to provide an answer to the new question.
Cluster Member Startup Improved
The logic for a Caché instance to become a member of a cluster has been improved to avoid confusion between the systems making up the cluster. For details, please see the Administrator portion of this book.
Is jBPM-BPEL GA backward compatible to beta 3? Meghana Joglekar Jan 30, 2008 8:34 PM
Hello,
We have several processes running on jBPM BPEL Beta3. We have created a deployment system that deploys our BPEL processes to the server. Now, if we upgrade to GA, will we have to change that system, or is it backward compatible?
The beta version required bpel-application and bpel-definition files, but the new model doesn't.
Thank you,
Meghana
1. Re: Is jBPM-BPEL GA backward compatible to beta 3? Alejandro Guizar Feb 5, 2008 12:06 AM (in response to Meghana Joglekar)
The change in the deployment model was intended to reduce the "paperwork" required to deploy a process to the bare minimum. The separate web module deployment was eliminated and is now handled automatically by the engine on the server side.
The descriptors you mention still exist but have undergone changes.
bpel-definition is mostly unchanged. It was given a stable schema and made optional.
bpel-application was renamed to bpel-deployment and given a stable schema as well. As you know, in 1.1.Beta3 this descriptor was provided as part of the web module. In 1.1.GA, it is generated automatically. If you desire, you can provide your own version in your process archive, in this exact location: WEB-INF/classes/bpel-deployment.xml. In fact, the contents of the WEB-INF/ directory in your process archive are copied verbatim to the resulting web module. Any missing artifact is generated by the process deployment servlet.
2. Re: Is jBPM-BPEL GA backward compatible to beta 3? Meghana Joglekar Feb 5, 2008 3:58 PM (in response to Meghana Joglekar)
Thanks Alex.
It seems we will have to redeploy already-deployed BPEL processes. I can change bpel-application.xml to bpel-deployment.xml, but cannot do the same with bpel-definition.xml, which has its namespace changed :(
I am not expecting any workaround in this case but if there is any please advise.
Thank you,
Meghana.
3. Re: Is jBPM-BPEL GA backward compatible to beta 3? Meghana Joglekar Feb 5, 2008 4:57 PM (in response to Meghana Joglekar)
Hello Alex,
After making the necessary changes to bpel-definition.xml and bpel-deployment.xml [renamed, changed namespace, etc.] in our deployment module, when I tried to deploy the same BPEL process again on the updated jBPM database, I get the following error -
13:50:23,654/feb5.wsdl'.
13:50:23,674/Convert Temperature.wsdl'.
13:50:24,155 INFO [STDOUT] deploying process definition: file=feb5-feb5-process.zip
13:50:25,307 WARN [JDBCExceptionReporter] SQL Error: 547, SQLState: 23000
13:50:25,307 ERROR [JDBCExceptionReporter] The INSERT statement conflicted with the FOREIGN KEY constraint "FK_TO_QUERY". The conflict occurred in database "OldJbpm", table "dbo.BPEL_SCRIPT", column 'ID_'.
13:50:25,307 INFO [STDOUT] could not deploy process definition: feb5 could not insert: [org.jbpm.bpel.graph.basic.assign.ToVariable]
13:50:25,317 ERROR [STDERR] D:\Program Files\Serena\Business Mashups\Common\jboss405\server\default\work\jboss.web\localhost\mashupmgr\alf\78740638-9c44-4da2-b182-8a50503206fd\feb5\build.xml:358: ERROR: Axis fault - Transport error: 400 Error: Bad Request
13:50:25,317 ERROR [STDERR] at org.jbpm.bpel.ant.DeployProcessTask.execute(DeployProcessTask.java:125)
13:50:25,317 ERROR [STDERR] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
13:50:25,317 ERROR [STDERR] at org.apache.tools.ant.Task.perform(Task.java:364)
13:50:25,317 ERROR [STDERR] at org.apache.tools.ant.Target.execute(Target.java:341)
Any idea about what could be causing this error?
Thank you,
Meghana
4. Re: Is jBPM-BPEL GA backward compatible to beta 3? Alejandro Guizar Feb 5, 2008 5:11 PM (in response to Meghana Joglekar)
Seems like a database schema conflict. There were some changes in the database layout between 1.1.Beta3 and 1.1.GA. Does this occur on a clean or an existing database?
5. Re: Is jBPM-BPEL GA backward compatible to beta 3? Meghana Joglekar Feb 5, 2008 5:22 PM (in response to Meghana Joglekar)
I think so too. This is an upgraded database, from the older Beta3 to GA. I have changed the Hibernate config property to update the schema, but it doesn't seem to be taking care of it. My setting is -
<property name="hibernate.hbm2ddl.auto">update</property>
If the database upgrade doesn't work, it may prevent us from taking up BPEL GA :(
Thanks,
Meghana.
6. Re: Is jBPM-BPEL GA backward compatible to beta 3? Alejandro Guizar Feb 5, 2008 5:27 PM (in response to Meghana Joglekar)
Can you test it on a new database to confirm this is a schema update issue?
7. Re: Is jBPM-BPEL GA backward compatible to beta 3? Meghana Joglekar Feb 5, 2008 5:32 PM (in response to Meghana Joglekar)
Yep, tested already. It works on a new database if I use the BPEL console to deploy my process. It doesn't work on a new database with our current deployment module. I get this exception -
14:08:49,095 ERROR [STDERR] java.lang.ClassCastException: org.jbpm.bpel.graph.def.BpelProcessDefinition
14:08:49,095 ERROR [STDERR] at org.jbpm.bpel.ant.ServiceGeneratorTask.execute(ServiceGeneratorTask.java:55)
14:08:49,095 ERROR [STDERR] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
14:08:49,095 ERROR [STDERR] at org.apache.tools.ant.Task.perform(Task.java:364)
14:08:49,095 ERROR [STDERR] at org.apache.tools.ant.Target.performTasks(Target.java:369)
Any suggestion?
Thanks,
Meghana
8. Re: Is jBPM-BPEL GA backward compatible to beta 3?Alejandro Guizar Feb 5, 2008 5:59 PM (in response to Meghana Joglekar)
Try moving from org.jbpm.bpel.ant.ServiceGeneratorTask to org.jbpm.bpel.tools.ant.WsdlServiceTask (in jbpm-bpel-tools.jar). The interface is largely similar, although the tool has been upgraded to generate bpel-deployment.xml along with the WSDL documents.
Apart from that, several things had to change between the last beta and the GA version to allow the latter to be stable. It might be best to keep the current versions of your processes on your Beta3 installation and deploy new versions on a GA installation. Also, you should consider migrating to the new deployment procedure as it will put much less burden on what your own module has to do.
9. Re: Is jBPM-BPEL GA backward compatible to beta 3?Meghana Joglekar Feb 5, 2008 6:09 PM (in response to Meghana Joglekar)
Thanks Alex. That is exactly what I am planning to do i.e. moving to new deployment module. Though that changes a lot of things for us e.g how the endpoint is going to look now will be controlled by JBPM GA rather than we controlling it. It means educating everyone again, finding out the limitations, documentation changes etc...
That aside what I see in eclipse debugger is 'BpelProcessDefinition' so it puzzles me why it throws ClassCastException at that point.
Anyway, what I am more interested in is finding out if we can move whole JBPM DB from Beta3 to GA. Unfortunately, it is not possible to keep previous ones running on Beta3 and new on GA
Thanks, I sincerely appreciate your help.
Meghana
10. Re: Is jBPM-BPEL GA backward compatible to beta 3?Alejandro Guizar Feb 6, 2008 7:00 PM (in response to Meghana Joglekar)
You can still control how the web service is deployed if you like. To do so, provide your own web.xml in the process archive and set the URL pattern on the servlet mapping elements to whatever value you like.
To control the context root, either provide a jboss-web.xml descriptor in the process archive or take advantage of the following fact. Say your process archive is called myprocess.zip. The generated web module will be called myprocess.war. In absence of jboss-web.xml, the context root will be myprocess.
11. Re: Is jBPM-BPEL GA backward compatible to beta 3?Meghana Joglekar Feb 6, 2008 7:22 PM (in response to Meghana Joglekar)
Thanks Alex,
I got our deployment module working with fresh GA and also upgraded to GA engine. I had to drop 2 foreign keys, FK_TO_QUERY and FK_ALIAS_QUERY to make the upgrade work. But it is looking good so far. The problem was that Snippet.hbm.xml uses BPEL_SNIPPET as its table now and not BPEL_SCRIPT.
While we are at this discussion, can you provide me some information about how I can undeploy BPEL processes? Is there any API to do that? Do you prefer separate thread for this question?
Thanks,
Meghana
12. Re: Is jBPM-BPEL GA backward compatible to beta 3?Alejandro Guizar Feb 6, 2008 7:38 PM (in response to Meghana Joglekar)
I get it now. Due to the renaming, the old BPEL_SCRIPT table and the foreign keys referencing it still exist. When you insert a new record in BPEL_TO, the constraint is violated because the query is inserted into BPEL_SNIPPET, a. o. t. BPEL_SCRIPT. Thanks for figuring it out.
I'd prefer to discuss undeployment on a separate topic.
13. Re: Is jBPM-BPEL GA backward compatible to beta 3?Meghana Joglekar Feb 6, 2008 7:41 PM (in response to Meghana Joglekar)
Yes. That's exactly what was happening.
I will create separate topic for my other question.
Thanks,
Meghana | https://developer.jboss.org/thread/116893 | CC-MAIN-2018-39 | refinedweb | 1,591 | 60.21 |
klish 2.0.1
Features at a glance
The software is highly configurable through XML files, and provides a long list of attractive features, including namespaces or logically nested views, support for optional arguments, support for subcommands, and support for switching subcommands.
Among other features, we can mention CISCO-like config support, a configuration daemon, support for nested parameters, support for namespaces with prefix, as well as the initial view redefinition function.
Supported operating systems
This is a cross-platform software supported on several UNIX-like operating systems, including Linux, BSD (FreeBSD and OpenBSD), Solaris and QNX. It should also work well with other UNIX OSes and has been successfully tested with both 32-bit (x86) and 64-bit (x86_64) instruction set architectures.
Getting started with Klish
Installing Klish on your GNU/Linux distribution is an easy task. We strongly recommend users who don’t want to compile the latest sources of the project to first search for Klish in the main software repositories of their Linux distributions.
If you don’t find Klish in your distro’s software repos, download the latest version from Softpedia, save the bz2 archive on a location of your choice, unpack it and open a terminal emulator, where you will have to use the ‘cd’ command to navigate to the location of the extracted archive files and run the ‘./configure && make’ command to configure and compile the project.
To install it system wide, execute the ‘make install’ command in the terminal emulator as root (system administrator) or with sudo. To use it, run the ‘clish’ command or ‘clish --help’ to view its command-line options.
Reviewed by Marius Nestor, last updated on December 16th, 2014
- price:
- FREE!
- developed by:
- Serj Kalichev
- klish.libcode.org
- license type:
- BSD License
- category:
- ROOT \ System \ Networking
In a hurry? Add it to your Download Basket!
0/5
- Fix access rights checking.
- Fix debug mode.
Application descriptionklish is a free and open source command-line software implemented in C and designed from the ground up as a drop-in re... | http://linux.softpedia.com/get/Programming/Libraries/klish-60780.shtml | CC-MAIN-2014-52 | refinedweb | 341 | 52.09 |
Hello all.
A friend is shipping some items in a container so it gave me the inspiration to write a quick little program. This only took 20minutes at best so please bear that in mind.
The basic idea is if the containers size is known, and the cost is known then the program will work out the cost of an individual item providing you know its dimensions.
As I'm very much a C++ beginner would anyone be able to criticize the layout of the program or in fact its functionality? Also if my maths is wrong anywhere please point it out.
Thank you in advance!
//Basic program to calculate the shipping cost of an individual item on a container. //Program assumes you will make use of every cubic metre //Program uses pre-defined values for total container cost and size of container. #include <iostream> using namespace std; double convertToM(double nSize) { return nSize / 100; //Convert value from CM to M } double convertToCubicMetres(double Length, double Width, double Height) { return Length * Width * Height; //Calculate items size in Cubic Metres } int main() { const double containerCost = 3000.00; //cost of 40ft container const double containerSize = 67.11; //size of container in cubic metres double itemLength = 0.0; double itemWidth = 0.0; double itemHeight = 0.0; int again = 0; do { //Currently assumes input is "Number Number Number" ie: 100 100 100. cout << "Please enter the size of your item (Length x Width x Height) in CM." << endl << endl; cin >> itemLength; cin >> itemWidth; cin >> itemHeight; itemLength = convertToM(itemLength); itemWidth = convertToM(itemWidth); itemHeight = convertToM(itemHeight); double itemCubicMetres = 0.0; itemCubicMetres = convertToCubicMetres(itemLength, itemWidth, itemHeight); cout << "Your item is: " << itemCubicMetres << " Cubic Metres."; double itemCost = 0.0; itemCost = (containerCost / containerSize) * itemCubicMetres; cout << endl << endl; cout << "Your item would cost: " << (char)156 << itemCost << " to ship."; cout << endl << endl; cout << "Would you like to calculate the cost of another item? ((1) Yes (2) No)?"; cin >> again; cout << endl << endl; } while (again == 1); return 0; } | https://www.daniweb.com/programming/software-development/threads/413410/basic-application-to-calculate-shipping-costs | CC-MAIN-2022-05 | refinedweb | 325 | 65.73 |
New to .NET? It's a big bugger, ain't it?! Have no fear, below you'll find a load of links to great information available here at SitePoint to get you started!
Enjoy and hope to see you in the forums,
D
[b]<font size='3'>Books</font>
[/b] [rule=100%]Orange[/rule]
[b] Build Your Own ASP.NET 3.5 Web Site Using C# & VB, 3rd Edition
[/b] Build Your Own ASP.NET 3.5 Web Site Using C# & VB, 3rd Edition is packed full of practical examples, straightforward explanations, and ready-to-use code samples in both C# and VB. The third edition of this comprehensive step-by-step guide will help get your database-driven ASP.NET web site up and running in no time.
[b][]()[/b]
[b]<font size='3'>Articles</font>[/b]
[rule=100%]Orange[/rule]
[b][ASP.NET Graphs: Raise the Bar]()[/b]
by Pat Wong
If you use static images to present graphs and charts online, now's the time to make your efforts more dynamic. In this results-focused tutorial, Pat explains how easy .NET makes the dynamic generation and display of bar charts online.
[rule=90%]Orange[/rule]
[b][ASP.NET 2.0: A Getting Started Guide]()[/b]
by Cristian Darie and Zak Ruvalcaba.
[rule=90%]Orange[/rule]
[b][Interview with Dino Esposito, ASP.NET Expert]()[/b]
by Sara Smith.
[rule=90%]Orange[/rule]
[b][The ASP.NET Web.config File Demystified]()
[/b]by Ruben Heetebrij
The Web.config file can seem like a technical miasma - but once you delve a little deeper, with Ruben's practical guide, you'll be amazed at its flexibility and capabilities! Understand each section of the file and what control it provides with this hands-on introduction.
[rule=90%]Orange[/rule]
[b][Generating ASP.NET Images on the Fly]()
[/b]by Peter Todorov
With ASP.NET and the .NET Framework, it's easy to generate images dynamically. As Peter explains in this hands-on tutorial, .NET classes can be used to generate cool text images and thumbnails on the fly with a minimum of hassle!
[rule=90%]Orange[/rule]
[b][Use Amazon Web Services in ASP.NET]()[/b]
By Philip Miseldine
Amazon Web Services can push fresh content to your site, and help you make some cash in the process. Use ASP.NET with the Amazon Web Service to query the company's catalogue and return results to your site -- Philip's practical tutorial shows how.
[rule=90%]Orange[/rule]
[[b]Host .NET In SQL Server 2005 Express[/b]]()
By Philip Miseldine
SQL Server 2005 goes beyond T-SQL to provide the full power and breadth of functionality available in the .NET Framework. In this hands-on tutorial, Philip shows how to build stored procedures that host CLR-code using SQL Server 2005 Express.
[rule=90%]Orange[/rule]
[b][Get Started with Mono]()[/b]
By Philip Miseldine
Here's a disclaimer: I avoid Linux and am no Linux expert by any means. I shudder at the thought of black screens with a flashing cursor. I find myself moving my mouse around trying to find an icon to click or a menu to select.
[rule=90%]Orange[/rule]
[b][ASP.NET 2.0 Security]()[/b]
By Zak Ruvalcaba
I!
[rule=90%]Orange[/rule]
[b][A Single Sign-in Web Service in ASP.NET]()[/b]
By Philip Miseldine
Most of today's sites require users to undertake a registration process to allow the site owners to keep in touch with, or offer services to, those visitors. Building up a user base like this requires patience and dedication. Offer a new service or a new Website, however, and, typically, you'll need to start your user base from scratch yet again.
[rule=90%]Orange[/rule]
[b][Target Your Visitors Using GeoIP and .NET]()[/b]
By Philip Miseldine
While the Internet is a global phenomenon that connects different people in different countries, many sites fail to target their content or functionality to visitors who speak languages other than English, or who live outside countries with the largest Internet user bases, like America. But, with nations like China using the Internet more and more, English-only is no longer a smart decision.
[rule=90%]Orange[/rule]
[b][Securing Passwords in Your Database]()[/b]
By Zak Ruvalcaba
When ASP.NET developers think of Web security and authentication, three options typically come to mind: Windows authentication, forms authentication, and passport authentication.
[rule=90%]Orange[/rule]
[b][Use XML Query Definitions in .NET Applications]()[/b]
By David Clark
The Command objects in ADO.NET (such as OleDbCommand and SqlCommand) are a central aspect of the .NET database access strategy. When used properly, they provide excellent performance and security.
[rule=90%]Orange[/rule]
[b][Build Your Own ASP.NET Website Using C# And VB.NET, Chapter 4 - Web Forms and Web Controls]()[/b]
By Zak Ruvalcaba
As you might have realised from our work in the previous chapter,.
[rule=90%]Orange[/rule]
[b][Build Your Own ASP.NET Website Using C# And VB.NET, Chapter 3 - VB.NET and C# Programming Basics]()
[/b]By Zak Ruvalcaba.
[b][Why Use .NET?]()[/b]
By Philip Miseldine
.NET hasn't traditionally been the SitePoint community's framework of choice for Web development. A simple comparison of the activity within the PHP and the .NET forums highlights this fact. But with the release of SitePoint's first ASP.NET book, I thought it was about time us .NETers stood proud, and shouted from the rooftops exactly what makes this technology so good.
[rule=90%]Orange[/rule]
[b][Build Your Own ASP.NET Website Using C# And VB.NET, Chapter 2 - ASP.NET Basics]()
[/b]By.
[rule=90%]Orange[/rule]
[b][Build Your Own ASP.NET Website Using C# And VB.NET, Chapter 1 - Introduction to .NET and ASP.NET]()[/b]
By Zak Ruvalcaba.
[rule=90%]Orange[/rule]
[b][Build an RSS DataList Control in ASP.NET]()
[/b]By Philip Miseldine
RSS is finally getting the recognition it deserves. SitePoint now publishes RSS, and large news agencies like the BBC, the New York Times and CNN also publish RSS feeds. Now, developers can integrate content from a wide range of producers within their own applications, giving users a greater incentive to return, and opening up new possibilities for application development.
[rule=90%]Orange[/rule]
[b][Back to Basics: XML In .NET]()
[/b]By Philip Miseldine
One of the most exciting recent advances in computing has been XML. Designed as a stricter and simpler document format than SGML, XML is now used everywhere to produce cross-platform interoperable file formats.
[rule=90%]Orange[/rule]
[[b]Prepare Yourself for Whidbey[/b]]()
<font color='#5f5f5f'><font color='black'>by Philip Miseldine</font></font>
If you're champing at the bit to get your hands on Whidbey, the next generation of .NET, wait no more! Philip takes the Beta for a spin to find out what's on offer - from Master Pages and Themeing, to Visual Studio .NET Whidbey - in the looming product release.
[rule=90%]Orange[/rule]
[[b]Generate .NET XML Documentation With NDoc[/b]]()
<font color='black'>by Chris Cyvas</font>
<font color='black'>If project documentation is the last thing on your mind - and your priority list - you need NDoc, an XML documentation facility with both C# and VB.NET support. Chris shows how easy it is to use in his hands-on tute.</font>
[rule=90%]Orange[/rule]
[[b]Unified Data Access for .NET[/b]]()
<font color='black'>by Philip Miseldine</font>
<font color='black'>Make your .NET Web applications support countless database solutions with the help of the ADO.NET factory pattern. Philip explans the basics, before diving into a practical example that produces extensible and reusable application code.</font>
[rule=90%]Orange[/rule]
[[b]DataSet Vs. DataReader[/b]]()
<font color='black'>by Philip Miseldine</font>
Ah, the DataSet. It can be filled and ready to go in just 3 lines of code, and iterated using a nice, simple foreach loop. What could be easier? Well, with a little extra upfront work, the DataReader can increase performance drastically. Philip explains...
[rule=90%]Orange[/rule]
[[b]Paranoia: Cross Site Scripting[/b]]()
By Tiberius OsBurn
They’re watching you, you know that? They’ve been scoping you out for quite some time, looking at ways to screw with you and your site.
[rule=90%]Orange[/rule]
[Send Email Using ASP on .NET Server or WinXP Pro]()
by Andrew Wasson
CDONTS is on its way out and CDOSYS, Windows' new mail object, is where the future lies! Climb aboard the bandwagon as Andrew shows how to tweak a classic ASP mail script to work under the new regime.
[rule=90%]Orange[/rule]
[Create Your Own Guestbook In ASP.NET]()
by Sonu Kapoor
Having trouble finding an ASP.NET guestbook for your site? So was Sonu, so he developed one himself! Here, he shows exactly how it's done.
[rule=90%]Orange[/rule]
[url=""]Drilldown Datagrid Searching with ASP.NET
by Dimitrios Markatos
Allowing your users to refine their search results with .NET can be tricky - unless you know how to drilldown through the Datagird. Dimitrios shows the way, using the Dataset's Dataview Rowfilter property.
[rule=90%]Orange[/rule]
[Building an ASP.NET Shopping Cart Using DataTables]()
by Zak Ruvalcaba
Save yourself the time, cash, and hassle of buying a commercial shopping cart solution. Zak walks us through his 5-step guide to building a fully functional ecommerce shopping cart in ASP.NET!
[rule=90%]Orange[/rule]
[Build a WHOIS Lookup in ASP.NET]()
By Peter Todorov
Checking domain availability has never been easier - thanks to ASP.NET! Peter shows how to build your own WHOIS lookup in 6 easy steps.
[rule=90%]Orange[/rule]
[Sending Web eMail in ASP.NET]()
By Peter Todorov
Web email just got a whole lot easier... thanks to ASP.NET! In just 3 simple steps, Peter shows how to get your Webmail up and running.
[rule=90%]Orange[/rule]
[Interview - Doug Seven of DotNetJunkies.com]()
By Chris Canal
DotNetJunkies.com has achieved cult status across the globe. But as Chris discovers, the site represents just one aspect of co-founder Doug Seven's passion for .NET...
[rule=90%]Orange[/rule]
[Build an XML/XSLT driven Website with .NET]()
By Victor Pavlov
In this advanced tutorial, Victor wastes no time in getting your XML and XSLT- driven Website up and running. Leverage your existing skills to build your own site with .NET now!
[rule=90%]Orange[/rule]
[Threading in ASP.NET]()
BY Joshua Waller
Threading allows .NET developers to give their users the impression that mutiple tasks are executing at the same time. Josh rolls up his sleeves and shows how it's done.
See below for a much updated list of .NET resources.
This will be a living list, please post any suggested updates to this thread. If they pass muster, they will be included in the lead post.
Resources:
Last Updated 2007.03.02.
SitePoint
.NET Articles.NET BlogBuild Your Own ASP.NET Website Using C# & VB.NETBuild Your Own ASP.NET 2.0 Web Site Using C# & VB, 2nd Edition
Key Microsoft Resources (Framework)
Microsoft Developers Network (MSDN)<snip/>.NET 2.0 SDK: [x86 [url=]x64]().NET Framework 3.0.NET 1.1 SDKMSDN Library: Online <snip/>Visual Basic Developer CenterVisual C# Developer Center
Key Microsoft Resources (ASP.NET & Sql Stack)
ASP.NET 2.0 SiteMSDN ASP.NET Developer CenterMSDN Web Services Developer CenterMSDN SQL Server Developer CenterMicrosoft Patterns & Practices Team<snip/>
Express Editions DownloadsMicrosoft has released free, but limited versions of Visual Studio for use by hobbyist developers. In addition they have also released Sql Server Express for use with smaller applications. Get them from the links below. [NB: Some of the Visual Studio Express editions include Sql 2005. Downloads do require registration.]
Express Editions Home PageVisual Web Developer ExpressSql Server ExpressVisual Basic ExpressVisual C# ExpressVisual C++ ExpressVisual J# ExpressThe .NET Show: Microsoft Videos on all things .NET
Other Free Development Environments#Develop: free, open-source C# development environment.ASP.NET Web Matrix: Microsoft's free ASP.NET 1.1 development environment. Very limited, but great grandpappy of Visual Web Developer Express
Mono Project (or .NET on non-Windows platforms)The Mono project is a very well-run effort to port the .NET runtime to the *nix environment.
Mono Project HomeMono Framework DownloadsMonoDevelop: Cross-platform development environment for .NET code.Mono Migration Analysis Tool: a tool to check if your code will work under Mono.DotGNU: technically not Mono, but another .NET stack for *nix so it falls in the similar category.
Getting Started Guides & Tutorials<snip/>Learn ASP.NETC# Station C# TutorialASP.NET Starter KitsMicrosoft Developer's Network Learning Center for Beginning Programmers
.NET Oriented Websites411 Asp Resource Guide4 Guys From Rolla: Many tutorials and guides.<snip/>: a plethora of .NET articles.ASP.NET Resources: handy ASP.NET resources; emphasis on standards compliance.CodePlex: MS' open-source project home.CodeProject: Reams of user committed code. Non-reviewed so YMMV.ConnectionStrings.com: for when you cannot remember that connection string.DotNetKicks: d1gg for .NET land.<snip/>: Home for alot of older .NET projects.Grid View Guy: Handy ASP.NET guides and tutorials.PINVOKE.NET: Wiki site for calling unmanaged APIs from your managed code.The Server Side .NET: Enterprise-oriented development guide.The Daily WTF: because knowing what not to do is as important as knowing what to do.
BlogosphereAsp.NET blogs: The grandaddy of 'em all, Microsoft's ASP.NET mass blogging site.Geeks With Blogs: Another meta-blogging site with many different folks blogging about .NETIE Team Blog: Blog of the Internet Explorer teamBCL Team Blog: blog for the team dedicated to maintaining and expanding .NET's Base Class Library.ADO.NET Team Blog: blog for the team dedicated to maintaining and expanding ADO.NET.Scott Guthrie's blog: weblog of Scott Guthrie, General Manager for just about everything .NET.CodeBetter.com: Another great .NET blogYou've Been Haacked: Phil Haack's weblog, lots of great cutting edge stuff.Coding Horror: Jeff Atwood's rather insightful blog.
Development, Building & Testing ToolsNUnit: a very popular unit testing framework, along the lines of JUnit.MbUnit: another unit testing framework for .NET applications.NAnt: an xml-driven build tool, similar to Java's ant.CruiseControl.NET: a continuious integration framework for .NETnCover: A code coverage framework, allowing one to see which lines of code are tested using your testing framework of choiceTestDriven.NET: a visual studio add-in allowing for nearly one-click execution of unit tests and integrated code coverage. Free for hobbyists, cheap for professionals.
Miscellaneous Important Libraries & FrameworksNHibernate: a .NET port of the venerable hibernate library of java fame.[The Castle Project: an umbrella project for some open source tools designed to simplify enterprise .NET development. Key projects inlcude [url=]MonoRail and [url=]ActiveRecord]().The Microsoft Enterprise Library: a set of library and frameworks for building large applications. Key parts include a database abstraction layer as well as an exception handling and logging framework.CSS-Friendly Control Adapters: Adapts the standard .NET controls to render output using CSS for layout.SubSonic: the hawt, new data access framework for .NETlog4net: A .NET port of the popular log4j logging framework.IronPython: python interpreter/environment for .NET and the CLR. Write python, compile to IL.
ASP.NET Ajax FrameworksASP.NET AJAX: Microsoft's ASP.NET Ajax framework. .NET 2.0 only.Magic Ajax: Lighter-weight Ajax framework for .NET.Ajax.NET Professional: Another light-weight Ajax framework. Works with .NET 1.1.
Popular ASP.NET ApplicationsSubTEXT: an open-source, .NET blogging engine.<snip/>: the other open-source, .NET blogging engine.Rainbow Portal: an open-source .NET web portal application.Umbraco: an open-source .NET web portal application.DotNetNuke: a web portal application; one of the original .NET open-source applications.Community Server: A community-oriented application, featuring forums, blogs and other goodies. Free for non-commercial use and relatively cheap otherwise.<snip/>: An open-source .NET wiki application.
Not very related to the topic, but as C# and J# pages are listed, I think I'd be useful to list two more languages, which have open source compilers, supporting the .NET 2.0 platform: [Boo and [url=nemerle.org]Nemerle](boo.codehaus.org).
Nice post, wwb_99
want to get up to speed fast?
covers all main aspects of asp.net development
if you just started you'll want 2 get both sets.
i learned a lot of it through forums, but i wish total training would have had those videos when I first started. Would have saved me hours of time.
wwb_99, great summary of the .net world.
Thanks guys, interesting suggestions.
@earl-grey: I think I will add Iron Python to the list shortly, as it is rolling very, very well and is somewhat unqiue--a fully functional, dynamically typed language living in statically typed .NET land. C# and J# made the cut because they are offically supported and blessed by MS. Iron Python is not "offical" but it is real close.
@I87: not to sound rude, but what rug have you been living under?
?
please also include my site
?
you've listed all the main sites that i found when googling [yahoo/msn] on specific questions.
I tend to have a lot of resources to my disposal.
Example my school has a safari account so i can read about any technical subject I'm attracted 2 at any time.
I also get a lot of free stuff through usual networks. [ebooks, cbts, vtc's ect] approaching 300 GB; Use google desktop to index the ebooks.
and i wasn't a total noob @ web programming as i started with php v3. I moved to .net this past summer. So i knew what issues I would need to address in the .net world. Attending devconnections this past november, really showed me what I've been missing and got a lot of different approaches from different attendies.
here is a list of subjects I learned through the resources above:
here is what I've learned though the total training series:[note: i'll fill this in as i finish each dvd]
1st dvd)covers the page processing model: - big help for people comming from dynamic languages.
html vs webcontrols and attaching event handles and assinging properties
validating user input - like his example on custom validator.
visual studio: quick watch, help/index, use the bottom tag breadcrumb to select controls ; showed cool tricks I hadn't used before.
navigating a website - doesn't tackle target attribute or roles but maybe in a later dvd nor a provider for db.
2nd dvd)wow! I wrote a lot more notes on this one for sure.
describes what an assembly is and how to work with assemblies with namespaces.consumes a web service and provides creates a webservice.talks about tracingcovers web.config 80% [no discussion on defining your own handlers instead of app settings]talks about databind -- really covers 80% to bad doesn't discuss object data source, uses xmldatasource which was pretty cool.everything you would want to know about the gridview.. except placing a total at the bottom... or inputs for a new item in the footer.talks about options of deployment in vs and things to consider on iis.
4th dvd)had to skip to this section as i needed to create custom composite controls @ work and his explanation of webcontrol library was excellent...
Leblanc Meneses
Excellent collection of resources! You might want to add BlogEngine.net to the Popular ASP.NET applications list. I just launched my new ASP.NET blog using this engine and it was a snap.
@pinch - I have a PHP background, but am just getting into .NET for work, and I have had a similar experience. I find that much of the documentation for this stuff leaves out a lot of stuff or just assumes that you already know certain things. If they want more people to use .NET, they should make it easier on the noobs!
It's a really weird experience going from open source to closed source - things are just different!
I'm just using Visual Web Developer 2005 Express Edition, because that seemed to be the only free option.
I've found the tutorials at LearnVisualStudio.Net to be pretty noob friendly so far.
Some of the "LearnVisualStudio.Net" tutorials I links above can be found for free here:
good resource related to .net..............
asp.net<snip/>
Maybe you can find this resource useful too:
Great collection of resources wwb_99! Thank you!
I also recommend asp.net/learnThere is a great amount of video tutorials from Microsoft (and partners) for beginners and advanced developers.
For any ASP.NET beginner out there looking for guided learning via video instruction, then check out Essential ASP.NET hosted by Fritz Onion. It's free and it's very well done. I'm not sure why Microsoft doesn't make these webcasts more visible.
Essential ASP.NET Webcasts (Learning ASP.NET)
Here are couple of more resources for .Net
C# Code snippetsC#
VB Code snippetsVB
C++/CLI Code snippetsC++
Most of the code snippets are written in C#, VB and C++ which gives a good code comparison on each language and very useful for those who converting their project codes from one language to another.
Learn .NET from your iPhone while on the go:
Compliments of dotnetIQ.
Try these C Sharp Video Tutorials which cover most of the basics and there are always new ones being added.
I also created a nice C# Programming and Tech Forum where I am always there to answer questions.
Another good website I found.
C# Code Snippets
VB/C# Code ConverterSimple VB to C# and C# to VB code converter.
Site with useful articles about c# , .Net and programming.Programming Concepts Articles :: BlackWasp Software Development | https://www.sitepoint.com/community/t/net-resources-2-0/3035 | CC-MAIN-2016-36 | refinedweb | 3,643 | 60.72 |
Agenda
See also: IRC log
Agenda same as last week...
Plus vote to publish interim CR draft:, dated 10 May
RESOLUTION: Accept minutes of 7 March as published
HT: Next meeting is 21 May
HT: Regrets from HST for 21 May
HT: Norm distributed a pointer to his latest draft:, dated 10 May
HT: We are not in immediate reach of a complete test suite
HT: and it's been more than three months, so we should publish something
RESOLUTION: Ask the editor to publish the draft of 10 May as an interim CR draft as soon as convenient
HT: PG raised some questions by email
PG: What's TimBL's current opinion wrt the pipeline model -- broken?
HT: It doesn't do what he wants, but he's not opposed to it
... because he's interested in the semantics of XML documents
PG: His version does look like the kind of top-down recursive story you told
HT: Right, and that's what I was trying to get at in the elaborated infoset story
Minutes of last week:
PG's emails:
PG: So there isn't anything in our current model which implements that kind of multi-threaded recursive story
HT: Correct
AM: Is it that there's a step or two missing, or is it more fundamental?
HT: More fundamental
HT: The basic model of the proc model is infoset to infoset transforms.
HT: It would be problematic to use our existing framework to do a standard recursive down-and-up.
HT: goes a lot further in trying to formalise this stuff.
PG: In a full recursive-descent process, you can do things on the way down as well as on the way back up
HT: That's true, you can, but you typically don't
PG: What about namespace decls
HT: Good example, not context-free
AM: What about XSLT2 -- it would let you do a lot of that, wouldn't it?
... both down and up
HT: TBL's idea is to produce some kind of semantic object, not an infoset.
HT: So there is a more fundamental reason that what TBL wants is not what our proc model does. If for instance your document combines XHTML, SVG, etc., what TimBL wants at the end is not an infoset, but a page (description).
<PGrosso>
PG: Surprised to see you mention XInclude, XML Sig, XML Encryption, but not xml:id and xml:base
HT: Good point, and I think you're right on both counts
... Leaving out xml:base and xml:id was accidental
... xml:base comes for free with XProc
... but xml:id does not, in two ways:
... 1) We didn't require the xinclude step to recognise xml:ids as anchors for uris with fragids
PG: And wrt anchors there are
further questions wrt DTDs and XSDs
... it's all intertwingled, and we appear to need to de-confuse the order
... to say nothing of adding in recursive descent
HT: coming back to XProc vs
xml:id
... 2) we don't currently say that e.g. when parsing a character stream to produce an infoset, XProc processors should set xml:id attr IIs to have type ID
... or when we introduce xml:id attrs via e.g. p:add-attribute, that they should get that type
... does this matter? Is it detectable whether we do or not?
... What if we write type-aware XPaths, which look for type ID -- should/do/how do we know if they match xml:id?
... Need Norm for that
... Coming back to Decryption and signature verification
... For a long time I have wanted to include these, because I think the world would be a better place if use of the XML security technologies was much more widespread. But I've finally given up: of necessity, decryption and signature verification involve out-of-band appeal to key files and passphrases. Without those, the data just isn't secure. And you may need more than one set of them for a given document. This just doesn't fit well with a notion of default processing model which is pervasive, simple, and often unattended.
... Good news: Without these, since Xinclude is itself recursively specified, we don't have to implement fixed-point detection for the DXPM in XProc
... So maybe we could write an XProc pipeline which implemented the DXPM:
... [Straw man] An XProc pipeline consisting of an XInclude step
... (modulo some uncertainties wrt xml:id)
PG: So all we need from a small-s schema is IDness?
HT: That is the problem
alright
... There's a chicken and egg problem
... Imagine two stages: we publish an DXPM spec; we publish a new edition of XInclude which references the new DXPM spec
... We won't get everything we want until the second step
PG: Do we have to worry about schemas?
HT: Yes, because of the way we wrote XPointer wrt IDness
PG: Any other way?
HT: External entities
PG: They get expanded, don't they?
HT: Not by all the browsers
PG: Assuming they have been
expanded, there's nothing except IDness you need from schemas,
in order to resolve XPointers and do xinclude
... assuming only element and framework
HT: And 3023bis
HT: Well, if we think allowing some
kind of parameterisation/optionality, for use by specs. which reference DXPM
and/or implementations which appeal to it, so that, for example, if five years
from now we add XML Excision to the core XML specs (remove this bit of
the document before further processing), should it be easy to add it to the
DXPM? Should we have a core plus optional bits? In either case, we could use
that flexibility to allow e.g. XSD or RNG into the DXPM in some situations.
... Open questions: 1) What about the flexibility in the XML spec itself? Do we want to require the 'full' well-formedness parse?
... 2) Parameterisable/extensible/fixed+optional --- or not?
... If it were up to me, I'd say "yes" to 'full' WFP
PG: I thought you didn't want to bring in the DTD?
HT: No, just not all the other schema languages
Source: http://www.w3.org/XML/XProc/2009/05/14-minutes.html
Filling in for Editor-in-Chief Howard Dierking, Ted Neward lends some insight into the state of data collection and manipulation.
Ted Neward
MSDN Magazine July 2008
See why you need to be a polyglot programmer and what mixing and matching languages can do for your projects.
MSDN Magazine March 2009
Cobra, a descendant of Python, offers a combined dynamic and statically-typed programming model, built-in unit test facilities, scripting capabilities, and much more. Feel the power here.
One-time passwords offer solutions to dictionary attacks, phishing, interception, and lots of other security breaches. Here's how it all works.
Dan Griffin
MSDN Magazine May 2008
printf "Hello, world!"
System.Windows.Forms.MessageBox.Show "Hello World"
let results = [ for i in 0 .. 100 -> (i, i*i) ]
printfn "results = %A" results
let add a b =
a + b
let return10 _ =
add 5 5
// 12 is effectively ignored, and ten is set to the resulting
// value of add 5 5
let ten = return10 12
printf "ten = %d\n" ten
let add5 a =
add a 5
public class Adders {
public static int add(int a, int b) { return a + b; }
public static int add5(int a) { return add(a, 5); }
}
val ten : ('a -> int)
delegate int Transformer<T>(T ignored);
public class App
{
public static int return10(object ignored) { return 5 + 5; }
static void Main()
{
Transformer<object> ten = return10;
System.Console.WriteLine("ten = {0}", return10(0));
}
}
#light
let results = [ for i in 0 .. 100 -> (i, i*i) ]
printfn "results = %A" results
let compute2 x = (x, x*x)
let compute3 x = (x, x*x, x*x*x)
let results2 = [ for i in 0 .. 100 -> compute2 i ]
let results3 = [ for i in 0 .. 100 -> compute3 i ]
/// Get the contents of the URL via a web request
let http(url: string) =
let req = System.Net.WebRequest.Create(url)
let resp = req.GetResponse()
let stream = resp.GetResponseStream()
let reader = new System.IO.StreamReader(stream)
let html = reader.ReadToEnd()
resp.Close()
html
let getWords s = String.split [ ' '; '\n'; '\t'; '<'; '>'; '=' ] s
let getStats site =
let url = "http://" + site
let html = http url
let words = html |> getWords
let hrefs = html |> getWords |> List.filter (fun s -> s = "href")
(site,html.Length, words.Length, hrefs.Length)
let words = getWords html
open System.Collections.Generic
let capitals = Dictionary<string, string>()
capitals.["Great Britain"] <- "London"
capitals.["France"] <- "Paris"
capitals.ContainsKey("France")
VECTOR2D IN F#
type Vector2D(dx:float,dy:float) =
let length = sqrt(dx*dx + dy*dy)
member obj.Length = length
member obj.DX = dx
member obj.DY = dy
member obj.Move(dx2,dy2) = Vector2D(dx+dx2,dy+dy2)
VECTOR2D IN C# (REFLECTOR)
[Serializable, CompilationMapping(SourceLevelConstruct.ObjectType)]
public class Vector2D
{
// Fields
internal double _dx@48;
internal double _dy@48;
internal double _length@49;
// Methods
public Vector2D(double dx, double dy)
{
Hello.Vector2D @this = this;
@this._dx@48 = dx;
@this._dy@48 = dy;
double d = (@this._dx@48 * @this._dx@48) +
(@this._dy@48 * @this._dy@48);
@this._length@49 = Math.Sqrt(d);
}
public Hello.Vector2D Move(double dx2, double dy2)
{
return new Hello.Vector2D(this._dx@48 + dx2, this._dy@48 + dy2);
}
// Properties
public double DX
{
get
{
return this._dx@48;
}
}
public double DY
{
get
{
return this._dy@48;
}
}
public double Length
{
get
{
return this._length@49;
}
}
}
#light
open System
open System.IO
open System.Windows.Forms
open Printf
let form = new Form(Text="My First F# Form", Visible=true)
let menu = form.Menu <- new MainMenu()
let mnuFile = form.Menu.MenuItems.Add("&File")
let filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*"
let mnuiOpen =
new MenuItem("&Open...",
new EventHandler(fun _ _ ->
let dialog =
new OpenFileDialog(InitialDirectory="c:\\",
Filter=filter,
FilterIndex=2,
RestoreDirectory=true)
if dialog.ShowDialog() = DialogResult.OK then
match dialog.OpenFile() with
| null -> printf "Could not read the file...\n"
| s ->
let r = new StreamReader(s)
printf "First line is: %s!\n" (r.ReadLine());
s.Close();
),
Shortcut.CtrlO)
mnuFile.MenuItems.Add(mnuiOpen)
[<STAThread>]
do Application.Run(form)
fun _ _ -> ...
if dialog.ShowDialog() = DialogResult.OK then
match dialog.OpenFile() with
| null -> printf "Could not read the file...\n"
| s ->
let r = new StreamReader(s) in
printf "First line is: %s!\n" (r.ReadLine());
s.Close();
// Declaration of the 'Expr' type
type Expr =
| Binary of string * Expr * Expr
| Variable of string
| Constant of int
// Create a value 'v' representing 'x + 10'
let v = Binary("+", Variable "x", Constant 10)
let getVarValue v =
match v with
| "x" -> 25
| "y" -> 12
| _ -> 0
let rec eval x =
match x with
| Binary(op, l, r) ->
let (lv, rv) = (eval l, eval r) in
if (op = "+") then lv + rv
elif (op = "-") then lv - rv
else failwith "E_UNSUPPORTED"
| Variable(var) ->
getVarValue var
| Constant(n) ->
n
do printf "Results = %d\n" (eval v)
let TransformImage pixels i =
// Some kind of graphic manipulation of images
let ProcessImage(i) =
async { use inStream = File.OpenRead(sprintf "source%d.jpg" i)
let! pixels = inStream.ReadAsync(1024*1024)
let pixels' = TransformImage(pixels,i)
use outStream = File.OpenWrite(sprintf "result%d.jpg" i)
do! outStream.WriteAsync(pixels')
do Console.WriteLine "done!" }
let ProcessImages() =
Async.Run (Async.Parallel
[ for i in 1 .. numImages -> ProcessImage(i) ])
#light
open System.Threading
let printWithThread str =
printfn "[ThreadId = %d] %s" Thread.CurrentThread.ManagedThreadId str
let evals =
let z = 4.0
[ async { do printWithThread "Computing z*z\n"
return z * z };
async { do printWithThread "Computing sin(z)\n"
return (sin z) };
async { do printWithThread "Computing log(z)\n"
return (log z) } ]
let awr =
async { let! vs = Async.Parallel evals
do printWithThread "Computing v1+v2+v3\n"
return (Array.fold_left (fun a b -> a + b) 0.0 vs) }
let R = Async.Run awr
printf "Result = %f\n" R
Source: http://msdn.microsoft.com/en-us/magazine/cc164244.aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default2.aspx.cs
And the code behind (doesnt have to be all of it, but at least the class and some of the code you changed):
EX:
public partial class Default2 : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
//CODE I CHANGED
}
}
ALSO, do you have a BIN folder in your project, and is it full of .dll's? If so it's very possible the project is working from the existing .dll's and ignoring your code behind. When you publish it should make new .dll's and these would need to be placed into your projects bin folder. Or you can remove the .dll's from the BIN folder (dont delete them just move them out of the project for a test), and see if your changes begin to take effect.
dday
Answer to your questions:
1. .aspx starts with:
<%@ Page Language="C#" AutoEventWireup="true" MasterPageFile="~/Site.Mas
2. Code behind that I changed(I changed lot`s of them like):
public partial class ConfigList : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
BindGrid();;;;XYZ // (I added those garbage ;;;XYZ as an example)
}
}
Then I Re-Built the page, no error it works just fine.
3. ALSO, do you have a BIN folder in your project ....
Yes, there is a bin folder with: 2 .dll for AJAX, 2 file with the name same as project name one ProjectName.dll and second ProjectName.pdb, also:
MetaBuilders.WebControls.R
I tried to remove the ones with the ProjectName. , but if I remove them then when I try to Built I get error: ProjectName.data (or Some thing) is Missing.
Thanks
If the .aspx page was pointing to the code behind you posted then the .aspx inherits would read:
Inherits="ConfigList"
Instead it is pointing to a namespace.class that is likely in either the app_code folder or pre-compiled into .dll's in the BIN directory.
The simple answer is to change the inherits attribute on the .aspx page to point to the class of the code behind associated with it.
This may not be a problem if you don't care about what was previously written so much, but alas you probably do. So you are going to have to find that class file and make sure it's compiling its .dll into the BIN directory.
I could be off base here but I stronly suspect it's something like this. If you cant find the class it might help you to press Ctrl-Shift-F and search for the namespace.class in your entire solution.
Good Luck. I doubt I can offer much more assistance than this, but someone else may catch on and have some ideas for you :)
dday
Source: https://www.experts-exchange.com/questions/24413671/Basic-question-Changes-on-code-behind-aspx-cs-file-dosen't-effect.html
Compositional, streaming I/O library for Scala
Alarm for every read/write? In any case, in the websocket stream case the exact timeout is not so important so you can merge with a 1 second interval stream and then count upwards, emitting a keepalive on 10, or resetting on a message
Queueand
Ref[F, Token]
List[IO[PdfDocument]]. Using
parSequenceI efficiently get the resulting
IO[List[PdfDocument]]. However, afterwards I want to merge these into 1 document, in the same order. Using
parSequenceI can not start merging until all 100 documents are done. So I would need something like a stream of generated documents, that are generated in parallel (like
parJoin(n)), but whose output is still in the original order. So, if document "2" takes very long generate, it will have emitted "1", internally generated already "3", "4" and "5", but only when "2" is ready are these emitted.
Stream.emits(unrenderedPdfList).mapAsync(render)(10).fold...
technically the PDF format allows that but then I have to dig a lot deeper than the pdfbox merging code
Tbh it would be enough to know that the merging is monoidal (associative, in particular), but I agree that the solution above is way simpler and already a substantial improvement
i initially meant that you could have a PDF file where pages 4-6 are defined at the beginning, pages 1-2 at the end, and 3 in the middle, f
If that's the case you need to have all the parts already there, for reordering, i.e. you do need
parTraverse
sorry, I haven't tried this before
def render(i: Int) = s"doc-${i}" val jobs = List( IO.pure(1), IO.pure(2), IO.pure(3) ) Stream.emits(jobs).mapAsync(10)(render) gives: type mismatch; [error] found : String [error] required: fs2.Pure[?] [error] Stream.emits(jobs).mapAsync(10)(render)
mapAsync, your List is of Int
Stream(1,2,3).mapAsync(10)(render)
emits.mapAsync(identity)
Are there any write ups on performance tricks/improvements for fs2? I suspect I can make my avro rpc can be alot faster, but not sure what's the issue at the moment
To continue on my quest to get better performance; Did some benchmarks with my pipes here:
1000 elements all respond with an average response time of 2ms~ now. Which seems fine.
I expect with such small messages to process a lot concurrent (like a http server, you are able process 30-70k requests/s right?). The setup is now: 256 * 1024 as read buffer. Each frame is ~10-15 bytes and the client does balancing from one writer queue to 5 sockets. When stress testing I'm able to process 2500 msg/s. The server and client live the same process though. Not sure if that matters?
The fs2 code for the server is here: and the client code is here:
Pipeline is just composition of pipes to separate concerns. See the pipes package
Source: https://gitter.im/functional-streams-for-scala/fs2?at=5bc1e36b384492366131b9de
import Statement
Enables access to a namespace contained either within the current script or in an external library.
The following example defines three simple packages and imports the namespaces into the script. Typically, each package would be in a separate assembly to allow maintenance and distribution of the package content.
// Create a simple package containing a class with a single field (Hello). package Deutschland { class Greeting { static var Hello : String = "Guten tag!"; } }; // Create another simple package containing two classes. // The class Greeting has the field Hello. // The class Units has the field distance. package France { public class Greeting { static var Hello : String = "Bonjour!"; } public class Units { static var distance : String = "meter"; } }; // Use another package for more specific information. package France.Paris { public class Landmark { static var Tower : String = "Eiffel Tower"; } }; // Declare a local class that shadows the imported classes. class Greeting { static var Hello : String = "Greetings!"; } // Import the Deutschland, France, and France.Paris packages. import Deutschland; import France; import France.Paris; // Access the package members with fully qualified names. print(Greeting.Hello); print(France.Greeting.Hello); print(Deutschland.Greeting.Hello); print(France.Paris.Landmark.Tower); // The Units class is not shadowed, so it can be accessed with or without a fully qualified name. print(Units.distance); print(France.Units.distance);
The output of this script is:

Greetings!
Bonjour!
Guten tag!
Eiffel Tower
meter
meter

Source: https://msdn.microsoft.com/en-US/library/eydzeybh(v=vs.80).aspx
Jinja built-in statements/tags and functions (like Django template tags)
Jinja offers several built-in statements/tags that offer immediate access to elaborate operations on Jinja templates. I'll classify each of these built-in statements/tags and functions into sections so it's easier to identify them, note I'll add the reference (Function) to indicate it's referring to a Jinja function. The categories I'll use are: Comparison operations, loops, Python & filter operations, spacing & special characters and template structures.
Comparison operations
{% if %} with {% elif %} and {% else %}.- The {% if %} statement is the primary building block to evaluate conditions. The {% if %} statement is typically used in conjunction with the {% elif %} and {% else %} statements to evaluate more than one condition. An {% if %} statement with an argument variable evaluates to true if the variable exists and is not empty, or if the variable holds a True boolean value. Listing 4-14 illustrates a series of {% if %} statement examples.
Listing 4-14. Jinja {% if %} statement examples
Note: A variable that just exists and is empty does not match a condition.
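The code for Listing 4-14 did not survive extraction; the following is a minimal sketch of the kind of conditions described above, assuming a context with hypothetical `drinks` and `coffee_price` variables:

```jinja
{% if drinks %}
  We have drinks!
{% endif %}

{% if coffee_price > 2 %}
  Coffee is expensive today.
{% elif coffee_price > 1 %}
  Coffee is reasonably priced.
{% else %}
  Coffee is a bargain!
{% endif %}
```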
{% if %} with and, or and not operators.- The {% if %} statement supports the and, or and not operators to evaluate more than one condition in a single statement (e.g. {% if drink and drink.price %}).
{% if <value> in %} and {% if <value> not in %}.- The {% if %} statement supports the in and not in operators to verify the presence of a value in a dictionary, list, tuple or string (e.g. {% if 'Mocha' in drinks %}).
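A short sketch of both operator styles (the `drinks`, `user`, `closed` and `staff` variables are hypothetical context values):

```jinja
{% if drinks and user %}
  {{ user }} can order a drink.
{% endif %}

{% if not closed or staff %}
  The store is open (at least to staff).
{% endif %}

{% if 'Mocha' in drinks %}
  Mocha is on the menu.
{% endif %}
```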
Loops
{% for %} and {% for %} with {% else %}.- The {% for %} statement iterates over items on a dictionary, list, tuple or string variable. The {% for %} statement syntax is {% for <reference> in <variable> %}, where <reference> is assigned a new value from <variable> on each iteration. Depending on the nature of a variable there can be one or more references (e.g. for a list one reference, for a dictionary two references). The {% for %} statement also supports the {% else %} statement, which is processed in case there are no iterations in a loop (i.e. the main variable is empty). Listing 4-15 illustrates a {% for %} and a {% for %} with {% else %} loop example.
Listing 4-15 Jinja {% for %} statement and {% for %} with {% else %}
<ul>
{% for drink in drinks %}
  <li>{{ drink.name }}</li>
{% else %}
  <li>No drinks, sorry</li>
{% endfor %}
</ul>

<ul>
{% for storeid,store in stores %}
  <li><a href="/stores/{{storeid}}/">{{store.name}}</a></li>
{% endfor %}
</ul>
The {% for %} statement also generates a series of variables to manage the iteration process, such as an iteration counter, a first iteration flag and a last iteration flag. Table 4-1 illustrates the {% for %} statement variables.
Table 4-1. Jinja {% for %} statement variables
loop.index      The current iteration of the loop (1 indexed)
loop.index0     The current iteration of the loop (0 indexed)
loop.revindex   The number of iterations from the end of the loop (1 indexed)
loop.revindex0  The number of iterations from the end of the loop (0 indexed)
loop.first      True if it's the first iteration
loop.last       True if it's the last iteration
loop.length     The number of items in the sequence
loop.cycle      A helper function to cycle between a list of values
On certain occasions you may need to nest multiple {% for %} statements and access parent loop items. In Django templates, this is easy because there's a variable for just this purpose. However, Jinja templates don't have this variable, as you can see in table 4-1. A solution in Jinja templates is to define a reference variable with {% set %} before entering the child loop to gain access to the parent loop, as illustrated in the following snippet:

<ul>
{% for chapter in chapters %}
  {% set chapterloop = loop %}
  {% for section in chapter %}
    <li>{{ chapterloop.index }}.{{ loop.index }} {{ section }}</li>
  {% endfor %}
{% endfor %}
</ul>
Another nested loop feature in Jinja templates is cycle, which does not exist in Django templates (as a variable at least, it does exist as a tag). The primary use of cycle is to define CSS classes so each iteration receives a different CSS class and upon rendering each iteration is displayed in a different color. The following snippet illustrates the use of the cycle variable:

{% for drink in drinks %}
  <li class="{{ loop.cycle('odd','even') }}">{{ drink.name }}</li>
{% endfor %}
Note cycle can iterate sequentially over any number of strings or variables (e.g. {{ loop.cycle('red','white','blue') }}).
{% for %} with if.- The {% for %} statement also supports the inclusion of if statements to filter the iteration over a dictionary, list, tuple or string variable. In this manner you can limit the iteration to elements that pass or fail a certain criteria. The {% for %} statement syntax with an if clause is {% for <reference> in <variable> if <test_for_reference> %} (e.g. {% for drink in drinks if drink not in ['Cappuccino'] %}).
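A minimal sketch of a filtered loop, assuming each hypothetical `drink` object carries a `price` attribute:

```jinja
<ul>
{% for drink in drinks if drink.price < 3 %}
  <li>{{ drink.name }}</li>
{% endfor %}
</ul>
```

Note the filter is applied before iteration starts, so loop.index and friends count only the elements that passed the test.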
{% for %} with recursive keyword.- The {% for %} statement also supports recursion over nested dictionaries, lists, tuples or string variables. Instead of creating multiple nested {% for %} statements, you can use recursion to re-use the same layout over each of the nested structures. Listing 4-16 illustrates a sample of a recursive loop in Jinja.
Listing 4-16. Jinja {% for %} statement with recursive keyword
# Dictionary definition
coffees = {
  'espresso': {
    'nothing else': 'Espresso',
    'water': 'Americano',
    'steamed milk': {
      'more steamed milk than milk foam': 'Latte',
      'chocolate syrup': {'Whipped cream': 'Mocha'}
    },
    'more milk foam than steamed milk': 'Capuccino'
  }
}

# Template definition with for and recursive
{% for ingredient,result in coffees.iteritems() recursive %}
  <li>{{ ingredient }}
  {% if result is mapping %}
    <ul>{{ loop(result.iteritems()) }}</ul>
  {% else %}
    YOU GET: {{ result }}
  {% endif %}</li>
{% endfor %}

# Output
espresso
  water
    YOU GET: Americano
  steamed milk
    more steamed milk than milk foam
      YOU GET: Latte
    chocolate syrup
      Whipped cream
        YOU GET: Mocha
  more milk foam than steamed milk
    YOU GET: Capuccino
  nothing else
    YOU GET: Espresso
{% break %} and {% continue %}.- The {% break %} and {% continue %} statements are available inside {% for %} statements and allow you to break out of the loop or continue to the next iteration, just like the same keywords available in regular Python loops.
Note {% break %} and {% continue %} require enabling the built-in jinja2.ext.loopcontrols extension. See the second to last section in this chapter on how to enable Jinja extensions for more details.
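A short sketch of both statements in one loop (requires the jinja2.ext.loopcontrols extension; the `items` variable and its `hidden`/`name` attributes are hypothetical context values):

```jinja
{% for item in items %}
  {% if item.hidden %}{% continue %}{% endif %}
  {% if loop.index > 10 %}{% break %}{% endif %}
  <li>{{ item.name }}</li>
{% endfor %}
```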
range (Function).- The range function works just like Python's standard function and is useful when you want to generate a loop over a given range of numbers from i to j-1. For example, range(0,5) generates the range [0,1,2,3,4]. In addition, the range function also supports overriding the step count -- which defaults to 1 -- in the third position (e.g. range(0,11,2) generates [0,2,4,6,8,10]).
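A typical use of range is generating repetitive markup, such as this sketch of a year selector (the year bounds are arbitrary):

```jinja
<select name="year">
{% for year in range(2000, 2021) %}
  <option value="{{ year }}">{{ year }}</option>
{% endfor %}
</select>
```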
cycler (Function).- The cycler function lets you cycle among a series of values. It works just like the loop.cycle variable available in {% for %} loops, except the cycler function can be used outside loops. The cycler function uses its next() method to advance one item, the reset() method to cycle to the first item and the current attribute to return the current item. Listing 4-17 illustrates a cycler method definition with CSS classes, which is then used over multiple {% for %} loops to define a list where each item is assigned a different CSS class based on the cycle iteration.
Listing 4-17 Jinja cycler function
{% set row_class = cycler('white','lightgrey','grey') %}
<ul>
{% for item in items %}
  <li class="{{ row_class.next() }}">{{ item }}</li>
{% endfor %}
{% for otheritem in moreitems %}
  <li class="{{ row_class.next() }}">{{ otheritem }}</li>
{% endfor %}
</ul>

# Output
<ul>
  <li class="white">Item 1</li>
  <li class="lightgrey">Item 2</li>
  <li class="grey">Item 3</li>
  <li class="white">Item 4</li>
  <li class="lightgrey">Item 5</li>
  <li class="grey">Other item 1</li>
  <li class="white">Other item 2</li>
</ul>
joiner (Function).- The joiner function lets you join a series of disparate sections with a given separator, which defaults to a comma-space (", "). A characteristic of the joiner function is that it returns the separator string every time it's called, except the first time, to give the correct appearance in case sections are dependent on a condition. Listing 4-18 illustrates a joiner method definition with a slash-space ("/ ") as its separator, which is then used to join a list of sections.
Listing 4-18 Jinja joiner function
{% set slash_joiner = joiner("/ ") %}
User: {% if username %} {{ slash_joiner() }} {{username}}
{% endif %}
{% if alias %} {{ slash_joiner() }} {{alias}}
{% endif %}
{% if nickname %} {{ slash_joiner() }} {{nickname}}
{% endif %}

# Output
# If all variables are defined
User: username / alias / nickname
# If only nickname is defined
User: nickname
# If only username and alias is defined
User: username / alias
# Etc. The joiner function avoids any unnecessary preceding slash
# because it doesn't print anything the first time it's called
Python and filter operations
{% set %}.- The {% set %} statement lets you define variables in the context of Jinja templates. It's useful when you need to create variables for values that aren't exposed by a Django view method, or when a variable is tied to a heavyweight operation. The following is a sample of this statement: {% set drinkwithtax=drink.cost*1.07 %}. The scope of a variable defined in a {% set %} statement is from its declaration until the end of the template.
The {% set %} statement can also define content blocks. For example, the statement {% set advertisement %}<div class='banner'><img src=.....></div>{% endset %} creates the variable advertisement with the content enclosed between {% set %} and {% endset %}, which can later be reused in other parts of a template (e.g. {{advertisement}}). The built-in {% macro %} statement -- described in the template structures section -- provides more advanced re-use functionality for content blocks.
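A sketch of a {% set %} content block and its reuse (the banner markup and image path are hypothetical):

```jinja
{% set advertisement %}
<div class="banner"><img src="/static/img/ad.png" alt="Ad"></div>
{% endset %}

<header>{{ advertisement }}</header>
...
<footer>{{ advertisement }}</footer>
```

The block is rendered once into the variable and can then be emitted any number of times.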
{% do %} (This statement requires enabling the built-in jinja2.ext.do extension, see the section on Jinja extensions for more details).- The {% do %} statement is an expression evaluator that works like the {{ }} variable syntax, except it doesn't produce output. For example, to increment the value of a variable or add a new element without producing any output, you can use the {% do %} statement (e.g. {% do itemlist.append('Forgot to add this other item') %}).
{% with %}.- The {% with %} statement is similar to the {% set %} statement; the only difference is the {% with %} statement limits the scope of a variable with the {% endwith %} statement (e.g. in {% with myvar=1 %}...{% endwith %}, any elements declared in ... have access to the myvar variable). It's also valid to declare {% set %} statements within {% with %} and {% endwith %} statements to limit the scope of variables (e.g. {% with %}{% set myvar=1 %}...{% endwith %}).
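A minimal sketch of scoping with {% with %} (requires the jinja2.ext.with_ extension, as the note below explains):

```jinja
{% with myvar=1 %}
  myvar is visible here: {{ myvar }}
{% endwith %}
{# myvar is undefined out here #}
```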
Note {% with %} requires enabling the built-in jinja2.ext.with_ extension. See the second to last section in this chapter on how to enable Jinja extensions for more details.
{% filter %}.- The {% filter %} statement is used to apply Jinja filters to template sections. By default, Jinja filters are applied individually to template variables, but sometimes it can be helpful to apply Jinja filters to entire template sections. For example, if you declare {% filter lower %}, the lower filter is applied to all content between this statement and the {% endfilter %} statement -- note the lower filter converts all content to lowercase; the next major section in this chapter describes Jinja's built-in filters.
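A short sketch of a filter section (`some_variable` is a hypothetical context value; static text and variable output are both lowercased):

```jinja
{% filter lower %}
  This TEXT becomes all lowercase,
  INCLUDING {{ some_variable }}.
{% endfilter %}
```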
dict (Function).- The dict function offers an alternative to define dictionaries without literals (e.g. {'id':1} is equivalent to dict(id=1)).
Spacing and special characters
By default, Jinja keeps all spacing (e.g. tabs, spaces, newlines) unchanged from how they are defined in a template. Figure 4-1 illustrates the default rendering of a template snippet in Jinja.
Figure 4-1 Default space rendering in Jinja template
As you can see in figure 4-1, the spacing before, after and by the {% for %} and {% if %} statements themselves is generated as is. While this spacing is natural, it can be beneficial to create more compact outputs with templates that handle a lot of data. The minus sign - appended to either the start or end of a statement (e.g. {%- <statement> -%}) tells Jinja to strip the new line that follows it. This is best illustrated with the examples presented in figure 4-2 and figure 4-3.
Figure 4-2. Space rendering in Jinja template with single -
Figure 4-3. Space rendering in Jinja template with double -
As you can see in figure 4-2, the - symbol before closing the {% for %} statement makes Jinja eliminate the new line after each iteration. In the case of the {% if %} statement also in figure 4-2, the - symbol has no impact because there's no new line associated with the statement. In figure 4-3 you can see there's an additional - symbol at the start of the {% endfor %} statement, which makes Jinja eliminate the new line before the start of each iteration. In the case of the {% if %} statement also in figure 4-3, the additional - symbol has no impact because there's no new line associated with the statement.
Because adding - symbols to every Jinja statement can become tiresome, you can configure Jinja so that by default it uses this behavior (i.e. just as if you added -). To alter Jinja's default spacing behavior, you can use two Jinja environment parameters: trim_blocks and lstrip_blocks, both of which default to False. Note that in Django you set up Jinja environment parameters as part of the OPTIONS variable in settings.py, as described in the prior section on setting up Jinja template configuration in Django. Figure 4-4 illustrates the rendering of a code snippet when trim_blocks is set to True, whereas figure 4-5 illustrates the rendering of a code snippet when both trim_blocks and lstrip_blocks are set to True.
Figure 4-4. Space rendering in Jinja template with trim_blocks
Figure 4-5. Space rendering in Jinja template with both trim_blocks and lstrip_blocks set to True
As you can see in figures 4-4 and 4-5, the rendering produced by changing the trim_blocks and lstrip_blocks Jinja environment variables is very similar to that of using - symbols to start and end Jinja statements. It's worth mentioning that if you set lstrip_blocks to True and want to omit its behavior for certain sections, you can do so by adding the plus sign + to either the start or end of a statement -- just like you use the minus sign - to achieve its opposite behavior.
{% raw %}.- The {% raw %} statement is used to output any Jinja reserved characters verbatim until the {% endraw %} statement is reached. The {% raw %} statement is ideal if you want to render large chunks of Jinja template code or if you have a lot of text that includes special Jinja template characters (e.g. {{, {%).
Tip You can output special Jinja template characters individually by quoting them as part of a hard-coded string variable (e.g. to output {{ use {{ '{{' }}) vs. using a {% raw %} statement.
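A minimal sketch contrasting the two approaches:

```jinja
{% raw %}
  In a raw block, {{ this }} and {% this %} are output verbatim.
{% endraw %}

To print a single special sequence, quote it instead: {{ '{{' }}
```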
{% autoescape %}.- The {% autoescape %} statement lets you escape HTML characters from a template section, effectively overriding Django's Jinja default autoescaping behavior. The {% autoescape %} statement accepts one of two arguments: true or false. With {% autoescape true %} all template content between this statement and the {% endautoescape %} statement is HTML escaped; with {% autoescape false %} no template content between this statement and the {% endautoescape %} statement is HTML escaped.
Note {% autoescape %} requires enabling the built-in jinja2.ext.autoescape extension. See the second to last section in this chapter on how to enable Jinja extensions for more details.
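A short sketch (requires the jinja2.ext.autoescape extension; `trusted_html_snippet` is a hypothetical context value you have already sanitized):

```jinja
{% autoescape false %}
  {{ trusted_html_snippet }}  {# rendered as-is, not HTML escaped #}
{% endautoescape %}
```

Only disable autoescaping for content you control, since unescaped user input opens the door to cross-site scripting.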
lipsum(Function).- The
lipsumfunction is used to display random latin text, which is useful for filler on templates. The
lipsumfunction is called with four parameters:
lipsum(n=5, html=True, min=20, max=100). Where
nis a number of paragraphs to generate, if not provided the default
nis
5;
htmldefaults to
Trueto return HTML or you can set it to
Falseto return regular text; and
minand
maxrepresent the minimum and maximum number of random words per paragraph. To use the lipsum function you simply define a variable with it and the output to generate the random latin text (e.g.
{% set latinblurb=lipsum() %}and then
{{latinblurb}}to output the random latin text).
Template structures
{% block %}.- The
{% block %}statement is used to define page sections that can be overridden on different Jinja templates. See the previous section on creating reusable Jinja templates for detailed examples of this statement.
{# #}.- The
{#}statement is used to enclose comments on Jinja templates. Any content placed between
{#and
#}is bypassed by Jinja and doesn't appear in the final rendered web page.
{% extends %}.- The
{% extends %}statement is used to reuse the layout of another Jinja template. See the previous section on creating reusable Jinja templates for detailed examples of this statement.
{% include %}.- The
{% include %}statement is used to embed a Jinja template in another Jinja template. Note that by default, the
{% include %}statement gets access to the current template instance values (i.e. its context). If you want to disable access to a template's context you can use the
{% import %}statement or pass the keyword
without contextto the end of the
{% include %}statement (e.g.
{% from 'footer.html' without context %}). In addition, the
{% include %}statement also accepts the
ignore missingkeyword which tells Jinja to ignore the statement if the template to be included does not exist. See the previous section on creating reusable Jinja templates for detailed examples of this statement.
{% macro %}.- The
{% macro %}statement is a template function designed to output content. It's ideal for repetitive content snippets, where you define a
{% macro %}statement once and execute it multiple times with different variables -- like a function -- on any template. See the previous section on creating reusable Jinja templates for detailed examples of this statement. It's also worth mentioning the built-in
{% set %}statement -- described in the Python and filter operations section -- provides simpler re-use functionality for content blocks.
{% call %}.- The
{% call %}statement is used in conjunction with the
{% macro %}statement to reference the
caller()method within a
{% macro %}statement. If you define a
{% macro %}statement with a
caller()reference as part of its content, you can rely on the
{% call %}statement to invoke the
{% macro %}and have the contents of the
{% call %}statement substituted in place of the
caller()method. See the previous section on creating reusable Jinja templates for detailed examples of this statement.
{% import %}and
{% from ... import %}.- The
{% import %}statement is used to access elements from other templates. Similar to Python's standard import, you can also use the
fromand
askeywords to limit or rename the elements imported into a template. Note that by default and due to its caching behavior, the
{% import %}statement doesn't get access to the current template instance values (i.e. its context), it just gets access to globals (e.g. variables and macros). If you want to access a template's context you can use the
{% include %}statement or pass the keyword
with contextto the end of the
{% import %}statement to disable caching and access a template's context (e.g.
{% from 'footer.html' with context %}). See the previous section on creating reusable Jinja templates for detailed examples of this statement. | https://www.webforefront.com/django/usebuiltinjinjastatements.html | CC-MAIN-2021-31 | refinedweb | 3,034 | 53.41 |
Hi,
extend the JFrame class:
class MyFrame extends JFrame
then add your button objects to the container object (MyFrame) from within the constructor:
//create the button object
JButton button = new JButton("Button 1");
//add it to the container (unsure if I typed this correctly)
getContentPane().add(button);
I didn't run this code, just typed it, but I don't see any errors offhand.
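For what it's worth, that advice assembled into one compilable class might look like this (the class name, title, and sizes below are my own, not from the post above):

```java
import javax.swing.JButton;
import javax.swing.JFrame;

// A JFrame subclass that builds its GUI in the constructor:
// create the button object, then add it to the content pane.
public class ButtonFrame extends JFrame {
    public ButtonFrame() {
        super("Button Demo");
        JButton button = new JButton("Button 1");
        getContentPane().add(button);
        setSize(300, 100);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }

    public static void main(String[] args) {
        new ButtonFrame().setVisible(true);
    }
}
```

Running main should pop up a small window filled by the button.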
//need to import this for jFrame
import javax.swing.*;
public class test extends JFrame{
public test() {
//sets the title of the jFrame by calling the jFrame constructor
super("My First JFrame");
//sets the size of your jFrame
setSize(400,300);
//makes the jFrame Visible
show();
}
public static void main(String args[]){
//creates a new instance of your class
test t = new test();
//causes the process to terminate when you click the x
t.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
}
You have to initialize the GUI in the JFrame by having your constructor method declare all the visual parts of your JFrame.
public thing()
{
setSize(400, 400);
setVisible(true);
}
Then, later on, you have to run the program and create an instance of your JFrame.
public static void main (String args[])
{
thing application = new thing();
}
Then, when you run the program, the JFrame should come up.
Originally posted by BigJoe
Hi,
a few corrections here:
if you "just started java" then the last thing you should be thinking about is making a GUI. i know it feels very constructive and a monumental achievement to make a window with buttons and stuff, but you will miss the point of proper programming this way. if you want to program like that, VB would be a better language for you to learn, because it literally is a case of "draw a GUI in something similar to a sophisticated Microsoft Paint" then "add fragments of code onto the GUI to make things happen when you click buttons"
java programs arent built that way.. :/
-
take care with how you use the term "layout".. a layout to us means a Layout Manager, some clever thing that resizes and positions components for you.. JFrame is not a Layout Manager, so the term "JFrame layout" will confuse some java programmers.. "JFrame appearance" would be a better choice of words...
-
now.. how do you want your buttons.. horizontal, vertical or diagonally aligned?

I want to make a JFrame that has 2 buttons that lets me input hex (button 1) and decimal (button 2) so that it can be converted. So what I'm really trying to make is a little program that lets me convert from hex to decimal numbers and from decimal to hex.
Thanks for all the help so far, guys
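One way to sketch that hex/decimal converter (all names, the layout, and the two static helper methods here are my own invention) is to back the two buttons with Integer.parseInt(s, 16) and Integer.toHexString:

```java
import java.awt.GridLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JTextField;

public class HexDecFrame extends JFrame {

    // "ff" -> "255"
    static String hexToDec(String hex) {
        return Integer.toString(Integer.parseInt(hex.trim(), 16));
    }

    // "255" -> "ff"
    static String decToHex(String dec) {
        return Integer.toHexString(Integer.parseInt(dec.trim()));
    }

    public HexDecFrame() {
        super("Hex/Decimal Converter");
        final JTextField hexField = new JTextField(10);
        final JTextField decField = new JTextField(10);
        JButton toDec = new JButton("hex -> dec");
        JButton toHex = new JButton("dec -> hex");
        // button 1: read the hex field, write the decimal field
        toDec.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                decField.setText(hexToDec(hexField.getText()));
            }
        });
        // button 2: read the decimal field, write the hex field
        toHex.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                hexField.setText(decToHex(decField.getText()));
            }
        });
        getContentPane().setLayout(new GridLayout(2, 2));
        getContentPane().add(hexField);
        getContentPane().add(toDec);
        getContentPane().add(decField);
        getContentPane().add(toHex);
        pack();
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }

    public static void main(String[] args) {
        new HexDecFrame().setVisible(true);
    }
}
```

Typing ff in the hex field and clicking "hex -> dec" should show 255. Bad input will throw a NumberFormatException, which a real version would want to catch and report.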
Originally posted by general4172
//need to import this for jFrame
import javax.swing.*;
importing javax.swing.JFrame would be sufficient
do try and use the UBB [code] and [/code] tags to wrap any code you write.. it makes it readable
public class test extends JFrame{
if you're going to provide example code, do please make an effort to follow the Java Language Specification's recommendations that:
Class Names Start With A Capital Letter
public static void main(String args[]){
it's "String[] args"
i know that the difference is trivial and that the compiler doesn't mind.. but [] is a type identifier and hence belongs with the TYPE String, not the name "args"
After all, you have an array of Exceptions, perhaps.. and you know they are of type SQLException.. how would you cast an array from Object type to SQLException type? like this:
SQLException[] se_ary = (SQLException[])object_ary;
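A quick sketch to see that cast succeed. Note it only works because the array object really was created as an SQLException[]; casting a plain new Object[] would throw a ClassCastException at runtime:

```java
import java.sql.SQLException;

public class ArrayCastDemo {
    public static void main(String[] args) {
        // runtime type is SQLException[], static type is Object[]
        Object[] object_ary = new SQLException[] { new SQLException("boom") };
        SQLException[] se_ary = (SQLException[]) object_ary;
        System.out.println(se_ary[0].getMessage()); // prints "boom"
    }
}
```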
having your [] flitting about all over the place makes code harder to read because of its inconsistency.. aside from a declaration, you cannot use [] next to a variable name.. even in something like this, you must use a full array index:
Code:
int[][] the2D = new int[10][10];
for(int x = 0; x<the2D.length; x++){
for(int y = 0; y<the2D[].length; y++){
//do stuff
}
}
the red part (the2D[].length) is wrong.. an array index is required.. hence, try to avoid putting [] next to variable names.. always put it next to the type instead
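For completeness, here is how the corrected inner loop could read: the bound indexes the outer array (the2D[x].length) instead of the illegal the2D[].length:

```java
int[][] the2D = new int[10][10];
for(int x = 0; x<the2D.length; x++){
    for(int y = 0; y<the2D[x].length; y++){
        //do stuff
    }
}
```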
Hi,
I’m hoping to ask for some help please filtering the Coarse Universe of US stocks in QuantConnects 'research' section, and displaying the stock info (open, close, volume etc) in a dataframe. I’m new to Python (taking the EDX course), new to pandas/numpy (watching a lot of YouTube tutorials) and trading (listening to lots of quant podcasts) - my background is in website analytics and trying to learn Quant trading & Python at the same time (learn and practice together). I'm finding that most of the tutorials on QuantConnect are for full backtesting of algorithms in the 'lab' so I'm struggling get started in the 'research' section (not ready to do backtesting of an algorithm before researching signals themselves).
I've outlined below what I'm trying to achieve, and my work in progress code thats not working further below (I figure best practice is to give more info than less). If you're able to help with any of the sections that would be an amazing help, but ideally just a basic piece of working code that allows the Coarse Universe of US stocks in QuantConnects 'research' notebooks to be filtered would be an amazing help! Also, if you think a newbie like me should learn in a different way, please advise.
*****The rest of this thread post is extra detail on what I'm trying to do:*****
In the 'research' notebook:
1. Load the Coarse Universe of US stocks (~16k I believe)
2. Apply a basic filter for stocks >=$45 & <=$50 close price for a given day (eg 2017/5/15)
3. Print a count of how many stocks are returned
4. Sort these stocks high to low
5. Filter the top 50
6. Pull the daily stock info (Symbol, Open, close, high, low, volume) for a date range (e.g. 2017/3/15 to 2017/6/15) for these top 50 stocks
7. Add an additional column for day over day return rate
8. Display this data for the 50 stocks in a dataframe
9. Add the same SPY data (same data range, columns) into the same dataframe as the 51st row
Here’s my (newbie) first stab at the code (its a work in progress that obviously isn’t working yet):
#Start: initial general imports
##taken from: EmaCrossUniverseSelectionAlgorithm.py
from clr import AddReference
AddReference("System")
AddReference("QuantConnect.Algorithm")
AddReference("QuantConnect.Indicators")
AddReference("QuantConnect.Common")
from System import *
from QuantConnect import *
from QuantConnect.Data import *
from QuantConnect.Algorithm import *
from QuantConnect.Indicators import *
from System.Collections.Generic import List
import decimal as d
#end 'initial general imports'
#Start: QuantBook research imports
##taken from initial notebook. I think these are the standard settings for creating a QuantBook research books
###removed some duplicates with initial general imports.
%matplotlib inline
# Imports
AddReference("QuantConnect.Jupyter")
from QuantConnect.Data.Custom import *
from QuantConnect.Data.Market import TradeBar, QuoteBar
from QuantConnect.Jupyter import *
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# Create an instance - needed (I believe) in order to use the research tab
qb = QuantBook()
#End: QuantBook research imports
#need to create a class to set universe criteria within. All examples use 'QCAlgorithm' (presumably for 'lab' testing) so copying this.
class MyUniverseSelectionResearch(QCAlgorithm):
def Initialize(self): #need to initialize
self.AddUniverse(self.CoarseSelectionFunction) #needs to happen within initialize section according to:
self.SetStartDate(2017,5,15) #Set Start Date for the initial filtering of stocks
self.SetEndDate(2017,5,15) #Set End Date for the initial filtering of stocks
#figure out how to Set Start & end Date for pulling the stock info from 2017/3/15 to 2017/6/15, SetStartDate2 likely won't work as its a method, perhaps use a variable like: Date1 = self.SetStartDate2(2017,3,15)
def CoarseSelectionFunction(self, coarse):
# sort descending by daily close price
sortedByClosePrice = sorted(coarse, \
key=lambda x: x.Price, reverse=True)
#missing how to link the date, does self.SetEndDate1 work or
# we need to return only the symbol objects (as the Universe doesnt have things like high/low, open - presume need to pull this info from the stock symbols, pulling anything more than stock symbols at this stage seems a waste
return [ x.Symbol for x in sortedByClosePrice[:50] ]
#How many stock symbols does this $45-$50 filter return? (Figure out how to do)
self.Debug("Stock Sybmols matching criteria >>> print numpy.pi: " + print(count(x))
#restrict to symbols >=$45
return [ x.Symbol for x in sortedByClosePrice[45:50] ]
#MISSING SECTION:Pull the daily stock info (Symbol, Open, close, high, low, volume) for a date range (e.g. 2017/3/15 to 2017/6/15) for these top 50 stocks
#(no progress due to needing to understand universe methods first)
#Do I need to do a loop 50 times, looping something like x = qb.History([symbol], 90, Resolution.Daily)
#Add an additional column for day over day return rate
#this code worked for a previous research notebook for a single SPY symbol, should be adaptable (perhaps needs looping too) once know what to latch to
##drop everyting from SPY except closing, rename column to spy_closing
###spy_close = hist.drop(['open','high','low','volume'],1).rename(columns={'close': 'spy_closing'})
#create a percent change for SPY
###p_change = spy_close.pct_change(1).rename(columns={'spy_closing': 'spy_pct_change'})
#Need to add this new column into the dataframe once I know what to latch onto (figure out how to do it)
#Display this data for the 50 stocks in a dataframe
dftop50 = pd.DataFrame(hist)
print(dftop50)
##Figure out: Add the same SPY data (same data range, columns) into the same dataframe as the 51st row
Thanks in advance for any help, tips or advice that you can offer
Dean | https://www.quantconnect.com/forum/discussion/4525/newbie-help-filtering-coarse-universe-in-research-notebook-amp-stock-info-in-dataframe/* | CC-MAIN-2020-40 | refinedweb | 953 | 53.92 |
get_mouse_mickeys man page
get_mouse_mickeys — How far the mouse has moved since the last call to this function. Allegro game programming library.
Synopsis
#include <allegro.h>
void get_mouse_mickeys(int *mickeyx, int *mickeyy);
Description
Measures how far the mouse has moved since the last call to this function. The values of mickeyx and mickeyy will become negative if the mouse is moved left or up, respectively. The mouse will continue to generate movement mickeys even when it reaches the edge of the screen, so this form of input can be useful for games that require an infinite range of mouse movement.
Note that the infinite movement may not work in windowed mode, since under some platforms the mouse would leave the window, and may not work at all if the hardware cursor is in use.
See Also
install_mouse(3), exmouse(3)
Referenced By
disable_hardware_cursor(3), enable_hardware_cursor(3), exmouse(3), install_mouse(3). | https://www.mankier.com/3/get_mouse_mickeys | CC-MAIN-2018-05 | refinedweb | 149 | 62.07 |
Android Graphics Example.
Let’s take a quick tour of the DrawDemo screenshot. The blue circle in the upper left corner was drawn without anti-aliasing, notice the jagged circumference. The blue circle on the right was drawn with anti-aliasing turned on, thus the smooth circumference. “Style.STROKE” was drawn with Paint.Style.STROKE and paint.setAntiAlias(false). “Style.FILL” was drawn with Paint.Style.FILL and paint.setAntiAlias(true). A box was drawn around “!Rotated” by getting a bounding Rect that surrounds that same text. The “Rotated!” text was drawn by first rotating the canvas by 45 degrees, pivoting on the bounding Rect center point and then drawing the text. The triangles demonstrate using the Shape class and using a single instance to draw the shape at different locations with the Shape offset method. The “After canvas.restore()” text demonstrates how to undo the canvas.rotate(); Finally, there’s a an example of how to draw a dashed line using a DashPathEffect.
Here’s the source code for the DrawDemo application:
import android.app.Activity; import android.content.Context; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.DashPathEffect; import android.graphics.Paint; import android.graphics.Path; import android.graphics.Rect; import android.os.Bundle; import android.view.View; public class DrawDemo extends Activity { DemoView demoview; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); demoview = new DemoView(this); setContentView(demoview); } private class DemoView extends View{ public DemoView(Context context){ super(context); } @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); // custom drawing code here // remember: y increases from top to bottom // x increases from left to right int x = 0; int y = 0; Paint paint = new Paint(); paint.setStyle(Paint.Style.FILL); // make the entire canvas white paint.setColor(Color.WHITE); canvas.drawPaint(paint); // another way to do this is to use: // canvas.drawColor(Color.WHITE); // draw a solid blue circle paint.setColor(Color.BLUE); canvas.drawCircle(20, 20, 15, paint); // draw blue circle with antialiasing turned on paint.setAntiAlias(true); paint.setColor(Color.BLUE); canvas.drawCircle(60, 20, 15, paint); // compare the above circles once drawn // the fist circle has a jagged perimeter // the second circle has a smooth perimeter // draw a solid green rectangle paint.setAntiAlias(false); paint.setColor(Color.GREEN); canvas.drawRect(100, 5, 200, 30, paint); // create and draw triangles // use a Path object to store the 3 line segments // use .offset to draw in many locations // note: this triangle is not centered at 0,0 paint.setStyle(Paint.Style.STROKE); paint.setStrokeWidth(2); paint.setColor(Color.RED); Path path = new Path(); path.moveTo(0, -10); path.lineTo(5, 0); path.lineTo(-5, 0); path.close(); path.offset(10, 40); canvas.drawPath(path, paint); path.offset(50, 100); 
canvas.drawPath(path, paint); // offset is cumlative // next draw displaces 50,100 from previous path.offset(50, 100); canvas.drawPath(path, paint); // draw some text using STROKE style paint.setStyle(Paint.Style.STROKE); paint.setStrokeWidth(1); paint.setColor(Color.MAGENTA); paint.setTextSize(30); canvas.drawText("Style.STROKE", 75, 75, paint); // draw some text using FILL style paint.setStyle(Paint.Style.FILL); //turn antialiasing on paint.setAntiAlias(true); paint.setTextSize(30); canvas.drawText("Style.FILL", 75, 110, paint); // draw some rotated text // get text width and height // set desired drawing location x = 75; y = 185; paint.setColor(Color.GRAY); paint.setTextSize(25); String str2rotate = "Rotated!"; // draw bounding rect before rotating text Rect rect = new Rect(); paint.getTextBounds(str2rotate, 0, str2rotate.length(), rect); canvas.translate(x, y); paint.setStyle(Paint.Style.FILL); // draw unrotated text canvas.drawText("!Rotated", 0, 0, paint); paint.setStyle(Paint.Style.STROKE); canvas.drawRect(rect, paint); // undo the translate canvas.translate(-x, -y); // rotate the canvas on center of the text to draw canvas.rotate(-45, x + rect.exactCenterX(), y + rect.exactCenterY()); // draw the rotated text paint.setStyle(Paint.Style.FILL); canvas.drawText(str2rotate, x, y, paint); //undo the rotate canvas.restore(); canvas.drawText("After canvas.restore()", 50, 250, paint); // draw a thick dashed line DashPathEffect dashPath = new DashPathEffect(new float[]{20,5}, 1); paint.setPathEffect(dashPath); paint.setStrokeWidth(8); canvas.drawLine(0, 300 , 320, 300, paint); } } }
good simple program to try out various things and understand the concept !
thanks
Hell yeah!
After a whole day of sifting through the Google Android site, finally a simple and useful set of examples.
You are the man!
Thx!, great help
Thanks for the code. Very useful! Can you answer how you would clear the screen? I mean, I know how to make menus, they’re pretty simple, but what would the method look like to clear the screen?
Ditto. 😀
Too good… It was all I needed to start in Android
Thanks a lot.
Really needed a canvas example. Thanks a lot.
Awesome stuff dude.. Love this simple code.
Thanks dude..I am searching for this thing since last three days. But no where they have given as simple as you given. chaala chaala thanks.
Damn! I spent all day surfing in Android’s webpage figuring out how to draw a f!&%ing circle and nothing.
This is all what I was looking for, thanks man!
You are my new personal hero!
Awesome! Thank you!
| https://bestsiteinthemultiverse.com/2008/11/android-graphics-example/ | CC-MAIN-2022-27 | refinedweb | 868 | 54.59 |
Created on 2012-03-15.15:18:59 by danilo2, last changed 2012-03-15.16:04:36 by fwierzbicki.
Hi!
weakrefs in juthon 2.5.2 does not work as they should. concider this code from jython manual:
import weakref
class Object:
pass
o = Object()
r = weakref.ref(o)
o2 = r()
o is o2
del o, o2
print '!'
print r()
as a reusult we can see "<_main_.Object instance at 0x15434>" not None
Hi danilo2,
Looking at weakref at you'll note the paragraph:
A weak reference to an object is not enough to keep the object alive: when the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else.
Of particular note is "garbage collection is free to destroy..."
So even though the current implementation of CPython appears to destroy the weakref in a deterministic fashion, that will be due to it's reference counted gc implementation. Jython does not have deterministic garbage collection and so will not immediately destroy the weakref. In fact the time of destruction is in no way guaranteed. | http://bugs.jython.org/issue1851 | CC-MAIN-2014-35 | refinedweb | 189 | 75 |
Michael Speer wrote: > I wrote the following patch to the 7.2 branch of coreutils to allow > `sort` to sort by human readable byte sizes. I looked around a bit to > see what the status of previous attempts to integrate this > functionality were, but didn't see any very recent activity. This is > my first interaction with coreutils, so if I missed something obvious, > please point me towards it. > > Is the last potential patch ( > ) > moving through? If not, if I cleaned this up ( tabs, documentation, > and test cases ) and applied it to the current HEAD on savannah is > there a chance of getting this functionality into sort? Thanks for reviving this again. There was a more recent attempt that petered out unfortunately: > > Patch assumptions : > * that numbers will use the best representation ( never uses 1024b > instead of 1k, etc ) > * that the sizes will be specified via suffixes of b, K, M, G, T, P, > E, Z, Y or their alternately cased variants > > The first assumption results in checking only the suffix when they differ. > This enables it to match the output of `du -h / du --si`, but possibly > not other tools that do not conform to these assumptions. The consensus was that these assumptions are appropriate and useful. We assume C99 support now for coreutils so I tweaked your patch, the main change being to greatly shrink the lookup table initialisation. Note I commented out the lower case letters (except 'k') as I don't think any coreutils generate those and they could preclude supporting other suffixes in future. I'm not sure about doing that but I think it's better to err on the side of too few suffixes than too many? Something else to consider is to flag when a mixture of SI and IEC units are used, as this not being supported might not be obvious to users and could cause difficult to debug issues for users. I.E. flag an error if the following input is presented. 999MB 998MiB I added a very quick hack for that to the patch for illustration. 
I also noticed that you didn't terminate the fields before processing as was done for the other numeric sorts? So I changed that also in the attached patch but didn't analyze it TBH. cheers, Pádraig. p.s. obviously docs and help and tests need to be written, but we can do that after we get the implementation done.
diff --git a/src/sort.c b/src/sort.c index f48d727..a2ed015 100644 --- a/src/sort.c +++ b/src/sort.c @@ -176,6 +176,7 @@ struct keyfield bool random; /* Sort by random hash of key. */ bool general_numeric; /* Flag for general, numeric comparison. Handle numbers in exponential notation. */ + bool human_numeric; /* Flag for sorting by common suffixes. */ bool month; /* Flag for comparison by month name. */ bool reverse; /* Reverse the sense of comparison. */ bool version; /* sort by version number */ @@ -426,7 +427,7 @@ enum SORT_OPTION }; -static char const short_options[] = "-bcCdfgik:mMno:rRsS:t:T:uVy:z"; +static char const short_options[] = "-bcCdfghik:mMno:rRsS:t:T:uVy:z"; static struct option const long_options[] = { @@ -442,6 +443,7 @@ static struct option const long_options[] = {"merge", no_argument, NULL, 'm'}, {"month-sort", no_argument, NULL, 'M'}, {"numeric-sort", no_argument, NULL, 'n'}, + {"human-sort", no_argument, NULL, 'h'}, {"version-sort", no_argument, NULL, 'V'}, {"random-sort", no_argument, NULL, 'R'}, {"random-source", required_argument, NULL, RANDOM_SOURCE_OPTION}, @@ -1673,6 +1675,54 @@ numcompare (const char *a, const char *b) return strnumcmp (a, b, decimal_point, thousands_sep); } +/* error if a mixture of SI and IEC units used. */ +static void +check_mixed_SI_IEC (char suffix) +{ + static int seen_si = -1; + bool si_present = suffix == 'i'; + if (seen_si != -1 && seen_si != si_present) + error (SORT_FAILURE, 0, _("Both SI and IEC suffixes present")); + seen_si = si_present; +} + +/* Compare numeric entities ending in human readable size specifiers + b < K < M < G < T < P < E < Z < Y + We assume that numbers are properly abbreviated. + For example, you will never see 500,000,000b, instead of 5M. 
*/ + +static int +human_compare(const char *a, const char *b) +{ + static const char weights [] = { + ['K']=1, ['M']=2, ['G']=3, ['T']=4, ['P']=5, ['E']=6, ['Z']=7, ['Y']=8, + ['k']=1, /*['m']=2, ['g']=3, ['t']=4, ['p']=5, ['e']=6, ['z']=7, ['y']=8,*/ + }; + + while (blanks[to_uchar (*a)]) + a++; + while (blanks[to_uchar (*b)]) + b++; + + const char *ar = a; + const char *br = b; + + while( ISDIGIT (*ar) || (*ar) == decimal_point || (*ar) == thousands_sep ) + ar++; + while( ISDIGIT (*br) || (*br) == decimal_point || (*br) == thousands_sep ) + br++; + + check_mixed_SI_IEC (*(ar+1)); + check_mixed_SI_IEC (*(br+1)); + + int aw = weights[to_uchar (*ar)]; + int bw = weights[to_uchar (*br)]; + + return (aw > bw ? 1 + : aw < bw ? -1 + : strnumcmp( a , b , decimal_point , thousands_sep)); +} + static int general_numcompare (const char *sa, const char *sb) { @@ -1917,13 +1967,14 @@ keycompare (const struct line *a, const struct line *b) if (key->random) diff = compare_random (texta, lena, textb, lenb); - else if (key->numeric | key->general_numeric) + else if (key->numeric | key->general_numeric | key->human_numeric) { char savea = *lima, saveb = *limb; *lima = *limb = '\0'; - diff = ((key->numeric ? numcompare : general_numcompare) - (texta, textb)); + diff = ((key->numeric ? numcompare + : key->general_numeric ? general_numcompare + : human_compare) (texta, textb)); *lima = savea, *limb = saveb; } else if (key->version) @@ -2889,7 +2940,7 @@ check_ordering_compatibility (void) for (key = keylist; key; key = key->next) if ((1 < (key->random + key->numeric + key->general_numeric + key->month - + key->version + !!key->ignore)) + + key->version + (!!key->ignore) + key->human_numeric)) || (key->random && key->translate)) { /* The following is too big, but guaranteed to be "big enough". 
*/ @@ -2901,6 +2952,8 @@ check_ordering_compatibility (void) *p++ = 'f'; if (key->general_numeric) *p++ = 'g'; + if (key->human_numeric) + *p++ = 'h'; if (key->ignore == nonprinting) *p++ = 'i'; if (key->month) @@ -2992,6 +3045,9 @@ set_ordering (const char *s, struct keyfield *key, enum blanktype blanktype) case 'g': key->general_numeric = true; break; + case 'h': + key->human_numeric = true; + break; case 'i': /* Option order should not matter, so don't let -i override -d. -d implies -i, but -i does not imply -d. */ @@ -3140,7 +3196,8 @@ main (int argc, char **argv) gkey.sword = gkey.eword = SIZE_MAX; gkey.ignore = NULL; gkey.translate = NULL; - gkey.numeric = gkey.general_numeric = gkey.random = gkey.version = false; + gkey.numeric = gkey.general_numeric = gkey.human_numeric = false; + gkey.random = gkey.version = false; gkey.month = gkey.reverse = false; gkey.skipsblanks = gkey.skipeblanks = false; @@ -3219,6 +3276,7 @@ main (int argc, char **argv) case 'd': case 'f': case 'g': + case 'h': case 'i': case 'M': case 'n': @@ -3471,6 +3529,7 @@ main (int argc, char **argv) | key->numeric | key->version | key->general_numeric + | key->human_numeric | key->random))) { key->ignore = gkey.ignore; @@ -3480,6 +3539,7 @@ main (int argc, char **argv) key->month = gkey.month; key->numeric = gkey.numeric; key->general_numeric = gkey.general_numeric; + key->human_numeric = gkey.human_numeric; key->random = gkey.random; key->reverse = gkey.reverse; key->version = gkey.version; @@ -3495,6 +3555,7 @@ main (int argc, char **argv) | gkey.month | gkey.numeric | gkey.general_numeric + | gkey.human_numeric | gkey.random | gkey.version))) { | http://lists.gnu.org/archive/html/bug-coreutils/2009-04/msg00249.html | CC-MAIN-2015-22 | refinedweb | 1,117 | 54.52 |
There are plenty of books about software engineering, but only a few of them rank among my favorites. I read all of those that do over and over again, and I might just update this post in the future when I stumble upon something else that's decent.
Note that I tried to put the most important books at the top of the list.
Object Thinking by David West. This is the best book I've read about object-oriented programming, and it totally changed my understanding of it. I would recommend you read it a few times. But before reading, try to forget everything you've heard about programming in the past. Try to start from scratch. Maybe it will work for you too :)
PMP Exam Prep, Eighth Edition: Rita's Course in a Book for Passing the PMP Exam by Rita Mulcahy. This book is my favorite for project management. Even though it's about the PMI approach and PMBOK in particular, it is a must-read for everyone who is interested in management. Ignore the PMBOK specifics and focus on the philosophy of project management and the role of project manager in it.
The Art of Software Testing by Glenford J. Myers et al. You can read my short review of this book here. The book perfectly explains the philosophy of testing and destroys many typical myths and stereotypes. No matter what your job description is, if you're working in the software industry, you should understand testing and its fundamental principles. This is the only book you need in order to get that understanding.
Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. All you need to know about your unit testing is in this book. I'm fully aware that I didn't include famous software engineer Kent Beck's book in this list because I don't like it at all. You definitely should read it, just to know what's going on, but it won't help you write good tests. Read this one instead, and read it many times.
Working Effectively With Legacy Code by Michael Feathers. This is awesome reading about modern software development, its pitfalls, and typical failures. Most of the code we're working on now is legacy (a.k.a. open source). I read this book as a novel.
Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation by Jez Humble and David Farley. This is a perfect book about software delivery, continuous integration, testing, packaging, versioning, and many other techniques involved in programming. It's definitely a must-read for anyone who is serious about software engineering.
XML in a Nutshell, Third Edition by Elliotte Rusty Harold and W. Scott Means. XML is my favorite standard. And I hated it before I read this book. I didn't understand all the strange prefixes, namespaces, XPath expressions, and schemes. Just this one book changed everything, and ever since reading it, I've used XML everywhere. It is very well written and easy to read. It's a must for everybody.
Java Concurrency in Practice by Brian Goetz et al. This is a very practical book about Java multi-threading, and at the same time, it provides a lot of theoretical knowledge about concurrency in general. I highly recommend you read it at least once.
Effective Modern C++: 42 Specific Ways to Improve Your Use of C++11 and C++14 by Scott Meyers. No matter what language you're using, this book is very interesting and very useful. It makes many important suggestions about better C++ coding. If you understand most of them, your Java/Ruby/Python/Scala coding skills will improve significantly.
Code Complete: A Practical Handbook of Software Construction, Second Edition by Steve McConnell. Consider this the bible of clean coding. Read it a few times and use it as a reference manual in debates with your colleagues. It mentions the most terrible anti-patterns and worst practices you'll see in modern programming. To be a good programmer, you must know all of them.
Software Estimation: Demystifying the Black Art by Steve McConnell. This one's an interesting read about software engineering and its most tricky part—estimations. At the least, read it to be aware of the problem and possible solutions.
Writing Effective Use Cases by Alistair Cockburn. An old and very good book, you won't actually use anything from this in your real projects, but you will pick up the philosophy of use cases, which will redirect your mind in the right direction. Don't take this book as something practical; these use cases are hardly used anywhere today, but the idea of scoping functionality this way is absolutely right.
Software Requirements, Third Edition by Karl Wiegers (author) and Joy Beatty. A superb book about requirements analysis, the first and most important activity in any software project. Even if you're not an analyst, this book is a must-read.
Version Control With Git: Powerful Tools and Techniques for Collaborative Software Development by Jon Loeliger and Matthew McCullough. This title serves as a practical guide for Git, a version control system. Read it from cover to cover and you will save many hours of your time in the future. Git is a de-facto standard in version control, and every programmer must know its fundamental principles—not from a cheat sheet but from an original source.
JavaScript: The Definitive Guide: Activate Your Web Pages by David Flanagan. JavaScript is a language of the modern Web, and this book explains it very well. No matter what kind of software you develop, you must know JavaScript. Don't read it as a practical guide (even though it's called a guide) but rather as food for thought. JavaScript offers a lot to learn for Java/Ruby/Python developers.
CSS: The Definitive Guide by Eric A. Meyer. CSS is not just about colors and shadows, and it's not only for graphic designers. CSS is a key language of the modern Web. Every developer must know it, whether you're working with a back-end, front-end, or desktop application in C++.
Also, check my GoodReads profile. | http://www.yegor256.com/2015/04/22/favorite-software-books.html | CC-MAIN-2017-26 | refinedweb | 1,038 | 65.93 |
namekuseijin <namekuseijin.nospam at gmail.com> writes: > C); No, it only constructs one list (the zip() one) and only in Python 2.x - in Python 3.x, zip return a special 'zip object'. There is no list comprehension. It's a generator expression [1]. To avoid the list created by zip in python 2.x (for x large enough!), just do: from itertools import izip as zip Another way to define the function that may appeal to you, as a lisper. def compare(a, b, comp=operator.eq): return len(a) == len(b) and all(map(comp, a, b)) The same remark applies to map() here as to zip() above. > >> * it evaluates all the elements, even if one is false at the beginning; It does not [2]. > This evaluates just until finding one that is false: > > return (len(a) == len(b)) and not any(not comp(*t) for t in > (zip(a, b))) > > plus the zip call enclosed in parentheses got turned into an iterator. not any(not i for i in iterable) is not an optimisation of all(iterable). Refer to [2]. Moreover, putting a list in parenthesis does not magically turn it into a generator. -- Arnaud [1] [2] | https://mail.python.org/pipermail/python-list/2009-April/534823.html | CC-MAIN-2017-04 | refinedweb | 200 | 77.64 |
In my previous post I illustrated the Levenshtein edit distance by comparing the opening paragraphs of Finnegans Wake by James Joyce and a parody by Adam Roberts.
In this post I’ll show how to align two sequences using the sequence alignment algorithms of Needleman-Wunsch and Hirschberg. These algorithms can be used to compare any sequences, though they are more often used to compare DNA sequences than impenetrable novels and parodies.
I’ll be using Gang Li’s implementation of these algorithms, available on github. I believe the two algorithms are supposed to produce the same results, that Hirschberg’s algorithm is a more space-efficient implementation of the Needleman-Wunsch algorithm, though the two algorithms below produce slightly different results. I’ll give the output of Hirschberg’s algorithm.
Li’s alignment code uses lists of characters for input and output. I wrote a simple wrapper to take in strings and output strings.
from alignment import Needleman, Hirschberg def compare(str1, str2): seq1 = list(str1) seq2 = list(str2) for algorithm in [Needleman(), Hirschberg()]: a, b = algorithm.align(seq1, seq2) print("".join(a)) print("".join(b)) print()
The code inserts vertical bars to indicate spaces added for alignment. Here’s the result of using the Needleman-Wunsch algorithm on the opening paragraphs of Finnegans Wake and the parody Finnegans Ewok.
|||riverrun, past Ev|e| and Adam'||||s, mov|i|er|un, past ||new and |||||hopes, from swe|rv||e of shore|||| to bend of from s||tr|ike of |||||back to bend of b|||ay, brings us by a commodius |jeday, brings us by a commodius vic|u||s of recirculation back to |||lucas of recirculation back to H|owth Ca||stle|||| and E|nvi||r|ons. |fo||||||rest||moon and |en||dor.||||
I mentioned in my previous post that I could compare the first four paragraphs easily, but I had some trouble aligning the fifth paragraphs. The fifth paragraphs of each version start out quite simiar:
Bygme||ster Fi|nnega||n, of the Bygm|onster ||Ann||akin, of the Stutte||r|||||||ing Hand, f|re|emen'|s ||||||Throatchokin| Hand, for|cemen|’s mau-rer, lived in the broadest way mau-rer, lived in the broadest way immarginable in his rushlit immarginable in his rushlit toofar-|||back for messuages before toofar| — back for messuages before
but then Roberts takes the liberty of skipping over a large section of the original. This is what I suspected by looking at the two texts, but Hirschberg’s algorithm makes the edits obvious by showing two long sequences of vertical bars, one about 600 characters long and another about 90 characters long. | http://www.statsblogs.com/2018/11/24/sequence-alignment/ | CC-MAIN-2019-22 | refinedweb | 439 | 57.1 |
It even goes a step further and offers numerous animations and reminder options, adding to this free app’s appeal. CDEX GRATIS DOWNLOAD allows you to control more of the icons at the top of your computer’s menu bar, including system icons that are typically off limits, making it a very useful app for those with limited space. Sitting in the menu bar, itself, CDEX GRATIS DOWNLOAD is low profile and uses minimal memory to perform basic functions like hiding icons, moving them into the separate CDEX GRATIS DOWNLOAD menu, and setting notifications to show the items only when necessary. Installation of CDEX GRATIS DOWNLOAD CDEX GRATIS DOWNLOADs. If you want to control how and when your
icons appear in the menu bar, consider downloading CDEX GRATIS DOWNLOAD for Mac. It’s fast and easy to set up and offers a range of options to keep your menu bar under control. It is free to try with the trial version, allowing you to get a full feel for how it works with a variety of different icons. CDEX GRATIS DOWNLOAD CDEX GRATIS DOWNLOAD is fairly straightforward and once you are done the interface clearly shows you what to do next and
These small issues aside, DOWNLOAD HINDI SONG SHINING IN THE SHADE is a very powerful, very effective replacement for the standard Mac Calendar app. Everything pops out in one smooth animation, you’ll see DOWNLOAD HINDI SONG SHINING IN THE SHADE broken down by time and date, and you can click through them with ease – not to mention the number of keyboard DOWNLOAD HINDI SONG SHINING IN THE SHADE you can set. If you want a more compact yet more powerful calendar and reminder management tool for your Mac, then consider downloading DOWNLOAD HINDI SONG SHINING IN THE SHADE. It’s easy to use, loaded with features, and while setup can be frustrating, once you get it down, it will make everything a bit easier. It’s free to try but requires a paid upgrade after 14 days.DOWNLOAD HINDI SONG SHINING IN THE SHADE is designed to help you remove any duplicate tracks from your iTunes library with an outside scan action. After loading the app, you can select specific tracks in your library and run a scan to detect any duplicates and then take action to remove them, DOWNLOAD HINDI SONG SHINING IN THE SHADE, DOWNLOAD HINDI SONG SHINING IN THE SHADE is a good one to install. It offers quite a few options for how to strip out and remove dead tracks. The app is free to try but will cost $8 to purchase if you need to use it multiple times in the future. DOWNLOAD HINDI SONG SHINING IN THE SHADE for Mac allows you to create custom calendar search criteria, load a number of events that match that criteria, and then edit, move, or copy/paste them in bulk. The result is an app that could be useful if only it provided more guidance on how to use its many functions. While the tool offers quite a few options for how to search and edit events, the options are not always easy to navigate. When you open DOWNLOAD HINDI SONG SHINING IN THE SHADE,
Those users may find the automated functioning of DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP for Mac useful. However, regular users looking for a simple browser should look elsewhere. DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP for Mac offers a free trial version, but its limitations and restrictions are unknown. The full version requires a $29.95 payment. While there was no native installer, the program downloaded and completed setup as expected. Upon startup the first DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP as a browser, only advanced users would be able to take advantage of the unique automated features of DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP for Mac. Editors’ note: This is a review of the
trial version of DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XPte without restrictions. Download and installation occurred quickly, despite the lack of a native installer. A license agreement file appeared after download, but no acceptance was required for DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS
XPtion. The program appears to be very simple with no menu for DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XPtion., DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP for Mac works well, but its instant removal of files could be a problem for some users. With a number of RSS DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XPs available, it may be difficult for users to find one that works well on their Mac. DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP for Mac, while simple, is a fully DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP RSS feed DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP with a basic interface. With no trial version available, the full program costs $3.99 from the Mac App Store. As with other applications from Apple’s store, download of DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP for Mac was smooth and no user agreement acceptance was needed to start. DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP. DOWNLOAD KODAK EASYSHARE SOFTWARE FOR WINDOWS XP for Mac is a welcome program for those users and syncs data quickly and securely.
The opening screen discusses X and Y coordinates and the intricacies of geometry. However, after a few moments spent playing with the app, you’ll find that it offers a unique experience even for those that aren’t mathematicians, or even mathematically inclined. The goal of ATI-ATIHAN MUSIC. ATI-ATIHAN MUSIC ATI-ATIHAN MUSIC. ATI-ATIHAN MUSIC ATI-ATIHAN MUSIC, ATI-ATIHAN MUSIC Phon FREE EXCEED DOWNLOAD or FREE EXCEED DOWNLOAD,. FREE EXCEED DOWNLOAD is the standalone app for Google’s conversation hub, Hangout, and for the most part recreates the desktop experience very well, even adding some additional features for broadcasting on the go. The app, which is scaled for both iPhone and iPad, is designed for sharing content and is deeply integrated with FREE EXCEED DOWNLOAD. If you enjoy using FREE EXCEED DOWNLOAD, then this app is a natural extension of that. Google’s FREE EXCEED DOWNLOAD is already a very popular and efficient tool, offering a free alternative to pricey Web conferencing tools. However, the mobile app takes that functionality to the next level. The Google designed interface is
of course attractive and easy to use, but beyond that, FREE EXCEED DOWNLOAD connects with all other types of accounts, even other computers, and you can pause and continue your hangout between devices. You can also send photos or text messages from your device with the app, something other video chat tools don’t always support. There are issues, of course. Inability to see who is online, inability to set your status when logged in as you would in Skype, and the requirement that you have an active FREE EXCEED DOWNLOAD account are all issues that can
limit usability for many potential users. Google’s FREE EXCEED DOWNLOAD app is a fine extension of their FREE EXCEED DOWNLOAD service. It is not for everyone, though, namely because of its social-based restrictions. If you use FREE EXCEED DOWNLOAD, however, and want a free Web video chat tool you can take with you on the go, FREE EXCEED DOWNLOAD is a great option. FREE EXCEED DOWNLOAD manages to not only make sharing files fun, but also it streamlines the process of moving files between your phone and computer in a way that few could have dreamed up. While most of the functions in FREE EXCEED DOWNLOAD are based on its illustrative name, there are numerous other options built into the app–far more than you can sort through in your first sitting. FREE EXCEED DOWNLOAD FREE EXCEED DOWNLOAD, and for good reason. While the setup and interface can at times be a little overwhelming, once it is running, the tutorials walk you through every step of the process, whether moving photos and files or sharing music with friends. And of course, there is FREE EXCEED DOWNLOAD integration to make this all even easier. If you are looking for a faster, easier way to transfer files between your phone and computer or to share files on your phone with someone else, download FREE EXCEED DOWNLOAD. It’s easy to use, requires no account with FREE EXCEED DOWNLOAD, and is extremely fast and responsive–one of the best transfer apps on the App Store. FREE EXCEED DOWNLOAD automatically recognizes faces on the screen and adds a mustache to them, even as the camera is moving. Such a concept could make for a fun app, but unfortunately FREE EXCEED DOWNLOAD never builds on this basic idea and the result is ultimately forgettable. The app does nothing else, the interface is very limited, and the ads are intrusive, but the delivery of the primary draw in this app is well done.. FREE EXCEED DOWNLOAD is a free app that works as advertised but doesn’t go above and beyond initial expectations in any way.
The opening screen discusses X and Y coordinates and the intricacies of geometry. However, after a few moments spent playing with the app, you’ll find that it offers a unique experience even for those that aren’t mathematicians, or even mathematically inclined. The goal of FRUITY LOOPS 3 FULL VERSION FREE. FRUITY LOOPS 3 FULL VERSION FREE FRUITY LOOPS 3 FULL VERSION FREE. FRUITY LOOPS 3 FULL VERSION FREE FRUITY LOOPS 3 FULL VERSION FREE, FRUITY LOOPS 3 FULL VERSION FREE terrify your friends or family. The app is very simple, using motion detection and a selection of scary and spooky sounds to alarm whomever touches the phone, but it works quite well and could theoretically be a strong component in your next Halloween scare playbook. The concept is simple. Choose a sound — either a scream, a wolf howling, or a very eerie “I can see you” message. Then choose how sensitive to make the motion sensor (or change it to a straight timer) and press start. The next time someone picks up the phone (the screen black
While not the most advanced slideshow app out there, it is a good choice for amateur photographers, as it has an easy-to-use interface and nice 3D PINBALL FOR MAC FREE DOWNLOADity. Be ready to invest in the paid version of the app, though. 3D PINBALL FOR MAC FREE DOWNLOAD, 3D PINBALL FOR MAC FREE DOWNLOAD for Mac has some nice features for you. To enjoy this app, though, you will have to buy the paid version; the annoying watermark makes the trial version good only for testing the app’s features.3D PINBALL FOR MAC FREE DOWNLOAD for Mac enables you to convert Wikipedia pages into podcasts with no hassle so that you can listen to them on your smartphone, tablet, or eBook reader. While the 3D PINBALL FOR MAC FREE DOWNLOAD playback is robotic, the speed of the app and its ability to
import files directly into iTunes make it worthwhile. This app is great for students. 3D PINBALL FOR MAC FREE DOWNLOAD 3D PINBALL FOR MAC FREE DOWNLOAD may slightly annoy you. A small and fast app, 3D PINBALL FOR MAC FREE DOWNLOAD for Mac lives up to its promises. If it had a more natural, less robotic 3D PINBALL FOR MAC FREE DOWNLOAD, it would have been a better app; but it’s still worth trying. Overall, this application does its job well and can enhance your learning process, helping you to learn more on the go.3D PINBALL FOR MAC FREE DOWNLOAD for Mac gives you instant access to your iOS device, enabling you to back up and explore any type of content stored on it, including media files, call logs, text messages, contacts, and more. Since it features iTunes-like backup 3D PINBALL FOR MAC FREE DOWNLOADity, it’s capable of completely replacing iTunes as a device 3D PINBALL FOR MAC FREE DOWNLOAD. You’ll like its streamlined design and drag-and-drop functions. 3D PINBALL FOR MAC FREE DOWNLOAD businesses. Note that prior to using this feature, you may need to spend 20-30 seconds calibrating your device’s compass. If you’re in the U.S., ELECTRIC SLIDE GRANDMASTER SLICE FREE DOWNLOAD is definitely a great way to discover new shops and stores. International coverage isn’t as in-depth yet, but the app is worth a download if you’re in North America. ELECTRIC SLIDE GRANDMASTER SLICE FREE DOWNLOAD is a security app for the iPhone that creates and stores chains of information that would be otherwise insecure in a notepad file. ELECTRIC
SLIDE GRANDMASTER SLICE FREE DOWNLOAD′s larger screen, it runs smoothly, nonetheless. ELECTRIC SLIDE GRANDMASTER SLICE FREE DOWNLOAD. ELECTRIC SLIDE GRANDMASTER SLICE FREE DOWNLOAD,
Options include clearing the cache and history from your browser, running daily, weekly, and monthly cron scripts, clearing system logs, application logs, archived logs, and crash logs, and removing KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOADs.. KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD is a streamlined and effective program for completing one important aspect of routine maintenance for your system. If you’re looking for a comprehensive system care program, this isn’t it. But it does offer a lot of KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOADity and customization options for users of all experience levels, and it’s a great tool for System Administrators managing multiple machines. This app is free to try, and it costs $3.50 if you choose to make a purchase. KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD, all you have to do is enter the User ID and Password for the machine you want to control into your own when prompted, and you’ll be automatically KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOADed. Quick KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD. KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD is a great tool for accessing your own computer remotely or helping another user with a problem on theirs. It does have some limitations when it comes to mobile devices, so you’ll get the most out of it if you use it strictly on laptop or KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD computers. Even with this restriction, though, the program offers many benefits and runs smoothly. Sponsored KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD,. KANNADA RAJKUMAR OLD MOVIE SONGS FREE DOWNLOAD for Mac is a great option for making resizing photos accessible for users of all experience levels. It’s free to try with the limitation that it places multiple watermarks on each resized image. If you want to purchas
One feature that sets the widget apart from other Mac calculators is the automatic labeling of equation results for easier access at a later point. You may even create your own labels to use in subsequent calculations. Another great touch is the ability to switch between different numbering styles like decimal, binary, and engineering in real time, a feature that makes it an impromptu DEAD SILENCE THEME MUSIC MP3 DOWNLOAD, as well. While there are certainly many calculator apps and widgets out there, DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac looks like one of the better ones due to its sleek design and thoughtful features. Whether you’re an architect, programmer, scientist, mathematician, or student, this widget has much to offer you.DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac appears as a Dashboard widget and contains short descriptions of several popular third-party apps that integrate with the Mac Menu bar. It’s designed to help you improve your overall Mac knowledge by introducing you to potentially helpful software and general OS X tips. It’s particularly appealing for new Mac users. DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac installs quickly and then takes you to the Dashboard. The widget, itself, is extremely basic: It’s a list of third-party software used directly from the Mac Menu bar. Also, the widget contains some lesser-known features of OS X related to the Menu bar. Since DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac is not an app store, it doesn’t support direct downloads of the
featured apps. If you click on the title or screenshot of an app, a browser window will open, taking you to the product download page. In our tests, however, we have DEAD SILENCE THEME MUSIC MP3 DOWNLOAD that several of the links were either expired or did not direct us to a relevant site. Although it’s basic, DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac can still offer you value if you are not familiar with the greater Mac app ecosystem and want to learn more. It can be a good read, especially if you are
interested in software to enhance your Mac experience. Don’t expect too much from it, though. Featuring compelling animations and slick imagery, DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac brings with it the spirit of the “Matrix” movies. While running, the glyphs on the screen animate themselves to form mosaic representations of various characters from the popular trilogy. DEAD SILENCE THEME MUSIC MP3 DOWNLOAD, DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac helps you get immersed in the universe depicted in the movies. It’s quite basic with only a few configuration options, but it can delight the movie fans. DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac allows you to run Flash games and applications in fullscreen directly from your hard drive. Because it automatically associates itself with SWF files, access to your favorite games is just a double tap away. But don’t expect it to play Flash files designed to be played only from the Web. DEAD SILENCE THEME MUSIC MP3 DOWNLOAD for Mac comes with a README file and a small but useful guide to sites for downloading Flash games. The app’s interface is quite minimalistic – actually, non-existent. You load games by either double-tapping them in Finder or by using the Open option in the app’s File menu. The app keeps its minimalist style even in the Preferences window, with just two options to adjust, one being a fullscreen mode. When comparing the app’s performance with that of Safari, we DEAD SILENCE THEME MUSIC MP3 DOWNLOAD that it needs on average 90 percent less memory than the browser, while the processor load was roughly the same. The great thing about this app is that it allows you to play Flash games, a | http://baseofdownloads.com/ | CC-MAIN-2016-30 | refinedweb | 3,117 | 57.91 |
Disclaimer:
- REPL (read-eval-print-loop) abilities which allow for a very fast feedback loop while experimenting
- Terseness – the lack of type names everywhere makes code shorter
- Code evaluated at execution time (so config files can be scripts, etc.)
- Duck typing
- Dynamic reaction to previously undeclared messages
- Other parts of dynamic typing I'm unaware of (how could there not be any?)
- How often do you take advantage of dynamic typing in a way that really wouldn’t be feasible (or would be very clunky) in a statically typed language?
- Is it usually the same single problem which crops up regularly, or do you find a wide variety of problems benefit from dynamic typing?
- When you declare a variable (or first assign a value to a variable, if your language doesn’t use explicit declarations) how often do you really either not know its type or want to use some aspect of it which wouldn’t typically have been available in a statically typed environment?
- What balance do you find in your use of duck typing (the same method/member/message has already been declared on multiple types, but there’s no common type or interface) vs truly dynamic reaction based on introspection of the message within code (e.g. building a query based on the name of the method, such as FindBooksByAuthor("Josh Bloch"))?
- What aspects of dynamic typing do I appear to be completely unaware of?
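To make the FindBooksByAuthor style above concrete: in Python, a class can intercept calls to methods it never declared via `__getattr__` and build the query from the method name at runtime. This is a minimal sketch; the `Library` class and its data are invented purely for illustration:

```python
class Library:
    """Answers find_books_by_<field>(value) without ever declaring those methods."""

    def __init__(self, books):
        self._books = books  # e.g. [{"author": ..., "title": ...}, ...]

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails, i.e. for
        # "messages" this class never declared.
        prefix = "find_books_by_"
        if name.startswith(prefix):
            field = name[len(prefix):]
            return lambda value: [b for b in self._books if b.get(field) == value]
        raise AttributeError(name)

library = Library([
    {"author": "Josh Bloch", "title": "Effective Java"},
    {"author": "Jon Skeet", "title": "C# in Depth"},
])
print(library.find_books_by_author("Josh Bloch"))
```

Ruby's counterpart is method_missing; the interesting part is that the set of "methods" is open-ended and decided at call time.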
Hopefully someone will be able to turn the light bulb on for me, so I can be more genuinely enthusiastic about dynamic typing, and perhaps even diversify from my comfort zone of C#…
57 thoughts on “Where do you benefit from dynamic typing?”
The problem is that the typing system is just one aspect of a larger picture. If you don’t have dynamic dispatch (like you do in Ruby but don’t in C#), then dynamic typing is the same as passing everything as an object and casting it to the desired type in your methods. If your language doesn’t make it easy to inspect or reflect on your objects, then dynamic typing is more likely to cause you more ceremonious pain than syntactic gain.
Ruby emphasizes behavior at runtime. C# emphasizes behavior at compile time. Turning dynamic typing “on” in C# is not the same as taking full advantage of a dynamically typed language like Ruby. The differences in the languages are so much larger than their respective syntax. I don’t think you can truly learn and understand dynamic typing in a statically typed bubble.
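As a rough illustration of the "behavior at runtime" point, here is a Python sketch (the class names are invented) of late-bound message dispatch: the method is looked up by name at call time, with no shared interface or base class in sight:

```python
class Greeter:
    def hello(self):
        return "hello"

class Alarm:
    def hello(self):
        return "WAKE UP"

def send(obj, message, *args):
    # The dispatch decision happens entirely at runtime: any object
    # exposing a method with this name will do.
    return getattr(obj, message)(*args)

print(send(Greeter(), "hello"))
print(send(Alarm(), "hello"))
```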
Generally, I believe that many of the things you can do with dynamic typing can also be achieved with well-designed strongly-typed code. However, there are a few areas where dynamic typing can add a level of expressiveness that strongly-typed languages lose. For example:
1. Creating Domain-Specific Languages. I’ve seen a few elegant examples of this in languages like Ruby and Python; and while you can certainly do this in a strongly typed language (the MS DSL framework is actually quite nice), dynamic typing helps you fit the syntax and structure of your DSL to better fit your domain. Also, when creating DSLs a common task is to transform the constructs and expressions in the DSL into equivalent forms in another representation (C#, SQL, etc.). Untyped languages allow you to treat all expressions homogeneously. Take a look at.
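A toy sketch of the transformation point: in Python, keyword arguments let a small "DSL" expression be captured and rewritten into another representation (here SQL text; the class, method names, and output format are all invented for illustration):

```python
class Query:
    """Toy fluent query DSL: each call returns self so clauses chain."""

    def __init__(self, table):
        self._table = table
        self._clauses = []

    def where(self, **conditions):
        # Field names arrive as keyword arguments; nothing declares
        # them up front, so the DSL's vocabulary is open-ended.
        for field, value in conditions.items():
            self._clauses.append(f"{field} = {value!r}")
        return self

    def to_sql(self):
        sql = f"SELECT * FROM {self._table}"
        if self._clauses:
            sql += " WHERE " + " AND ".join(self._clauses)
        return sql

print(Query("books").where(author="Josh Bloch").to_sql())
```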
2. Configuration Management. Many systems today use high-level, strongly typed languages like C# or Java for their implementation, and then rely on flimsy XML or KVP (key-value pair) models for configuration. I have two major “beefs” with using XML/KVP for configuration: a) the structure and syntax of XML/KVP is obtuse, and lends itself to a lot of repetition/redundancy, and b) XML/KVP is entirely declarative and generally doesn’t provide a way to intermix imperative statements. For example, try creating a web.config file that scales the number of worker threads in your ASP.NET thread pool to be equal to the number of CPU cores times 3. You can’t. Or try dividing a single configuration file into manageable modules that get “dynamically included” as needed. Not generally possible. But if you use a language like Ruby or PHP to define your configuration, you can easily mix declarative and imperative constructs as you need to; you can also dynamically assemble them from multiple fragments.
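The worker-thread example from this comment becomes trivial once the configuration file is itself a script; a sketch (the file name and setting names are invented):

```python
# settings.py -- configuration that is ordinary executable code
import os

worker_threads = (os.cpu_count() or 1) * 3  # scales with the machine

# Imperative logic is free, unlike in XML/KVP:
if os.environ.get("ENVIRONMENT") == "production":
    log_level = "WARNING"
else:
    log_level = "DEBUG"
```

The host application simply imports this module and reads the resulting names; splitting configuration into modular fragments is just more imports.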
3. Meta-Programming. Strongly-typed languages can make it hard (or impossible) to do generative programming or construct type-agnostic higher-order functions. For example, writing code in C# that takes an arbitrary function (with any number of parameters/return value), a list of parameters, and then "mapping" it onto some set of data or an iterable collection is not possible. Languages like F# take the approach of performing sophisticated type inference based on the definitions of functions to ensure type-safety. Languages like Ruby/PHP go the other direction and essentially ignore the types involved in the expression, assuming that the programmer will ensure that all types are compatible/coercible. I don't know which approach is ultimately better, but I can say that when I've needed to write generative meta-code I find it easier in Ruby than in F# (Full disclosure: I've only tried it twice in F# and given up each time because my limited brain wasn't able to grok the intermediate or end result).
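In Python, the "arbitrary function, arbitrary arity" mapping described here is short precisely because nothing in the code mentions or constrains the types involved; a sketch with invented helper names:

```python
def map_over(fn, rows):
    """Apply fn to each row, unpacking the row as fn's argument list.

    Works for any arity and any argument types.
    """
    return [fn(*row) for row in rows]

def add(a, b):
    return a + b

def shout(s):
    return s.upper()

print(map_over(add, [(1, 2), (3, 4)]))       # two-argument numeric fn
print(map_over(shout, [("hi",), ("bye",)]))  # one-argument string fn
print(map_over(add, [("foo", "bar")]))       # same fn, different types
```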
@Leo: Thanks, that’s exactly the sort of thing I was looking for. (You might want to look at Guice for the second option, btw. I don’t know if any of the .NET IoC containers work in a similar way.)
Er, somewhat late.
I work in both Delphi and Python. Delphi is strongly statically-typed; Python is strongly dynamically typed. My understanding of how to work in each differs according to the following point-of-view substitution:
1) When in Delphi, my typical thought is, “When I am inside a method with arguments, what feature set (fields, properties, methods) does each argument support?”
2) When in Python, my typical thought is, “When I am calling this method with arguments, what features (fields, properties, methods) must be available within the arguments I provide?”
Now admittedly, this is as much about duck-typing as it is about dynamic typing (how could it not be?), but the essence of the difference is the massively increased scope for polymorphism in Python compared to Delphi.
Given something like
def foo(arg):
    arg.method()
it should become clear that anything passed into foo that has a “.method()” in it will work. There is a huge difference between this, and setting the type of the argument to be some ancestor class from which it is required that all objects needing to be passed to “foo” must be descended.
In these situations, duck-typing, and as a consequence dynamic typing, sidestep the problems relating to multiple inheritance that tend to become problematic in statically-typed languages (subjective, agreed, but my opinion).
I struggled with this a great deal initially, because I found it very difficult to grok Python code because I never knew what was being passed into functions; how then to know what the arguments support?
The resolution to this problem lies in learning new habits in reading code: what defines the argument to a function (in a dynamically-typed language) is what gets *done* to it in that function, not what it *is* outside the context of that function. Again, this is about duck-typing, but dynamic typing is required for this to work. In python, a “valid” argument to a function means something (anything!) that will provide the functionality that that function requires. In a sense, one must temporarily suspend one’s normal thought process that would apply when reading statically-typed code. The “type” is set by what the argument is required to be able to do, not by external declaration.
I can shift between Delphi and Python quite easily, and the difference in typing mechanisms is not a big deal (given the shift in understanding described above). Code reuse is obviously greater in python, because functions are agnostic w.r.t. types, and therefore somewhat more atomic. I get to be lazier in Delphi because the IDE and the static typing require less planning upfront, but I am generally more productive in python because the standard library is so good, and I can write functions to operate on my classes before the classes are even defined. In this respect, I constantly rely on dynamic typing in python as a language feature.
My experience with respect to safety (or danger, depending on how full your glass is) is that for both Delphi and Python, the prevalence of bugs is about equally likely and has nothing to do with static or dynamic typing, although every so often my Python code surprises me by running correctly on the first try.
@cjrh: So when you’re *calling* a method, how do you know whether a method will be valid or not? Does the documentation always specify everything that will be called on that argument, and what it expects those calls to do?
To me, static typing is a way of providing that information in a single word… and with interfaces, the lack of multiple inheritance doesn’t get in my way much anyway.
The downside is situations where various types *do* have the method I need, but they have no common interface. That’s where duck typing would really shine – but I don’t find I come across it that often. Is it regularly a problem for you when you’re coding in Delphi?
@skeet:
For knowing what can be passed to a function, it seems that conventionally you either:
a) follow documentation (if it exists)
b) get documentation from the docstring of the function at the REPL prompt, e.g. “>>> print help(foo)”. (this usually exists)
c) read the source. (this always exists)
It is often the case that a python library will say something like “this function should be passed a file-like object” (cf. Django), which means that you can pass actual file-like objects, but also anything else that contains .read() and .write(), and perhaps a few other members that might be explicitly mentioned in the docs. In practice, it is not that bad once you fully internalize the fact that the argument type doesn’t matter, only the operations performed on it. On the other hand, every now and then I see recipes for performing explicit type-checking on the arguments to functions, which seems IMO completely misguided (they should rather check for the existence of required methods/members on the arguments).
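A minimal sketch of that distinction (the class and function names here are invented for illustration) — checking for the required member rather than for a particular type:

```python
class Buffer:                      # unrelated to any file class by inheritance
    def read(self):
        return "data"

def consume(source):
    # Explicit type-checking (e.g. isinstance(source, SomeFileClass)) would
    # reject Buffer even though it supports everything consume() actually uses.
    # Checking for the required member instead:
    if not hasattr(source, "read"):
        raise TypeError("source must provide read()")
    return source.read()

print(consume(Buffer()))           # data
```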
Lack of multiple inheritance…perhaps I wasn’t clear earlier: I tried to explain that duck-typing gives you, among several other things, basically unlimited polymorphism, which feeds greater code-reuse. That has pros and cons. One can always live without it (one can get by with very little, I have used FORTRAN somewhat) as you said, but that is IMO a significant benefit of dynamic typing. ymmv and all that. The relative weights of the pros and cons vary for different use cases, different projects, different individuals. You’re a smart guy, and you’ve heard all the arguments before. If you’ve already tinkered with something like python yourself, and found it came up wanting, well then that’s that; but if not, there is something else remaining to do: see for yourself. The tutorial included in the python install takes about 2 hours to get through, and gets you about 80% effective to write programs.
I find this discussion similar to how the introduction of subversion as a version control system caused much angst (myself included) because it removed file-locking (Oh No, everything will be clobbered!). But once you get used to the extra freedom, and deal with exceptional cases as required, you wouldn’t want to go back. Of course now we see a similar trend regarding the DVCS backlash with many people clutching at the safety of their SVN. But in a few years, DVCS will be completely dominant and we’ll be asking ourselves however did we get anything done without it.
The fact that static typing still produces the benefit of fast-running code remains a very compelling point for a large number of use-cases. Were dynamic languages to close the performance gap more substantially than has been the case so far, I would expect to see a much greater shift in that direction. The claim that static typing is “safer” than dynamic typing is often made, but it has consistently been my experience that there is no difference in the nature and number of bugs between my delphi code and python code. It just takes me longer to produce them in Delphi. Oh, and I pretty much never get off-by-one errors in python, because almost everything is directly iterable; but this has nothing to do with dynamic typing.
Regarding feature frustration, I don’t code in Delphi the same way I code in python, so in Delphi I don’t even think in these terms, therefore the lack of duck-typing is not problematic. I guess in a sense you just resign yourself to duplicating code for different types as needed. You do your best to abstract, but only so far and no further, and then deal with it. It is not something that I think about often. Idiomatic Delphi is very far from idiomatic python. It is sometimes frustrating to have to declare so much upfront in Delphi, and lack of multi-line strings really suck, but regarding object patterns I stick to the tried and tested, straight and narrow simple object inheritance with minimal polymorphism sprinkled throughout; and this works reliably and well. It’s kinda like asking if I miss LINQ in Delphi, or asking a FORTRAN programmer if they miss Delphi classes: the question itself seems odd, because idiomatic use of each language implementation accomplishes similar objectives in different ways. This isn’t only a language issue, because the specific implementation, including the provided libraries make a big difference. The most powerful syntax in the world is no match for a single library call that does everything you need, for example.
I will say that the more complex the problem to solve, all else being equal, the more I will be looking towards Python rather than Delphi, and I think the dynamic nature of python plays a huge part in that, especially when the problem domain isn’t even fully revealed before you start writing code. In contrast, the simpler projects, especially GUI-based stuff would be done in Delphi.
@cjrh: Thanks for all the detail. I find your last comment particularly interesting – I would have expected an approach of “quick and dirty, one off code is fine in Python – for large enterprise systems I’d use Delphi.” It’s interesting to hear it working the other way round…
It does sound like I’m not going to appreciate the benefits of dynamic typing without diving in for a significant project – which is a pain, as I haven’t got time to do that at the moment :( | https://codeblog.jonskeet.uk/2009/11/17/where-do-you-benefit-from-dynamic-typing/ | CC-MAIN-2020-05 | refinedweb | 2,556 | 57.3 |
Find the first match from a list of words in a string
I am writing a function that finds a keyword in a string and returns the first match, if any.
The keywords are "what", "when", "who"
Example:
- The user inputs a string in the form of a question: "Who is John Connor"
- The function returns "who"
Is there a way to compare a list of keywords against a string input and return the first match?
I thought about using re.search but it takes single string at a time. This is what I have so far:
question = input("Ask me a question: ")
keywords = ("what", "when", "who")
question = question.lower()
match = re.search(r'word:', question)  # re.search seems to take only one string at a time
Convert your list to a regular expression of the form \b(?:what|when|who)\b, then use re.search().
question = input("Ask me a question: ").lower()
keywords = ("what", "when", "who")
kw_re = r'\b(?:' + '|'.join(map(re.escape, keywords)) + r')\b'
match = re.search(kw_re, question)
\b matches word boundaries, so this will only match whole words.
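A quick check of the word-boundary behaviour (the example strings are invented):

```python
import re

keywords = ("what", "when", "who")
kw_re = r'\b(?:' + '|'.join(map(re.escape, keywords)) + r')\b'

print(re.search(kw_re, "who is john connor").group())  # who
print(re.search(kw_re, "whoever said that"))           # None: 'who' is not a whole word here
```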
Testing a word for inclusion in a set of keywords is O(1) vs. O(n) for a list of n keywords.
def find_match(sentence, keyword_set):
    for word in sentence.split():
        if word in keyword_set:
            return word

keywords = {"what", "when", "who"}
question = "Who is John Connor".lower()

>>> find_match(question, keywords)
'who'
This will find the first word in the list and give you the location of it:

question = input("Ask me a question: ")
keywords = ("what", "when", "who")
question = question.lower()
for keyword in keywords:
    loc = question.find(keyword)
    if loc == -1:
        loc = "Not Found"
    else:
        print('{} was found at location: {} and is the first match found in the list of keywords'.format(keyword, loc))
        break
Split the input into words and compare each against your set of keywords; break from the loop if a match is found, or return if you want to wrap it into a function.
for word in question.split():
    if word in keywords:
        match = word
        break
Also, take care to handle cases when no match is found.
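A sketch of the wrapped-up version that handles the no-match case by returning None (the function name is illustrative):

```python
def first_keyword(question, keywords=("what", "when", "who")):
    """Return the first keyword appearing in the question, or None."""
    for word in question.lower().split():
        if word in keywords:
            return word
    return None

print(first_keyword("Who is John Connor"))  # who
print(first_keyword("Where is my mind"))    # None
```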
symlink, symlinkat - make a symbolic link relative to directory file descriptor
#include <unistd.h>
int symlink(const char *path1, const char *path2);
int symlinkat(const char *path1, int fd, const char *path2);

The symlink() function shall create a symbolic link called path2 that contains the string pointed to by path1 (path2 is the name of the symbolic link created, path1 is the string contained in the symbolic link).
The symbolic link's user ID shall be set to the process' effective user ID. The symbolic link's group ID shall be set to the group ID of the parent directory or to the effective group ID of the process. Implementations shall provide a way to initialize the symbolic link's group ID to the group ID of the parent directory. Implementations may, but need not, provide an implementation-defined way to initialize the symbolic link's group ID to the effective group ID of the calling process.
The values of the file mode bits for the created symbolic link are unspecified. All interfaces specified by POSIX.1-2008 shall behave as if the contents of symbolic links can always be read, except that the value of the file mode bits returned in the st_mode field of the stat structure is unspecified.
Upon successful completion, symlink() shall mark for update the last data access, last data modification, and last file status change timestamps of the symbolic link. Also, the last data modification and last file status change timestamps of the directory that contains the new entry shall be marked for update.
The symlinkat() function shall be equivalent to the symlink() function except in the case where path2 specifies a relative path. In this case the symbolic link is created relative to the directory associated with the file descriptor fd instead of the current working directory. If symlinkat() is passed the special value AT_FDCWD in the fd parameter, the current working directory shall be used and the behavior shall be identical to a call to symlink().
Upon successful completion, these functions shall return 0. Otherwise, these functions shall return -1 and set errno to indicate the error.
These functions shall fail if:
- [ENAMETOOLONG]
- The length of a component of the pathname specified by the path2 argument is longer than {NAME_MAX}.

In addition, the symlinkat() function shall fail if:
- [EACCES]
- fd was not opened with O_SEARCH and the permissions of the directory underlying fd do not permit directory searches.
- [EBADF]
- The path2 argument does not specify an absolute path and the fd argument is neither AT_FDCWD nor a valid file descriptor open for reading or searching.
These functions may fail if:
- [ENAMETOOLONG]
- The length of the path2 argument exceeds {PATH_MAX} or pathname resolution of a symbolic link in the path2 argument produced an intermediate result with a length that exceeds {PATH_MAX}.
In addition, the symlinkat() function may fail if:
- [ENOTDIR]
- The path2 argument is not an absolute path and fd is neither AT_FDCWD nor a file descriptor associated with a directory.

Since POSIX.1-2008 does not require any association of file times with symbolic links, there is no requirement that file times be updated by symlink().
The purpose of the symlinkat() function is to create symbolic links in directories other than the current working directory without exposure to race conditions. Any part of the path of a file could be changed in parallel to a call to symlink(), resulting in unspecified behavior. By opening a file descriptor for the target directory and using the symlinkat() function it can be guaranteed that the created symbolic link is located relative to the desired directory.
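Python exposes this pattern through the dir_fd parameter of os.symlink() (which uses symlinkat() underneath on platforms that support it); a small sketch using a scratch directory:

```python
import os
import tempfile

d = tempfile.mkdtemp()
# Open the target directory once; later calls resolve relative to this
# descriptor, so a concurrent rename of the directory path cannot
# redirect them elsewhere.
dirfd = os.open(d, os.O_RDONLY)
try:
    os.symlink("target", "mylink", dir_fd=dirfd)   # symlinkat(2)
    print(os.readlink("mylink", dir_fd=dirfd))     # target
finally:
    os.close(dirfd)
```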
None.
fdopendir, fstatat, lchown, link, open, readlink, rename, unlink
XBD .
Austin Group Interpretation 1003.1-2001 #143 is applied.
The symlinkat() function is added from The Open Group Technical Standard, 2006, Extended API Set Part 2.
Additions have been made describing how symlink() sets the user and group IDs and file mode of the symbolic link, and its effect on timestamps.
Changes are made to allow a directory to be opened for searching.
please some body give me the full C code of water jug problem
Recommended Answers
#include <stdio.h>

int main()
{
    printf("Fill the three up, and pour it into the five. Fill the three up again, and pour it into the five until the five is full. You now have one unit of liquid in the three container. Empty out the five, put …
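The hard-coded message above just narrates the solution; the state sequence can also be computed with a small breadth-first search over (three-jug, five-jug) states — a generic sketch, not tied to the original thread:

```python
from collections import deque

def water_jug(cap_a=3, cap_b=5, goal=4):
    """Return the shortest sequence of (a, b) states reaching `goal` litres in either jug."""
    start = (0, 0)
    prev = {start: None}                 # state -> predecessor, doubles as visited set
    q = deque([start])
    while q:
        a, b = q.popleft()
        if goal in (a, b):               # goal reached: walk predecessors back to start
            path, s = [], (a, b)
            while s is not None:
                path.append(s)
                s = prev[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)      # amount movable from jug a to jug b
        pour_ba = min(b, cap_a - a)      # amount movable from jug b to jug a
        moves = [
            (cap_a, b), (a, cap_b),      # fill a jug
            (0, b), (a, 0),              # empty a jug
            (a - pour_ab, b + pour_ab),  # pour a -> b
            (a + pour_ba, b - pour_ba),  # pour b -> a
        ]
        for s in moves:
            if s not in prev:
                prev[s] = (a, b)
                q.append(s)
    return None

print(water_jug())  # [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]
```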
Regular Expressions for Objects
Action phrases and noun phrases are defined as follows, using regex-like notation:
human_action = ("he"|"she"|"i"|"they"|"we") ([VERB] [ADP])+
noun_phrase = [DET]? ([ADJ] [NOUN])+
Translated to English this means that human actions are defined as 1st and 3rd person, singular and plural pronouns followed by repeated groups of verbs and adpositions (in, to, during). Noun phrases are composed of an optional determiner (a, an, the) followed by repeated groups of adjectives and nouns.
Most standard regex libraries won't help you with this, because they work only on strings. But this problem is still perfectly well described by regular grammars, so after a bit of Googling I found REfO and it's super simple to use, although you have to read the source code, because it doesn't really have documentation.
REfO is a bit more verbose than normal regular expressions, but at least it tries to stay close to usual regex notions. Lazy repetition (*) is done using the refo.Star operator, while greedy one (+) is refo.Plus. The only new operator is refo.Predicate, which takes a function which takes a parameter and matches if that function returns true when called with the element at that position. Using this we will build the functions we need:
def pos(pos):
    return refo.Predicate(lambda x: x[1] == pos)

def humanpron():
    return refo.Predicate(lambda x: x[1] == 'PRON' and x[0] in {'i', 'he', 'she', 'we', 'they'})
For matching POS, we use a helper to create a function that will match the given tag. For matching human pronouns, we also check the words, not just the POS tag.
np = refo.Question(pos('DET')) + refo.Plus(refo.Question(pos('ADJ')) + pos('NOUN'))
humanAction = humanpron() + refo.Plus(pos('VERB') + pos('ADP'))
Then we just compose our functions and concatenate them and we got what we wanted. Using them is simple. You either call refo.search, which finds the first match, or refo.finditer, which returns an iterable over all matches.
for match in refo.finditer(humanAction, s):
    start = match.start()
    end = match.end()
    print(s[start:end])
[[u'i', u'PRON'], [u'look', u'VERB'], [u'around', u'ADP']]
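For readers without REfO installed, the same "regex over objects" idea can be sketched by hand — the matcher below handles just the humanpron (VERB ADP)+ pattern and is a simplified illustration, not REfO's implementation:

```python
def match_human_action(tokens, i=0):
    """Match humanpron (VERB ADP)+ starting at index i; return end index or None."""
    human = {"i", "he", "she", "we", "they"}
    if i >= len(tokens) or tokens[i][1] != "PRON" or tokens[i][0] not in human:
        return None
    j = i + 1
    reps = 0
    # Greedily consume repeated VERB ADP pairs, like refo.Plus(pos('VERB') + pos('ADP')).
    while j + 1 < len(tokens) and tokens[j][1] == "VERB" and tokens[j + 1][1] == "ADP":
        j += 2
        reps += 1
    return j if reps else None

s = [("i", "PRON"), ("look", "VERB"), ("around", "ADP"), ("the", "DET")]
end = match_human_action(s)
print(s[:end])  # [('i', 'PRON'), ('look', 'VERB'), ('around', 'ADP')]
```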
So, it's always good to Google around for a solution, because my first instinct to whip up a parser in Parsec would have led to a much more complicated solution. This is nice, elegant, short and efficient.
/*
 * Copyright (c) written prior permission. This software
 * is provided ``as is'' without express or implied warranty.
 *
 * RCS: @(#) $Id: tmpnam.c,v 1.2 2001/09/14 01:42:00 zlaski Exp $
 */

#include <sys/param.h>
#include <sys/stat.h>
#include <sys/file.h>
#include <stdio.h>

/*
 * Use /tmp instead of /usr/tmp, because L_tmpnam is only 14 chars
 * on some machines (like NeXT machines) and /usr/tmp will cause
 * buffer overflows.
 */

#ifdef P_tmpdir
#   undef P_tmpdir
#endif
#define P_tmpdir "/tmp"

char *
tmpnam(s)
    char *s;
{
    static char name[50];
    char *mktemp();

    if (!s)
        s = name;
    (void) sprintf(s, "%s/XXXXXX", P_tmpdir);
    return (mktemp(s));
}
IRC log of tagmem on 2011-02-08
Timestamps are in UTC.
14:04:25 [RRSAgent]
RRSAgent has joined #tagmem
14:04:25 [RRSAgent]
logging to
14:04:41 [jar]
scribe: Jonathan Rees
14:04:46 [jar]
scribenick: jar
14:04:52 [jar]
chair: Noah Mendelsohn
14:06:11 [jar]
agenda:
14:06:43 [jar]
zakim, this is tag
14:06:43 [Zakim]
ok, jar; that matches TAG_f2f()8:30AM
14:10:11 [jar]
Not convened yet, awaiting arrivals
14:11:11 [jar]
Convened
14:12:02 [jar]
intros
14:12:08 [jar]
jar: (intro)
14:12:45 [jar]
ht: (intro) sgml, xml , more recently status of uris in webarch
14:13:49 [jar]
dka: (intro) mobile web, privacy, social web
14:14:36 [Ashok]
Ashok has joined #tagmem
14:14:40 [jar]
timbl: (intro) DIG, privacy, policy, semweb UI
14:15:23 [jar]
ashok: (intro) standards, oasis, etc, rdb to rdf
14:16:49 [jar]
plinss: (intro) css, gecko, print as 1st-class citizen on web
14:19:21 [jar]
... pre-css: object based editor on nextstep, design model. Digital Style Websuite
14:21:39 [jar]
noah: agenda review
14:23:06 [jar]
... norm w is planning to spend all of wed. with us
14:23:16 [ht]
Noah, is this the number you used: email: LMM@acm.org (personal) masinter@adobe.com (company)
14:23:16 [ht]
tel: +1 408 536-3024
14:30:13 [jar]
noah: (re priorities session on thu) we had identified 3 areas, larry has created a 4th area of core technologies (mime, sniffing, etc)
14:31:11 [jar]
... please think about tradeoffs
14:31:15 [jar]
zakim, who is here?
14:31:15 [Zakim]
On the phone I see W3C
14:31:16 [Zakim]
On IRC I see Ashok, RRSAgent, Zakim, jar, noah, plinss, DKA, timbl, ht, trackbot, Yves
14:33:23 [jar]
topic: Design of APIs for Web Applications
14:33:56 [DKA]
DAP privacy requirements:
14:34:52 [jar]
dka: Looking at DAP group's document on requirements
14:35:21 [jar]
... javascript apis that access things containing sensitive information - just about anything
14:36:07 [jar]
... camera, address book, calendar, orientation, velocity
14:36:48 [jar]
(pointing at table 'how each element is covered' with notice, consent, minimization, etc. rows)
14:38:54 [jar]
dka: what might the tag do to help promote privacy [control] on web?
14:39:49 [jar]
dka: set of small, targeted docs that build on work of others (DAP, UCB, others)?
14:41:19 [jar]
dks: look at existing docs, amplifying, put in specific web contexts. e.g. (for instance) Hannis (sp?) doc is general, DAP specific to DAP, connect them.
14:41:26 [jar]
s/dks:/dka:/
14:43:22 [jar]
projecting the API minimization note [URI in agenda]
14:43:53 [jar]
dka: come up with several examples of this idea in action
14:45:30 [jar]
... want to sidestep Ashok's issue - about the Abelson et al. paper pointing out that user dialogs are silly, since they can't assess consequences
14:46:23 [jar]
Ashok: Abelson et al suggests to consider legal accountability as alternative
14:47:20 [jar]
dka: Vodafone privacy counsel said (at workshop) things are coming together on that front
14:47:22 [noah]
q?
14:47:51 [jar]
... Minimization is not about this.
14:48:44 [jar]
timbl: Need global change in ethos regarding data use, independent of how they got it
14:48:58 [jar]
... All these [tactics] need to be in the list
14:49:38 [jar]
dka: Looking for technical [tactics] that TAG might be able to say something about.
14:50:15 [jar]
... image metadata capturing privacy intent?
14:50:37 [jar]
... If you keep asking people about this, good results are unlikely
14:51:42 [jar]
timbl: What if you say: I want my friends to see my pictures. would be nice if software kept track of how/why friend got them, as reminder
14:52:32 [jar]
dka: Problem - technical jargon in dialog boxes ('GPS coordinates' ...)
14:53:18 [jar]
noah: You're saying the apps should be able to say: I don't need more info than xxx.
14:53:25 [jar]
... What about malicious apps.
14:55:05 [jar]
dka: Remember this philosophical approach. We tend to get distracted. Need to find particular points to focus on.
14:55:26 [jar]
dka: [Solve one problem at a time.]
14:55:54 [jar]
noah: ... But my experience is that most of the problems have to do with attackers
14:56:27 [jar]
... and exploiters
14:57:41 [jar]
dka: Problem comes with attacker exploiting well-intended app. What to do to well-intended to make it less vulnerable to exploitation
14:58:29 [jar]
dka: We need to be clear that even if you do [any particular thing], you won't have a privacy solution
14:59:10 [jar]
noah: Problem is interacting with untrusted services that I need to use.
15:00:47 [jar]
dka: The aggregate amount of info open to abuse is lower if you minimize. So several docs to chip away at specific things, not to provide comprehensive solution
15:01:12 [ht]
q+ to support DKA wrt NM's use case
15:01:38 [ht]
15:01:50 [ht]
is what JR is talking about
15:03:37 [ht]
LM and JK join the meeting at this point
15:04:13 [jar]
jar: security is just one way to support privacy... and need to do lots to get security. least privilege just one.
15:05:30 [jar]
ht: Dan's answer did address Noah's point. By specifying an approach that the platforms subscribe, you bound the damage that the bad guys can do. If they have less info, they can do less.
15:06:21 [jar]
ht: You can reduce the bandwidth of any particular API call. This raises the barrier.
15:07:36 [jar]
dka: If the app only needs city location, but has to request fine grained location, ... is the right question being asked [or user, developer, app...?]
15:07:51 [jar]
noah: Document needs an intro that sets expectations
15:08:35 [jar]
masinter: Framing = it's warfare, we're minimizing the attack surface
15:09:09 [ht]
There is a HF/UI design/human engineering issue here which won't go away, but micro-capabilities do create a real opportunity to reduce your exposure, much as they make me tear my hair out as an implementor
15:09:26 [jar]
masinter: To say there's a way around a defense, is not an argument against the defense
15:09:30 )
15:09:42 [ht]
q+ to give the Mark Logic API parallel
15:10:01 [noah]
ack next
15:10:02 [Zakim]
ht, you wanted to support DKA wrt NM's use case and to give the Mark Logic API parallel
15:10:13 [noah]
q?
15:10:53 [jar]
ht: I use two different xml database systems... the 'open' one has unix style object protection - file x RW
15:11:39 [jar]
... the commercial one has about 60-70 capabilities. almost 1-1 on API calls, file x cap
15:12:01 [jar]
... bigger effort to manage for both users and developers.
15:12:03 [johnk]
johnk has joined #tagmem
15:12:40 [johnk]
q+ to mention the FB API model
15:13:16 [jar]
... you get high degree of control. Compare minimization. You have to get informed consent, but if it's granular enough you get questions that are specific enough to make sense
15:14:22 [jar]
dka: Resistance to normative requirements to UI design, esp. re privacy
15:14:30 [jar]
s/to UI/for UI/
15:15:48 [jar]
dka: The minimization approach doesn't impose specific UI requirements. This might enable creative UI design
15:15:58 [noah]
zakim, close the queue
15:15:58 [Zakim]
ok, noah, the speaker queue is closed
15:16:39 [jar]
johnk: There's always a useability tradeoff in security. E.g. facebook has tons of knobs
15:17:05 [jar]
... but underneath there's a simple set of access control privs
15:17:18 [jar]
... e.g. app needs to do something special to get email address
15:17:35 [jar]
... This is a usability issue, a tradeoff
15:18:44 [jar]
dka: Re minimization, the approach stands, since it says nothing about the user interaction. [API and UI needn't slavishly correspond]
15:18:56 [timbl]
q+ to talk about globbing together as a good UI for apps - data - groups of people
15:19:01 [noah]
zakim, open the queue
15:19:01 [Zakim]
ok, noah, the speaker queue is open
15:19:12 [timbl]
q+ to talk about globbing together as a good UI for apps - data - groups of people
15:19:25 [timbl]
q-
15:19:28 [johnk]
ack next
15:19:29 [Zakim]
johnk, you wanted to mention the FB API model
15:19:43 [masinter]
masinter has joined #tagmem
15:19:47 [jar]
noah: Proposal?
15:19:59 [masinter]
q+ to talk about participating in larger discussion
15:20:14 [noah]
I'm asking: what do you propose we do that will have real, useful impact for the community?
15:20:44 [jar]
dka: Useful output might be: Umbrella document. Privacy and webarch. Subdocuments, e.g. minimization.
15:21:59 [jar]
scribes - Tue JR / DA, Wed AM / ?, Th HT / ?
15:22:41 [jar]
masinter: Big discussion on privacy in larger community. Our schedule should coordinate with external events
15:23:11 [noah]
q+ to make a specific proposal
15:23:23 [noah]
ack next
15:23:24 [Zakim]
masinter, you wanted to talk about participating in larger discussion
15:24:48 [jar]
masinter: What does API minimization have to do with HTTP?
15:25:15 [jar]
jar (under breath): there are HTTP APIs
15:25:55
15:26:12 [jar]
noah: DKA, can we get together and make a straw-man product proposal?
15:28:35 [jar]
masinter: E.g. can be a problem sending info in Accept: headers when it's not needed in order for server to do its job
15:29:28 [jar]
masinter: Trying to suggest how to expand this from a DAP point to a TAG point
15:29:54 [jar]
timbl: (masinter, you missed the beginning of the session)
15:30:25 [jar]
(break)
15:45:58 [johnk]
johnk has joined #tagmem
15:48:30 [jar]
topic: Web Applications: Security
15:48:43 [jar]
15:50:14 [jar]
jk: I was asked to frame section 7 of the webarch report on apps
15:50:32 [jar]
jk: Wanted to echo [style of] Larry's MIME writeup
15:51:05 [jar]
... If you start with browser/server/protocol, and trace history of the three with a security focus...
15:51:24 [jar]
... start with just getting a doc.
15:52:03 [jar]
... then more support in http. history in doc is well known but worth reviewing
15:52:33 [jar]
... NN2 introduced cookies, and cookies needed origin
15:53:09 [masinter]
15:53:21 [jar]
... Related to lots of security issues. State in protocol. Origin and document not linked securely.
15:54:11 [jar]
... Why should you trust the DNS?
15:55:27 [jar]
timbl: It assumes there's a social connection between - and -. There was a trust model, it just wasn't cryptographically secure
15:55:58 [jar]
jk: These are layered protocols, that makes security harder. eg. DNSsec isn't bound to higher protocols
15:56:34 [jar]
ht: scripts??
15:56:49 [jar]
jk: Dynamically loaded scripts not subject to SOP
15:57:59 [jar]
noah: XML and JSON is good example - the weaker language was subject to tighter security controls - dumb
15:58:50 [jar]
ht: script with a source tag predates JSON. it was never subject to SOP ??
15:59:49 [masinter]
15:59:53 [jar]
timbl: Suddenly all these APIs have this extra parameter, the calling function ...
16:00:09 [timbl]
the function to be called by the injkected script tag
16:00:22 [jar]
jk: Cookies were easiest way to do session indicator. shopping carts and so on.
16:00:34 [jar]
... AJAX was other driver
16:01:16 [jar]
... XHR does use SOP, but using JSONP you can circumvent it
16:01:40 [jar]
... apps send cookies from one place to another
16:02:08 [jar]
jk: Trying to abstract away, to find security issues as opposed to implementation bugs. What issues are architectural in these examples
16:02:37 [jar]
... One is, when doc contains multiple parts, contributed from different security domains
16:03:52 [jar]
noah: (When did we stop using the term 'representation'?)
16:04:19 [jar]
jk: If you don't mediate the interaction, e.g. using sandbox, bad things happen.
16:04:29 [jar]
... e.g. runaway cpu time
16:05:15 [jar]
... Silent redirects. Malicious site forwards, cookies sent to 2nd site -> clickjacking
16:06:37 [jar]
... Authentication based on Referer: (i.e. referrer) header
16:07:12 [jar]
... Servers depend on client to do the right thing, in particular proper origin processing
16:07:50 [jar]
... Specs are difficult to read, so there can be broken user agents.
16:09:01 [jar]
... My advice: Server should not trust user agents. What are the circumstances in which the server can align with the user?
16:09:32 [jar]
timbl: We need to preserve the role of the user-agent as the agent of the (human) user.
16:10:18 [jar]
johnk: Yes, but we need to be a bit more nuanced. There shouldn't be inordinate trust in a class of agents. One should only need to trust an agent to a certain degree.
16:11:05 [jar]
noah: Users don't understand UAs well enough to be able to discriminate..
16:11:11 [masinter]
masinter has joined #tagmem
16:11:53 [masinter]
somehow I want to bring in
16:12:14 [jar]
timbl: That doesn't diminish the responsibility of UAs
16:12:33 [masinter]
q+
16:13:06 [jar]
timbl: One of the things the TAG does is to ascribe blame
16:13:22 [jar]
johnk: Who's responsible for a clickjacking attack? Software was behaving per spec
16:13:41 [jar]
masinter: Users are presented choices that they don't understand
16:14:21 [jar]
johnk: Not much you can do about that -
16:15:01 [jar]
masinter: don't require users to make decisions that they don't understand. design principle.
16:17:02 [jar]
... optimize a match between what user wants and what happens. doesn't matter whether choices are simple or complex
16:17:48 [jar]
pl: You said simplicity might be better - maybe so at user level, not nec. across the system
16:19:34 [masinter]
complex choices are less likely to be understood, but simple choices might be a problem
16:20:06 [jar]
(scribe notes that henry suggested just the opposite. see above)
16:21:24 [jar]
jk: Cache poisoning might mean no link between IP and domain name... in fact no way to guarantee domain name ownership
16:21:33 [masinter]
want to talk about TAG work in context with
16:21:53 [masinter]
16:21:53 [masinter]
Oct 2010 Submit 'HTTP Application Security Problem Statement and Requirements' as initial WG item. -- don't see that document
16:22:00 [jar]
ssl... data not encrypted on hotspot
16:22:08 [jar]
s/ssl/jk: ssl/
16:23:10 [jar]
timbl: Firefox 'get me out of here'
16:24:07 [jar]
jk: When you run web content, the content starts being rendered immediately - there is no install step. It just starts running
16:24:54 [jar]
ht: I've been manually virus checking every downloaded app. Can't do this with pages
16:25:07 [jar]
masinter: antivirus sw modifies the stack
16:26:19 [jar]
noah: Also you lose the ability to make sticky decisions. Nextbus is an example of non-installed app but that you come back to repeatedly
16:26:51 [jar]
... you keep getting asked for permission to use location. annoying
16:27:55 [jar]
timbl: But most browsers do this well ?
16:28:12 [masinter]
s/antivirus/some antivirus/
16:28:18 [masinter]
s/the stack/the HTTP stack/
16:28:18 [jar]
jk: Lack of tie-in between host naming and where you access the doc (where published)
16:28:56 [jar]
... who is responsible for the content of the document? Nonrepudiation.
16:30:03 [jar]
timbl: You can sign the document until you're blue in the face ...
16:30:23 [jar]
noah: Doc is written by an expert, would be helpful if some of the examples were spelled out in more detail
16:31:03 [jar]
masinter: Security WG calls for a [...] document. Is what we're doing related to their work item?
16:31:25 [jar]
... They have a bunch of specific documents, but nothing at this level
16:31:50 [jar]
jk: Their docs are very narrow
16:31:56 [jar]
masinter: No, look at their charter
16:33:05 [jar]
Oct 2010 Submit 'HTTP Application Security Problem Statement and Requirements' as initial WG item.
16:33:41 [jar]
masinter: Isn't this what we're doing?
16:35:30 [jar]
jk: The issue of mime sniffing. It became a good idea for the browser to ignore media type... problem is guessing user intent
16:36:52 [jar]
(slight aside)
16:37:12 [jar]
jk: So what would be desirable properties of security webarch? (reviewing doc)
16:37:57 [jar]
noah: please clarify use of 'web agent'
16:39:23 [jar]
noah: 'tie' isn't evocative - what constitutes success? what system properties are we after?
16:40:15 [jar]
timbl: E.g. maybe avoid separation of authentication and authorization
16:41:18 [jar]
jk: App layer with signed piece of content, same key should be used in both levels of protocol stack (or at least related)
16:41:31 [jar]
timbl: WebID people have experienced this need - converting keys between apps / layers - PGP to log in using ssh etc.
16:42:45 [jar]
ht: I'm having to use Kerberos - very inconvenient - when I ssh from laptop home I need a kerberos principal... way too much work... [so unification cuts both ways?]
16:42:56 [jar]
timbl: but kerberos isn't public-key
16:43:39 [jar]
timbl: The thing about connecting the two parts together is valuable
16:45:00 [jar]
jk: WebID is a case where it can't be done. User generates a cert, puts it in foaf file. Impossible to tie foaf description of me with me the person.
16:46:03 [jar]
masinter: can show 1 person wrote 2 things
16:46:31 [jar]
noah: Same issue as in PGP - you have to be careful when first picking up the key
16:46:42 [jar]
jk: what's the purpose of encrypting the assertion?...
16:46:58 [jar]
s/?/ (in webid)?/
16:48:15 [jar]
jk: 3rd bullet in properties section: We should be able to do what the original web design wanted us to do
16:50:49 [jar]
timbl: But doesn't CORS do this for us?
16:51:04 [jar]
jar: Controversial.
16:54:24 [masinter]
W3C TAG should be a participant in overall work on web security, including other work in IETF and W3C
16:55:36 [noah]
ACTION-417?
16:55:59 [masinter]
action-417?
16:55:59 [trackbot]
ACTION-417 -- John Kemp to frame section 7, security -- due 2011-01-25 -- OPEN
16:55:59 [trackbot]
16:56:11 [masinter]
ACTION-417?
16:56:11 [trackbot]
ACTION-417 -- John Kemp to frame section 7, security -- due 2011-01-25 -- OPEN
16:56:11 [trackbot]
16:56:13 [jar]
masinter: There's ongoing work. We should review it regularly and be seen as a participant. The way to do that is to publish a note, and announce, repeat. But be clear that we're not trying to take the lead.
16:57:01 [jar]
noah: But the action was to frame a section of our document...
16:57:47 [masinter]
The W3C chapter on security on the web could identify that there are some issues and point at other groups that are working on the problems
16:58:37 [masinter]
W3C TAG should have input on W3C activities decisions, and this should be a W3C activity, on "security and privacy"
16:59:02 [jar]
ashok: Let's close 417, start another one to write a note. If that becomes bigger/better, fine.
16:59:32 [jar]
masinter: In general the TAG should be more involved in setting up W3C activities.
17:00:08 [jar]
timbl: So far it's just been a series of workshops, not an activity
17:01:52 [jar]
ashok: Privacy at w3 is morphing
17:03:47 [jar]
masinter: Would like to see a note out before Prague meeting (end of March)
17:04:03 [noah]
noah: any objection to a proposal to close ACTION-417, and have John publish what he's got, slightly cleaned up, as a note with no formal status, but at a stable URI. Noah will help.
17:04:44 [noah]
Larry will help too, and would like this done in time for IETF in Prague.
17:04:55 [noah]
PROPOSAL: close ACTION-417, and have John publish what he's got, slightly cleaned up, as a note with no formal status, but at a stable URI. Noah will help.
17:05:03 [noah]
No objections.
17:05:07 [noah]
close ACTION-417
17:05:08 [trackbot]
ACTION-417 Frame section 7, security closed
17:05:54 [noah]
action John to publish,
slightly cleaned up, with help from Noah and Larry Due: 2011-03-07
17:05:54 [trackbot]
Sorry, couldn't find user - John
17:06:25 [noah]
action Larry (as trackbot proxy for John) who will publish,
slightly cleaned up, with help from Noah and Larry Due: 2011-03-07
17:06:25 [trackbot]
Created ACTION-515 - (as trackbot proxy for John) who will publish,
slightly cleaned up, with help from Noah and Larry Due: 2011-03-07 [on Larry Masinter - due 2011-02-15].
17:08:36 [noah]
ACTION: Noah to talk with Thomas Roessler about organizing W3C architecture work on security
17:08:36 [trackbot]
Created ACTION-516 - Talk with Thomas Roessler about organizing W3C architecture work on security [on Noah Mendelsohn - due 2011-02-15].
17:53:27 [noah]
noah has joined #tagmem
17:54:33 [ht]
ht has joined #tagmem
17:58:28 [johnk]
johnk has joined #tagmem
17:58:32 [timbl]
timbl has joined #tagmem
18:03:02 [jar]
jar has joined #tagmem
18:06:06 [ted]
ted has joined #tagmem
18:08:47 [plinss]
plinss has joined #tagmem
18:10:05 [masinter`]
masinter` has joined #tagmem
18:19:09 [DKA]
DKA has joined #tagmem
18:19:54 [DKA]
Scribe: Dan
18:19:57 [DKA]
ScribeNick: DKA
18:20:03 [DKA]
[roll call]
18:20:11 [DKA]
Noah: Ted is joining us.
18:20:20 [timbl_]
timbl_ has joined #tagmem
18:20:36 [DKA]
Topic: scalabilityOfURIAccess-58: Scalability of URI Access to Resources
18:21:06 [DKA]
Noah: [background] there are certain resources w3c publishes on its website - e.g. dtds...
18:21:18 [DKA]
... certain organizations were [fetching] these resources a lot.
18:22:03 [ted]
->
summary Yves wrote of actions taken by W3C
18:22:03 [DKA]
... practical question: what can be done? Architectural question: what can be fixed in the architecture?
18:22:05 [ted]
->
article on DTD traffic
18:22:38 [timbl_]
timbl_ has joined #tagmem
18:23:02 [DKA]
... one angle proposed is : what would be the role of a catalog? You could tell people that certain resources won't change or won't change any time soon so they could build [their products] not to fetch these resources.
18:23:17 [DKA]
... Anything else from Ted?
18:24:15 [DKA]
Ted: We've employed some different techniques - for certain patterns we've given http 503 after reaching a threshold. At peaks, we see half a billion a day. Starts to become a problem. Sometimes this has resulted in blocking organizations.
18:24:30 [DKA]
... if it's an organization that is a member then we pursue through the AC rep...
18:24:46 [DKA]
... this doesn't scale well.
18:25:19 [DKA]
... there are several big libraries - eg. msxml - they've put a fix in which has led to a sharp decline.
18:25:37 [DKA]
... Norm Walsh came up with a URI resolver in Java that would implement a caching catalog solution but this never made its way into Sun JDK.
18:26:07 [DKA]
... Sun has been bought by Oracle so now we are talking to Oracle engineers and they have been responsive. Trying to see if we can get something into next JDK.
18:26:22 [DKA]
... We had a fast response from Python.
18:26:38 [DKA]
Noah: Do you ask these people to implement caching or a catalog?
18:27:10 [DKA]
Ted: We suggest either. I like the caching catalog solution [from Norm].
18:28:05 [DKA]
... we educate, we block, we have a high-volume proxy front-end that distinguishes traffic...
18:28:05 [DKA]
... when we explain to people that this is not good architecture - receiving the same thing over the network hundreds of thousands of times a day - they agree.
18:28:21 [DKA]
... we probably should be in the business of packaging and promoting the catalog. Henry has done some work on this.
18:28:55 [Ashok]
Ashok has joined #tagmem
18:29:21 [DKA]
... the idea we came up with - find the most popular ones based on traffic and we routinely package these up, have RSS feeds to alert to catalog changes, talk to Oracle, Microsoft, Python, etc... get some of the bigger customers out there to adopt the catalog.
18:29:45 [timbl]
q+ to wonder about RSS feeds fro updates to things with distant expiry dates.
18:30:38 [DKA]
... meta-topic (that the TAG is concerned with) is the scalability of URIs in general. There is a lack of directives to do rate limiting, to set boundaries, how to scale URIs... Could be useful in dealing with DDOS attacks.
18:30:44 [noah]
q- noah
18:31:04 [DKA]
q?
18:31:08 [noah]
ack next
18:31:09 [noah]
ack next
18:31:10 [Zakim]
timbl, you wanted to wonder about RSS feeds fro updates to things with distant expiry dates.
18:31:22 [masinter]
masinter has joined #tagmem
18:31:33 [noah]
q+ to talk about what's required vs. what's desirable
18:32:01 [masinter]
who's here?
18:32:19 [DKA]
Tim: We don't have real push technology available (apart from Email) but supposing we make a package [a catalog] and we send them out. Then an erratum comes in for something that has a 12 month expiry date. Do we need a revocation mechanism?
18:32:30 [noah]
zakim, who is here?
18:32:30 [Zakim]
On the phone I see W3C
18:32:31 [Zakim]
On IRC I see masinter, Ashok, timbl, DKA, plinss, ted, jar, johnk, ht, noah, RRSAgent, Zakim, trackbot, Yves
18:32:45 [noah]
q?
18:33:21 [DKA]
Henry: I think there's an 80/20 point. Speaking as a user, I'm grateful for the shift from the 503s to the tarpitting.
18:33:33 [timbl]
q+ to mention HTTP automatically morphing to P2P when under stress
18:33:42 [DKA]
... the delay of 30 seconds helps remind people to get the catalog.
18:34:13 [masinter`]
masinter` has joined #tagmem
18:34:52 [DKA]
... SAs to install the catalog that would cause the tools to find them, then I don't think there's an expiry problem.
18:34:58 [noah]
q?
18:35:06 [ted]
q+
18:35:18 [DKA]
Tim: We have to consider the new and the old separately.
18:35:43 [DKA]
... new systems could be designed differently. The total load on the server from the HTML dtd will go down over time.
18:35:50 [masinter]
masinter has joined #tagmem
18:36:50 [masinter]
masinter has joined #tagmem
18:37:48 [DKA].
18:37:53 [noah]
q?
18:37:58 [masinter]
q+ to wonder if it's "http" or if it's some other "new http" = "http"
18:38:08 [DKA]
... so that the chance of finding a copy locally (of a DTD) would be quite high.
18:38:23 [DKA]
... after the Egypt situation, there's been a lot of interest in this.
18:38:30 [DKA]
... I'd love to have the TAG push that forward.
18:38:39 [noah]
ack timbl
18:38:39 [Zakim]
timbl, you wanted to mention HTTP automatically morphing to P2P when under stress
18:38:41 [noah]
q?
18:38:44 [noah]
ack next
18:38:45 [Zakim]
noah, you wanted to talk about what's required vs. what's desirable
18:39:02 [ht]
q+ to speak up for the user
18:39:06 [masinter]
q?
18:39:12 [masinter]
q-
18:41:02 [DKA]
Noah: I think the role for the TAG is to talk about the problem that is not specific to particular resources like the html dtd. In previous times I've come across this problem, the response from some has been "well you should be running a proxy" - is that correct? And in some cases it is actually cheaper to make multiple http requests...
18:41:27 [DKA]
... so: we could clarify the responsibilities that people have to cache or to not cache.
18:41:58 [DKA]
... should we change the normative specs?
18:42:04 [DKA]
... [some will push back]
18:42:59 [noah]
ack next
18:43:07 [DKA]
... for long term - we could break open this protocol http version 2.
18:43:52 [DKA]
Ted: Looking over the rfc-2616, the language is "should" around caching of http.
18:44:12 [DKA]
... it's optional and treated as such.
18:44:20 [masinter]
q+ to give strawman: specs were wrong, so asking people to run a proxy is really only to compensate for our failures
18:44:26 [DKA]
... lighter-weight implementations tend to be very barebones.
18:44:49 [DKA]
... I think promoting catalogs is the way to go - and we should work to get major libraries to include it, ship it, and have it enabled by default.
18:45:05 [DKA]
... I think the focus for the TAG should be in the meta problem. How to make URIs and web sites scale.
18:45:50 [DKA]
... Sites do get overwhelmed. There is no way to let consumers of this data know what is acceptable behaviour besides sending back a 503.
18:46:01 [masinter]
should also note that HTML itself has gotten rid of DTDs. But isn't main problem giving out "http:" URIs in the first place?
18:46:05 [DKA]
... we see lots of sites experiencing similar problems.
18:46:17 [noah]
q?
18:46:19 [noah]
ack next
18:46:20 [Zakim]
ht, you wanted to speak up for the user
18:46:21 [DKA]
Noah: I read it as a MAY in rfc-2616
18:46:38 [noah]
From RFC-2616 section 13.2.1:
18:46:52 [noah]
The primary mechanism for avoiding
18:46:52 [noah]
requests is for an origin server to provide an explicit expiration
18:46:52 [noah]
time in the future, indicating that a response MAY be used to satisfy
18:46:52 [noah]
subsequent requests.
18:47:00 [noah]
So, it's a MAY not a SHOULD.
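[Editor's sketch, not part of the minutes: what honoring RFC 2616's explicit-expiration mechanism looks like on the client side. A cached response carrying `Cache-Control: max-age` or `Expires` can satisfy later requests without going back to the origin server. The pre-parsed `headers` dict is a simplification for illustration.]

```python
import time

def is_fresh(cached_at, headers, now=None):
    """Return True if a cached response is still fresh per its headers.

    Prefers Cache-Control: max-age over Expires, as RFC 2616 does.
    `cached_at` and `now` are POSIX timestamps; `headers` holds
    already-parsed values (an Expires timestamp, not a date string).
    """
    now = time.time() if now is None else now
    for directive in headers.get("cache-control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            try:
                return (now - cached_at) < int(directive.split("=", 1)[1])
            except ValueError:
                break
    expires = headers.get("expires")
    if expires is not None:
        return now < expires
    return False  # no explicit lifetime: the client MAY re-request every time
```

A client that checked this before each request would never re-fetch a DTD published with a 12-month expiry, which is precisely the behavior the W3C traffic problem is missing.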
18:47:07 [DKA]
Henry: I'm concerned about the message we're sending to students "you should produce valid html, valid XML, etc..." and yet when they try to validate their documents they have to wait 30 seconds.
18:47:14 [johnk]
q+ to note that waiting 30 seconds should be to encourage alternate behaviour
18:47:40 [DKA]
... because the web page has the public identifier.
18:47:46 [DKA]
Tim: Why does the validator not cache it?
18:48:23 [DKA]
Henry: Because the number of validators out there is quite large, and the free ones, while they support catalogs, don't distribute the catalog of DTDs as part of their install.
18:48:27 [ted]
[libxml from the beginning shipped w a catalog]
18:48:53 [masinter]
valid HTML no longer has a doctype
18:49:10 [DKA]
Tim: That can be fixed relatively easily - the DTDs can be wired into the code for things that aren't going to change any more.
18:49:48 [DKA]
Henry: The crucial people you need to convince are the open source implementers.
18:50:42 [DKA]
Noah: in many cases, when you dig into what needs to be fixed, it is not straightforward to change all the implementations...
18:51:11 [DKA]
Henry: I am more worried about the people [students] who are the future of the Web. The people who use off-the-shelf free validator tools and get burned.
18:51:56 [noah]
q?
18:51:59 [noah]
ack next
18:52:00 [Zakim]
masinter, you wanted to give strawman: specs were wrong, so asking people to run a proxy is really only to compensate for our failures
18:52:01 [DKA]
Noah: Should we undertake any work to help Ted and/or the ongoing work?
18:52:37 [DKA]
Larry: I think it was a serious design mistake to put a URL in a document that you didn't want anyone to retrieve and not tell them that.
18:53:11 [DKA]
... all of these proxies are compensating for someone else's mistake.
18:53:13 [noah]
q+ to ask: is it a mistake?
18:54:02 [DKA]
....
18:54:05 [noah]
ack next
18:54:06 [Zakim]
johnk, you wanted to note that waiting 30 seconds should be to encourage alternate behaviour
18:54:43 [DKA]
... We should think of the architectural design flaw here and make sure we don't do this again.
18:55:38 [masinter]
"there are no cool URLs, everything changes eventually"
18:55:42 [DKA]
John: Pragmatically, tarpitting requests that are overwhelming your server seems like the right way to deal with it [counterpoint to Henry's statement]. They should learn that they are doing something wrong.
18:55:46 [masinter]
"the URL is already broken"
18:56:10 [noah]
ack next
18:56:12 [Zakim]
noah, you wanted to ask: is it a mistake?
18:56:20 [DKA]
... I'm worried we're going to overthink this, when education plus pragmatic tarpitting could be the right response.
18:56:31 [timbl]
q+ to long tail
18:57:10 [DKA]
Noah: My inclination is close to John's. This is a big distributed file system. [The system should cope with this.]
18:57:59 .
18:58:13 [DKA]
....
18:58:16 [masinter`]
masinter` has joined #tagmem
18:58:23 [noah]
q?
18:58:26 [noah]
ack next
18:58:27 [Zakim]
timbl, you wanted to long tail
19:00:32 [DKA]
Tim: There are lots of DTD-like things out there. We need to be able to cope with various different scaling. We could provide some specific tailored response for these w3c issues. There may be similar things with some libraries...
19:01:16 [DKA]
Noah: Let's say there are 100,000 ontologies, getting a lot of traffic. Let's say I work my way through 100,000 ontologies in a loop. Should I also be tarpitted?
19:01:21 [DKA]
Tim: No.
19:01:33 [masinter]
masinter has joined #tagmem
19:02:03 [masinter]
q+
19:02:05 [DKA]
Tim: I wouldn't want to mess up the fact that in general you should be able to dereference a dtd if you want to.
19:02:14 [noah]
q?
19:03:12 [DKA].
19:03:12 [masinter]
masinter has joined #tagmem
19:03:32 [masinter]
q?
19:03:36 [DKA]
... Publishers can take care of it.
19:04:03 [DKA]
Tim: For the case of harry potter, the book industry operates differently, because it's a different scale of usage.
19:05:16 [DKA]
Jonathan: Transaction costs [on the web] are so much lower. Inexpensive social expectations.
19:05:26 [ted]
q+
19:05:33 [masinter]
One downside of using URIs for things other than href@a and img@src is that these scale issues arise. This has been an architectural principle, to use http: URIs for things that you don't really intend to be referenced. it's not the only downside
19:05:58 [noah]
I guess I just disagree that they should not be dereferenced
19:06:04 [DKA]
... it's a question of economics in relation to social expectations. ... who pays for what.
19:06:28 [noah]
On the contrary, we've said that when you make things like namespaces, we want you to use http-scheme URIs precisely so that you CAN dereference them.
19:07:31 [noah]
Larry, these DTD references are like img src -- each of the references is from an HTML document.
19:07:56 [DKA].
19:07:58 [noah]
q+ to disagree with Larry
19:08:03 [ted]
q- later
19:08:05 [noah]
ack next
19:08:20 [DKA]
... The mismatch has led to a couple of problems.
19:08:23 [noah]
ack next
19:08:24 [Zakim]
noah, you wanted to disagree with Larry
19:08:35 [DKA]
.. let's acknowledge the problem.
19:09:31 [DKA]
Noah: [disagreeing on the different scaling model between DTDs and IMG SRC...]
19:10:00 [noah]
q?
19:10:02 [noah]
ack next
19:10:43 [DKA]
Ted: To Tim's point: a software engineer comes up with a brand new ontology, puts it on his web site, it becomes popular - he will have the same headaches and hassles as we do.
19:11:19 [DKA]
Noah: If apache came pre-configured to handle the load would you be happy?
19:12:01 [DKA]
Ted: Yes, for example, if apache told search engines "I'm busy right now please come back later" then that would be good. You can't express in http your pain threshold.
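[Editor's sketch, not part of the minutes: HTTP cannot declare a server's "pain threshold" in advance, but a 503 with a `Retry-After` header does let the server say "come back later", and a polite client can honor it. The retry policy below (no cap on the advertised delay, no jitter) is a simplification; `opener` is injectable purely for illustration.]

```python
import time
import urllib.error
import urllib.request

def fetch_with_retry_after(url, max_attempts=3, opener=urllib.request.urlopen):
    """Fetch a URL, honoring 503 + Retry-After by sleeping and retrying.

    On a 503 the server's advertised delay is respected before the next
    attempt; any other error, or exhaustion of attempts, is re-raised.
    """
    for attempt in range(max_attempts):
        try:
            return opener(url)
        except urllib.error.HTTPError as err:
            if err.code != 503 or attempt == max_attempts - 1:
                raise
            delay = int(err.headers.get("Retry-After", "1"))
            time.sleep(delay)
```

This is the client half of the bargain; tarpitting and 503s only work as "education" if consumers actually implement something like it.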
19:12:37 [DKA]
Tim: TCP works really well because you stuff in as much as you can. It was designed at 300 baud times and it works at 300 gigabit times.
19:12:41 [masinter]
q+ to say that the problem was the W3C published a STANDARD that pointed to a http URI rather than something more permanent
19:12:58 [DKA]
Tim: You want to have negotiated quality of service.
19:13:15 [masinter]
speaking of Van Jacobson,
19:13:16 [noah]
q?
19:13:36 [masinter]
q+
19:13:47 [noah]
ack next
19:13:49 [Zakim]
masinter, you wanted to say that the problem was the W3C published a STANDARD that pointed to a http URI rather than something more permanent and to
19:14:26 [DKA]
Larry: Van Jacobson - has an interesting project on content-centric networks that we might want to look into.
19:15:16 [DKA]
[debate on whether DTDs are intended to be retrieved or not]
19:15:25 [DKA]
Noah: Next steps...
19:15:30 [noah]
ACTION-390?
19:15:30 [trackbot]
ACTION-390 -- Daniel Appelquist to review ISSUE-58 and suggest next steps -- due 2011-03-01 -- OPEN
19:15:30 [trackbot]
19:16:35 [DKA]
Dan: I don't have an answer...
19:16:57 [ted]
ted: the connection limit per ip (2-3?) gets in the way of user experience as well, making CDNs more popular. as administrator i would like to improve a user's browser experience (faster load time) and allow in some cases more concurrent connections
19:17:05 [DKA]
Noah: The simple answer is to [keep this on the back burner]. I need a proposal on what we should do and who does it.
19:17:07 [ted]
ted: i also want to encourage search engines to crawl me and do so efficiently when convenient for me
19:17:27 [DKA]
... I think we need a short finding on what people's responsibilities are regarding caching.
19:17:40 [Norm]
Norm has joined #tagmem
19:17:45 [DKA]
Henry: I will reach out to [authors of XML parsers].
19:17:55 [ted]
s/ted: i also/[i also
19:18:05 [DKA]
Tim: We should write what we want clients to do.
19:18:30 [masinter]
wonder if Henry could write up what he's asking and what they say or do?
19:18:32 [DKA]
Henry: A good idea is - what Ted mentioned - an adaptive caching mechanism.
19:19:09 [DKA]
Noah: We could talk about turning the MAY in rfc-2616 to a SHOULD.
19:19:18 [DKA]
Larry: I am against that. I think it's the wrong place.
19:19:41 [DKA]
Noah: When you have a piece of software that is in a position to detect repeated requests, you should cache.
19:19:56 [ted]
[if caching was less optional and more widely deployed on net popular resources would scale better and performance would be better]
19:20:04 [DKA]
John: [supporting tarpitting]
19:20:13 [ted]
q+ to put that on rec
19:20:49 [DKA]
John: I think it should be cached in the open source code level...
19:20:49 [masinter]
(a) I don't think we can quickly come to a conclusion, but (b) Henry has agreed to ask tool authors to do something, (c) think we could endorse what Henry asks if the tool authors are willing to go along with it
19:20:57 [masinter]
q+ to suggest Henry write that up
19:22:18 [johnk]
Norm has written about this;
19:22:28 [DKA]
[discussion of caching catalog and whether or not it's a catalog]
19:22:52 [masinter]
for example, "clear my cache" for privacy reasons might not clear the catalog
19:22:57 [DKA]
Henry: the OASIS catalog is just a string-to-string matcher, matching HTTP URIs to local disk copies.
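[Editor's sketch, not part of the minutes: the string-to-string matching Henry describes reduces to a map from public URIs to local paths, consulted before any network fetch. The file paths below are hypothetical.]

```python
# Hypothetical catalog: public DTD URIs mapped to pre-provisioned local copies.
CATALOG = {
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd":
        "/usr/share/xml/xhtml1-strict.dtd",
}

def resolve(uri, catalog=CATALOG):
    """Return a local path for `uri` if catalogued, else the URI itself.

    A tool that calls this before fetching never touches the network
    for catalogued resources -- the behavior being asked of tool authors.
    """
    return catalog.get(uri, uri)
```

Unlike a cache, this mapping is installed deliberately and is not cleared by "clear my cache", which is the distinction Larry raises next.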
19:23:21 [DKA]
Larry: for privacy reasons you might want to say "clear my cache" but that wouldn't clear my catalog.
19:23:28 [Ashok]
Ashok has joined #tagmem
19:23:44 [DKA]
Noah: What's implicit in john's proposal: separation of concerns.
19:24:13 [DKA]
Tim: I hope you wouldn't expect clients to spot that tcp connection is going slowly...
19:25:03 .
19:25:28 [DKA]
Noah: the server is creating a network that is robust against traffic access pattern. Different clients will make different choices. A client might not need to change anything [in the case of e.g. tarpitting]. [if you are not time sensitive]
19:27:04 [masinter`]
masinter` has joined #tagmem
19:27:14 [DKA]
Larry: Henry - I would like you to document what you tell [the implementors] and report back what they say.
19:28:21 [DKA]
Dan: on the p2p topic - should we be doing something here?
19:28:43 [DKA]
Henry: I don't know enough the next gen internet...
19:28:58 [DKA]
Tim: I don't think that internet2 is reinventing http.
19:29:03 [DKA]
s/http/tcp/
19:29:08 [ted]
[p2p has too much overhead (startup time to connect to peers) imho to be worthwhile for small resources. yves makes that point as well in his email]
19:30:17 [DKA].
19:30:25 [DKA]
... but we need people who want to put time into that.
19:31:30 [timbl]
q+
19:31:34 [DKA]
19:31:45 [masinter`]
I wonder if this should be part of the TAG/IETF discussion
19:31:49 [DKA]
... web) is going to survive.
19:32:20 [DKA]
Henry: [clarifying] as the future becomes clearer, we need to start tracking it ...
19:32:25 [noah]
q?
19:32:51 [noah]
ack next
19:32:52 [Zakim]
ted, you wanted to put that on rec
19:32:52 [DKA]
Noah: I want to focus this on next steps.
19:33:01 [ted]
ted: ^^ comment on merits of caching. in practice as we've heard from noah the costs of maintaining caching proxies are too high compared to bandwidth.
19:33:01 [ted]
ted: glad to hear larry's comment. get library developers to implement what ht suggests. i heard ht (and others) liked norm's caching catalog. would oracle implement it in jdk?
19:33:02 [Zakim]
masinter, you wanted to suggest Henry write that up
19:33:35 [masinter]
masinter has joined #tagmem
19:33:54 [DKA]
q+ to remind people that just because there is a next-gen or internet2 activity doesn't mean that will be the future of the internet. :)
19:33:54 [DKA]
q?
19:33:54 [DKA]
test
19:34:49 [DKA]
Ted: [ speaking in support of the caching catalog approach ]
19:35:10 [noah]
q?
19:35:18 [timbl]
q-
19:35:28 [noah]
ack timbl
19:35:33 [noah]
ack next
19:35:34 [Zakim]
DKA, you wanted to remind people that just because there is a next-gen or internet2 activity doesn't mean that will be the future of the internet. :)
19:36:33 [noah]
NM: Ted, anything hi priorty you want the TAG to do?
19:36:59 [noah]
TG: Day by day, we're getting by. The catalog work would be helpful. What seems really useful is for the TAG to tackle the meta-issue.
19:37:45 [noah]
TG: Directives are potentially useful; peer-to-peer seems most applicable for large things.
19:37:50 [noah]
NM: Large or high volume?
19:38:02 [noah]
TG: P2P startup times are typically significant, so large resources.
19:38:52 [ted]
[and p2p could be intersting failover for http]
19:39:11 [noah]
NM: Floor is open for volunteers
19:39:35 [noah]
ACTION: Larry to help us figure out whether to say anything about scalability of access at IETF panel
19:39:35 [trackbot]
Created ACTION-517 - Help us figure out whether to say anything about scalability of access at IETF panel [on Larry Masinter - due 2011-02-15].
19:40:33 [ht]
trackbot, status?
19:41:37 [Ashok]
Ashok has joined #tagmem
19:41:58 [ht]
ACTION: Henry S. to report back on efforts to get undertakings from open-source tool authors to ship pre-provisioned catalogs configured into their tools
19:41:59 [trackbot]
Created ACTION-518 - S. to report back on efforts to get undertakings from open-source tool authors to ship pre-provisioned catalogs configured into their tools [on Henry S. Thompson - due 2011-02-15].
19:41:59 [noah]
. ACTION Peter to frame architectural opportunities relating to scalability of resource access
19:42:13 [ht]
trackbot, action-518 due 2011-07-15
19:42:13 [trackbot]
ACTION-518 S. to report back on efforts to get undertakings from open-source tool authors to ship pre-provisioned catalogs configured into their tools due date now 2011-07-15
19:42:40 [noah]
ACTION Peter to frame architectural opportunities relating to scalability of resource access Due: 2011-03-15
19:42:40 [trackbot]
Created ACTION-519 - Frame architectural opportunities relating to scalability of resource access Due: 2011-03-15 [on Peter Linss - due 2011-02-15].
19:43:02 [noah]
close ACTION-390
19:43:02 [trackbot]
ACTION-390 Review ISSUE-58 and suggest next steps closed
19:43:19 [DKA]
Topic: Web Applications: Client-side state
19:45:45 [noah]
ACTION-514 Due 2011-03-01
19:45:45 [trackbot]
ACTION-514 Draft finding on API minimization Due: 2011-02-01 due date now 2011-03-01
19:45:52 [noah]
(that should have been fixed this morning)
19:46:00 [ted]
ted has left #tagmem
19:46:33 [DKA]
19:46:49 [DKA]
Noah: I think this draft needs to make a few key points...
19:47:02 [DKA]
Ashok: there is at the end a section on recommendations...
19:47:41 [DKA]
... sections 4, 5 and 6 are the heart of it.
19:47:57 [DKA]
Noah: Can it be abstracted into a one or 2 sentence best practice?
19:48:23 [DKA]
[ looking through section 4 and picking out BP statements ]
19:48:26 [noah]
I'm seeing as potential recommendations:
19:48:27 [noah]
As the state of the resource and the display changes, the fragment identifier can be changed to keep track of the state.
19:48:34 [masinter]
I think the wording would be better if recast a bit
19:48:36 [noah]
...and...
19:48:37 [noah]
if the URI is sent to someone else the fragment identifier can be used to recreate the state.
19:49:00 [masinter]
"the application can be designed so that the fragment identifier 'identifies' the state"
19:49:02 [noah]
NM: What about "?" vs. "#
19:49:10 [DKA]
Ashok: I have added one paragraph - in the google maps case which I think talks about that.
19:49:12 [noah]
AM: I added a para about that.
19:49:33 [masinter]
"the application can be designed so that the fragment identifier identifies or encodes the relevant transferable parts of the state"
19:50:30 [DKA].
19:50:34 [DKA]
Ashok: Yes.
19:50:37 [noah]
q?
19:50:59 [DKA]
Larry: the application can be designed so that the fragment identifier identifies or encodes the relevant transferable parts of the state
19:51:09 [DKA]
Ashok: Yes.
19:52:03 [DKA]
Larry: in the case of a map application with a lot of state, then you want the app to be designed so that the URI contains the [part of the state that you want to be transferred to another client]
19:52:34 [Ashok]
Larry: You can design the app so that the frag id identifies or encodes the state you want uniformly referenced
19:53:04 [DKA]
... the part that you want to have uniformly referenced.
19:54:08 [DKA]
Noah: let's suspend disbelief and assume that google maps used hash signs. The question is: state of what? [demonstrates using google maps]
19:54:08 [DKA]
Ashok: What [gmaps
19:55:10 [DKA]
Noah: there are a lot of http interactions under the covers...
19:55:10 [DKA]
... let's be careful about what is the transferable part of the resource...
19:55:12 [DKA]
... originally, [in the case of gmaps] an http request was made for the generic
document.
19:55:28 [DKA]
... scrolling through this map feels like scrolling through an http document.
19:55:36 [DKA]
s/http/html/
19:56:06 [DKA]
... the question I want to raise: for this class of apps, you emphasise that there is a virtual document that is the map...
19:57:10 [DKA]
Ashok: [points to text in:
]
19:57:14 [DKA]
Ashok: we can work on this wording...
19:58:11 [DKA]
Tim: When you're looking at the map... It's interesting that you don't use the hash as you drive around... They do not use the hash, but they could...
19:58:45 [DKA]
Ashok: the question mark tells you what to bring from the server. the hash would not tell you that.
19:58:52 [DKA]
Tim: they both would...
19:59:19 [DKA]
Ashok: I disagree it could be done with the hash.
19:59:58 [DKA]
Tim: What comes back on the response is a piece of javascript. The javascript then starts pulling in all the tiles.
20:00:29 [DKA]
Ashok: if the only thing that comes back is javascript on the first get... [then it could be hash...]
20:01:18 [DKA]
Noah: I think one of the attractions of this - is you don't have to do the distribution in the same way in all cases. If I use the hash sign and I use it in an email reader, the typical email client [wouldn't handle it correctly].
20:02:04 [DKA]
Noah: [disables javascript and reloads the map from google maps; it works]
20:02:14 [DKA]
Noah: You couldn't do that with the hash sign.
20:04:35 [DKA]
Ashok: Your first access gets you the app plus some javascript...
20:07:15 [DKA]
Noah: where does the word representation apply. In the case of gmaps, is it a representation when it is generated with javascript, client side?
20:08:00 [DKA]
Tim: yes, it's a representation.
20:08:11 [DKA]
Tim: lots and lots of web pages are filled in with javascript.
20:08:45 [DKA]
Noah: Ok - it would be good to tell that story. Many web pages do this. There may be other ajax apps where you get different behavior.
20:09:09 [DKA]
Ashok: I'll ask TV if he can tell us what goes on under the covers [of google maps].
20:12:44 [johnk]
example 3 talks about client URI generation -
20:13:12 [DKA]
Tim: History manipulation - to be able to change the behavior of the back button and change what's in the location bar - is in firefox 4.
20:13:52 [DKA]
Ashok: [talking through section 6]
20:14:12 [DKA]
Ashok: Do these or don't these violate specs and what do / should we do?
20:14:44 [DKA]
... frag ids for html and xml... many media types don't define usage of frag ids..
20:14:53 [DKA]
Larry: But we are specifically talking about http and html...
20:15:25 [DKA]
Ashok: [last paragraph] - "active content"
20:15:56 [DKA]
Larry: When you talk about URIs do you mean URIs in general, or just http URIs...?
20:16:05 [DKA]
... [you need to be specific.]
20:16:41 [DKA]
Tim: I think we should make people feel bad about using hash in this way. We should change the specs.
20:16:48 [DKA]
Larry: We should fix the specs to match.
20:17:13 [DKA]
Henry: I'm happier with doing this if we can say "because it's not incompatible" with the speced story.
20:17:43 [DKA]
Larry: originally content was static. Fragment ids were pointers to static pointers. Now content is active...
20:18:10 [DKA]
Henry: the interpretation of stuff after the hash should be client side...
20:18:15 [DKA]
[broad agreement]
20:20:23 [DKA]
Larry: it would be great if URIs worked [interoperated] between google maps and yahoo maps...
20:20:57 [AndroUser]
AndroUser has joined #tagmem
20:21:09 [DKA]
Henry: Historically the spec told you that all you needed to know was the media type of the response, now it's more tightly coupled.
20:21:48 [Ashok]
The page tells you what the fragId is used for
20:22:21 [DKA]
Tim: what's interesting about the maps space - it would be great if the user has independent control over what happens when you get a GEO URI... what service you want to use...
20:22:58 [DKA]
John: Lat and Long have meaning in the real world. You also have the position on a map, which is different from the real space. The third part is the panning and zooming.
20:23:13 [DKA]
Tim: all you need is the lat - lon.
20:23:29 [DKA]
... the user [should] just see lat, long.
20:23:46 [ht]
There has been a real change in where the responsibility for determining the meaning of the post-# strings lies
20:23:59 [ht]
Per the existing specs, it's global, and lies in the media type registration
20:24:40 [ht]
Per the practice under discussion, it lies with the [transitive closure of] the representation retrieved for the pre-# URI
20:26:09 [ht]
This is parallel to where the code comes from that _implements_ the semantics: for the existing spec story, it's in the UA from the beginning, because it's known at UA-creation time, because it comes from the media type spec.
20:26:29 [ht]
whereas for the new usage, it's in the retrieved representation itself
20:27:09 [DKA]
John: I think this goes back to the coupling issue.
20:27:09 [DKA]
Ashok: [back to the document] Section 7 - I didn't do anything with it - Yves says take it out...
20:28:14 [DKA]
Noah: It feels like we haven't nailed the good practices and recommendation. There are some interesting bits here. I'd like to see them in support of some news [some concrete recommendations]. Then we could see what other groups we need to coordinate with.
20:28:14 [DKA]
[back up to section 4]
20:28:31 [Ashok]
Ashok has joined #tagmem
20:29:02 [noah]
Noah: Not happy with the word "operate" in section 4.
20:29:18 [DKA]
[discussion on the wording]
20:29:27 [noah]
Noah: I think it's more like: the JavaScript uses the fragment identifier as well as other information to render the representation(?) of the resource.
20:29:44 [noah]
Noah: I think it's more like: the JavaScript uses the fragment identifier as well as other information to render and support interaction with the representation(?) of the resource.
20:30:27 [noah]
Noah: On "As the state of the resource and the display changes, the fragment identifier can be changed to keep track of the state." Yes, but we need to get clear on pros and cons of ? vs. #
20:32:30 [DKA]
Dan: do you need to assume programmatic access to the history/address bar?
20:33:39 [noah]
TBL: The key point on # vs ? is that when you update the address bar, the page >will< reload. In the case of #, well, the right document is already loaded. In the case of ?, the tendency would be to reload the page.
20:33:58 [noah]
TBL: Right, and when the GET happens, you lose state.
20:35:08 [noah]
noah has joined #tagmem
20:37:04 [jar]
jar has joined #tagmem
20:37:32 [DKA]
DKA has joined #tagmem
20:37:36 [johnk]
johnk has joined #tagmem
20:37:42 [plinss]
plinss has joined #tagmem
20:37:42 [timbl]
timbl has joined #tagmem
20:38:40 [timbl_]
timbl_ has joined #tagmem
20:39:39 [DKA]
Noah: This finding has been slowly evolving. Need to hear from the TAG : we need to focus on it, get it to where people are happy and move ahead.
20:39:56 [DKA]
+1 on its usefulness.
20:40:52 [DKA]
Jonathan: I am not worked up about it. My focus tends to be on what does the stuff mean, independent on the protocols.
20:41:04 [masinter]
masinter has joined #tagmem
20:41:20 [DKA]
... I can't figure out who it would help or who would pay attention.
20:42:04 [DKA].
20:43:05 [DKA]
Larry: the media type registration needs to say (for active content) when and how those parameters are passed to the active content. We are extending something originally designed for passive content to change for active content.
20:43:33 [DKA]
Henry: So this should be a story about how we think about media type registration in the space [active content] that we are now living in.
20:44:10 [DKA]
Larry: ...make the frag identifiers useful for the portion of the state that you are interested in [uniformly referencing].
20:45:03 [DKA]
Larry: We could start with the current document as a note and use that as a basis to add something to the mime-web document and maybe another document.
20:45:48 [DKA]
Noah: the document either has to cut the advice out, or it needs to give advice in close to the style that we've done in findings. "Good practice: xxx , explanation"...
20:45:55 [DKA]
... or describe use cases.
20:46:09 [DKA]
... Ashok I think that work needs to be done before publishing it as a note.
20:46:41 [DKA]
Larry: I'm OK with it. The context is a discovery...
20:48:07 [DKA]
Dan: I think that sounds like the right approach - reformatting / expanding some of the recommendations and publishing it as a note.
20:48:21 [DKA]
John: I think it makes sense to document things we'd like to see happen.
20:48:50 [DKA]
... highlighting that kind of usage is good. But I worry that it's getting a bit wooly.
20:49:20 [DKA]
... I told Raman when I reviewed this document that he could pull out 2 things - the same things referenced in section 4 of the current document.
20:49:45 [DKA]
Ashok: I think we can make this [section 4] better.
20:50:21 [DKA]
... If people think that after that we can publish this as a note, great. Following that, if you want something smaller - one page, about spec recommendations, then we can pull that out.
20:50:34 [DKA]
Noah: that could be as simple as giving someone an action...
20:52:12 [masinter]
action-508?
20:52:12 [trackbot]
ACTION-508 -- Larry Masinter to draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps Due: 2011-01-03 -- due 2011-02-12 -- OPEN
20:52:12 [trackbot]
20:52:47 [masinter]
action-500?
20:52:47 [trackbot]
ACTION-500 -- Larry Masinter to coordinate about TAG participation in IETF/IAB panel at March 2011 IETF -- due 2011-02-15 -- OPEN
20:52:47 [trackbot]
20:52:48 [noah]
Leave ACTION-481 as is
20:53:48 [noah]
ACTION-508?
20:53:48 [trackbot]
ACTION-508 -- Larry Masinter to draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps Due: 2011-01-03 -- due 2011-02-12 -- OPEN
20:53:48 [trackbot]
20:53:59 [noah]
LM: Ashok's document should be a stable reference.
20:54:35 [noah]
ACTION-508 Due 2011-02-22
20:54:35 [trackbot]
ACTION-508 Draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps Due: 2011-01-03 due date now 2011-02-22
20:55:00 [masinter]
action-508 should say that the problem is that #XXXX are parameters to active content
21:00:13 [timbl_]
timbl_ has joined #tagmem
21:01:27 [timbl]
timbl has joined #tagmem
21:13:46 [timbl_]
timbl_ has joined #tagmem
21:15:08 [DKA]
Topic: the IETF presentation...
21:15:16 [timbl__]
timbl__ has joined #tagmem
21:16:06 [DKA]
Larry: What is the boundary between "the web" and the "rest of the Internet"?
21:16:21 [DKA]
ISSUE-500?
21:16:21 [trackbot]
ISSUE-500 does not exist
21:16:29 [Ashok]
Ashok has joined #tagmem
21:16:42 [masinter]
issue-500?
21:16:42 [trackbot]
ISSUE-500 does not exist
21:16:47 [masinter]
action-500?
21:16:47 [trackbot]
ACTION-500 -- Larry Masinter to coordinate about TAG participation in IETF/IAB panel at March 2011 IETF -- due 2011-02-15 -- OPEN
21:16:47 [trackbot]
21:19:53 [Yves]
[re: Ashok's document on fragments, I'll send further comments/help working on it]
21:20:09 [DKA]
[debate on what is implied by the quote from the IAB]
21:20:38 [Ashok]
Thanks, Yves!
21:26:15 [DKA]
Noah: The TAG has decided to say yes to participating on the IETF panel in Prague.
21:27:24 [DKA]
Topic: Admin
21:27:34 [DKA]
Noah: Once again, welcome to Peter.
21:27:47 [DKA]
Noah: Minutes of the 20th - approved?
21:27:54 [DKA]
Minutes of the 20th are approved.
21:28:03 [DKA]
Noah: Note that TPAC is happening November in Santa Clara.
21:29:09 [DKA]
Noah: we would normally meet sometime in may timeframe. there is an ac meeting in bilbao, spain in may.
21:30:07 [DKA]
... so - open to suggestions.
21:30:21 [DKA]
... we could meet in Cambridge again...
21:30:41 [DKA]
Tim: 11-12-13 of May in London...?
21:30:45 [DKA]
Noah: Doesn't work for me.
21:31:17 [DKA]
Noah: Who else is going to the ac meeting?
21:31:29 [DKA]
Noah: 9-11 in the UK?
21:31:46 [DKA]
Larry: Week of the 9th I am completely booked.
21:32:49 [DKA]
Noah: Week after the AC?
21:33:00 [DKA]
[week of the 23rd]
21:33:39 [DKA]
[not good for Tim]
21:35:15 [DKA]
Noah: Week of June 6?
21:38:03 [DKA]
Noah: 7-8-9 of June?
21:38:17 [DKA]
Tim: Yes could do it - would have to be in Cambridge.
21:39:51 [DKA]
Noah: Formal proposal - 7-9 June in cambridge Mass for next TAG f2f meeting.
21:39:55 [DKA]
+1
21:40:24 [DKA]
Noah: Should we talk about September?
21:40:39 [DKA]
Henry: I would be happy to host.
21:41:56 [DKA]
+1 to edinburgh in September.
21:49:12 [noah]
ACTION: Settle London vs. Edinburgh for Sept. 13-15 F2F Due 2011-05-31
21:49:12 [trackbot]
Sorry, couldn't find user - Settle
21:49:46 [noah]
RESOLUTION: The TAG will meet at MIT 7-9 June
21:50:47 [noah]
ACTION: Noah to settle London vs. Edinburgh for Sept. 13-15 F2F Due 2011-05-31
21:50:47 [trackbot]
Created ACTION-520 - Settle London vs. Edinburgh for Sept. 13-15 F2F Due 2011-05-31 [on Noah Mendelsohn - due 2011-02-15].
21:51:25 [noah]
RESOLUTION: The TAG will meet in the UK 13-15 Sept, either Edinburgh or London, TBD see ACTION-520
21:52:10 [DKA]
RRSAgent, make minutes
21:52:10 [RRSAgent]
I have made the request to generate
DKA
21:52:18 [DKA]
rrsagent, make logs public
21:52:30 [Ashok]
rrsagent, pointer
21:52:30 [RRSAgent]
See
22:00:38 [Zakim]
-W3C
22:00:40 [Zakim]
TAG_f2f()8:30AM has ended
22:00:40 [Zakim]
Attendees were W3C
22:03:10 [DKA]
Draft minutes available here:
22:11:13 [ndw]
ndw has joined #tagmem
22:19:12 [Zakim]
Zakim has left #tagmem
22:43:17 [jar]
rrsagent, make logs public
23:11:20 [jar]
jar has joined #tagmem
23:19:08 [Norm]
Norm has joined #tagmem
23:29:31 [ht]
ht has joined #tagmem | http://www.w3.org/2011/02/08-tagmem-irc | CC-MAIN-2013-48 | refinedweb | 11,117 | 70.84 |
Seymour wrote: > I just made some typos and was wondering if there was an easier > way to clear the Python namespace at the interactive prompt rather than > shutting Komodo down and restarting (really brute force). most IDE's have a "reset interactive mode" command (it's ctrl-F6 in IDLE, for example). another approach is to put the code in a script file, and run that file. scripts always run in a clean namespace. yet another approach is to stop using wildcard import ("from import *"). if you refer to the things in Adder via the Adder namespace, you can just do reload(Adder) to fetch the new version. ("from import *" is generally considered a "only use if you know what you're doing" construct in Python, so if it causes trouble for you, you shouldn't have used it in the first place, per definition ;-) </F> | http://mail.python.org/pipermail/python-list/2006-November/414963.html | CC-MAIN-2013-20 | refinedweb | 146 | 68.1 |
In recent years, Visual Basic has won great acclaim for granting programmers the tools for creating highly detailed user interfaces via an intuitive form designer, along with an easy-to-learn programming language, which together produced probably the best environment for rapid application development out there. One of the things that Visual Basic does, and other rapid application development tools, such as Delphi, also do, is provide access to a number of prefabricated controls that the developer can use to quickly build the user interface (UI) for an application.
At the center of most Visual Basic Windows applications stands the form designer. You create a user interface by dragging and dropping controls from a toolbox to your form, placing them where you want them to be when you run the program, and then double-clicking the control to add handlers for the control. The controls provided out of the box by Microsoft along with custom controls that can be bought at reasonable prices, have supplied programmers with an unprecedented pool of reusable, thoroughly tested code that is no further away than a click with the mouse. What was central to Visual Basic is now, through Visual Studio.NET, available to C# programmers.
Most of the controls used before .NET were, and still are, special COM objects, known as ActiveX controls. These are usually able to render themselves at both design and runtime. Each control has a number of properties allowing the programmer to do a certain amount of customization, such as setting the background color, caption, and its position on the form. The controls that we'll see in this chapter have the same look and feel as ActiveX controls, but they are not – they are .NET assemblies. However, it is still possible to use the controls that have been designed for older versions of Visual Studio but there is a small performance overhead because .NET has to wrap the control when you do so. For obvious reasons, when they designed .NET, Microsoft did not want to render the immense pool of existing controls redundant, and so have provided us with the means to use the old controls, even if future controls are built as pure .NET components.
These .NET assemblies can be designed in such a way that you will be able to use them in any of the Visual Studio languages, and the hope and belief is that the growing component industry will latch on, and start producing pure .NET components. We'll look at creating controls ourselves in the next chapter.
An in-depth explanation of .NET assemblies is provided in Chapter 21. Please refer to it if you want to know more about what an assembly is.
We have already seen the form designer in action, if only briefly, in the examples provided earlier in this book. In this chapter, we'll take a closer look at it, and especially at how we use a number of controls, all of which come out of the box with Visual Studio.NET. Presenting all of the controls in Visual Studio.NET would be an impossible task within the scope of this book, so we'll present the most commonly used controls, ranging from labels and text boxes, to list views and status bars.
We'll start out by taking a brief tour of the Windows Form Designer. This is the main playing ground when you are laying out your user interface. It is perfectly possible to design forms without using Visual Studio.NET, but designing an interface in Notepad can be a quite painful experience.
Let's look at the environment we'll be using. Start Visual Studio.NET and create a new C# Windows Application project by selecting File | New | Project. In the dialog that appears, click Visual C# Projects in the tree to the left and then select Windows Application in the list to the right. For now, simply use the default name suggested by Visual Studio and click OK. This should bring up a window much like the one shown below:
If you are familiar with the forms designer found in Visual Basic, you will notice the similarities – obviously someone decided that the designer was a winner and decided to allow it to be used in other Visual Studio languages as well. If you are not familiar with the Visual Basic designer, then quite a few things are going on in the above screenshot, so let's take a moment and go through the panels one by one.
In the center of the screen is the form that you are designing. You can drag and drop controls from the toolbox onto the form. The toolbox is collapsed in the picture above, but if you move the mouse pointer to the far left of the screen over the Toolbox tab, it will unfold. You can then click the pin at the top right of the panel to pin it down. This will rearrange the work area so that the toolbox is now always on top, and isn't obscuring the form. We'll take a closer look at the toolbox and what it contains shortly.
Also collapsed on the left hand bar is the Server Explorer – represented by the computers icon on top of the toolbox tab. You can think of this as a small version of the Windows Control Panel. From here, you can browse computers on a network, add and remove database connections, and much more.
To the right of the window are two panels. The top-right one holds the Solution Explorer and the Class View. In the Solution Explorer, you can see all open projects and their associated files. By clicking the tab at the bottom of the Solution Explorer, you activate the Class View. In this, you can browse all of the classes in your projects and all of the classes that they are derived from.
At the bottom right of the screen, is the Properties panel. This panel will contain all of the properties of the selected item for easy reference and editing. We'll be using this panel quite a bit in this chapter.
Also in this panel, the Dynamic Help tab is visible. This panel will show help tips to you for any selected objects and code even while you type. If your computer uses one of the older microprocessors or has a small amount of RAM, then I suggest that you remove this from the panel when it is not needed, as all that searching for help can make performance rather sluggish.
Let's have a closer look at the toolbox. If you haven't already, move your mouse pointer over the toolbox on the left of the screen, and pin it to the foreground by clicking the pin at the top right of the panel that unfolds:
If you accidentally remove the toolbox by clicking the X instead, you can make it reappear by selecting Toolbox from the View menu, or by pressing Ctrl-Alt-X.
The toolbox contains a selection of all the controls available to you as a .NET developer. In particular, it provides the selection that is of importance to you as a Windows Application developer. If you had chosen to create a Web Forms project, rather than a Windows Application, you would have been given a different toolbox to use. You are not limited to this selection. You can customize the toolbox to fit your needs, but in this chapter, we'll be focusing on the controls found in the selection that is shown in the picture above – in fact, we'll look at most of the controls that are shown here.
Now that we know where we'll be doing the work, let's look at controls in general.
Most controls in .NET derive from the System.Windows.Forms.Control class. This class defines the basic functionality of the controls, which is why many properties and events in the controls we'll see are identical. Many of these classes are themselves base classes for other controls, as is the case with the Label and TextBoxBase classes in the diagram below:
Some controls, named custom or user controls, derive from another class: System.Windows.Forms.UserControl. This class is itself derived from the Control class and provides the functionality we need to create controls ourselves. We'll cover this class in Chapter 14. Incidentally, controls used for designing Web user interfaces derive from yet another class, System.Web.UI.Control.
All controls have a number of properties that are used to manipulate the behavior of the control. The base class of most controls, Control, has a number of properties that other controls either inherit directly or override to provide some kind of custom behavior.
The table below shows some of the most common properties of the Control class. These properties will be present in most of the controls we'll visit in this chapter, and they will, therefore, not be explained in detail again, unless the behavior of the properties is changed for the control in question. Note that this table is not meant to be exhaustive; if you want to see all of the properties in the class, please refer to the MSDN library:
| Name | Description |
| --- | --- |
| Anchor | Using this property, you can specify how the control behaves when its container is resized. See below for a detailed explanation of this property. |
| BackColor | The background color of a control. |
| Bottom | By setting this property, you specify the distance from the top of the window to the bottom of the control. This is not the same as specifying the height of the control. |
| Dock | Allows you to make a control dock to the edges of a window. See below for a more detailed explanation of this property. |
| Enabled | Setting Enabled to true usually means that the control can receive input from the user. Setting Enabled to false usually means that it cannot. |
| ForeColor | The foreground color of the control. |
| Height | The distance from the top to the bottom of the control. |
| Left | The left edge of the control relative to the left edge of the window. |
| Name | The name of the control. This name can be used to reference the control in code. |
| Parent | The parent of the control. |
| Right | The right edge of the control relative to the left edge of the window. |
| TabIndex | The number the control has in the tab order of its container. |
| TabStop | Specifies whether the control can be accessed by the Tab key. |
| Tag | This value is usually not used by the control itself, and is there for you to store information about the control on the control itself. When this property is assigned a value through the Windows Form Designer, you can only assign a string to it. |
| Top | The top edge of the control relative to the top of the window. |
| Visible | Specifies whether or not the control is visible at runtime. |
| Width | The width of the control. |
The Anchor and Dock properties are especially useful when you are designing your form. Ensuring that a window doesn't become a mess to look at if the user decides to resize the window is far from trivial, and numerous lines of code have been written to achieve this. Many programs solve the problem by simply disallowing the window from being resized, which is clearly the easiest way around the problem, but not the best. The Anchor and Dock properties that have been introduced with .NET let you solve this problem without writing a single line of code.
The Anchor property is used to specify how the control behaves when a user resizes the window. You can specify whether the control should resize itself, anchoring itself in proportion to its own edges, or stay the same size, anchoring its position relative to the window's edges.
The Dock property is related to the Anchor property. You can use it to specify that a control should dock to an edge of its container. If a user resizes the window, the control will continue to be docked to the edge of the window. If, for instance, you specify that a control should dock with the bottom of its container, the control will resize itself to always occupy the bottom part of the screen, no matter how the window is resized. The control will not be resized in the process; it simply stays docked to the edge of the window.
See the text box example later in this chapter for the exact use of the Anchor property.
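As a rough sketch of how these properties might be set from code (the control names, text, and coordinates here are our own, not taken from the chapter's examples), the following snippet anchors a button to the bottom-right corner of a form and docks a multiline text box to the form's top edge:

```csharp
using System.Windows.Forms;

public class ResizableForm : Form
{
    public ResizableForm()
    {
        // A button that keeps its distance from the bottom-right
        // corner of the form when the form is resized.
        Button okButton = new Button();
        okButton.Text = "OK";
        okButton.Location = new System.Drawing.Point(200, 230);
        okButton.Anchor = AnchorStyles.Bottom | AnchorStyles.Right;

        // A text box that stays glued to the top edge of the form
        // and resizes horizontally along with it.
        TextBox logTextBox = new TextBox();
        logTextBox.Multiline = true;
        logTextBox.Dock = DockStyle.Top;

        Controls.Add(okButton);
        Controls.Add(logTextBox);
    }
}
```

With this in place, resizing the form keeps the button the same distance from the bottom-right corner, while the text box stretches to stay docked to the top edge – all without any resize-handling code.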
When a user clicks a button or presses a key, you, as the programmer of the application, want to be told that this has happened. To do so, controls use events. The Control class defines a number of events that are common to the controls we'll use in this chapter. The table below describes a number of these events. Once again, this is just a selection of the most common events; if you need to see the entire list, please refer to the MSDN library:
| Name | Description |
| --- | --- |
| Click | Occurs when a control is clicked. In some cases, this event will also occur when a user presses Enter. |
| DoubleClick | Occurs when a control is double-clicked. Handling the Click event on some controls, such as the Button control, will mean that the DoubleClick event can never be called. |
| DragDrop | Occurs when a drag-and-drop operation is completed, in other words, when an object has been dragged over the control and the user releases the mouse button. |
| DragEnter | Occurs when an object being dragged enters the bounds of the control. |
| DragLeave | Occurs when an object being dragged leaves the bounds of the control. |
| DragOver | Occurs when an object has been dragged over the control. |
| KeyDown | Occurs when a key is pressed while the control has focus. This event always occurs before KeyPress and KeyUp. |
| KeyPress | Occurs when a key is pressed while a control has focus. This event always occurs after KeyDown and before KeyUp. The difference between KeyDown and KeyPress is that KeyDown passes the keyboard code of the key that has been pressed, while KeyPress passes the corresponding char value for the key. |
| KeyUp | Occurs when a key is released while a control has focus. This event always occurs after KeyDown and KeyPress. |
| GotFocus | Occurs when a control receives focus. Do not use this event to perform validation of controls. Use Validating and Validated instead. |
| LostFocus | Occurs when a control loses focus. Do not use this event to perform validation of controls. Use Validating and Validated instead. |
| MouseDown | Occurs when the mouse pointer is over a control and a mouse button is pressed. This is not the same as a Click event, because MouseDown occurs as soon as the button is pressed and before it is released. |
| MouseMove | Occurs continually as the mouse travels over the control. |
| MouseUp | Occurs when the mouse pointer is over a control and a mouse button is released. |
| Paint | Occurs when the control is drawn. |
| Validated | Occurs when a control with the CausesValidation property set to true is about to receive focus. It fires after the Validating event finishes and indicates that validation is complete. |
| Validating | Occurs when a control with the CausesValidation property set to true is about to receive focus. Note that the control which is to be validated is the control which is losing focus, not the one that is receiving it. |
We will see many of these events in the examples in the rest of the chapter.
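To give an idea of how these events are consumed in code, here is a hedged sketch (the form, control, and handler names are invented for the example) that subscribes handlers to the Click and KeyDown events; double-clicking a control in the designer generates an equivalent subscription for you:

```csharp
using System;
using System.Windows.Forms;

public class EventDemoForm : Form
{
    public EventDemoForm()
    {
        Button greetButton = new Button();
        greetButton.Text = "Greet";

        // Subscribe to events with delegate instances; the designer
        // emits the same pattern into InitializeComponent().
        greetButton.Click += new EventHandler(GreetButton_Click);

        // KeyPreview lets the form see key events before its
        // child controls do.
        this.KeyPreview = true;
        this.KeyDown += new KeyEventHandler(Form_KeyDown);

        Controls.Add(greetButton);
    }

    private void GreetButton_Click(object sender, EventArgs e)
    {
        MessageBox.Show("Button clicked!");
    }

    private void Form_KeyDown(object sender, KeyEventArgs e)
    {
        // KeyDown reports the keyboard code of the pressed key.
        if (e.KeyCode == Keys.Escape)
            Close();
    }
}
```

The same pattern applies to any event in the table above: declare a method whose signature matches the event's delegate, then add it to the event with +=.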
We are now ready to start looking at the controls themselves, and we'll start with one that we've seen in previous chapters, the Button control.
When you think of a button, you are probably thinking of a rectangular button that can be clicked to perform some task. However, technically there are three kinds of buttons in Visual Studio.NET. This is because radio buttons (as the name implies) and check boxes are also buttons. Because of this, the Button class is not derived directly from Control, but from another class called ButtonBase, which is derived from Control. We'll focus on the Button control in this section and leave radio buttons and check boxes for later in the chapter.
The button control exists on just about any Windows dialog you can think of. A button is primarily used to perform three kinds of tasks:
Working with the button control is very straightforward. It usually consists of adding the control to your form and double-clicking it to add the code to the Click event, which will probably be enough for most applications you'll work on.
Let's look at some of the commonly used properties and events of the control. This will give you an idea what can be done with it. After that, we'll create a small example that demonstrates some of the basic properties and events of a button.
We'll list the properties as members of the Button control, even though technically most of them are defined in the ButtonBase base class. Only the most commonly used properties are explained here. Please refer to MSDN for a complete listing:
FlatStyle
The style of the button can be changed with this property. If you set the style to PopUp, the button will appear flat until the user moves the mouse pointer over it. When that happens, the button pops up to its normal 3D look.
Enabled
We'll mention this here even though it is derived from Control, because it's a very important property for a button. Setting the Enabled property to false means that the button becomes grayed out and nothing happens when you click it.
Image
Allows you to specify an image (bitmap, icon, and so on), which will be displayed on the button.
ImageAlign
With this property, you can set where the image on the button should appear.
By far the most used event of a button is the Click event. This is raised whenever a user clicks the button, by which we mean pressing the left mouse button and releasing it again while the pointer is over the button. This means that if you left-click the button and then drag the mouse pointer away from the button before releasing it, the Click event will not be raised. The Click event is also raised when the button has focus and the user presses Enter. If you have a button on a form, you should always handle this event.
Let's move to the example. We'll create a dialog with three buttons. Two of the buttons will change the language used from English to Danish and back (feel free to use whatever language you prefer). The last button closes the dialog.
1. Open Visual Studio.NET and create a new C# Windows Application. Name the application ButtonTest.
2. Pin the Toolbox down, and double-click the Button control three times. Then move the buttons and resize the form as shown in the picture below:
3. Right-click a button and select Properties. Then change the Name property for each of the buttons as indicated in the picture above by selecting the Name edit field in the Properties panel and typing the text.
4. Change the Text properties of each of the three buttons to the same as the name except for the first three letters (btn).
5. We want to display a flag in front of the text to make it clear what we are talking about. Select the English button and find the Image property. Click (…) to the right of it to bring up a dialog where you can select an image. The flag icons we want to display come with Visual Studio.NET. If you installed to the default location (on an English language installation) they should be located in C:\Program Files\Microsoft Visual Studio.NET\Common7\Graphics\icons\Flags. Select the icon flguk.ico. Repeat this process with the Danish button, selecting the flgden.ico file instead (if you want to use a different flag here, then this directory will have other flags to choose from).
6. You'll notice at this point that the button text and icon are placed on top of each other, so we need to change the alignment of the icon. For both the English and Danish buttons, change the ImageAlign property to MiddleLeft.
7. At this point, you may want to adjust the width of the buttons so that the text doesn't start right where the images end. Do this by selecting each of the buttons and pulling the sizing handle that appears on the right edge.
8. Finally, click on the form and change the Text property to "Do you speak English?"
That's it for the user interface of our dialog. You should now have something that looks like this:
Now we are ready to add the event handlers to the dialog. Double-click the English button. This will take you directly to the event handler. The Click event is the default event for a button, so it is the handler for that event that is created when you double-click the control. Other controls have other default events.
When you double-click the control, two things happen in the code behind the form. First of all, a subscription to the event is created in the InitializeComponent() method:
this.btnEnglish.Click += new System.EventHandler(this.btnEnglish_Click);
If you want to subscribe to an event other than the default one, you will need to write the subscription code yourself, as we will do in the remainder of this chapter. It is important to remember that the code in the InitializeComponent() method is overwritten whenever you change something in the Form Designer. Because of this, you should never write your event subscriptions in this method. Instead, use the constructor of the class.
The second thing that happens is that the event handler itself is added:
private void btnEnglish_Click(object sender, System.EventArgs e)
{
this.Text = "Do you speak English?";
}
The method name is a concatenation of the name of the control, an underscore, and the name of the event that is handled. The first parameter, object sender, will hold the control that was clicked. In this example, this will always be the control indicated by the name of the method, but in other cases many controls may use the same method to handle an event, and in that case you can find out exactly which control is calling by checking this value. The text box example later in this chapter demonstrates how to use a single method for multiple controls. The other parameter, System.EventArgs e, holds information about what happened. In this case, we'll not be needing any of this information.
As you will recall from earlier in this book, the this keyword identifies the current instance of the class. Because the class we are working on is represented by that instance, we can access the properties and controls it contains through that keyword. Setting the Text property on this as we do in the code above then means that we are setting the Text property of the current instance of the form.
Return to the Form Designer and double-click the Danish button and you will be taken to the event handler for that button. Here is the code:
private void btnDanish_Click(object sender, System.EventArgs e)
{
this.Text = "Taler du dansk?";
}
This method is identical to the btnEnglish_Click, except that the text is in Danish. Finally, we add the event handler for the OK button in the same way as we've done twice now. The code is a little different though:
private void btnOK_Click(object sender, System.EventArgs e)
{
Application.Exit();
}
With this, we exit the application and, with it, this first example. Compile it, run it, and press a few of the buttons. You will get output similar to this:
The Label control is probably the most used control of them all. Look at any Windows application and you'll see them on just about any dialog you can find. The label is a simple control with one purpose only: to present a caption or short hint to explain something on the form to the user.
Out of the box, Visual Studio.NET includes two label controls that are able to present themselves to the user in two distinct ways:
The two controls are found at the top of the control panel on the Windows Forms tab. In the picture below, one of each of the two types of Label has been dragged onto a form to illustrate the difference in appearance between the two:
If you have experience with Visual Basic you may notice that the Text property is used to set the text that is displayed, rather than the Caption property. You will find that all intrinsic .NET controls use the name Text to describe the main text for a control. Before .NET, Caption and Text were used interchangeably.
And that's it for most uses of the Label control. Usually you need to add no event handling code for a standard Label. In the case of the LinkLabel, however, some extra code is needed if you want to allow the user to click it and take him or her to the web page shown in the text.
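That extra code usually takes the form of a handler for the LinkLabel's LinkClicked event. Here is a minimal sketch; the control name lnkHomepage and the web address are invented for illustration:

```csharp
// lnkHomepage is a hypothetical LinkLabel on the form. Subscribe to the
// LinkClicked event in the constructor, after InitializeComponent():
//
//   this.lnkHomepage.LinkClicked += new
//       LinkLabelLinkClickedEventHandler(this.lnkHomepage_LinkClicked);

private void lnkHomepage_LinkClicked(object sender,
    System.Windows.Forms.LinkLabelLinkClickedEventArgs e)
{
    // Mark the link as visited so that it changes color
    this.lnkHomepage.LinkVisited = true;

    // Launch the user's default browser with the address from the text
    System.Diagnostics.Process.Start("http://www.wrox.com");
}
```
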
The Label control has a surprising number of properties that can be set. Most of these are derived from Control, but some are new. The following table lists the most common ones. If nothing else is stated, the properties exist in both the Label and LinkLabel controls:
BorderStyle
Allows you to specify the style of the border around the Label. The default is no border.
DisabledLinkColor
(LinkLabel only) The color of a link in the LinkLabel when the link is disabled.
FlatStyle
Controls how the control is displayed. Setting this property to PopUp will make the control appear flat until the user moves the mouse pointer over the control. At that time, the control will appear raised.
Image
This property allows you to specify a single image (bitmap, icon, and so on) to be displayed in the label.
ImageAlign
(Read/Write) Where in the Label the image is shown.
LinkArea
(LinkLabel only) The range in the text that should be displayed as a link.
LinkColor
(LinkLabel only) The color of the link.
Links
(Read-only, LinkLabel only) It is possible for a LinkLabel to contain more than one link. This property allows you to find the link you want. The control keeps track of the links displayed in the text.
LinkVisited
(LinkLabel only) Returns whether a link has been visited or not.
Text
The text that is shown in the Label.
TextAlign
Where in the control the text is shown.
Text boxes should be used when you want the user to enter text that you have no knowledge of at design time (for example, the name of the user). The primary function of a text box is for the user to enter text; by default any characters can be entered, but it is quite possible to restrict the input, for example, to numeric values only.
Out of the box .NET comes with two basic controls to take text input from the user: TextBox and RichTextBox (we'll discuss RichTextBox later in this chapter). Both controls are derived from a base class called TextBoxBase which itself is derived from Control.
TextBoxBase provides the base functionality for text manipulation in a text box, such as selecting text, cutting to and pasting from the Clipboard, and a wide range of events. We'll not focus so much now on what is derived from where, but instead look at the simpler of the two controls first – TextBox. We'll build one example that demonstrates the TextBox properties and build on that to demonstrate the RichTextBox control later.
As has been stated earlier in this chapter, there are simply too many properties for us to describe them all, and so this listing includes only the most common ones:
CausesValidation
When a control that has this property set to true is about to receive focus, two events are fired: Validating and Validated. You can handle these events in order to validate data in the control that is losing focus. This may cause the control never to receive focus. The related events are discussed below.
CharacterCasing
A value indicating if the TextBox changes the case of the text entered. The possible values are:
- Lower: All text entered into the text box is converted to lower case.
- Normal: No changes are made to the text.
- Upper: All text entered into the text box is converted to upper case.
MaxLength
A value that specifies the maximum length, in characters, of any text entered into the TextBox. Set this value to zero if the maximum length should be limited only by available memory.
Multiline
Indicates if this is a multiline control. A multiline control is able to show multiple lines of text.
PasswordChar
Specifies if a password character should replace the actual characters entered into a single-line text box. If the Multiline property is true, this has no effect.
ReadOnly
A Boolean indicating if the text is read only.
ScrollBars
Specifies if a multiline text box should display scrollbars.
SelectedText
The text that is selected in the text box.
SelectionLength
The number of characters selected in the text. If this value is set to be larger than the total number of characters in the text, it is reset by the control to be the total number of characters minus the value of SelectionStart.
SelectionStart
The start of the selected text in a text box.
WordWrap
Specifies if a multiline text box should automatically wrap words if a line exceeds the width of the control.
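To get a feel for how the properties in the table above fit together, here is a small sketch of setting some of them from a form's constructor, after InitializeComponent(). The control names (txtName, txtPassword, txtCode, and txtNotes) are invented for illustration:

```csharp
// Limit a name field to 50 characters
this.txtName.MaxLength = 50;

// Turn a single-line text box into a password field;
// every character the user types is displayed as '*'
this.txtPassword.PasswordChar = '*';

// Force everything typed into this box to upper case
this.txtCode.CharacterCasing = CharacterCasing.Upper;

// A multiline box with a vertical scrollbar and word wrapping
this.txtNotes.Multiline = true;
this.txtNotes.ScrollBars = ScrollBars.Vertical;
this.txtNotes.WordWrap = true;
```
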
Careful validation of the text in the TextBox controls on a form can make the difference between happy users and very angry ones.
You have probably experienced how annoying it is, when a dialog only validates its contents when you click OK. This approach to validating the data usually results in a message box being displayed informing you that the data in "TextBox number three" is incorrect. You can then continue to click OK until all the data is correct. Clearly this is not a good approach to validating data, so what can we do instead?
The answer lies in handling the validation events a TextBox control provides. If you want to make sure that invalid characters are not entered in the text box, or that only values within a certain range are allowed, then you will want to indicate to the user of the control whether the value entered is valid or not.
The TextBox control provides these events (all of which are inherited from Control):
Enter
GotFocus
Leave
Validating
Validated
LostFocus
These six events occur in the order they are listed here. They are known as "Focus Events" and are fired whenever a control's focus changes, with two exceptions: Validating and Validated are only fired if the control that receives focus has the CausesValidation property set to true. The reason why it's the receiving control's CausesValidation property that matters is that there are times when you do not want to validate the control, even if focus changes. An example of this is when the user clicks a Help button.
The KeyDown, KeyPress, and KeyUp events are known as "Key Events". They allow you to monitor and change what is entered into your controls.
KeyDown and KeyUp receive the key code corresponding to the key that was pressed. This allows you to determine if special keys such as Shift or Control and F1 were pressed.
KeyPress, on the other hand, receives the character corresponding to a keyboard key. This means that the value for the letter "a" is not the same as for the letter "A". It is useful if you want to exclude a range of characters, for example, only allowing numeric values to be entered.
TextChanged
Occurs whenever the text in the text box is changed, no matter what the change.
We'll create a dialog on which you can enter your name, address, occupation, and age. The purpose of this example is to give you a good grounding in manipulating properties and using events, not to create something that is incredibly useful.
We'll build the user interface first:
1. Select File | New and create a new Windows Application under C# Projects. Name the project TextBoxTest.
2. Create the form shown below by dragging the labels, text boxes, and buttons onto the design surface. Before you can resize the two text boxes txtAddress and txtOutput as shown, you must set their Multiline property to true. Do this by right-clicking the controls and selecting Properties:
3. Name the controls as indicated in the picture above.
4. Set the Text property for each of the text boxes to an empty string, which means that they will contain nothing when the application is first run.
5. Set the Text property of all other controls to the same as the name of the control, except for the first three letters. Set the Text property of the form as indicated in the caption in the picture.
6. Set the Scrollbars property of the two controls txtOutput and txtAddress to Vertical.
7. Set the ReadOnly property of the txtOutput control to true.
8. Set the CausesValidation property of the button btnHelp to false. Remember from the discussion of the Validating and Validated events that setting this to false will allow the user to click this button without having to be concerned about entering invalid data.
9. When you have sized the form to fit snugly around the controls, it is time to anchor the controls so they behave properly when the form is resized. Set the Anchor property as shown in the table below:
Control
Anchor value
All Labels
Top, Bottom, Left
All TextBoxes
Top, Bottom, Left, Right
Both buttons
Top, Right
Tip: You can copy the Anchor property text from one control and paste into the other controls with the same values. You do not need to use the drop-down button.
The reason why txtOutput is anchored rather than docked to the bottom of the form is that we want the output text area to be resized as we pull the form. If we had docked the control to the bottom of the form, it would be moved to stay at the bottom, but it would not be resized.
10. One final thing should be set. On the form, find the Size and MinimumSize properties. Our form has little meaning if it is sized to something smaller than it is now, so you should set the MinimumSize property to the same as the Size property.
The job of setting up the visual part of the form is now complete. If you run it nothing happens when you click the buttons or enter text, but if you maximize or pull in the dialog, the controls behave exactly as you want them to in a proper user interface, staying put and resizing to fill the whole of the dialog. If you've ever tried to accomplish the same task in a language like Visual Basic 6, you will know how much work you have just been spared.
Now it is time to look at the code. Right click on the form and select View Code. If you have the toolbox pinned, you should remove the pin to make more space for the code window.
Surprisingly little code is visible in the editor. At the top of our class, the controls are defined, but it isn't until you expand the region labeled Windows Form Designer Generated Code that you can see where all your work went. It should be stressed that you should never edit the code in this section! The next time you change something in the designer, it will be overwritten, or even worse, you could change something such that the form designer can no longer show the form.
You should take a minute to look over the statements in this section. You will see exactly why it is possible to create a Windows Application without using Visual Studio.NET. Everything in this section could simply be entered in Notepad or a similar text editor and compiled. You will also see why that is not advisable. Keeping track of everything in here is difficult at the best of times; it is easy to introduce errors and, because you cannot see the effects of what you are doing, arranging the controls on the form to look right is a cumbersome task. This does, however, open the door for third party software producers to write their own programming environments to rival Visual Studio.NET, because the compilers used to create the forms are included with the .NET framework, rather than with Visual Studio.NET.
We can squeeze one last bit of effort out of the Form Designer, however, before we add our own code. Go back to the Form Designer (by clicking the tab at the top of the text editor), and double-click the button btnOK. Repeat this with the other button. As we saw in the button example earlier in this chapter, this causes event handlers for the Click event of the buttons to be created. When the OK button is clicked, we want to transfer the text in the input text boxes to the read-only output box.
Here is the code for the two click events:
private void btnOK_Click(object sender, System.EventArgs e)
{
// No testing for invalid values is made, as that should
// not be necessary
string output;
// Concatenate the text values of the four TextBoxes
output = "Name: " + this.txtName.Text + "\r\n";
output += "Address: " + this.txtAddress.Text + "\r\n";
output += "Occupation: " + this.txtOccupation.Text + "\r\n";
output += "Age: " + this.txtAge.Text;
// Insert the new text
this.txtOutput.Text = output;
}
private void btnHelp_Click(object sender, System.EventArgs e)
{
// Write a short description of each TextBox in the Output TextBox
string output;
output = "Name = Your name\r\n";
output += "Address = Your address\r\n";
output += "Occupation = Only allowed value is 'Programmer'\r\n";
output += "Age = Your age";
// Insert the new text
this.txtOutput.Text = output;
}
In both functions the Text properties of the text boxes are used, either retrieved or set in the btnOK_Click() function or simply set as in the btnHelp_Click() function.
We insert the information the user has entered without bothering to check if it is correct. This means that we must do the checking elsewhere. In this example, there are a number of criteria that have to be met in order for the values to be correct:
- The Name and Address boxes must not be empty.
- The Age box must contain a non-negative number.
- The Occupation box must either be empty or contain the value "Programmer".
From this we can see that the check that must be done for two of the text boxes (txtName and txtAddress) is the same. We also see that we should prevent the user from entering anything invalid into the Age box, and finally we must check if the user is a programmer.
To prevent the user from clicking OK before anything is entered, we start by setting the OK button's Enabled property to false in the constructor of our form, making sure not to set the property until after the generated code in InitializeComponent() has been called:
public Form1()
{
//
// Required for Windows Form Designer support
//
InitializeComponent();
this.btnOK.Enabled = false;
}
Now we'll create the handler for the two text boxes that must be checked to see if they are empty. We do this by subscribing to the Validating event of the text boxes. We inform the control that the event should be handled by a method named txtBoxEmpty_Validating().
We also need a way to know the state of our controls. For this purpose, we use the Tag property of the text boxes. If you recall the discussion of this property earlier in the chapter, we said that only strings can be assigned to the Tag property from the Forms Designer. However, as we are setting the Tag value from code, we can do pretty much what we want with it, and it is more appropriate to enter a Boolean value here.
To the constructor we add the following statements:
this.btnOK.Enabled = false;
// Tag values for testing if the data is valid
this.txtAddress.Tag = false;
this.txtAge.Tag = false;
this.txtName.Tag = false;
this.txtOccupation.Tag = false;
// Subscriptions to events
this.txtName.Validating += new
System.ComponentModel.CancelEventHandler(this.txtBoxEmpty_Validating);
this.txtAddress.Validating += new
System.ComponentModel.CancelEventHandler(this.txtBoxEmpty_Validating);
Please see Chapter 12 for a complete explanation of events if you are not entirely comfortable with them yet.
Unlike the button event handler we've seen previously, the event handler for the Validating event is a specialized version of the standard handler, System.EventHandler. The reason this event needs a special handler is that should the validation fail, there must be a way to prevent any further processing. Canceling further processing would effectively mean that it would be impossible to leave a text box until the data entered is valid. We will not do anything as drastic as that in this example.
The Validating and Validated events combined with the CausesValidation property fix a nasty problem that occurred when using the GotFocus and LostFocus events to perform validation of controls. The problem occurred when the GotFocus and LostFocus events were continually fired, because validation code was attempting to shift the focus between controls, which created an infinite loop.
We add the event handler as follows:
private void txtBoxEmpty_Validating(object sender,
System.ComponentModel.CancelEventArgs e)
{
// We know the sender is a TextBox, so we cast the sender object to that
TextBox tb = (TextBox)sender;
// If the text is empty we set the background color of the
// Textbox to red to indicate a problem. We use the tag value
// of the control to indicate if the control contains valid
// information.
if (tb.Text.Length == 0)
{
tb.BackColor = Color.Red;
tb.Tag = false;
// In this case we do not want to cancel further processing,
// but if we had wanted to do this, we would have added this line:
// e.Cancel = true;
}
else
{
tb.BackColor = System.Drawing.SystemColors.Window;
tb.Tag = true;
}
// Finally, we call ValidateAll which will set the value of
// the OK button.
ValidateAll();
}
Because more than one text box is using this method to handle the event, we cannot be sure which is calling the function. We do know, however, that the effect of calling the method should be the same no matter who is calling, so we can simply cast the sender parameter to a text box and work on that.
If the length of the text in the text box is zero, we set the background color to red and the tag to false. If it is not, we set the background color to the standard Windows color for a window.
You should always use the colors found in the System.Drawing.SystemColors enumeration when you want to set a standard color in a control. If you simply set the color to white, your application will look strange if the user has changed the default color settings.
The ValidateAll() function is described at the end of this example.
Keeping with the Validating event, the next handler we'll add is for the Occupation text box. The procedure is exactly the same as for the two previous handlers, but the validation code is different, because the occupation must be "Programmer" or an empty string to be valid. We therefore add a new line to the constructor:
this.txtOccupation.Validating += new
System.ComponentModel.CancelEventHandler(this.txtOccupation_Validating);
And then the handler itself:
private void txtOccupation_Validating(object sender,
System.ComponentModel.CancelEventArgs e)
{
// Cast the sender object to a textbox
TextBox tb = (TextBox)sender;
// Check if the values are correct
if (tb.Text.CompareTo("Programmer") == 0 || tb.Text.Length == 0)
{
tb.Tag = true;
tb.BackColor = System.Drawing.SystemColors.Window;
}
else
{
tb.Tag = false;
tb.BackColor = Color.Red;
}
// Set the state of the OK button
ValidateAll();
}
Our second to last challenge is the age text box. We don't want the user to type anything but positive numbers (including 0 to make the test simpler). To achieve this we'll use the KeyPress event to remove any unwanted characters before they are shown in the text box.
First, we subscribe to the KeyPress event. We do this as we've done with the previous event handlers in the constructor:
this.txtAge.KeyPress += new
System.Windows.Forms.KeyPressEventHandler(this.txtAge_KeyPress);
This event handler is specialized as well. The System.Windows.Forms.KeyPressEventHandler is supplied, because the event needs information about the key that was pressed.
We then add the event handler itself:
private void txtAge_KeyPress(object sender,
System.Windows.Forms.KeyPressEventArgs e)
{
if ((e.KeyChar < 48 || e.KeyChar > 57) && e.KeyChar != 8)
e.Handled = true; // Remove the character
}
The ASCII values for the characters between 0 and 9 lie between 48 and 57, so we make sure that the character is within this range. We make one exception, though: the ASCII value 8 is the Backspace key, and for editing reasons, we allow this to slip through.
Setting the Handled property of KeyPressEventArgs to true tells the control that it shouldn't do anything else with the character, and so it is not shown.
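Note that the same check can be written with character literals instead of raw ASCII values, which makes the intent a little clearer. A sketch of an equivalent handler:

```csharp
private void txtAge_KeyPress(object sender,
    System.Windows.Forms.KeyPressEventArgs e)
{
    // '0' is ASCII 48, '9' is ASCII 57, and '\b' (Backspace) is 8,
    // so this test is equivalent to the numeric comparison above
    if ((e.KeyChar < '0' || e.KeyChar > '9') && e.KeyChar != '\b')
        e.Handled = true;   // Remove the character
}
```
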
As it is now, the control is not marked as invalid or valid. This is because we need another check to see if anything was entered at all. This is simple, as we've already written the method to perform this check; we just subscribe to the Validating event for the Age control as well by adding this line to the constructor:
this.txtAge.Validating += new
System.ComponentModel.CancelEventHandler(this.txtBoxEmpty_Validating);
One last case must be handled for all the controls. If the user has entered valid text in all the text boxes and then changes something, making the text invalid, the OK button remains enabled. So we have to add one last event handler for all of the text boxes: the TextChanged event, which will turn off the OK button should any text field contain invalid data.
The TextChanged event is fired whenever the text in the control changes. We subscribe to the event by adding the following lines to the constructor:
this.txtName.TextChanged += new System.EventHandler(this.txtBox_TextChanged);
this.txtAddress.TextChanged += new
System.EventHandler(this.txtBox_TextChanged);
this.txtAge.TextChanged += new System.EventHandler(this.txtBox_TextChanged);
this.txtOccupation.TextChanged += new
System.EventHandler(this.txtBox_TextChanged);
The TextChanged event uses the standard event handler we know from the Click event. Finally, we add the event handler itself:
private void txtBox_TextChanged(object sender, System.EventArgs e)
{
// Cast the sender object to a Textbox
TextBox tb = (TextBox)sender;
// Test if the data is valid and set the tag and background
// color accordingly.
if (tb.Text.Length == 0 && tb != txtOccupation)
{
tb.Tag = false;
tb.BackColor = Color.Red;
}
else if (tb == txtOccupation &&
(tb.Text.Length != 0 && tb.Text.CompareTo("Programmer") != 0))
{
// Don't set the color here, as it would change while the user
// is typing
tb.Tag = false;
}
else
{
tb.Tag = true;
tb.BackColor = SystemColors.Window;
}
// Call ValidateAll to set the OK button
ValidateAll();
}
This time, we must find out exactly which control is calling the event handler, because we don't want the background color of the Occupation text box to change to red while the user is typing. We do this by comparing the control passed to us in the sender parameter with the txtOccupation control.
Only one thing remains: the ValidateAll method that enables or disables the OK button:
private void ValidateAll()
{
// Set the OK button to enabled if all the Tags are true
this.btnOK.Enabled = ((bool)(this.txtAddress.Tag) &&
(bool)(this.txtAge.Tag) &&
(bool)(this.txtName.Tag) &&
(bool)(this.txtOccupation.Tag));
}
The method simply sets the value of the Enabled property of the OK button to true if all of the Tag properties are true. We need to cast the value of the Tag properties to a Boolean, because it is stored as an object type.
If you test the program now, you should see something like this:
Notice that you can click the Help button while you are in a textbox with invalid data without the background color changing to red.
The example we've just completed is quite long compared to the others you will see in this chapter. This is because we'll build on this example later, rather than reinventing the wheel.
Remember you can download the source code for the examples in this book from.
As mentioned earlier, the RadioButton and CheckBox controls share their base class with the button control, though their appearance and use differs substantially from the button.
Radio buttons traditionally display themselves as a label with a dot to the left of it, which can be either selected or not. You should use radio buttons when you want to give the user a choice between several mutually exclusive options. An example of this could be if you want to ask for the gender of the user.
To group radio buttons together so that they create one logical unit, you must use a GroupBox control. By first placing a group box on a form and then placing the RadioButton controls you need within the borders of the group box, the RadioButton controls will know to change their state to reflect that only one within the group box can be selected. If you do not place them within a group box, only one RadioButton on the form can be selected at any given time.
A CheckBox traditionally displays itself as a label with a small box with a checkmark to the left of it. You should use the check box when you want to allow the user to choose one or more options. An example could be a questionnaire asking which operating systems the user has tried (for example, Windows 95, Windows 98, Linux, Mac OS X, and so on).
We'll look at the important properties and events of the two controls, starting with the RadioButton, and then move on to a quick example of their use.
As the control derives from ButtonBase, which we've already seen in the button example earlier, there are only a few properties to describe. As always, should you need a complete list, please refer to the MSDN library:
Appearance
A RadioButton can be displayed either as a label with a circular check to the left, middle or right of it, or as a standard button. When it is displayed as a button, the control will appear pressed when selected and 3D otherwise.
AutoCheck
When this property is true, a check mark is automatically displayed when the user clicks the radio button. When it is false, you must set the check mark yourself in code.
CheckAlign
By using this property, you can change the alignment of the radio button's check. It can be left, middle, or right.
Checked
Indicates the status of the control. It is true if the control has a check mark, and false otherwise.
You will usually use only one event when working with RadioButtons, but as always there are many others that can be subscribed to. We'll cover only two in this chapter, and the only reason the second event is mentioned is that there is a subtle difference between the two that should be noted:
CheckedChanged
This event is sent when the check of the RadioButton changes. If there is more than one RadioButton control on the form or within a group box, this event will be sent twice: first to the control that was checked and now becomes unchecked, then to the control that becomes checked.
Click
This event is sent every time the RadioButton is clicked. This is not the same as the CheckedChanged event, because clicking a RadioButton two or more times in succession changes the Checked property only once, and only if it wasn't checked already.
As you would imagine, the properties and events of this control are very similar to those of the RadioButton, but there are two new ones:
CheckState
Unlike the RadioButton, a CheckBox can have three states: Checked, Indeterminate, and Unchecked. When the state of the check box is Indeterminate, the check next to the label is usually grayed, indicating that the current value of the check is not valid or has no meaning under the current circumstances. An example of this state can be seen if you select several files in Windows Explorer and look at their properties. If some files are read-only and others are not, the Read-only check box will be checked, but grayed: indeterminate.
ThreeState
When this property is false, the user will not be able to change the CheckBox's state to Indeterminate. You can, however, still change the state of the check box to Indeterminate from code.
You will normally use only one or two events on this control. Note that, even though the CheckedChanged event exists on both the RadioButton and the CheckBox controls, the effects of the events differ:
CheckedChanged
Occurs whenever the Checked property of the check box changes. Note that in a CheckBox where the ThreeState property is true, it is possible to click the check box without changing the Checked property. This happens when the check box changes from the checked to the indeterminate state.
CheckStateChanged
Occurs whenever the CheckState property changes. As Checked and Unchecked are both possible values of the CheckState property, this event will be sent whenever the Checked property changes. In addition, it will also be sent when the state changes from Checked to Indeterminate.
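As a minimal sketch of reacting to all three states, a handler for this event might switch on the CheckState property. The check box name chkReadOnly and the comments are hypothetical, chosen to match the file-properties example above; only the pattern matters:

```csharp
// Hypothetical handler for the CheckStateChanged event of a
// three-state check box named chkReadOnly.
private void chkReadOnly_CheckStateChanged(object sender, System.EventArgs e)
{
    switch (this.chkReadOnly.CheckState)
    {
        case CheckState.Checked:
            // All selected files are read-only.
            break;
        case CheckState.Unchecked:
            // No selected files are read-only.
            break;
        case CheckState.Indeterminate:
            // Some, but not all, selected files are read-only.
            break;
    }
}
```

Note that the Checked property alone cannot distinguish the Checked and Indeterminate cases; for a three-state check box you should inspect CheckState.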
This concludes the events and properties of the RadioButton and CheckBox controls. But before we look at an example using these, let's take a look at the GroupBox control which we mentioned earlier.
Before we move on to the example, we'll look at the group box control. This control is often used in conjunction with the RadioButton and CheckBox controls to display a frame around, and a caption above, a series of controls that are logically linked in some way.
Using the group box is as simple as dragging it onto a form and then dragging the controls it should contain onto it (but not the other way round; you can't lay a group box over some pre-existing controls). The effect of this is that the parent of the controls becomes the group box, rather than the form, and it is therefore possible to have more than one RadioButton on the form selected at any given time. Within the group box, however, only one RadioButton can be selected.
The relationship between parent and child probably needs to be explained a bit more. When a control is placed on a form, the form is said to become the parent of the control, and hence the control is the child of the form. When you place a GroupBox on a form, it becomes a child of the form. As a group box can itself contain controls, it becomes the parent of those controls. The effect of this is that moving the GroupBox will move all of the controls placed on it.
Another effect of placing controls on a group box is that it allows you to change certain properties of all the controls it contains simply by setting the corresponding property on the group box. For instance, if you want to disable all the controls within a group box, you can simply set the Enabled property of the group box to false.
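As a two-line sketch (grpControls is an assumed GroupBox name, not one taken from the examples in this chapter):

```csharp
// Disabling the group box disables every control placed on it...
this.grpControls.Enabled = false;

// ...and enabling it again makes the child controls available once more.
this.grpControls.Enabled = true;
```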
We will demonstrate the use of the GroupBox in the following example.
We'll modify the example we used when we demonstrated the use of text boxes. In that example, the only possible occupation was Programmer. Instead of forcing the user to write this if he or she is a programmer, we'll change this text box to a check box.
To demonstrate the use of the RadioButton, we'll ask the user to provide one more piece of information: his or her gender.
Change the text box example like this:
1. Remove the label named lblOccupation and the text box named txtOccupation.
2. Resize the form in order to fit a group box with the information about the user's gender onto it, and name the new controls as shown in the picture below:
3. The text of the RadioButton and CheckBox controls should be the same as the names of the controls without the first three letters.
4. Set the Checked property of the chkProgrammer check box to true.
5. Set the Checked property of either rdoMale or rdoFemale to true. Note that you cannot set both to true. If you try to, the value of the other RadioButton is automatically changed to false.
No more needs to be done on the visual part of the example, but there are a number of changes in the code. First, we need to remove all references to the text box that has been removed. Go to the code and complete the following steps.
6. In the constructor of the form, remove the lines that refer to txtOccupation: the subscriptions to the Validating and TextChanged events, and the line that sets the Tag of the text box to false.
7. Remove the method txtOccupation_Validating entirely.
This example can be found in the code download for this chapter as the RadioButtonAndCheckBox Visual Studio.NET project.
The txtBox_TextChanged method included a test to see if the calling control was the txtOccupation TextBox. We now know for sure that it will not be, so we change the method by removing the else if block and modifying the if test as follows:
private void txtBox_TextChanged(object sender, System.EventArgs e)
{
// Cast the sender object to a Textbox
TextBox tb = (TextBox)sender;
// Test if the data is valid and set the tag background
// color accordingly.
if (tb.Text.Length == 0)
{
tb.Tag = false;
tb.BackColor = Color.Red;
}
else
{
tb.Tag = true;
tb.BackColor = SystemColors.Window;
}
// Call ValidateAll to set the OK button
ValidateAll();
}
The last place in which we check the value of the text box we've removed is in the ValidateAll() method. Remove the check entirely so the code becomes:
private void ValidateAll()
{
// Set the OK button to enabled if all the Tags are true
this.btnOK.Enabled = ((bool)(this.txtAddress.Tag) &&
(bool)(this.txtAge.Tag) &&
(bool)(this.txtName.Tag));
}
Since we are using a check box rather than a text box, we know that the user cannot enter any invalid information: he or she will always be either a programmer or not.
We also know that the user is either male or female, and because we set the property of one of the RadioButtons to true, the user is prevented from choosing an invalid value. Therefore, the only thing left to do is change the help text and the output. We do this in the button event handlers:
private void btnHelp_Click(object sender, System.EventArgs e)
{
// Write a short description of each input control in the Output TextBox
string output;
output = "Name = Your name\r\n";
output += "Address = Your address\r\n";
output += "Programmer = Check 'Programmer' if you are a programmer\r\n";
output += "Sex = Choose your sex\r\n";
output += "Age = Your age";
// Insert the new text
this.txtOutput.Text = output;
}
Only the help text is changed, so nothing surprising in the help method. It gets slightly more interesting in the OK method:
private void btnOK_Click(object sender, System.EventArgs e)
{
// No testing for invalid values is made, as that should
// not be necessary
string output;
// Concatenate the values entered in the controls
output = "Name: " + this.txtName.Text + "\r\n";
output += "Address: " + this.txtAddress.Text + "\r\n";
output += "Occupation: " + (string)(this.chkProgrammer.Checked ?
"Programmer" : "Not a programmer") + "\r\n";
output += "Sex: " + (string)(this.rdoFemale.Checked ? "Female" :
"Male") + "\r\n";
output += "Age: " + this.txtAge.Text;
// Insert the new text
this.txtOutput.Text = output;
}
The first new line is the one in which the occupation of the user is printed. We investigate the Checked property of the check box, and if it is true, we write the string Programmer. If it is false, we write Not a programmer.
The second line examines only the radio button rdoFemale. If the Checked property is true on that control, we know that the user is female. If it is false, we know that the user is male. Because there are only two options here, we do not need to check the other radio button; its Checked property will always be the opposite of the first.
Had we used more than two radio buttons, we would have had to loop through all of them, until we found one on which the Checked property was true.
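Such a loop could look something like the following sketch, which assumes the radio buttons are the children of a group box named grpSex (the name is an assumption; it is not stated in the example text):

```csharp
// Find the text of the checked RadioButton among the
// children of the group box.
string selected = "";
foreach (Control control in this.grpSex.Controls)
{
    RadioButton radio = control as RadioButton;
    if (radio != null && radio.Checked)
    {
        selected = radio.Text;
        break;
    }
}
```

Because at most one RadioButton within the group box can be checked, we can stop looping as soon as we find it.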
When you run the example now, you should be able to get a result similar to this:
Like the normal TextBox, the RichTextBox control is derived from TextBoxBase. Because of this, it shares a number of features with the TextBox, but is much more diverse. Where a TextBox is commonly used with the purpose of obtaining short text strings from the user, the RichTextBox is used to display and enter formatted text (for example bold, underline, and italic). It does so using a standard for formatted text called Rich Text Format or RTF.
In the previous example, we used a standard TextBox. We could just as well have used a RichTextBox to do the job. In fact, as we'll see in the example later, you can remove the TextBox named txtOutput and insert a RichTextBox in its place with the same name, and the example behaves exactly as it did before.
If this kind of text box is more advanced than the one we explored in the previous section, you'd expect there to be new properties that can be used, and you'd be correct. Here are descriptions of the most commonly used properties of the RichTextBox:
CanRedo
This property is true if something has been undone that can be reapplied; otherwise it is false.
CanUndo
This property is true if it is possible to perform an undo action on the RichTextBox, otherwise it is false.
RedoActionName
This property holds the name of the action that will be used to redo something that has been undone in the RichTextBox.
DetectUrls
Set this property to true to make the control detect URLs and format them (underlined, as in a browser).
Rtf
This corresponds to the Text property, except that this holds the text in RTF.
SelectedRtf
Use this property to get or set the selected text in the control, in RTF. If you copy this text to another application, for example, MS Word, it will retain all formatting.
SelectedText
Like SelectedRtf, you can use this property to get or set the selected text. Unlike the RTF version of the property, however, all formatting is lost.
SelectionAlignment
This represents the alignment of the selected text. It can be Center, Left, or Right.
SelectionBullet
Use this property to find out if the selection is formatted with a bullet in front of it, or use it to insert or remove bullets.
BulletIndent
Use this property to specify the number of pixels a bullet should be indented.
SelectionColor
Allows you to change the color of the text in the selection.
SelectionFont
Allows you to change the font of the text in the selection.
SelectionLength
Using this property, you either set or retrieve the length of the selection.
SelectionType
This property holds information about the selection. It will tell you if one or more OLE objects are selected or if only text is selected.
ShowSelectionMargin
If you set this property to true, a margin will be shown at the left of the RichTextBox. This will make it easier for the user to select text.
UndoActionName
Gets the name of the action that will be used if the user chooses to undo something.
SelectionProtected
You can specify that certain parts of the text should not be changed by setting this property to true.
As you can see from the list above, most of the new properties have to do with a selection. This is because any formatting you apply while a user is working on his or her text will probably be applied to a selection made by that user. If no selection is made, the formatting will start from the point in the text where the cursor is located, called the insertion point.
Most of the events used by the RichTextBox are the same as those used by the TextBox. There are a few new ones of interest, though:
LinkClicked
This event is sent when a user clicks on a link within the text.
Protected
This event is sent when a user attempts to modify text that has been marked as protected.
SelectionChanged
This event is sent when the selection changes. You can use it, for example, to update the user interface to reflect the formatting of the new selection.
We'll create a very basic text editor in this example. It demonstrates how to change basic formatting of text and how to load and save the text from the RichTextBox. For the sake of simplicity, the example loads from and saves to a fixed file.
As always, we'll start by designing the form:
1. Create a new C# Windows Application project and name it RichTextBoxTest.
2. Create the form as shown in the picture below. The textbox named txtSize should be a TextBox control. The textbox named rtfText should be a RichTextBox control:
3. Name the controls as indicated in the picture above and clear the Text property of both rtfText and txtSize.
4. Excluding the text boxes, set the Text of all controls to the same as their names without the first three letters.
5. Change the Text property of the txtSize text box to 10.
6. Anchor the controls as in the following table:
Control name
btnLoad and btnSave
rtfText
Top, Left, Bottom, Right
All others
7. Set the MinimumSize property of the form to the same as the Size property.
That concludes the visual part of the example and we'll move straight to the code. Double-click the Bold button to add the Click event handler to the code. Here is the code for the event:
private void btnBold_Click(object sender, System.EventArgs e)
{
Font oldFont;
Font newFont;
// Get the font that is being used in the selected text
oldFont = this.rtfText.SelectionFont;
// If the font is using bold style now, we should remove the
// Formatting
if (oldFont.Bold)
newFont = new Font(oldFont, oldFont.Style & ~FontStyle.Bold);
else
newFont = new Font(oldFont, oldFont.Style | FontStyle.Bold);
// Insert the new font and return focus to the RichTextBox
this.rtfText.SelectionFont = newFont;
this.rtfText.Focus();
}
We start by getting the font which is being used in the current selection and assigning it to a local variable. Then we check if this selection is already bold. If it is, we want to remove the bold setting; otherwise we want to set it. We create a new font using the oldFont as the prototype, but add or remove the bold style as needed.
Finally, we assign the new font to the selection and return focus to the RichTextBox.
See Chapter 16 for a description of the Font object.
The event handlers for btnItalic and btnUnderline are the same as the above one, except we are checking the appropriate styles. Double-click the two buttons Italic and Underline and add this code:
private void btnItalic_Click(object sender, System.EventArgs e)
{
Font oldFont;
Font newFont;
// Get the font that is being used in the selected text
oldFont = this.rtfText.SelectionFont;
// If the font is using Italic style now, we should remove it
if (oldFont.Italic)
newFont = new Font(oldFont, oldFont.Style & ~FontStyle.Italic);
else
newFont = new Font(oldFont, oldFont.Style | FontStyle.Italic);
// Insert the new font
this.rtfText.SelectionFont = newFont;
this.rtfText.Focus();
}
private void btnUnderline_Click(object sender, System.EventArgs e)
{
Font oldFont;
Font newFont;
// Get the font that is being used in the selected text
oldFont = this.rtfText.SelectionFont;
// If the font is using Underline style now, we should remove it
if (oldFont.Underline)
newFont = new Font(oldFont, oldFont.Style & ~FontStyle.Underline);
else
newFont = new Font(oldFont, oldFont.Style | FontStyle.Underline);
// Insert the new font
this.rtfText.SelectionFont = newFont;
this.rtfText.Focus();
}
Double-click the last of the formatting buttons, Center, and add the following code:
private void btnCenter_Click(object sender, System.EventArgs e)
{
if (this.rtfText.SelectionAlignment == HorizontalAlignment.Center)
this.rtfText.SelectionAlignment = HorizontalAlignment.Left;
else
this.rtfText.SelectionAlignment = HorizontalAlignment.Center;
this.rtfText.Focus();
}
Here we must check another property, SelectionAlignment, to see if the text in the selection is already centered. HorizontalAlignment is an enumeration with the values Left, Right, and Center. In this case, we simply check if Center is set; if it is, we set the alignment to Left, and if it isn't, we set it to Center.
The final piece of formatting our little text editor will be able to perform is setting the size of the text. We'll add two event handlers for the txtSize text box: one for controlling the input, and one to detect when the user has finished entering a value.
Add the following lines to the constructor of the form:
public Form1()
{
InitializeComponent();
// Event Subscription
this.txtSize.KeyPress += new
System.Windows.Forms.KeyPressEventHandler(this.txtSize_KeyPress);
this.txtSize.Validating += new
System.ComponentModel.CancelEventHandler(this.txtSize_Validating);
}
We saw these two event handlers in the previous example. Both of the events use a helper method called ApplyTextSize, which takes a string with the size of the text:
private void txtSize_KeyPress(object sender,
System.Windows.Forms.KeyPressEventArgs e)
{
// Remove all characters that are not numbers, backspace and enter
if ((e.KeyChar < 48 || e.KeyChar > 57) &&
e.KeyChar != 8 && e.KeyChar != 13)
{
e.Handled = true;
}
else if (e.KeyChar == 13)
{
// Apply size if the user hits enter
TextBox txt = (TextBox)sender;
if (txt.Text.Length > 0)
ApplyTextSize(txt.Text);
e.Handled = true;
this.rtfText.Focus();
}
}
private void txtSize_Validating(object sender,
System.ComponentModel.CancelEventArgs e)
{
TextBox txt = (TextBox)sender;
ApplyTextSize(txt.Text);
this.rtfText.Focus();
}
private void ApplyTextSize(string textSize)
{
// Convert the text to a float because we'll be needing a float shortly
float newSize = Convert.ToSingle(textSize);
FontFamily currentFontFamily;
Font newFont;
// Create a new font of the same family but with the new size
currentFontFamily = this.rtfText.SelectionFont.FontFamily;
newFont = new Font(currentFontFamily, newSize);
// Set the font of the selected text to the new font
this.rtfText.SelectionFont = newFont;
}
The work we are interested in takes place in the helper method ApplyTextSize. It starts by converting the size from a string to a float. We've prevented the user from entering anything but integers, but when we create the new font, we need a float, so we convert it to the correct type.
After that, we get the family to which the font belongs and we create a new font from that family with the new size. Finally, we set the font of the selection to the new font.
That's all the formatting we'll do ourselves, but some is handled by the RichTextBox itself. If you run the example now, you will be able to set the text to bold, italic, and underlined, and you can center the text. That is what you'd expect, but there is something else that is interesting. Try typing a web address in the text. The text is recognized by the control as an Internet address, it is underlined, and the mouse pointer changes to a hand when you move it over the text. If that leads you to believe that you can click it and be taken to the page, you are almost correct. We need to handle the event that is sent when the user clicks a link: LinkClicked.
We do this by subscribing to the event as we are now used to doing, in the constructor:
this.rtfText.LinkClicked += new
System.Windows.Forms.LinkClickedEventHandler(this.rtfText_LinkedClick);
We haven't seen this event handler before. Its event arguments provide the text of the link that was clicked. The handler is surprisingly simple and looks like this:
private void rtfText_LinkedClick(object sender,
System.Windows.Forms.LinkClickedEventArgs e)
{
System.Diagnostics.Process.Start(e.LinkText);
}
This code opens the default browser if it isn't open already and navigates to the site to which the link that was clicked is pointing.
The editing part of the application is now done. All that remains is to load and save the contents of the control. We'll use a fixed file to do this.
Double-click the Load button, and add the following code:
private void btnLoad_Click(object sender, System.EventArgs e)
{
// Load the file into the RichTextBox
try
{
rtfText.LoadFile("../../Test.rtf");
}
catch (System.IO.FileNotFoundException)
{
MessageBox.Show("No file to load yet");
}
}
That's it! Nothing else has to be done. Because we are dealing with files, there is always a chance that we might encounter exceptions, and we have to handle them. In the Load method, we handle the exception that is thrown if the file doesn't exist. Saving the file is equally simple. Double-click the Save button and add this:
private void btnSave_Click(object sender, System.EventArgs e)
{
// Save the text
try
{
rtfText.SaveFile("../../Test.rtf");
}
catch (System.Exception err)
{
MessageBox.Show(err.Message);
}
}
Run the example now, format some text, and click Save. Clear the textbox and click Load and the text you just saved should reappear.
This concludes the RichTextBox example. When you run it, you should be able to produce something like this:
List boxes are used to show a list of strings, from which one or more can be selected at a time. Just like check boxes and radio buttons, the list box provides a means of asking the user to make one or more selections. You should use a list box when, at design time, you don't know the actual number of values the user can choose from (an example could be a list of co-workers). Even if you know all the possible values at design time, you should consider using a list box if there is a great number of values.
The ListBox class is derived from the ListControl class, which provides the basic functionality for the two list-type controls that come out-of-the-box with Visual Studio.NET. The other control, the ComboBox, is discussed later in this chapter.
Another kind of list box is provided with Visual Studio.NET. This is called the CheckedListBox and is derived from the ListBox class. It provides a list just like the ListBox, but in addition to the text strings, it provides a check box for each item in the list.
In the list below, all the properties exist in both the ListBox class and the CheckedListBox class unless explicitly stated otherwise:
Availability
SelectedIndex
This value indicates the zero-based index of the selected item in the list box. If the list box can contain multiple selections at the same time, this property holds the index of the first selected item.
ColumnWidth
In a list box with multiple columns, this property specifies the width of the columns.
Items
Read-only
The Items collection contains all of the items in the list box. You use the properties of this collection to add and remove items.
MultiColumn
A list box can display its items in more than one column. Set this property to true to enable multiple columns; the ColumnWidth property controls the width of each column.
SelectedIndices
This property is a collection, which holds all of the zero-based indices of the selected items in the list box.
SelectedItem
In a list box where only one item can be selected, this property contains the selected item if any. In a list box where more than one selection can be made, it will contain the first of the selected items.
SelectedItems
This property is a collection, which contains all of the items currently selected.
SelectionMode
You can choose between four different modes of selection in a list box:
q None: No items can be selected.
q One: Only one item can be selected at any time.
q MultiSimple: Multiple items can be selected.
q MultiExtended: Multiple items can be selected and the user can use the Ctrl, Shift and arrows keys to make selections.
Sorted
Setting this property to true will cause the ListBox to sort the items it contains alphabetically.
Text
We've seen Text properties on a number of controls, but this one works very differently from any we've seen so far. If you set the Text property of the list box control, it searches for an item that matches the text and selects it. If you get the Text property, the value returned is the first selected item in the list. This property cannot be used if the SelectionMode is None.
CheckedIndices
(CheckedListBox only) This property is a collection that contains the indexes of all items in the CheckedListBox that are in a checked or indeterminate state.
CheckedItems
(CheckedListBox only) This is a collection of all the items in a CheckedListBox that are in a checked or indeterminate state.
CheckOnClick
(CheckedListBox only) If this property is true, an item will change its state whenever the user clicks it.
ThreeDCheckBoxes
(CheckedListBox only) You can choose between CheckBoxes that are flat or normal by setting this property.
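As a quick illustration of the Text property described above, assuming a single-selection list box named lstSelected whose Items collection contains the string "Three":

```csharp
// Setting Text searches the list for a matching item and selects it.
this.lstSelected.Text = "Three";

// Getting Text returns the text of the first selected item.
string current = this.lstSelected.Text;
```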
In order to work efficiently with a list box, you should know a number of methods that can be called. The following table lists the most common methods. Unless indicated, the methods belong to both the ListBox and CheckedListBox classes:
ClearSelected
Clears all selections in the ListBox.
FindString
Finds the first string in the ListBox beginning with a string you specify; for example, FindString("a") will find the first string in the ListBox beginning with 'a'.
FindStringExact
Like FindString, but the entire string must be matched.
GetSelected
Returns a value that indicates whether an item is selected
SetSelected
Sets or clears the selection of an item
ToString
Returns the currently selected item
GetItemChecked
(CheckedListBox only) Returns a value indicating if an item is checked or not
GetItemCheckState
(CheckedListBox only) Returns a value indicating the check state of an item
SetItemChecked
(CheckedListBox only) Sets the item specified to a checked state
SetItemCheckState
(CheckedListBox only) Sets the check state of an item
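For instance, FindString and SetSelected can be combined to select the first item beginning with a given prefix. This sketch assumes a list box named lstSelected, as in the example below:

```csharp
// Find the index of the first item starting with "Se".
int index = this.lstSelected.FindString("Se");

// ListBox.NoMatches is returned when no item matches the prefix.
if (index != ListBox.NoMatches)
    this.lstSelected.SetSelected(index, true);
```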
Normally, the events you will want to be aware of when working with list boxes and CheckedListBoxes are those that have to do with the selections that are being made by the user:
ItemCheck
(CheckedListBox only) Occurs when the check state of one of the list items changes
SelectedIndexChanged
Occurs when the index of the selected item changes
We will create a small example with both a ListBox and a CheckedListBox. The user can check items in the CheckedListBox and then click a button which will move the checked items to the normal ListBox. We create the dialog as follows:
1. Open a new project in Visual Studio.NET called Lists. Add a ListBox, a CheckedListBox and a button to the form and change the names as shown in the picture below:
2. Change the Text property of the button to "Move".
3. Change the CheckOnClick property of the CheckedListBox to true.
Now we are ready to add some code. When the user clicks the Move button, we want to find the items that are checked and copy those into the Selected list box.
Double-click the button and enter this code:
private void btnMove_Click(object sender, System.EventArgs e)
{
// Check if there are any checked items in the CheckedListBox
if (this.chkListPossibleValues.CheckedItems.Count > 0)
{
// Clear the ListBox we'll move the selections to
this.lstSelected.Items.Clear();
// Loop through the CheckedItems collection of the CheckedListBox
// and add the items in the Selected ListBox
foreach (string item in this.chkListPossibleValues.CheckedItems)
{
this.lstSelected.Items.Add(item.ToString());
}
// Clear all the checks in the CheckedListBox
for (int i = 0; i < this.chkListPossibleValues.Items.Count; i++)
this.chkListPossibleValues.SetItemChecked(i, false);
}
}
We start by checking the Count property of the CheckedItems collection. This will be greater than zero if any items in the collection are checked. We then clear all items in the Selected list box, and loop through the CheckedItems collection, adding each item to the Selected list box. Finally, we remove all the checks in the CheckedListBox.
Now we just need something in the CheckedListBox to move. We could add the items while in design mode, by selecting the Items property in the property panel and adding the items there. Instead, we'll add the items through code. We do so in the constructor of our form:
public Form1()
{
//
// Required for Windows Form Designer support
//
InitializeComponent();
// Fill the CheckedListBox
this.chkListPossibleValues.Items.Add("One");
this.chkListPossibleValues.Items.Add("Two");
this.chkListPossibleValues.Items.Add("Three");
this.chkListPossibleValues.Items.Add("Four");
this.chkListPossibleValues.Items.Add("Five");
this.chkListPossibleValues.Items.Add("Six");
this.chkListPossibleValues.Items.Add("Seven");
this.chkListPossibleValues.Items.Add("Eight");
this.chkListPossibleValues.Items.Add("Nine");
this.chkListPossibleValues.Items.Add("Ten");
}
Just as you would do if you were to enter the values through the properties panel, you use the Items collection to add items at runtime.
This concludes the list box example, and if you run it now, you will get something like this:
As the name implies, a combo box combines a number of controls; to be specific, the TextBox, Button, and ListBox controls. Unlike the ListBox, it is never possible to select more than one item in the list of items contained in a ComboBox, and the user can optionally type new entries into the text box part of the ComboBox.
Commonly, the ComboBox control is used to save space on a dialog, because the only parts of the combo box that are permanently visible are the text box and button parts of the control. When the user clicks the arrow button to the right of the text box, a list box unfolds in which the user can make a selection. As soon as he or she does so, the list box disappears and the display returns to normal.
We will now look at the properties and events of the control and then create an example that uses the ComboBox and the two ListBox controls.
Because a ComboBox control includes the features of the TextBox and ListBox controls, many of its properties can also be found on those two controls. Because of that, there are a large number of properties and events on the ComboBox, and we will cover only the most common of them here. For a complete list, please refer to the MSDN library:
DropDownStyle
A combo box can be displayed with three different styles:
q DropDown: The user can edit the text box part of the control, and must click the arrow button to display the list part of the control.
q Simple: Same as DropDown, except that the list part of the control is always visible, much like a normal ListBox.
q DropDownList: The user cannot edit the text box part of the control, and must click the arrow button to display the list part of the control.
DroppedDown
Indicates whether the list part of the control is dropped down or not. If you set this property to true, the list will unfold.
Items
This property is a collection that contains all the items in the list contained in the combo box.
MaxLength
By setting this property to anything other than zero, you control the maximum number of characters it is possible to enter into the text box part of the control.
SelectedIndex
Indicates the index of the currently selected item in the list.
SelectedItem
Indicates the item that is currently selected in the list.
SelectedText
Represents the text that is selected in the text box part of the control.
SelectionStart
In the text box part of the control, this property represents the index of the first character that is selected.
SelectionLength
The length of the text selected in the text box part of the control.
Sorted
Set this property to true to make the control sort the items in the list portion alphabetically.
Text
If you set this property to null, any selection in the list portion of the control is removed. If you set it to a value that exists in the list part of the control, that value is selected. If the value doesn't exist in the list, the text is simply shown in the text portion.
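To tie the properties above together, here is a minimal sketch of configuring a combo box in code. The control name cboFruit and the item values are purely illustrative:

```csharp
// cboFruit is a hypothetical ComboBox; names and values are illustrative.
ComboBox cboFruit = new ComboBox();
cboFruit.DropDownStyle = ComboBoxStyle.DropDown; // editable text box part
cboFruit.Sorted = true;     // keep the list portion sorted alphabetically
cboFruit.MaxLength = 30;    // at most 30 characters in the text box part
cboFruit.Items.Add("Apple");
cboFruit.Items.Add("Banana");
cboFruit.SelectedIndex = 0; // select the first item in the (sorted) list
```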
There are three main actions on a combo box that you will want to be notified of: the list being dropped down, the selection changing, and the text changing.
To handle these three actions, you will normally subscribe to one or more of the following events:
DropDown
Occurs when the list portion of the control is dropped down.
SelectedIndexChanged
Occurs when the selection in the list portion of the control changes.
KeyDown, KeyPress, KeyUp
These events occur when a key is pressed while the text portion of the control has focus. Please refer to the descriptions of the events in the text box section earlier in this chapter.
TextChanged
Occurs when the Text property changes.
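As a sketch of how these events are wired up, you subscribe in the usual way and cast the sender back to a ComboBox in the handler. The handler name below is made up for illustration:

```csharp
// Subscription, typically placed in the form's constructor
this.cboOccupation.SelectedIndexChanged +=
    new System.EventHandler(this.cboOccupation_SelectedIndexChanged);

// A minimal handler; sender is the combo box that raised the event
private void cboOccupation_SelectedIndexChanged(object sender,
    System.EventArgs e)
{
    ComboBox cbo = (ComboBox)sender;
    MessageBox.Show("You selected: " + cbo.Text);
}
```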
Once again, we'll revisit the example used in the TextBox example. Recall that in the CheckBoxes example, we changed the occupation TextBox to a CheckBox. It should be noted that it is possible to have an occupation other than programmer, and so we should change the dialog once again to accommodate more occupations. To do this, we'll use a combo box that will contain two occupations, consultant and programmer, and allow the user to add his or her own occupation in the TextBox should it be different.
Because the user should not have to manually enter his or her occupation every time he or she uses our application, we'll save the items in the ComboBox to a file every time the dialog closes, and load it when it starts.
Let's start with the changes to the dialog's appearance:
1. Remove the CheckBox named chkProgrammer.
2. Add a label and a ComboBox and name them as shown in the picture below:
3. Change the Text property of the label to Occupation and clear it on the ComboBox.
4. No more changes need to be done on the form. However, create a text file named Occupations.txt with the following two lines:
Consultant
Programmer
This example can be found in the code download for this chapter as the ComboBox Visual Studio.NET project.
Now we are ready to change the code. Before we start writing new code, we'll change the btnOK_Click event handler to match the changes to the form:
private void btnOK_Click(object sender, System.EventArgs e)
{
// No testing for invalid values is made, as that should
// not be necessary
string output;
// Concatenate the text values of the controls
output = "Name: " + this.txtName.Text + "\r\n";
output += "Address: " + this.txtAddress.Text + "\r\n";
output += "Occupation: " + this.cboOccupation.Text + "\r\n";
output += "Sex: " + (string)(this.rdoFemale.Checked ? "Female" : "Male") +
"\r\n";
output += "Age: " + this.txtAge.Text;
// Insert the new text
this.txtOutput.Text = output;
}
Instead of the check box, we're now using a combo box. When an item is selected in a combo box, it is shown in the editable portion of the control, so the value we are interested in will always be in the Text property of the combo box.
Now we are ready to start writing new code. The first thing that we'll do is create a method that loads the values that already exist in the file and inserts them into the combo box:
private void LoadOccupations()
{
try
{
// Create a StreamReader object. Change the path to where you put
// the file
System.IO.StreamReader sr =
new System.IO.StreamReader("../../Occupations.txt");
string input;
// Read as long as there are more lines
do
{
input = sr.ReadLine();
// Add only if the line read contains any characters
if (input != "")
this.cboOccupation.Items.Add(input);
} while (sr.Peek() != -1);
// Peek returns -1 if at the end of the stream
// Close the stream
sr.Close();
}
catch (System.Exception)
{
MessageBox.Show("File not found");
}
}
The stream reader is covered in Chapter 20, so we won't discuss the intricacies of this code here. Suffice to say, we use it to open the text file Occupations.txt and read the items for the combo box one line at a time. We add each line in the file to the combo box by using the Add() method on the Items collection.
As the user should be able to enter new items into the combo box, we'll add a check for the Enter key being pressed. If the text in the Text property of the ComboBox does not exist in the Items collection, we'll add the new item to it:
private void cboOccupation_KeyDown(object sender,
System.Windows.Forms.KeyEventArgs e)
{
int index = 0;
ComboBox cbo = (ComboBox)sender;
// We only want to do something if the enter key was pressed
if (e.KeyCode == Keys.Enter)
{
// FindStringExact searches for the string and is not case-sensitive,
// which is exactly what we need, as Programmer and programmer are
// the same.
// If we find a match we'll move the selection in the ComboBox to
// that item.
index = cbo.FindStringExact(cbo.Text);
if (index < 0) // FindStringExact returns -1 if nothing was found.
cbo.Items.Add(cbo.Text);
else
cbo.SelectedIndex = index;
// Signal that we've handled the key down event
e.Handled = true;
}
}
The FindStringExact() method of the ComboBox object searches for a string that is an exact match no matter the case of either of the strings. This is perfect for us, because we don't want to add the same occupation in a variety of cases to the collection.
If we don't find an item in the Items collection that matches the text, we add a new item. Adding a new item automatically sets the currently selected item to the new one. If we find a match, we simply select the existing entry in the collection.
We also need to subscribe to the KeyDown event, which we do in the constructor of the form:
this.txtAge.TextChanged += new System.EventHandler(this.txtBox_TextChanged);
this.cboOccupation.KeyDown += new
System.Windows.Forms.KeyEventHandler(this.cboOccupation_KeyDown);
When the user closes the dialog, we should save the items in the combo box. We do that in another method:
private void SaveOccupation()
{
try
{
System.IO.StreamWriter sw = new
System.IO.StreamWriter("../../Occupations.txt");
foreach (string item in this.cboOccupation.Items)
sw.WriteLine(item); // Write the item to the file
sw.Flush();
sw.Close();
}
catch (System.Exception)
{
MessageBox.Show("File not found or moved");
}
}
The StreamWriter class is covered in Chapter 20 and so the details of this code won't be discussed. However, once again we wrap the file IO code in a try...catch block as a precaution just in case someone inadvertently deletes or moves the text file while the form is open. We loop through the items in the Items collection and write each one to the file.
Finally, we must call the LoadOccupations() and SaveOccupation() methods we have just defined. We do so in the form constructor and Dispose() methods respectively:
public Form1()
{
.
.
.
this.cboOccupation.KeyDown += new
System.Windows.Forms.KeyEventHandler(this.cboOccupation_KeyDown);
// Fill the ComboBox
LoadOccupations();
}
public override void Dispose()
{
// Save the items in the ComboBox
SaveOccupation();
base.Dispose();
if(components != null)
components.Dispose();
}
That concludes the ComboBox example. When running the example, you should get something like this:
The list from which you select files to open in the standard dialog boxes in Windows is a ListView control. Everything you can do to the view in the standard list view dialog (large icons, details view, and so on), you can do with the ListView provided with Visual Studio.NET:
The list view is usually used to present data where the user is allowed some control over the detail and style of the presentation. It is possible to display the data contained in the control as columns and rows much like in a grid, as a single column, or with varying icon representations. The most commonly used list view is like the one seen above, which is used to navigate the folders on a computer.
The ListView control is easily the most complex control we're going to encounter in this chapter, and covering all of it is beyond the scope of this book. What we'll do is provide a solid base for you to work on by writing an example that utilizes many of the most important features of the ListView control, and by providing a thorough description of the numerous properties, events, and methods that can be used. We'll also take a look at the ImageList control, which is used to store the images used in a ListView control.
Activation
By using this property, you can control how a user activates an item in the list view. You should not change the default setting unless you have a good reason for doing so, because you will be altering a setting that the user has set for his or her entire system.
The possible values are:
·Standard: The activation mode the user has chosen for his or her machine.
·OneClick: Clicking an item activates it.
·TwoClick: Double-clicking an item activates it.
Alignment
This property allows you to control how the items in the list view are aligned. The four possible values are:
· Default: If the user drags and drops an item it remains where he or she dropped it.
· Left: Items are aligned to the left edge of the ListView control.
· Top: Items are aligned to the top edge of the ListView control.
· SnapToGrid: The ListView control contains an invisible grid to which the items will snap.
AllowColumnReorder
If you set this property to true, you allow the user to change the order of the columns in a list view. If you do so, you should be sure that the routines that fill the list view are able to insert the items properly, even after the order of the columns is changed.
AutoArrange
If you set this property to true, items will automatically arrange themselves according to the Alignment property. If the user drags an item to the center of the list view, and Alignment is Left, then the item will automatically jump to the left of the list view. This property is only meaningful if the View property is LargeIcon or SmallIcon.
CheckBoxes
If you set this property to true, every item in the list view will have a CheckBox displayed to the left of it. This property is only meaningful if the View property is Details or List.
CheckedIndices, CheckedItems
These two properties give you access to a collection of indices and items, respectively, containing the checked items in the list.
Columns
A list view can contain columns. This property gives you access to the collection of columns through which you can add or remove columns.
FocusedItem
This property holds the item that has focus in the list view. If nothing is selected, it is null.
FullRowSelect
When this property is true, and an item is clicked, the entire row in which the item resides will be highlighted. If it is false, only the item itself will be highlighted.
GridLines
Setting this property to true will cause the list view to draw grid lines between rows and columns. This property is only meaningful when the View property is Details.
HeaderStyle
You can control how the column headers are displayed. There are three styles:
·Clickable: The column header works like a button.
·NonClickable: The column headers do not respond to mouse clicks.
·None: The column headers are not displayed.
HoverSelection
When this property is true, the user can select an item in the list view by hovering the mouse pointer over it.
Items
The collection of items in the list view.
LabelEdit
When this property is true, the user can edit the content of the first column in a Details view.
LabelWrap
If this property is true, labels will wrap over as many lines as needed to display all of the text.
LargeImageList
This property holds the ImageList, which holds large images. These images can be used when the View property is LargeIcon.
MultiSelect
Set this property to true to allow the user to select multiple items.
Scrollable
Set this property to true to display scrollbars.
SelectedIndices, SelectedItems
These two properties contain the collections that hold the indices and items that are selected, respectively.
SmallImageList
When the View property is SmallIcon, this property holds the ImageList that contains the images used.
Sorting
You can allow the list view to sort the items it contains. There are three possible modes:
·Ascending
·Descending
·None
StateImageList
This ImageList contains masks for images that are used as overlays on the LargeImageList and SmallImageList images to represent custom states.
TopItem
Returns the item at the top of the list view.
View
A list view can display its items in four different modes:
·LargeIcon: All items are displayed with a large icon (32x32) and a label.
·SmallIcon: All items are displayed with a small icon (16x16) and a label.
·List: Only one column is displayed. That column can contain an icon and a label.
·Details: Any number of columns can be displayed. Only the first column can contain an icon.
For a control as complex as the list view, there are surprisingly few methods specific to it. They are all described in the table below:
BeginUpdate
By calling this method, you tell the list view to stop drawing updates until EndUpdate is called. This is useful when you are inserting many items at once, because it stops the view from flickering and dramatically increases speed.
Clear
Clears the list view completely. All items and columns are removed.
EndUpdate
Call this method after calling BeginUpdate. When you call this method, the list view will draw all of its items.
EnsureVisible
When you call this method, the list view will scroll itself to make the item with the index you specified visible.
GetItemAt
Returns the item at position x,y in the list view.
And these are the ListView control events that you might want to handle:
AfterLabelEdit
This event occurs after a label has been edited.
BeforeLabelEdit
This event occurs before a user begins editing a label.
ColumnClick
This event occurs when a column is clicked.
ItemActivate
Occurs when an item is activated.
An item in a list view is always an instance of the ListViewItem class. The ListViewItem holds information such as text and the index of the icon to display. ListViewItems have a collection called SubItems that holds instances of another class, ListViewSubItem. These sub items are displayed if the ListView control is in Details mode. Each of the sub items represents a column in the list view. The main difference between the sub items and the main items is that a sub item cannot display an icon.
You add ListViewItems to the ListView through the Items collection and ListViewSubItems to a ListViewItem through the SubItems collection on the ListViewItem.
To make a list view display column headers, you add instances of a class called ColumnHeader to the Columns collection of the ListView. ColumnHeaders provide a caption for the columns that can be displayed when the ListView is in Details mode.
The ImageList control provides a collection that can be used to store images that are used in other controls on your form. You can store images of any size in an image list, but within each control every image must be of the same size. In the case of the ListView, this means that you need two ImageList controls to be able to display both large and small images.
The ImageList is the first control we've visited in this chapter that does not display itself at runtime. When you drag it to a form you are developing, it'll not be placed on the form itself, but below it in a tray, which contains all such components. This nice feature is provided to stop controls that are not part of the user interface from clogging up the forms designer. The control is manipulated in exactly the same way as any other control, except that you cannot move it around.
You can add images to the ImageList at both design and runtime. If you know at design time which images you'll need to display, you can add the images by clicking the button at the right hand side of the Images property. This will bring up a dialog on which you can browse to the images you wish to insert. If you choose to add the images at runtime, you add them through the Images collection.
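As a sketch, adding images at runtime looks like the following. The control and file names reuse those from the ListView example below, and note that in the released framework the size property of the ImageList is named ImageSize:

```csharp
// ilSmall is assumed to exist on the form; the icon paths are
// illustrative only.
this.ilSmall.ImageSize = new Size(16, 16);
this.ilSmall.Images.Add(Image.FromFile("clsdfold.ico")); // index 0
this.ilSmall.Images.Add(Image.FromFile("msgbox04.ico")); // index 1
this.lwFilesAndFolders.SmallImageList = this.ilSmall;
```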
Please see the ListView example below for an example of how to use this control.
The best way of learning about using a ListView control and its associated image lists is through an example. We'll create a dialog with a ListView and two ImageLists. The ListView will display files and folders on your hard drive. For the sake of simplicity, we will not be extracting the correct icons from the files and folders, but rather use a standard folder icon for the folders and an information icon for files.
By double-clicking the folders, you can browse into the folder tree and a back button is provided to move up the tree. Four radio buttons are used to change the mode of the list view at runtime. If a file is double-clicked, we'll attempt to execute it.
As always, we'll start by creating the user interface:
1. Create a new Visual Studio.NET project called ListView. Add a ListView, a button, a label, and a group box to the form. Then add four radio buttons to the group box to get a form that looks like the picture below:
2. Name the controls as shown in the picture above. The ListView will not display its name as in the picture above; I've added an item to it to provide the name. You should not do so.
3. Change the Text properties of the radio buttons and the button to match their names, minus the three-letter prefix.
4. Clear the Text property of the label.
5. Add two ImageLists to the form by double-clicking the control's icon in the Toolbox (you'll have to scroll down to find it). Rename them ilSmall and ilLarge.
6. Change the Size property of the ImageList named ilLarge to 32, 32.
7. Click the button to the right of the Images property of the ilLarge image list to bring up the dialog on which you can browse to the images you want to insert.
8. Click Add and browse to the folder under Visual Studio.NET that contains the images. The files are:
<Drive>:\Program Files\Visual Studio.NET\Common7\Graphics\Icons\Win95\clsdfold.ico
and
<Drive>:\Program Files\Visual Studio.NET\Common7\Graphics\Icons\Computer\msgbox04.ico
9. Make sure the folder icon is at the top of the list.
10. Repeat steps 7 and 8 with the other ImageList, ilSmall.
11. Set the Checked property of the radio button rdoDetails to true.
12. Set the following properties on the list view:
MultiSelect = true
LargeImageList = ilLarge
SmallImageList = ilSmall
View = Details
13. Change the Text properties of the form and the group box as shown in the picture above.
That concludes our user interface and we can move on to the code. First of all, we'll need a field to hold the folders we've browsed through in order to be able to return to them when the back button is clicked. We will store the absolute path of the folders, and so we choose a StringCollection for the job:
public class Form1 : System.Windows.Forms.Form
{
// Member field to hold previous folders
private System.Collections.Specialized.StringCollection folderCol;
We didn't create any column headers in the forms designer, so we'll have to do that now. We create them in a method called CreateHeadersAndFillListView():
private void CreateHeadersAndFillListView()
{
ColumnHeader colHead;
// First header
colHead = new ColumnHeader();
colHead.Text = "Filename";
this.lwFilesAndFolders.Columns.Add(colHead); // Insert the header
// Second header
colHead = new ColumnHeader();
colHead.Text = "Size";
this.lwFilesAndFolders.Columns.Add(colHead); // Insert the header
// Third header
colHead = new ColumnHeader();
colHead.Text = "Last accessed";
this.lwFilesAndFolders.Columns.Add(colHead); // Insert the header
}
We start by declaring a single variable, colHead, which we will use to create the three column headers. For each of the three headers we create a new ColumnHeader instance, assign its Text, and add it to the Columns collection of the ListView.
The final initialization of the form as it is displayed the first time, is to fill the list view with files and folders from your hard disk. This is done in another method:
private void PaintListView(string root)
{
try
{
// Two local variables that are used to create the items to insert
ListViewItem lvi;
ListViewItem.ListViewSubItem lvsi;
// If there's no root folder, we can't insert anything
if (root.CompareTo("") == 0)
return;
// Get information about the root folder.
System.IO.DirectoryInfo dir = new System.IO.DirectoryInfo(root);
// Retrieve the files and folders from the root folder.
DirectoryInfo[] dirs = dir.GetDirectories(); // Folders
FileInfo[] files = dir.GetFiles(); // Files
// Clear the ListView. Note that we call the Clear method on the
// Items collection rather than on the ListView itself.
// The Clear method of the ListView removes everything, including
// column headers, and we only want to remove the items from the view.
this.lwFilesAndFolders.Items.Clear();
// Set the label with the current path
this.lblCurrentPath.Text = root;
// Lock the ListView for updates
this.lwFilesAndFolders.BeginUpdate();
// Loop through all folders in the root folder and insert them
foreach (System.IO.DirectoryInfo di in dirs)
{
// Create the main ListViewItem
lvi = new ListViewItem();
lvi.Text = di.Name; // Folder name
lvi.ImageIndex = 0; // The folder icon has index 0
lvi.Tag = di.FullName; // Set the tag to the qualified path of the
// folder
// Create the two ListViewSubItems.
lvsi = new ListViewItem.ListViewSubItem();
lvsi.Text = ""; // Size - a folder has no size and so this column
// is empty
lvi.SubItems.Add(lvsi); // Add the sub item to the ListViewItem
lvsi = new ListViewItem.ListViewSubItem();
lvsi.Text = di.LastAccessTime.ToString(); // Last accessed column
lvi.SubItems.Add(lvsi); // Add the sub item to the ListViewItem
// Add the ListViewItem to the Items collection of the ListView
this.lwFilesAndFolders.Items.Add(lvi);
}
// Loop through all the files in the root folder
foreach (System.IO.FileInfo fi in files)
{
// Create the main ListViewItem
lvi = new ListViewItem();
lvi.Text = fi.Name; // Filename
lvi.ImageIndex = 1; // The icon we use to represent a file has
// index 1
lvi.Tag = fi.FullName; // Set the tag to the qualified path of the
// file
// Create the two sub items
lvsi = new ListViewItem.ListViewSubItem();
lvsi.Text = fi.Length.ToString(); // Length of the file
lvi.SubItems.Add(lvsi); // Add to the SubItems collection
lvsi = new ListViewItem.ListViewSubItem();
lvsi.Text = fi.LastAccessTime.ToString(); // Last Accessed Column
lvi.SubItems.Add(lvsi); // Add to the SubItems collection
// Add the item to the Items collection of the ListView
this.lwFilesAndFolders.Items.Add(lvi);
}
// Unlock the ListView. The items that have been inserted will now
// be displayed
this.lwFilesAndFolders.EndUpdate();
}
catch (System.Exception err)
{
MessageBox.Show("Error: " + err.Message);
}
}
Before the first of the two foreach blocks, we call BeginUpdate() on the ListView control. Remember that the BeginUpdate() method on the ListView signals the ListView control to stop updating its visible area until EndUpdate() is called. If we did not call this method, filling the list view would be slower and the list may flicker as the items are added. Just after the second foreach block we call EndUpdate(), which makes the ListView control draw the items we've filled it with.
The two foreach blocks contain the code we are interested in. We start by creating a new instance of a ListViewItem, and then setting the Text property to the name of the file or folder we are going to insert. The ImageIndex of the ListViewItem refers to the index of an item in one of the ImageLists. Because of that, it is important that the icons have the same indexes in the two ImageLists. We use the Tag property to save the fully qualified path to both folders and files, for use when the user double-clicks the item.
Then we create the two sub items. These are simply assigned the text to display and then added to the SubItems collection of the ListViewItem.
Finally, the ListViewItem is added to the Items collection of the ListView. The ListView is smart enough to simply ignore the sub items, if the view mode is anything but Details, so we add the sub items no matter what the view mode is now.
All that remains to be done for the list view to display the root folder is to call the two functions in the constructor of the form. At the same time, we instantiate the folderCol StringCollection with the root folder:
InitializeComponent();
// Init ListView and folder collection
folderCol = new System.Collections.Specialized.StringCollection();
CreateHeadersAndFillListView();
PaintListView(@"C:\");
folderCol.Add(@"C:\");
In order to allow the user to double-click an item in the ListView to browse the folders, we need to subscribe to the ItemActivate event. We add the subscription to the constructor:
this.lwFilesAndFolders.ItemActivate += new
System.EventHandler(this.lwFilesAndFolders_ItemActivate);
The corresponding event handler looks like this:
private void lwFilesAndFolders_ItemActivate(object sender, System.EventArgs e)
{
// Cast the sender to a ListView and get the tag of the first selected
// item.
System.Windows.Forms.ListView lw = (System.Windows.Forms.ListView)sender;
string filename = lw.SelectedItems[0].Tag.ToString();
if (lw.SelectedItems[0].ImageIndex != 0)
{
try
{
// Attempt to run the file
System.Diagnostics.Process.Start(filename);
}
catch
{
// If the attempt fails we simply exit the method
return;
}
}
else
{
// Insert the items
PaintListView(filename);
folderCol.Add(filename);
}
}
The tag of the selected item contains the fully qualified path to the file or folder that was double-clicked. We know that the image with index 0 is a folder, so we can determine whether the item is a file or a folder by looking at that index. If it is a file, we attempt to execute it.
If it is a folder, we call PaintListView with the new folder, and then add the new folder to the folderCol collection.
Before we move on to the radio buttons, we'll complete the browsing abilities by adding the Click event to the Back button. Double-click the button and fill the event handler with this code:
private void btnBack_Click(object sender, System.EventArgs e)
{
if (folderCol.Count > 1)
{
PaintListView(folderCol[folderCol.Count-2].ToString());
folderCol.RemoveAt(folderCol.Count-1);
}
else
{
PaintListView(folderCol[0].ToString());
}
}
If there is more than one item in the folderCol collection, then we are not at the root of the browser, and we call PaintListView with the path to the previous folder. The last item in the folderCol collection is the current folder, which is why we need to take the second to last item. We then remove the last item in the collection, and make the new last item the current folder. If there is only one item in the collection, we simply call PaintListView with that item.
All that remains is to be able to change the view type of the list view. Double-click each of the radio buttons and add the following code:
private void rdoLarge_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
this.lwFilesAndFolders.View = View.LargeIcon;
}
private void rdoList_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
this.lwFilesAndFolders.View = View.List;
}
private void rdoSmall_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
this.lwFilesAndFolders.View = View.SmallIcon;
}
private void rdoDetails_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
this.lwFilesAndFolders.View = View.Details;
}
We check the radio button to see whether it changed to Checked, and if it did, we set the View property of the ListView accordingly.
That concludes the ListView example. When you run it, you should see something like this:
A status bar is commonly used to provide hints for the selected item or information about an action currently being performed on a dialog. Normally, the StatusBar is placed at the bottom of the screen, as it is in MS Office applications, but it can be located anywhere you like. The status bar that is provided with Visual Studio.NET can be used simply to display text, or you can add panels to it and display text in them, or create your own routines for drawing the contents of the panels:
The above picture shows the status bar as it looks in MS Word. The panels in the status bar can be identified as the sections that appear sunken.
As mentioned above, you can simply assign to the Text property of a StatusBar control to display simple text to the user, but it is possible to create panels and use them to the same effect:
BackgroundImage
It is possible to assign an image to the status bar that will be drawn in the background.
Panels
This is the collection of panels in the status bar. Use this collection to add and remove panels.
ShowPanels
If you want to display panels, this property must be set to true.
Text
When you are not using panels, this property holds the text that is displayed in the status bar.
There are not a whole lot of new events for the status bar, but if you are drawing a panel manually, the DrawItem event is of crucial importance:
DrawItem
Occurs when a panel that has the OwnerDraw style set needs to be redrawn. You must subscribe to this event if you want to draw the contents of a panel yourself.
PanelClick
Occurs when a panel is clicked.
Each panel in a status bar is an instance of the StatusBarPanel class. This class contains all the information about the individual panels in the Panels collection. The information that can be set ranges from simple text and alignment of text to icons to be displayed and the style of the panel.
If you want to draw the panel yourself, you must set the Style property of the panel to OwnerDraw and handle the DrawItem event of the StatusBar.
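A minimal sketch of an owner-drawn panel might look like this, assuming a StatusBar named sbInfo with at least one panel; the drawing itself is illustrative:

```csharp
// Mark the first panel as owner-drawn and subscribe to DrawItem
this.sbInfo.Panels[0].Style = StatusBarPanelStyle.OwnerDraw;
this.sbInfo.DrawItem +=
    new StatusBarDrawItemEventHandler(this.sbInfo_DrawItem);

// Draw the panel's text ourselves, in red, within the panel's bounds
private void sbInfo_DrawItem(object sender, StatusBarDrawItemEventArgs e)
{
    e.Graphics.DrawString(e.Panel.Text, e.Font, Brushes.Red, e.Bounds);
}
```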
We'll change the ListView example we created earlier to demonstrate the use of the StatusBar control. We'll remove the label used to display the current folder and move that piece of information to a panel on a status bar. We'll also display a second panel, which will display the current view mode of the list view:
1. Remove the label lblCurrentPath.
2. Double-click the StatusBar control in the toolbox to add it to the form (again it is near to the bottom of the list). The new control will automatically dock with the bottom edge of the form.
3. Change the name of the StatusBar to sbInfo and clear the Text property.
4. Find the Panels property and double-click the button to the right of it to bring up a dialog to add panels.
5. Click Add to add a panel to the collection. Set the AutoSize property to Spring. This means that the panel will share the space in the StatusBar with other panels.
6. Click Add again, and change the AutoSize property to Contents. This means that the panel will resize itself to the size of the text it contains. Set the MinSize property to 0.
7. Click OK to close the dialog.
8. Set the ShowPanels property on the StatusBar to true.
This example can be found in the code download for this chapter as the StatusBar Visual Studio.NET project.
That's it for the user interface and we'll move on to the code. We'll start by setting the current path in the PaintListView method. Remove the line that sets the text in the label and insert the following in its place:
this.sbInfo.Panels[0].Text = root;
The first panel has index 0, and we simply set its Text property just as we set the Text property of the label. Finally, we change the four radio button CheckedChanged events to set the text of the second panel:
private void rdoLarge_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
{
this.lwFilesAndFolders.View = View.LargeIcon;
this.sbInfo.Panels[1].Text = "Large Icon";
}
}
private void rdoList_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
{
this.lwFilesAndFolders.View = View.List;
this.sbInfo.Panels[1].Text = "List";
}
}
private void rdoSmall_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
{
this.lwFilesAndFolders.View = View.SmallIcon;
this.sbInfo.Panels[1].Text = "Small Icon";
}
}
private void rdoDetails_CheckedChanged(object sender, System.EventArgs e)
{
RadioButton rdb = (RadioButton)sender;
if (rdb.Checked)
{
this.lwFilesAndFolders.View = View.Details;
this.sbInfo.Panels[1].Text = "Details";
}
}
The panel text is set in exactly the same way as in PaintListView above.
That concludes the StatusBar example. If you run it now, you should see something like this:
The TabControl provides an easy way of organizing a dialog into logical parts that can be accessed through tabs located at the top of the control. A TabControl contains TabPages that essentially work in a similar way to a GroupBox control, though it is somewhat more complex:
The above screen shot shows the Options dialog in MS Word 2000 as it is typically configured. Notice the two rows of tabs at the top of the dialog. Clicking each of them will show a different selection of controls in the rest of the dialog. This is a very good example of how to use a tab control to group related information together, making it easier for the user to find the information s/he is looking for.
Using the tab control is very easy. You simply add the number of tabs you want to display to the control's TabPages collection and then drag the controls you want to display to the respective pages.
At runtime, you can navigate the tabs through the properties of the control.
The properties of the TabControl are largely used to control the appearance of the container of TabPages, in particular the tabs displayed:
Alignment
Controls where on the tab control the tabs are displayed. The default is at the top.
Appearance
Controls how the tabs are displayed. The tabs can be displayed as normal buttons or with flat style.
HotTrack
If this property is set to true, the appearance of the tabs on the control changes as the mouse pointer passes over them.
Multiline
If this property is set to true, it is possible to have several rows of tabs.
RowCount
Returns the number of rows of tabs that are currently displayed.
SelectedIndex
Returns or sets the index of the selected tab.
TabCount
Returns the total number of tabs.
TabPages
This is the collection of TabPages in the control. Use this collection to add and remove TabPages.
The TabControl works a little differently from all other controls we've seen so far. When you drag the control on to a form, you will see a gray rectangle that doesn't look very much like the control in the screenshot as shown above. You will also see, below the Properties panel, two buttons that look like links with the captions Add Tab and Remove Tab. Clicking Add Tab will insert a new tab page on the control, and the control will start to be recognizable. Obviously, you can remove the tab with the Remove Tab link button.
The above procedure is provided in order for you to get up and running quickly with the control. If, on the other hand, you want to change the behavior or style of the tabs, you should use the TabPages dialog – accessed through the button when you select TabPages in the properties panel.
The TabPages property is also the collection used to access the individual pages on a tab control. Let's create an example to demonstrate the basics of the control. The example demonstrates how to access controls located on different pages of the tab control:
1. Create a new C# Windows Application project and name it TabControl.
2. Drag a TabControl control from the toolbox to the form.
3. Click Add Tab to add a tab to the control.
4. Find the TabPages property and click the button to the right of it after selecting it.
5. Add another tab page to the control by clicking Add.
6. Change the Text property of the tab pages to Tab One and Tab Two respectively.
7. You can select the tab pages to work on by clicking on the tabs at the top of the control. Select the tab with the text Tab One. Drag a button on to the control. Be sure to place the button within the frame of the TabControl. If you place it outside, then the button will be placed on the form rather than on the control.
8. Change the name of the button to btnShowMessage and the Text of the button to Show Message.
9. Click on the tab with the Text property Tab Two. Drag a TextBox control onto the TabControl surface. Name this control txtMessage and clear the Text property.
10. Return to the form by clicking OK in the dialog. The two tabs should look like these two screenshots:
We are now ready to access the controls. If you run the code as it is, you will see the tab pages displayed properly. All that remains for us to do to demonstrate the use of the tab control is to add some code so that when the user clicks the Show Message button on one tab, the text entered in the other tab is displayed in a message box. First, we add a handler for the Click event by double-clicking the button on the first tab and adding the following code:
private void btnShowMessage_Click(object sender, System.EventArgs e)
{
// Access the TextBox
MessageBox.Show(this.txtMessage.Text);
}
You access a control on a tab just as you would any other control on the form. We get the Text property of the TextBox and display it in a message box.
Earlier in the chapter, we saw that it is only possible to have one radio button selected at a time on a form (unless you put them in group boxes). The TabPages work in precisely the same way as group boxes and it is, therefore, possible to have multiple sets of radio buttons on different tabs without the need to have group boxes.
The last thing you must know to be able to work with a tab control is how to determine which tab is currently being displayed. There are two properties you can use for this purpose: SelectedTab and SelectedIndex. As the names imply, SelectedTab will return the TabPage object to you or null if no tab is selected, and SelectedIndex will return the index of the tab or –1 if no tab is selected.
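A small sketch (the control name tabControl1 is illustrative) of using these two properties to react when the user switches tabs:

```csharp
// Wired to the TabControl's SelectedIndexChanged event.
private void tabControl1_SelectedIndexChanged(object sender, System.EventArgs e)
{
    TabControl tc = (TabControl)sender;
    // SelectedTab is null when no tab is selected, so guard before using it.
    if (tc.SelectedTab != null)
    {
        this.Text = "Viewing " + tc.SelectedTab.Text +
                    " (index " + tc.SelectedIndex + ")";
    }
}
```

Here the form's caption is updated with the name and index of whichever tab the user just selected.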
In this chapter we visited some of the most commonly used controls when creating Windows Applications and saw how they can be used to create simple, yet powerful user interfaces. We discussed the properties and events of these controls and gave examples of their use.
The controls discussed in this chapter were:
In Chapter 14, we will be looking at more complex controls, such as menus and toolbars, and we will use them to develop Multi-Document Interface (MDI) Windows applications. We'll also demonstrate how to create a user control, which combines the functionality of the simple controls covered in this chapter.
This chapter is taken from "Beginning C#" by Marco Bellinaso, Ollie Cornes, David Espinosa, Zach Greenvoss, Jacob Hammer Pedersen, Christian Nagel, Jon D Reid, Matthew Reynolds, Morgan Skinner, Karli Watson, Eric White, published by Wrox Press Limited in September 2001; ISBN 1861004982; copyright ©. | http://www.codeproject.com/Articles/1465/Beginning-C-Chapter-13-Using-Windows-Form-Controls?msg=4031178 | CC-MAIN-2014-42 | refinedweb | 20,707 | 63.19 |
For questions, use instead! This chat is for informal discussions and debugging sessions with developers of the HoloViz.org tools (HoloViews, Datashader, Panel, hvPlot, GeoViews, Param, etc.), but user questions should go to the HoloViz Discourse site so that others won't have to ask them again.
I have a pn.Tabs layout with many elements. By using
dynamic=True, I can get the panel object to display very quickly (under 2 seconds). Each tab has reasonable responsiveness too. However, construction of the panel object takes more than 30 seconds. That's a cumbersome startup time for each person that visits my panel app. Is it possible to serialize/deserialize the panel object to pickle for faster loading? Or is there an alterative way to pre-construct the object and only render it when someone uses the app?
Hi @jbogaardt . The right place to post usage questions is on Discourse. Then the answer is recorded and shared with the community. Please post it here.
If possible please post a minimal example or a link to code that can be run. Potentially also include some screenshots or a .gif video. It is sometimes very difficult sitting on the other side and trying to guess what the problem can be and how to solve it.
Thanks.
@michaelaye Where did you find that showcase link? Was it from google?
Google "holoviews tutorials" first hit is and Showcase is the first link on that page.
import panel as pn

pn.extension(
    js_files={'textae': ""},
    css_files=[""]
)

pane = pn.panel("""
<link rel="stylesheet" href="" />
<script src=""></script>

<div class="textae-editor">
{
   "text": "Hello World!",
   "denotations": [
      {"span": {"begin": 0, "end": 5}, "obj": "Greet"},
      {"span": {"begin": 6, "end": 11}, "obj": "Object"}
   ]
}
</div>
""")
pane
Calling the .opts method with options broken down by options group (i.e. separate plot, style and norm groups) is deprecated. Use the .options method converting to the simplified format instead or use hv.opts.apply_groups for backward compatibility.. My simple opts are
plot = plot.opts(shared_axes=False).opts(opts.Curve(width=400)). Can anyone advise? | https://gitter.im/pyviz/pyviz?at=5ee5af4c035dfa12611cb299 | CC-MAIN-2021-17 | refinedweb | 333 | 61.12 |
Comments serve as a sort of in-code documentation. When inserted into a program, they are effectively ignored by the compiler; they are solely intended to be used as notes by the humans that read source code.
All comments are removed from the program at translation phase 3 by replacing each comment with a single whitespace character.
C-style comments are usually used to comment large blocks of text or small fragments of code; however, they can be used to comment single lines. To insert text as a C-style comment, simply surround the text with /* and */. C-style comments tell the compiler to ignore all content between /* and */. Although it is not part of the C standard, /** and */ are often used to indicate documentation blocks; this is legal because the second asterisk is simply treated as part of the comment.
Except within a character constant, a string literal, or a comment, the characters /* introduce a comment. The contents of such a comment are examined only to identify multibyte characters and to find the characters */ that terminate the comment. C-style comments cannot be nested.
Because comments are removed before the preprocessor stage, a macro cannot be used to form a comment and an unterminated C-style comment doesn't spill over from an #include'd file.
/* An attempt to use a macro to form a comment. */
/* But, a space replaces characters "//". */
#ifndef DEBUG
#define PRINTF //
#else
#define PRINTF printf
#endif
...
PRINTF("Error in file %s at line %i\n", __FILE__, __LINE__);
Besides commenting out, other mechanisms used for source code exclusion are:
#if 0
puts("this will not be compiled");
/* no conflict with C-style comments */
// no conflict with C++-style comments
#endif
and
if(0) {
    puts("this will be compiled but not be executed");
    /* no conflict with C-style comments */
    // no conflict with C++-style comments
}
The introduction of // comments in C99 was a breaking change in some rare circumstances:
a = b //*divisor:*/ c + d;
/* C89 compiles a = b / c + d;
   C99 compiles a = b + d; */
#include <stdio.h>

/* C-style comments can contain
   multiple lines. */
/* Or, just one line. */

// C++-style comments can comment one line.
// Or, they can
// be strung together.

int main(void)
{
   // The below code won't be run
   // puts("Hello");

   // The below code will be run
   puts("World");
}
Output:
World
© cppreference.com
Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0. | https://docs.w3cub.com/c/comment/ | CC-MAIN-2020-40 | refinedweb | 400 | 62.68 |
fontcharset.serif.0=koi8.converter.CharToByteKOI8_R
fontcharset.sansserif.0=koi8.converter.CharToByteKOI8_R
fontcharset.monospaced.0=koi8.converter.CharToByteKOI8_R
fontcharset.dialog.0=koi8.converter.CharToByteKOI8_R
fontcharset.dialoginput.0=sun.io.CharToByteKOI8_R

which tell the JVM how to convert. This is supposed to be done in the package koi8. This package is not written yet: I still can't scratch time to get it done. If anyone wants to do it before me, please, drop me a line. I would like this to be implemented in the following way:
package koi8.converter;

import sun.io.CharToByte8859_1;

public class CharToByteKOI8_R extends sun.io.CharToByte8859_1 {

    public boolean canConvert(char ch) {
        ...
    }

    public int convert(char[] input, int inStart, int inEnd,
                       byte[] output, int outStart, int outEnd)
        throws ConversionBufferFullException {
        ...
    }

    public String toString() {
        return "KOI8_R";
    }
}

I was not able to write this on the fly because I could not find how sun.io is implemented, and I was told that Sun will not release source code for this package.
Puppy gets a Pet action and no attack but cats and butterflies are still Attack
Katie Russell
2013-06-03
Kimmo Rundelin
2013-09-13
The puppy (ab)uses the custom menu system to override the default menu items for RPEntities. The other PassiveNPCs can use the same trick if there are nice alternate actions for them.
Here's the current situation:
The cats in Felina's house are PassiveNPCs and they already have a custom menu entry - "Own", which makes them non-attackable.
The pet cats are instances of a different class - Cat. They're not PassiveNPCs and have no custom menu entry, so they're attackable.
Attackable PassiveNPCs are butterflies (Semos) and fishes (Ados).
There's not much one could do to a fish (... maybe disturb it, and make it swim the other way?) or a butterfly. So adding a custom menu entry is not always easy.
There's a check in Entity2DView.buildActions():

if (!entity.getRPObject().has("menu")) {
    if (entity.isAttackedBy(User.get())) {
        list.add(ActionType.STOP_ATTACK.getRepresentation());
    } else {
        list.add(ActionType.ATTACK.getRepresentation());
    }
}
Maybe we could replace this "has-menu" condition with something more meaningful, like a NotAttackable marker interface. Something like:

public interface NotAttackable { }

public class Cat extends Pet implements NotAttackable

public class PassiveNPC extends NPC implements NotAttackable
and then check this way:
if (!(entity.getRPObject() instanceof NotAttackable))
What do you guys think?
Hendrik Brummermann
2013-12-01
Although this is more work and requires compatibility code:
I think it may be a good idea to let the server decide to add an Attack menu.
A quick idea (without looking too deep into the code) is to extend the "menu" attribute to contain a list of all menu items. The client will still need to do some situation specific processing, e. g. convert "Attack" into "Stop Attack".
This way the client will become more generic. And it is one step forward on untangling the various aspects of Sheeps, Pets, Creatures, SpeakerNPCs, PassiveNPCs on the server side in the future.
Anonymous | http://sourceforge.net/p/arianne/bugs/5706/ | CC-MAIN-2014-23 | refinedweb | 335 | 57.47 |
Plotting Data From Arduino
59,359
63
14
Arduino doesn't have much in terms of debug and analysis capability built in, so it can be very useful to have a facility to plot data that's sent from Arduino over the Serial Port.
There are several ways to do that including Processing, Python + Matplotlib etc.. but none of these methods work effectively with very little setup and offer expected features such as Zoom, Scroll, Write to File, Save Setup etc.
I want to show you how to produce the kinds of plots illustrated in my Frequency Detection Instructable.
The Software I used to produce those plots is Bridge Control Panel.
It's released as a part of Cypress Semiconductor's PSOC Programming Utilities.
==============
By the way. If you like this Instructable, you might also like to read My Blog covering various projects and Tutorials.
==============
Step 1: Bridge Control Panel
You need to write the Arduino Data over the Serial Port one byte at a time. For an int data type that would look as follows:
// RX8 [h=43] @1Key1 @0Key1 Serial.print("C"); Serial.write(data>>8); Serial.write(data&0xff);We Signal that we're sending a data byte then send each byte of the int.
The Syntax for the Bridge Command is shown in the comment and in the Picture of the Bridge Window.
Notice we have successfully connected to Arduino on COM6.
The command to Read Data is: RX8 [h=43] @1Key1 @0Key1
RX8 is the read command
[h=43] means the next valid byte is "C" in ASCII
then the High Byte of Key1
then the Low Byte of Key1
[Chart -> Variable Settings] panel is illustrated in the Picture.
In here you need to tell Bridge Key1 is Type:int , signed. The TICK means this is in use.
If you have made any mistakes here the command in the Editor Window will show the Errors in BLUE.
Finally you need to setup the comm's protocol in [Tools -> Protocol Configuration F7]
Make sure the Baud Rate matches the one specified in Arduino.
To Capture Data press REPEAT in the Window.
Step 2: Plots
That's it.
Enjoy making great plots and enjoy a useful debugging tool.
You can also SAVE data for further processing by MATLAB or EXCEL etc..
14 Discussions
4 years ago on Introduction
I made an equivalent tool in Python that prints real-time data from an ADXL345 accelerometer....
Maybe it will be helpful for someone
Reply 3 years ago on Introduction
Thank You!
You're a saviour
4 years ago on Introduction
For multiple values just add bytes. If you want to send two 16 bit words for example:
RX8 [h=43] @1Key1 @0Key1 @1Key2 @0Key2
Remember to select Key 2 as Active in {Chart -> Variable Settings} and set that data as having the appropriate formatting.
In Arduino:
Serial.print("C");
Serial.write(data1>>8);
Serial.write(data1&0xff);
Serial.write(data2>>8);
Serial.write(data2&0xff);
Reply 4 years ago
This software is no longer available. Can you upload it again?
Reply 4 years ago on Introduction
I just checked that the software is still available under that link.
1 year ago
sir how to connect arduino to the bridge control?
2 years ago
Hello
I'm using the following Adafruit sketch to read and plot data from a thermocouple based on the MAX31855 device, using Bridge Control Panel.
I did receive a plot, set X axis as time and Y axis as temperature readings, but instead of temperature readings I'm getting a triangle waveform with amplitude varying from 30 to 7710 (see attached images); the values represented in the Bridge Control Panel TABLE capture look okay to me. Is it possible to adjust the Y axis temperature readings scale to get scaled temperature readings? Or is there something wrong in my data readings?
Source h and cpp files :
Thank You
#include <SPI.h>
#include "Adafruit_MAX31855.h"
int maxSO = 5;
int maxCS = 6;
int maxSCK = 7;
int abc;
//Create a MAX31855 reference and tell it what pin does what
Adafruit_MAX31855 kTC(maxSCK, maxCS, maxSO);
void setup() {
Serial.begin(9600);
// The MAX31855 needs a little time to stabilize
delay(500);
}
void loop() {
Serial.print("C = ");
Serial.println(kTC.readCelsius());
abc= int(kTC.readCelsius());
delay(1000);
Serial.print("C");
Serial.write(abc>>8);
Serial.write(abc&0xff);
}
2 years ago
If you are looking for a general purpose solution, you can also check out my project on Github
I was tired of reinventing the wheel for serial so I figured out why not make something clean once and for all, well tested, etc.
The arduino library is highlevel, allows to write any kind of data (string, numbers, arrays) and is super easy to use. For PC side, I wrote a command-line interface (a terminal where you type commands), where, once connected, you can open as many plot as you want just using a command, selecting the received data by their label (each data you send is attached to a label). You can also write variables from the terminal, display a list of received data, log everything, etc.
This is just scratching the surface, there are many goodies in this library and the list is growing every day :)
3 years ago
I also am having trouble getting 'Bridge Control Panel' to show output. I want to use it to draw graphs. I have read the complete Help documentation that comes with it.
I have a (Cypress PSoC based) device connected to Windows COM5, and I know it is streaming my data, with a header of 'C' followed by binary bytes, which I can verify in TeraTerm or other serial emulators. I have set the serial parameters to the same as yours: RX8, 115200, 8N1.
Bridge reports 'Syntax: OK' and 'Connected', I have the variables Chart Variables as your example. But nothing shows when I hit 'Repeat' or first highlight the command and hit 'Repeat'
1) to get a real basic hex dump of the serial port in Bridge Control, shouldn't I be able to use a command like: 'RX8 @0Key1' and set the chart variable 'Key1' to Byte. Could you possible post a step by step procedure for a simple hex dump with Bridge Control Panel?
2) Can you explain why only the 'Repeat' and 'ToFile' are enabled, but not 'Send' ?
3) Does the software on the device have to wait to receive a command from Bridge? Or can it just be streaming out the data?
4) Is it possible in RX8 mode to send a command to the device to initiate the stream?
5) Your example does not show any incoming data in the 'Results' window. In my attempts I saw some lines of hex values there 3 times, but cannot consistently get any.
3 years ago
I'm using this sketch to read Thermistor data (... but I've problems plotting the data with Bridge Control.
I modified the code a bit, adding an int Temp_copy and the last two Serial.write lines at the end of the sketch, but I don't see anything in Bridge Control.
Please help.
Thanks
#include <math.h>
int Temp_copy;
double Thermistor(int RawADC) {
double Temp;
Temp = log(10000.0*((1024.0/RawADC-1)));
// =log(10000.0/(1024.0/RawADC-1)) // for pull-up configuration
Temp = 1 / (0.001129148 + (0.000234125 + (0.0000000876741 * Temp * Temp)) * Temp);
Temp = Temp - 273.15; // convert Kelvin to Celsius
return Temp;
}
void setup() {
Serial.begin(9600);
}
void loop() {
Serial.println(Thermistor(analogRead(0))); // display Fahrenheit
Temp_copy = int(Thermistor(analogRead(0)));
Serial.println(Temp_copy);
delay(100);
Serial.write(Temp_copy>>8);
Serial.write(Temp_copy&0xff);
}
Reply 3 years ago
I don't think Serial.println(Temp_copy); is the correct thing to do. This will get sent to the Bridge Control Panel and won't mean anything to it. I think it should be:
Serial.print("C");
Serial.write(Temp_copy>>8);
Serial.write(Temp_copy&0xff);
The C character is important because that's what Bridge is looking for to tell it that a number is coming next.
4 years ago on Introduction
Hey
Great instructable, very helpfull.
Would you be able to tell me how to send a float data type to Bridge Control Panel? I tryed but no luck :(.
Reply 3 years ago on Introduction
I've tried to send float data type to BCP as well with no luck. Have you managed to figure it out? Cheers.
Reply 3 years ago
Send as an INT by multiplying by your required precision e.g. int( float_val*1000) | https://www.instructables.com/id/Plotting-Data-From-Arduino/ | CC-MAIN-2018-51 | refinedweb | 1,389 | 63.29 |
I have got a car.csv, which has lines like this:
vhigh,vhigh,2,2,small,low,unacc
and I would like to read each line into a list like [vhigh,vhigh,2,2,small,low,unacc].
import csv
a = []
with open("car.csv", 'r') as f:
reader = csv.reader(f)
for line in f:
a.append([line]);
But each appended element comes out as the whole raw line in a single string: ['vhigh,vhigh,2,2,small,med,unacc\n'].
When you create the reader, iterate over that instead of over the file so do:
for line in reader:
    a.append(line)
or if you just want all the lines as a list
a = list(reader)
putting that all together:
with open("car.csv", "r") as f:
    reader = csv.reader(f)
    a = list(reader)
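To see the difference end-to-end without a file on disk (the sample rows are made up), note that csv.reader already splits each line into a list of fields:

```python
import csv
import io

# In-memory stand-in for car.csv; the rows are made-up samples.
data = "vhigh,vhigh,2,2,small,low,unacc\nhigh,med,3,4,big,high,acc\n"

rows = list(csv.reader(io.StringIO(data)))
print(rows[0])
# → ['vhigh', 'vhigh', '2', '2', 'small', 'low', 'unacc']
```

Iterating over the file object instead of the reader is what produced the unsplit strings in the question.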
Failure test based on whether the norm of a vector has a finite value. More...
#include <NOX_StatusTest_FiniteValue.H>
Failure test based on whether the norm of a vector has a finite value. If the norm is not finite, the returned status is NOX::StatusTest::Failed.
The finite number test algorithm.
Autoconf will test to see if the compiler implements the isnan() and isinf() functions in the cmath or math.h headers. If so, we will use these. If not, we supply a default implementation. The default implementation is only guaranteed to work if the code is IEEE 748/754 compliant. The checks for isnan and isinf are separate because compilers like the old sgi platforms support one but not the other. See bug 2019 for more details.
This method is public so that other objects (solvers, line searches, and directions) can use this test on their own values.
Return Values:
Referenced by NOX::LineSearch::Polynomial::checkConvergence(), and NOX::LineSearch::Backtrack::compute(). | http://trilinos.sandia.gov/packages/docs/dev/packages/nox/doc/html/classNOX_1_1StatusTest_1_1FiniteValue.html | CC-MAIN-2014-15 | refinedweb | 152 | 58.58 |
ui.load_view() fails with 'UI could not be loaded' (Custom Action Advanced)
Hi all,
just started on my Jekyll post settings editor and I'm kinda lost now, because it fails to load my view and I don't know what else to try. Here's the link to the unlisted (for now) workflow.
What's hapenning ...
1st scenario
- start Editorial from scratch
- try to run this workflow
- fails with UI could not be loaded
2nd scenario
- start Editorial from scratch
- just for fun, run Calculator workflow (included within Editorial)
- try to run my workflow
- Calculator UI appears with ton of messages about not working bindings, ...
Anyone have an idea what's going on?
iPad Air 2 & Editorial 1.1.1
If I move my stuff under Run Python Script (without input parameters), it does work. It just doesn't work if it's within Custom Action (Advanced).
Here's a rewritten version which does work perfectly (not Custom Action, just Run Python Script). Anyway I'm still curious what's wrong with the first version (OLD ...).
I can not resolve your issue but I like your code... In parse_file_settings() you write:

string.find(content, SEPARATOR)
# which can be rewritten
content.find(SEPARATOR)
This is because content is already a str so it has a .find() method. This way is shorter, easier to read, and means that you no longer need to import string.
However, I sense that parse_file_settings() could be made even simpler thru the use of:
before, _, after = content.partition(SEPARATOR)
str.partition() really simplifies finding text between separators.
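For instance (the content string here is made up), partition splits on the first occurrence of the separator and always returns all three pieces:

```python
# Split a string at the first occurrence of a separator.
content = "front matter---\nbody of the post"
before, sep, after = content.partition("---\n")

print(before)  # → front matter
print(after)   # → body of the post
```

If the separator is absent, partition returns the whole string followed by two empty strings, so the unpacking never raises.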
In update_post_settings() you write:

output = '---\n'
for name, value in settings.iteritems():
    output += '%s: %s\n' % (name, value)
output += '---\n'

# which could be rewritten
out_list = ['%s: %s' % (name, value) for name, value in settings.iteritems()]
output = '---\n%s\n---\n' % '\n'.join(out_list)
The first line is a list comprehension which is really fast and it avoids doing string concatenation which is slow in Python. The second line joins the strings instead of concatenating them which is much faster. These improvements will help as settings get longer but will be of negligible value on short files.
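Here is that rewrite runnable on its own (Python 3 spelling, so .items() replaces .iteritems(); the settings values are invented):

```python
# Hypothetical Jekyll-style front-matter settings.
settings = {"title": "Hello World", "layout": "post"}

# Build each "name: value" line, then join once instead of concatenating.
out_list = ['%s: %s' % (name, value) for name, value in settings.items()]
output = '---\n%s\n---\n' % '\n'.join(out_list)
print(output)
```

The join runs in a single pass over the pieces, whereas repeated += on a string can copy the accumulated text on each iteration.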
Thanks @ccc for your comments. I updated the workflow. Skilled asm, C, C++, Swift, Obj-C, ... guy, but total newbie in Python :-)
But back to my original issue. I just tried to:
- delete OLD - RV: Jekyll Edit Post Settings
- install it again
- run Calculator for fun
- run OLD - RV: Jekyll Edit Post Settings
Still same problem - Calculator UI appears in my sheet, because UI can't be loaded. Here's the video of how it behaves. | https://forum.omz-software.com/topic/1729/ui-load_view-fails-with-ui-could-not-be-loaded-custom-action-advanced | CC-MAIN-2018-05 | refinedweb | 433 | 65.62 |
DENAIR, Calif. --.
Any society that would give up a little liberty to gain a little security will deserve neither and lose both.” B. Franklin
What in the heck? Isn't this the US of America where flying the flag is a show of patriotism?
All these overboard politically correct Boobs need to be holstered.
At least if they all wore bras on their heads, we would know in advance what type of idiocy is going to spill out.
Though, like the other type, they can be fun at parties.
Too late, much too late.
At last, a thing upon which we agree.
So should the hispanic kids be able to fly their Mexican flags? Or should all kids be held to the same standard?
Freedom means Freedom for all. Anyone can fly whatever flag they want.
Look at the Rebel flag!
From reading the article it looks like there have been some ethnic tensions in the school - maybe even fights. Was the boy trying to cause trouble with his flag? Or simply trying to show his patriotism? Or trying to cause trouble and calling that patriotism?
There's probably more to the story that we aren't seeing. If I was an administrator, in charge of my students safety, and there had been fights between different groups of kids, and I saw a student do something that might lead to another fight, I would sure as heck do something to keep a fight from starting.
If I didn't, and some kid got hurt in a fight, I'd be responsible for his injury.
This flag was not within the school. It was on the boy's bike that he rode to school.
Parental responsibility, if a child is fighting and disrupting normal school activities then a suspension would be in order.
If the problems continues the offending student/s should be expelled.
Right, but if the student was expelled for fighting, would the headline read: "Student Expelled for Flying American Flag"?
You can't trust everything you read.
I watched the video on the article, the boy speaks his mind very well.
He unquestioningly removed the flag, he seems like a well intentioned young man.
And maybe I'm being unfair to him by saying he might have been trying to start a fight. But still, we are second guessing the person who's job it is to keep the peace at a public school. Even if the boy was completely well intentioned, he might have been causing a problem.
Don't trust the media - their job is to get you to click on a headline so that you will see their advertisements. If you write on Hubpages you should know this.
Makes sense to me so long as there actually is a problem and the rule is applied uniformly.
Then the problems the child was causing should have been addressed with the parents.
I do not believe the school even has the legal right to stop such behavior.
The kid had the falg on his bike for 2 months! If there was an issue wouldn't they have said something right away?
I heard an interview with the boy this morning. He said some of the other students complained. Maybe. Perhaps.
It was probably some of the teachers as well. After all, it is California.
Isn't flying the American Flag in America an expected act of patriotism?
Wouldn't flying the American Flag in America be considered to be covered under the First Amendment as Freedom of Speech?
Isn't America the logical place to fly the American Flag?
What does flying the American Flag in America have to do with Ethnic Tension?
Isn't it up to the school board to handle problems with Ethnic Tension, at the schools, while enforcing freedoms guaranteed by the American Constitution? (because, after all, we are in America)
If anyone complains about an America Flag being flown, out in public,.....
Maybe they should not in fact live in America?
Oh, Cags, you are SO right! We have Mexican restaurants in town that fly or display the Mexican flag, and I don't have a problem with that. Our US society is getting ridiculous!
Hey Habee, "ridiculous" isn't even close enough of a word I would use to describe what is happening in this F**king Country.
It's not our society, it's the socialist liberal leftist democrats that hate America!
Well who are the ones that always call for this? It's not the right! It's always the left, the pc crowd, apologists for America! God forbid we should insult some illegal immigrant with our flag! Oh no! We can't do that!
It's too many people who don't believe in America and it's spreading across this country. It's NOT just the politicians. Got it!
Did I say it was just politicians? It's liberals, all of em socialists America haters!
Liberals? Nice label.
Freedom is what matters, and those who choose to ignore it are the problem.
It's truly amusing that you label the people who work the hardest to advance what is best about American society as "America haters".
I suppose only the sheep who bleat the right-wing pap are patriots in your mind?
I love how you promote the left as "advancing what is best for American society", as if somehow the left is endowed with some special intelligence, or intuitiveness that allows them to determine what's best for everyone (collectively as a group) and being so secure in that knowledge they can't imagine how any one individual might be harmed by their vision!
gzmflakdoomorphly?
That's the typical response from the radical socialist liberal left democrats when confronted with common-sense arguments about their arrogant and foolish policies.
As soon as you bring one to the table...
Can you prove that with just one common sense argument?
lady love, you have some major issues...
as far as the boy with the flag, ridiculous. I'm sure there's an American flag in each classroom. I think the administration missed it here.
Not every classroom in America has a flag. Since public schools are publicly funded they should be required to honor all the symbols of the Republic.
Uncorrected:
Give us an example, pls, of another symbol of the "republic." TY
They do in Florida; it's a law. There's a flag in every public classroom, K-20. Even publicly funded, this is the US of A; this is the country we live in. If I lived in France, I would expect to see a French flag. I don't find that too terribly hard to understand.
I do! Nothing that can't be fixed by un-indoctrinating liberals of their socialist leanings! I'm doing my part to educate America, in order to "keep the republic" that Ben Franklin alluded to in his famous quote.
Now that is just weird.
I checked out Denair to see if something would jump out as the reason for this insanity.
Admittedly the census info is 10 years old, but the demographics all check out to being more than tolerant of flag flying (overwhelmingly white, represented by Republicans). It's quite possible, however, that in the ensuing 10 years the population has shifted to more Hispanic. However, if that is the cause of the tension, then I would have to say the school isn't handling this very well.
I mean it's not like the kid is flying an Aryan Brotherhood flag, right???? And last I checked, this IS still America. (Yes, even California is still considered part of the United States!)
I have a shirt that displays the Scottish flag - my ancestors came from Scotland. Guess I wouldn't be allowed to wear the shirt!
Just hope you don't have to go to a parent/teacher conference, you might not be allowed in wearing such a shirt.
Only if your clan was going all William Wallace on the rest of the students.
Too bad. This school really missed an opportunity to teach why we celebrate Veterans' Day. The spokesperson for the school sounds ridiculous.
Anyone needing evidence that our schools are failing our children... read this article.
It started with too much misinformation, political twists to stories(disregarding truth) and gullible people willing to not stand up for their rights.
It's unfortunate, many people forget that their freedom stems from those who protect this country.
Here ya go, Habee!
LOL!! It would take all three of those tops to cover the twins, along with at least two of the bottoms to cover my butt!
Wrong country, Mighty Mom; those are Mexican flag bikinis, not Scottish.
MOM, I SALUTE THEM...I MEAN YOU!!!! THANK YOU!!!!
I know those are Mexican flag-tinis. I was trying to combine the issue here in Denair with Habee saying they fly Mexican flags in Florida too (they do here).
I didn't miss that she said she's Scottish.
Phrase of the day: "Don't mess with me or I'll go all William Wallace on your ass!" Love it!
I was at the gym working out when I looked out the window and saw an employee of the gym dragging our flag across the dirty sidewalk. He was supposed to "fly" it. It was the birthday of the US Marines.
I am a USAF vet. There were two young US Army soldiers working out. We all saw it at the same time.
I looked at the "kids" and said, "Uh uh, we have to stop that!"
The three of us went outside and told the fellow to treat that flag with the respect it deserves. Keep it off the ground! He looked at us like we were idiots, but picked the flag up and folded it into his arms.
There is "trouble-in-river city."
If you are an American, how can you disrespect the flag, which represents all who have given their lives to ensure its purpose, i.e., to "fly" over the land of their birth?
That school kid should fly that flag anywhere he desires! In any school or classroom that is built on American soil!
Any who don't like it should pack up and leave the USA!
Qwark
The flag is an important symbol of what we stand for (some fight and die for). Citizens through their actions and advocacy protect it.
Burning or otherwise desecrating the flag is a form of free speech. The Constitution protects that.
This sort of thing just proves how unfathomably stupid huge chunks of America are getting (reminds me of the stuff I wrote about in my America Needs an Enema hub). The schools are the worst. The crap that goes on in our schools is so unfathomably backwards and common-sense defying that it is hard to wrap one's head around. This is the mindset of a broken, weak, trembling nation of cowards and appeasers. I freaking HATE watching the country erode like this. It's like nobody is thinking at all anymore. We focus on EVERY wrong thing and ignore what is ACTUALLY important over and over.
I don't see a single person disagreeing here, really.
I did like the story of Qwark and his serviceman workout buddies teaching respect for our flag. Way to go.
I guess if we're not even flying the American flag in public school classrooms anymore we can't reasonably expect our young people to know how to treat it, can we???
The flag is flown in every classroom in GA - at least in all the rooms I've seen. We say the pledge every morning, too. Kids don't have to join in if they don't want to, however. Over my years of teaching, I had only a handful of kids who didn't participate. Of these, probably 90% of them were American whites and blacks - not the Asian, Muslim, or Hispanic students I taught.
What ever happened to "RESPECT" here in America? Not only for our American Flag but for all else as well.
What, like respect for the Vets who were prepared to die for their country but were cast aside, unwanted, after they'd made that sacrifice?
nothing like demanding that everyone respect you for invading a foreign country for some wussy president's greed.
Take me out of context, I don't care!
I was just wondering how a country could show more respect for a coloured square of cloth than it does for citizens who, by their lights, served their country well.
Nothing to do with the morality, or lack, of invading other countries.
There really is nothing like an opinion planted in the soil of liberty and watered with the blood of patriots to grow flowers that stink like death. What a rarefied air you must breathe to put out such a pungent fragrance.
But we can't be wasting our precious tax dollars on better care for these people!
Some of them might be malingerers. Better to give as little as possible than run the danger of someone getting something undeservedly. We do the same with all our social programs and that is what makes us the greatest country on earth. Sort of. Unless you actually measure stuff, but as long as there are flags to wave, none of that matters.
Thanks, yes I do see your point, it could involve fewer tax cuts for the wealthy and then where would you all be?
I mean, isn't it disgusting when you see some work-shy malingerer
who prefers to sleep on the pavement using your national flag as a cover when there are plenty of penthouse apartments they could buy if they could just be bothered?
We laughed, — knowing that better men would come,
And greater wars: when each proud fighter brags
He wars on Death, for lives; not men, for flags.
(Wilfred Owen)
I like it as well.
That's good. And Owen's point, of course, is that flags, Kings' standards, etc. have been used throughout the ages as substitutes for true justification for carnage.
Ugh.. .I'll probably get banned again for typing this:
There haven't been many veterans worth saluting for QUITE some time; the only justifiable war that the US has embarked upon was the Revolutionary War...
Either way, the fact that SCHOOL officials are having a student take down an image of their flag shows you the utter lunacy of government-mandated schooling. Only the government could make such an idiotic request of a student. I'm sure that some time in the same week, a teacher at that school had told a student that freedom of speech is important.
No, you probably won't get banned, but you should, since you pretty much attacked every person in this country with those stupid, cruel, and incredibly moronic remarks.
What an idiot--I used to think you had some valid points in your arguments...not anymore..
Was there not enough disagreement and argument in the thread?
Seriously??????????
Old Glory
Jon Townsend, Former Green Beret Captain.
I'm glad that I was able to raise my sons to know their grandfather who served in WWII. What an ignorant, idiotic, insensitive remark to make in a public forum. You must be starved for attention. I wonder if you would make such a statement if you were standing in a room with all of us.
I wonder that, too...
I lost my grandfather from exposure to mustard gas in WWI...my father-in-law carried shrapnel in his back from WWII until the day he died...the list could go on and on...
So Evan--Tell me to my face--not on a damn computer screen- that the men or women who have served our country didn't deserve to be saluted...and respected until their dying day.
Tell anyone...I dare you.
Here's the story: … ndence-Day
Are we really surprised given the President and First Lady had already publicly shown their derision for the flag?
Creating and Publishing an Android Library
Introduction
Our lives as Android developers would be a lot harder if not for all those third-party libraries out there that we love to include in our projects. In this tutorial, you will learn how to give back to the developer community by creating and publishing your own Android libraries, which people can effortlessly add and use in their projects.
1. Creating an Android Library
If your library is going to be composed of only Java classes, packaging it as a JAR and distributing it using a file host is perhaps the quickest and easiest way to share it. If you were to create it from the console, the following command would suffice:
jar cvf mylibrary.jar Class1.class Class2.class ... ClassN.class
This tutorial, however, shows you how to work with more complex libraries that contain not just Java classes, but also various types of XML files and resources. Such libraries are created as Android library modules and are usually packaged as AAR files.
Let’s create a simple Android library that offers a custom View to developers who use it.
Step 1: Add a New Module
To begin, add a new Android module to your project by selecting New > New Module from the File menu. You will be shown the following screen, which offers lots of choices:
Select Android Library and press Next. In the form that follows, enter a name for your library and press Next. I’ll be calling this library mylittlelibrary.
In the last screen, select Add no Activity and press Finish.
Your project will now have two modules, one for the app and one for the library. Here’s what its structure looks like:
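For reference, when the wizard finishes, Android Studio also registers the new module in the project's settings.gradle. A sketch of what that file would then contain, using the module names from this tutorial (the exact file Android Studio generates may differ slightly):

```groovy
// settings.gradle (sketch) — both modules registered in the project
include ':app', ':mylittlelibrary'
```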
Step 2: Create a Layout
Create a new layout XML by right-clicking on the res folder of your library module and selecting New > XML > Layout XML File. Name it my_view.xml.
To keep this tutorial simple, we’ll be creating a custom View that has two TextView widgets inside a LinearLayout. After adding some text to the TextView widgets, the layout XML file should look like this:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <!-- Attribute values were lost in this copy and are reconstructed; adjust as needed -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="World" />

</LinearLayout>
Step 3: Create a Java Class
Create a new Java class and name it MyView.java. Make sure to put this file in the src directory of the library module–not the app module.
To make this class behave as a View, make it a subclass of the LinearLayout class. Android Studio will prompt you to add a few constructors to the class. After adding them, the new class should look like this:

public class MyView extends LinearLayout {

    public MyView(Context context) {
        super(context);
    }

    public MyView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }
}
As you can see, we now have two constructors. To avoid adding initialization code to each constructor, call a method named initialize from each constructor. Add the following code to each constructor:
initialize(context);
In the initialize method, call inflate to associate the layout we created in the previous step with the class.

private void initialize(Context context) {
    inflate(context, R.layout.my_view, this);
}
2. Using the Library Locally
Now that the library is ready, let’s make use of it in the app module of the same project in order to make sure that there are no issues. To do so, add it as a compile dependency in the build.gradle file of the app module:
compile project(":mylittlelibrary")
Create a new Java class, MainActivity, inside the app module. Make it a subclass of the Activity class and override its onCreate method.

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }
}
Inside the onCreate method, create an instance of the custom view using its constructor. Pass it to the setContentView method so that it fills all the screen space of the Activity:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    View v = new MyView(this);
    setContentView(v);
}
Your Activity is now ready. After adding it to the app manifest, build your project and deploy your app to an Android device. You should be able to see the custom view when the app starts.
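The step above only mentions adding the activity to the app manifest, so here is a hedged sketch of what that registration could look like; the intent-filter makes it the launcher activity, and the attribute values are illustrative assumptions rather than text from the tutorial:

```xml
<!-- AndroidManifest.xml of the app module (illustrative sketch) -->
<application>
    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
</application>
```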
3. Publishing Your Library on Bintray
Bintray is a popular platform you can use to publish Android libraries. It is free and easy to use.
Start by creating an account on Bintray. After signing in to your account, you will see that you already own six repositories. You can either use one of them or create a new repository. For this tutorial, I will be using the repository called maven, which is a Maven repository.
Visit your profile page and click the Edit button. On the next page, click the API Key link to view your API key.
Make a note of the key, because you will be needing it to authenticate yourself when using the Bintray plugin.
Step 1: Add Necessary Plugins
To interact with Bintray in Android Studio, you should include the Bintray plugin in the dependencies of your project’s build.gradle file.
classpath 'com.jfrog.bintray.gradle:gradle-bintray-plugin:1.2'
Because you will be uploading the library to a Maven repository, you should also add the Maven plugin as shown below.
classpath "com.github.dcendents:android-maven-gradle-plugin:1.3"
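Putting the two classpath lines together, the buildscript block of the project-level build.gradle would look roughly like this; the repository entry is a typical default and an assumption on my part, not something the tutorial specifies:

```groovy
buildscript {
    repositories {
        jcenter() // assumed default repository; your project may differ
    }
    dependencies {
        classpath 'com.jfrog.bintray.gradle:gradle-bintray-plugin:1.2'
        classpath 'com.github.dcendents:android-maven-gradle-plugin:1.3'
    }
}
```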
Step 2: Apply the Plugins
Open the build.gradle file of your library module and add the following code to apply the plugins we added in the previous step.
apply plugin: 'com.jfrog.bintray'
apply plugin: 'com.github.dcendents.android-maven'
Step 3: Specify POM Details
The Bintray plugin will look for a POM file when it uploads the library. Even though the Maven plugin generates it for you, you should specify the value of the groupId tag and the value of the version tag yourself. To do so, use the group and version variables in your gradle file.

group = 'com.github.hathibelagal.librarytutorial' // Change this to match your package name
version = '1.0.1' // Change this to match your version number
If you are familiar with Maven and you are wondering why we didn’t specify the value of the artifactId tag, it is because the Maven plugin will, by default, use the name of your library as the artifactId.
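So, with the values used in this tutorial, the coordinate section of the generated POM should come out roughly as follows (an illustrative sketch, not the actual generated file):

```xml
<!-- Sketch of the generated POM's coordinates -->
<groupId>com.github.hathibelagal.librarytutorial</groupId>
<artifactId>mylittlelibrary</artifactId>
<version>1.0.1</version>
<packaging>aar</packaging>
```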
Step 4: Generate a Sources JAR
To conform to the Maven standards, your library should also have a JAR file containing the library’s source files. To generate the JAR file, create a new Jar task, generateSourcesJar, and specify the location of the source files using the from function.

task generateSourcesJar(type: Jar) {
    from android.sourceSets.main.java.srcDirs
    classifier 'sources'
}
Step 5: Generate a Javadoc JAR
It is also recommended that your library has a JAR file containing its Javadocs. Because you currently don’t have any Javadocs, create a new Javadoc task, generateJavadocs, to generate them. Use the source variable to specify the location of the source files. You should also update the classpath variable so that the task can find classes that belong to the Android SDK. You can do this by adding the return value of the android.getBootClasspath method to it.

task generateJavadocs(type: Javadoc) {
    source = android.sourceSets.main.java.srcDirs
    classpath += project.files(android.getBootClasspath()
            .join(File.pathSeparator))
}
Next, to generate a JAR from the Javadocs, create a Jar task, generateJavadocsJar, and pass the destinationDir property of generateJavadocs to its from function. Your new task should look like this:

task generateJavadocsJar(type: Jar) {
    from generateJavadocs.destinationDir
    classifier 'javadoc'
}
To make sure the generateJavadocsJar task only starts when the generateJavadocs task has completed, add the following code snippet, which uses the dependsOn method to order the tasks:

generateJavadocsJar.dependsOn generateJavadocs
Step 6: Include the Generated JAR files
To include the source and Javadoc JAR files in the list of artifacts, which will be uploaded to the Maven repository, you should add the names of their tasks to a configuration called archives. To do so, use the following code snippet:

artifacts {
    archives generateJavadocsJar
    archives generateSourcesJar
}
Step 7: Run Tasks
It is now time to run the tasks we created in the previous steps. Open the Gradle Projects window and search for a task named install.
Double-click it to run the tasks associated with the library module. Once it’s finished running, you will have everything you need to publish your library: a valid POM file, an AAR file, a sources JAR, and a Javadocs JAR.
Step 8: Configure the Bintray Plugin
To configure the plugin, you should use the bintray closure in your Gradle file. First, authenticate yourself using the user and key variables, corresponding to your Bintray username and API key respectively.
On Bintray, your library will reside inside a Bintray package. You should provide details about it using the intuitively named repo, name, licenses, and vcsUrl parameters of the pkg closure. If the package doesn’t exist, it will be created automatically for you.
When you upload files to Bintray, they will be associated with a version of the Bintray package. Therefore, pkg must contain a version closure whose name property is set to a unique name. Optionally, you can also provide a description, release date, and Git tag using the desc, released, and vcsTag parameters.
Finally, to specify the files that should be uploaded, set the value of the configurations parameter to archives.
This is a sample configuration:
bintray {
    user = 'test-user'
    key = '01234567890abcdef01234567890abcdef'

    pkg {
        repo = 'maven'
        name = 'com.github.hathibelagal.mylittlelibrary'

        version {
            name = '1.0.1-tuts'
            desc = 'My test upload'
            released = new Date()
            vcsTag = '1.0.1'
        }

        licenses = ['Apache-2.0']
        vcsUrl = ''
        websiteUrl = ''
    }

    configurations = ['archives']
}
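One caveat about the sample above: hard-coding the API key means it ends up in version control. A common pattern (my suggestion, not part of the original tutorial) is to load the credentials from an untracked local.properties file; the property names here are assumptions:

```groovy
// Load Bintray credentials from local.properties (keep that file untracked)
Properties props = new Properties()
props.load(project.rootProject.file('local.properties').newDataInputStream())

bintray {
    user = props.getProperty('bintray.user')
    key = props.getProperty('bintray.apikey')
    // ... pkg and configurations as shown above
}
```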
Step 9: Upload Files Using the Bintray Plugin
Open the Gradle Projects window again and search for the bintrayUpload task. Double-click it to begin uploading the files.
Once the task completes, open a browser to visit your Bintray package’s details page. You will see a notification that says that you have four unpublished files. To publish these files, click the Publish link.
4. Using the Library From Bintray
Your library is now available as a Bintray package. Once you share the URL of your Maven repository, along with the group ID, artifact ID, and version number, any developer can access your library. For example, to use the library we created, developers would have to include the following code snippet:
repositories {
    maven {
        url ''
    }
}

dependencies {
    compile 'com.github.hathibelagal.librarytutorial:mylittlelibrary:1.0.1@aar'
}
Note that the developer has to explicitly include your repository in the list of repositories before adding the library as a compile dependency.
5. Adding the Library to JCenter
By default, Android Studio searches for libraries in a repository called JCenter. If you include your library in the JCenter repository, developers won’t have to add anything to their repositories list.
To add your library to JCenter, open a browser and visit your Bintray package’s details page. Click the button labeled Add to JCenter.
You will then be taken to a page that lets you compose a message. You can use the Comments field to optionally mention any details about the library.
Click the Send button to begin Bintray’s review process. Within a day or two, the folks at Bintray will link your library to the JCenter repository and you will be able to see the link to JCenter on your package’s details page.
Any developer can now use your library without changing the list of repositories.
Conclusion
In this tutorial, you learned how to create a simple Android library module and publish it to both your own Maven repository and to the JCenter repository. Along the way, you also learned how to create and execute different types of gradle tasks.
To learn more about Bintray, visit Bintray’s user manual.
Source: Tuts Plus
Auto fill/correct blowing up editor/app
This was an old problem that sort of went away. I am working on a huge file (Pythonista 3, non-beta) and am seeing it again. I will type
mainView['tableview_prog_display']
As I type a period after the closing bracket to type in a method, Pythonista crashes. This is a valid ui.TableView subview. This is just one example. Seems to always occur when adding/changing an instance method or instance variable.
@polymerchm said:
mainView['tableview_prog_display]
Should that be:
mainView['tableview_prog_display']
with the extra gnyf ' ?
Yep. But that's not the issue.
@polymerchm I think this is fixed in the beta. Would you be willing to give that a try?
Was on the 2 beta and have switched over to 3, so yes.
@polymerchm I've sent a beta invite for Pythonista 3 to your forum account email address.
Sorry Ole. No change. Still blows up. I uploaded this to my GitHub: www://github.com/polymerchm/ccMVC.git. It's the progressions branch. You do need pypubsub installed. For example, line 705. Just try to erase the "hidden" and then re-type it. Poof.
@polymerchm Thanks, I'll look into it.
As a temporary workaround, @polymerchm you could disable completion...
import editor
editor._get_editor_tab().editorView().completionProvider=None
disables completions altogether for the current editor tab, or
editor._get_editor_tab().editorView().completionProvider().fallbackCompletionProvider=None
disables jedi, which is probably the problem, while leaving built-in module completion, import completion, etc.
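For anyone keeping this workaround in a reusable script, here is a hedged sketch (my own wrapper, not from omz) that applies the jedi-disabling tweak only when Pythonista's editor module and its undocumented _get_editor_tab internal are actually present, so the function simply reports failure elsewhere instead of raising:

```python
# Wrapper around the jedi-disabling workaround quoted above.
# "editor" is Pythonista-only; _get_editor_tab() is an undocumented internal API.
try:
    import editor
except ImportError:
    editor = None

def disable_jedi_completion():
    """Disable jedi-based completions for the current tab; return True on success."""
    if editor is None or not hasattr(editor, '_get_editor_tab'):
        return False  # not running inside Pythonista
    tab = editor._get_editor_tab()
    tab.editorView().completionProvider().fallbackCompletionProvider = None
    return True

print(disable_jedi_completion())
```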
I am presenting a paper at PADL 2011, in which I discuss the fundamental limitations of current Haskell parser combinator libraries like Parsec and UUParse. The limitations are inherent to the modelling of non-terminal references using direct recursion in a referentially transparent functional language like Haskell.
The basic problem is that because of the model used, an algorithm can only ever "look at" a finite top part of the infinite tree representation of the grammar, and the full grammar can never be observed. If you have ever wondered why libraries like Parsec or UUParse only ever use LL-style algorithms (sometimes with memoization or lazy computation of first sets), it is because only these algorithms can work with such a finite top part of a grammar. Bottom-up algorithms or grammar transformations (which are common in parsing literature and widely used in parser generators) cannot be implemented. Even printing a BNF representation of the grammar is fundamentally impossible.
In my paper, I show how a different representation of the recursion in the grammar allows for a full "view" on the structure of the grammar, while keeping the DSL very shallow. I demonstrate the additional power with implementations of a few grammar algorithms in the accompanying technical report. An elaborate implementation is available in the grammar-combinators library (featuring a packrat parsing algorithm, left-corner grammar transform, many grammar manipulation primitives, basic UUParse and Parsec compatibility etc.).
Details are in the paper. I'm very interested in feedback from the community. For those of you interested in playing with the code, there is a tutorial available.
Can you tell why you introduced parser combinators in your paper by inventing 7 symbols? Is this intended to increase the felt math factor in it?
The choice of symbols is of course not fundamental and I'm not 100% happy with the situation myself. In the paper, I (unfortunately) use two sets of operators: the UUParse operators when discussing that library and the operators from my grammar-combinators library. The reason I have not used the UUParse symbols (from Applicative and associated classes) in my library is technical, and partly explained here.
Have you tried sending this to any of your non-Haskell-using peers? It's unfortunate to see such work potentially ignored simply due to its presentation style.
Well, I'm presenting the paper at the non-Haskell-specific PADL conference, so I hope to find some interested people there. We are also considering writing a journal version of the work at some point, and then I plan to improve the notation.
Bottom-up algorithms or grammar transformations (which are common in parsing literature and widely used in parser generators) cannot be implemented. Even printing a BNF representation of the grammar is fundamentally impossible.
Parser combinators can easily express context-dependent grammars, an example was given with Parsec:
xmlContent = do
  (tagName,params) <- openTag
  content <- many xmlContent
  closeTag tagName
  return $ Node (tagName,params) content
I think you cannot use BNF for parser combinators.
This is a valid remark, but it does not invalidate my point. Even if parser combinators can actually do context-sensitive things in the individual rules, the question remains how you model recursion. You still need a full view of the grammar for many applications, and then you need an encoding which allows you to observe the full grammar and the structure of the relationship between its non-terminals.
By the way, I think it is possible to take my paper and replace the applicative-style production rule combinators with monadic ones to obtain the same concepts for a form of context-sensitive grammars.
Ulf Norell and I represented grammars as functions from well-typed nonterminals to (somewhat restricted) monadic right-hand sides in the paper Structurally Recursive Descent Parsing.
I glanced over the technical report, so take the following with a grain of salt.
Am I correct that the paper boils down to, in short, that by "reifying" parser rules, different parser strategies may be used for a defined grammar?
I would be interested in seeing this work in languages which allow a simple form of introspection/reification in the language. It would give a nice argument for a Haskell extension which makes that possible. (I envision that one could just reuse the function names as symbols for the generating grammar.)
[ The question being, does your manner of reification generalize to other problem domains as well? What are other means of reification in Haskell? How do Haskell extensions help you in reifying certain data? What other choices are there? How do we know that this is not a 'Rube Goldberg' solution? ]
I'm not 100% sure what you mean with reification, but I think my solution is specific to a pure programming language. If you have compile-time or run-time meta-programming or even mutable state, I think other solutions may be possible and perhaps simpler.
By means of reification something that was previously implicit, unexpressed and possibly unexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation. Informally, reification is often referred to as "making something a first-class citizen" within the scope of a particular system. (From Wikipedia)
One of the ways of looking at your report is that it is difficult, but not impossible, to exploit parser combinators to the fullest since you can't really reuse what you expressed as grammar rules since function definitions are not first-class citizens of the language.
Concretely, the function 'def expr = constant + (expr . op . expr)' may read like a grammar rule, but since in most languages you don't have access to the defined terms (introspection), it is difficult, but not impossible, to reason about the grammar defined.
Your solution -to me- reads like: 'We use a manner of reifying grammar rules in the language [through type trickery?], such that they become first class citizens, and from there, we can easily derive properties over the expressed grammar and switch between different parsers.'
I am not so surprised by the fact that by reification you can implement more functionality and more complex parsers. I wonder more about whether the strategy chosen for reification is in any sense, for lack of better words, 'right.' Does it generalize?
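To make the reification idea concrete outside Haskell, here is a small Python sketch (entirely my own illustration, not the paper's encoding): the grammar is plain data, so two different "interpretations", a recognizer and a FIRST-set computation, can be derived from the same description.

```python
# A grammar reified as data: nonterminal -> list of alternatives,
# each alternative a sequence of symbols (strings name nonterminals).
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["0"], ["1"]],
}

def recognize(sym, s):
    """Return the set of suffixes left after parsing `sym` at the front of `s`."""
    if sym not in GRAMMAR:                       # terminal symbol
        return {s[len(sym):]} if s.startswith(sym) else set()
    rests = set()
    for alt in GRAMMAR[sym]:
        partial = {s}
        for part in alt:
            partial = {r for p in partial for r in recognize(part, p)}
        rests |= partial
    return rests

def first_terminals(sym, seen=frozenset()):
    """A second interpretation of the same data: a FIRST-like set."""
    if sym not in GRAMMAR:
        return {sym}
    out = set()
    for alt in GRAMMAR[sym]:
        if alt and alt[0] not in seen | {sym}:
            out |= first_terminals(alt[0], seen | {sym})
    return out

print("" in recognize("expr", "1+0"))   # True: "1+0" derives from expr
print(first_terminals("expr"))          # {'0', '1'}
```

Because the rules are first-class data rather than opaque function definitions, nothing stops a third interpretation (a pretty-printer, a different parsing strategy) from reusing the very same `GRAMMAR` value.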
Not sure if this point is helpful or stupid as I don't have to write specs this formally. So on the off chance it is the former and not the latter, I'll take the invitation as part of the community.
The idea of a parser is to take a string to a structure; in other words, to take something relatively unstructured and make it structured, not to take something structured and evaluate it. That's more like an evaluator. In looking at the base structure:
data Domain = Line | Expr | Term | Factor | Digit
You are picturing a source much more hierarchical than what actually exists in most places where someone has to parse. Terms can be on multiple lines. Digit seems pointless: the first and only thing you do with the Digit structure is convert it into a number (where I assume that in your notation, number :: Number -> something).
I'm all in favor of advances in parsing, and I love your idea of better organizations leading to desirable properties of parsers. IMHO, though, think about parsing the TeX underlying your paper and trying to pull out the paragraphs before and after references 2, 6, and 14, including equations.
That is what a parser has to handle. It has to be able to grab what it needs from an unstructured source without understanding the underlying structure, that is, without being a TeX compiler.
The example is kept simple for clarity of presentation. Concerning TeX, I'm not familiar with the details, but I understand it is fundamentally based on macro-expansion. As such, I expect it will be hard to separate parsing from execution, so I'm not sure that classical parsing solutions apply.
But anyway, I agree with you that any parsing tool should be usable for real languages, but I believe that with some extensions for context-sensitivity as suggested in another thread, this will be okay for my library.
I liked your paper very much, but I found it a bit obfuscated. I concur with the comment on the symbols, they seemed out of place. What bothered me more was the reuse of very same names in different namespaces. I'm familiar with Haskell enough to know that the two Line identifiers in declaration Line :: φarith Line are not related, but I still found it confusing.
It wasn't until I read your comparison with Oleg's finally-tagless work that I finally realized how you resolved the question of parse tree nodes and semantic actions, by the way. You first suggest the parse tree construction, then dismiss it, but never seem to clearly explain what's replacing it. Perhaps I was reading too quickly.
The last comment I want to make is that I was surprised by your unguarded admission, in section 3.2, that the added abstraction inevitably has a performance cost. I mean, that is no doubt true if you use the same parsing algorithms. I thought the whole point of the abstraction was that you can now employ different algorithms, including more efficient ones, more suitable for a particular grammar or input. Is this impossible for some reason I'm missing?
Mario,
Thanks for your comments. Regarding the use of identifiers like Line in different namespaces (Haskell's type namespace is separate from its value namespace): this is inspired by Yakushev et al. who do the same in Multirec. I find it readable, even though I agree it might be confusing in a first read.
About the performance discussion: when I say "the added abstraction inevitably has a performance cost", I was indeed referring to a comparison against using the underlying parsing algorithms more directly. In theory, a compiler might be able to compile out a lot of the abstraction, and I think that's what GHC partly does when you use inlining flags like Magalhaes et al., even though I haven't checked in the generated Core.
By the way, for pure performance, there is some work in the library to allow you to use a given grammar with a kind of parser generator through Template Haskell, but this currently does not yet inline the semantic actions and I do not yet get the same result as when I hand-implement the parser directly.
Note that my library is currently not faster than parser combinator libraries for any example. Note also that my library currently does not have a direct implementation of an LR-style algorithm, but I believe that you can actually simulate something like LALR(1) by running an LL(1) parser on the uniform Paull transformation of the grammar (just like you can simulate a left-corner parser by running a top-down parser on the left-corner transform of the grammar). | http://lambda-the-ultimate.org/node/4160 | CC-MAIN-2018-43 | refinedweb | 1,857 | 50.77 |
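For readers who have not seen it, the left-recursion-removal step underlying the Paull transformation is mechanical. A rough Python sketch over a toy grammar representation (my own encoding, not the library's):

```python
def remove_direct_left_recursion(grammar):
    """Rewrite A -> A a | b into A -> b A' and A' -> a A' | <empty>."""
    out = {}
    for nt, alts in grammar.items():
        rec = [alt[1:] for alt in alts if alt and alt[0] == nt]        # the "A a" alternatives
        base = [alt for alt in alts if not (alt and alt[0] == nt)]     # the "b" alternatives
        if not rec:
            out[nt] = alts
            continue
        fresh = nt + "'"                 # new helper nonterminal A'
        out[nt] = [alt + [fresh] for alt in base]
        out[fresh] = [alt + [fresh] for alt in rec] + [[]]             # [] is the empty alternative
    return out

left_rec = {"expr": [["expr", "+", "term"], ["term"]]}
print(remove_direct_left_recursion(left_rec))
# {'expr': [['term', "expr'"]], "expr'": [['+', 'term', "expr'"], []]}
```

The transformed grammar derives the same language but is no longer left-recursive, which is what makes running a top-down parser on it feasible.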
Simple Example
A stable sort can be best illustrated with an example. Consider the following simple Person class with Name and Age fields. The class also implements the IComparable interface that has the CompareTo method, which in this example sorts the Person objects by Age:
class Person : IComparable
{
    public Person( string name, int age )
    {
        this.Name = name;
        this.Age = age;
    }

    public string Name;
    public int Age;

    public int CompareTo( object obj )
    {
        int result = 1;
        if (obj != null && obj is Person)
        {
            Person person = (Person)obj;
            result = this.Age.CompareTo( person.Age );
        }
        return result;
    }

    public override string ToString()
    {
        return String.Format( "{0} - {1}", this.Name, this.Age );
    }
}
Now let’s create, sort and write a collection of Person objects:
Person p1 = new Person( "Abby", 38 );
Person p2 = new Person( "Bob", 23 );
Person p3 = new Person( "Charlie", 23 );
Person p4 = new Person( "Danielle", 18 );

List<Person> list = new List<Person>();
list.Add( p1 );
list.Add( p2 );
list.Add( p3 );
list.Add( p4 );
list.Sort();

Console.WriteLine( "Unstable List Sort:" );
foreach (Person p in list)
{
    Console.WriteLine( p );
}
In their original order, Bob (age 23) appears before Charlie (also age 23). But because both objects have the same age, and the List<T>.Sort method is unstable, the order of equal objects is reversed. Here is the output from the code above:
Unstable List Sort:
Danielle - 18
Charlie - 23
Bob - 23
Abby - 38
Stable Insertion Sort
There are many stable sorts available. Insertion sort is one of the simpler yet still efficient stable sorts:
public static void InsertionSort<T>( IList<T> list, Comparison<T> comparison )
{
    if (list == null)
        throw new ArgumentNullException( "list" );
    if (comparison == null)
        throw new ArgumentNullException( "comparison" );

    int count = list.Count;
    for (int j = 1; j < count; j++)
    {
        T key = list[j];
        int i = j - 1;
        for (; i >= 0 && comparison( list[i], key ) > 0; i--)
        {
            list[i + 1] = list[i];
        }
        list[i + 1] = key;
    }
}
Notice the InsertionSort<T> method requires a Comparison<T> delegate, so we need to add a static Compare method in the Person class with the Comparison<T> signature. For reliability, Compare simply calls the Person’s CompareTo method:
static public int Compare( Person x, Person y )
{
    int result = 1;
    if (x != null && x is Person && y != null && y is Person)
    {
        Person personX = (Person)x;
        Person personY = (Person)y;
        result = personX.CompareTo( personY );
    }
    return result;
}
Now let’s clear the list, add the person objects again, and this time use our stable insertion sort:
list.Clear();
list.Add( p1 );
list.Add( p2 );
list.Add( p3 );
list.Add( p4 );

InsertionSort<Person>( list, Person.Compare );

Console.WriteLine( "Stable Insertion Sort:" );
foreach (Person p in list)
{
    Console.WriteLine( p );
}
As you can see from the output of the code above, Bob appears before Charlie, and the original order of equal elements is preserved:
Stable Insertion Sort:
Danielle - 18
Bob - 23
Charlie - 23
Abby - 38
Sort Methods Compared
As this C Programming article explains, insertion sort has an algorithmic efficiency of O(n^2), and in the best case (an already sorted list) runs in linear O(n) time. QuickSort is more efficient, with an average efficiency of O(n*log(n)). So in general you should use the .NET built-in Sort methods, and use insertion sort only if you explicitly require a stable sort.
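The O(n) best case is easy to observe empirically. A small Python sketch mirroring the C# algorithm above counts element shifts on sorted versus reversed input:

```python
def insertion_sort(items):
    """Insertion sort; returns (sorted copy, number of element shifts)."""
    a, shifts = list(items), 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0 and a[i] > key:     # strict comparison keeps the sort stable
            a[i + 1] = a[i]
            i -= 1
            shifts += 1
        a[i + 1] = key
    return a, shifts

_, best = insertion_sort(range(100))          # already sorted input
_, worst = insertion_sort(range(100, 0, -1))  # reversed input
print(best, worst)  # 0 shifts vs n*(n-1)/2 = 4950 shifts
```

On sorted input the inner loop never runs, so the whole pass is a single linear scan; on reversed input every pair is shifted, giving the quadratic worst case.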
Excellent!
Good, but there are much better stable sorts available, for example MergeSort and Binary tree sort, both of which have the same average time complexity as QuickSort (n log n).
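A merge sort stays stable as long as the merge step takes from the left run on ties. A hedged Python sketch (an illustration, not any particular library's implementation):

```python
def merge_sort(items, key=lambda x: x):
    """Stable O(n log n) sort: on ties, the merge prefers the left run,
    so the original order of equal elements is preserved."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid], key)
    right = merge_sort(items[mid:], key)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(right[j]) < key(left[i]):   # strict: ties come from `left`
            merged.append(right[j]); j += 1
        else:
            merged.append(left[i]); i += 1
    return merged + left[i:] + right[j:]

people = [("Abby", 38), ("Bob", 23), ("Charlie", 23), ("Danielle", 18)]
print(merge_sort(people, key=lambda p: p[1]))
# [('Danielle', 18), ('Bob', 23), ('Charlie', 23), ('Abby', 38)]
```

Note that Bob still precedes Charlie after sorting by age, exactly the behavior the article's insertion sort provides.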
Check out PowerCollections for a stable sort implementation and many other useful functions…
(Source code is available)
Just what I was looking for. I too ran into the .NET quicksort stability problem. Insertion sort, although it is of higher time complexity O(n^2) than quicksort's O(n log n), ends up being better in practice for smaller collections, as there is less overhead (space complexity for one, as this is an in-place algorithm) than, say, merge sort or heap sort. And of course it's stable. Also it is said to be "adaptive", so if the data is "nearly sorted" it can outperform other DAC (divide and conquer) algorithms like merge sort. In fact you'll often see "hybrid" sort algorithms using insertion sort when the problem size is small.
For large collections, no contest: merge sort.
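The hybrid idea mentioned above is simple to sketch: recurse with merge sort, but hand small runs to insertion sort. In this Python illustration the cutoff of 16 is an arbitrary choice:

```python
CUTOFF = 16  # arbitrary small-run threshold, purely for illustration

def hybrid_sort(a, lo=0, hi=None):
    """Merge sort that switches to insertion sort below CUTOFF (in place, stable)."""
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        for j in range(lo + 1, hi):          # insertion-sort the small run
            key, i = a[j], j - 1
            while i >= lo and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return a
    mid = (lo + hi) // 2
    hybrid_sort(a, lo, mid)
    hybrid_sort(a, mid, hi)
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[j] < a[i]:                       # ties taken from the left run: stable
            merged.append(a[j]); j += 1
        else:
            merged.append(a[i]); i += 1
    merged += a[i:mid] + a[j:hi]
    a[lo:hi] = merged
    return a

print(hybrid_sort([4, 1, 3, 2]) == [1, 2, 3, 4])  # True
```

Both phases use strict comparisons, so the combination stays stable while avoiding recursion overhead on small runs.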
BTW, I really like how the author has properly laid out the sort using generic types as it makes other convenient methods possible (such as delegate comparison functions). For those that do not want to explicitly implement the CompareTo method in their class (let’s say you have various keys you want to sort on determined at runtime, such as the columns in a gridview), you can use the author’s sort method like this:
// or name, etc., determined at runtime.
string SortKey = "age";

InsertionSort(list, delegate(Person a, Person b) {
    if (SortKey == "name") return a.Name.CompareTo(b.Name);
    else if (SortKey == "age") return a.Age.CompareTo(b.Age);
    return 0;
});
You can see how the anonymous delegate has scope over your SortKey variable which can be the user’s custom sort expression. Very useful stuff here especially when combined with the power of the .NET GridView control.
If you want to reverse the sort direction, pass in a flag that tells you what the sort direction is (ascending or descending) and simply flip the person objects around:
bool AscendingSort = false;
// or name, etc., determined at runtime.
string SortKey = "age";

InsertionSort(list, delegate(Person a, Person b) {
    if (SortKey == "name") {
        if (AscendingSort) return a.Name.CompareTo(b.Name);
        else return b.Name.CompareTo(a.Name);
    } else if (SortKey == "age") {
        if (AscendingSort) return a.Age.CompareTo(b.Age);
        else return b.Age.CompareTo(a.Age);
    }
    return 0;
});
Brilliant!!!!! Thanks a lot!!!!!
Very Big Hug! Thank you…
Most of the examples that I've seen sort a relatively simple structure made up of two fields (e.g., Person {Name, Birthday}). However, I am now faced with a more complex structure (i.e., HSLColor {Hue, Saturation, Lightness}). I also need to take into account a user's desire to sort by HSL, HLS, SHL, SLH, LHS, or LSH. Assuming the desired sort order is a string named sort_order, then I am using:
private int HSL_comparer( HSLColor HSL_1, HSLColor HSL_2 )
{
    int HSL_H_comparison = HSL_1.Hue.CompareTo( HSL_2.Hue );
    int HSL_L_comparison = HSL_1.Luminosity.CompareTo( HSL_2.Luminosity );
    int HSL_S_comparison = HSL_1.Saturation.CompareTo( HSL_2.Saturation );

    for (var i = 0; i < sort_order.Length; i++)
    {
        switch (sort_order.Substring( i, 1 ))
        {
            case "H":
                if (HSL_H_comparison != 0) return HSL_H_comparison;
                break;
            case "S":
                if (HSL_S_comparison != 0) return HSL_S_comparison;
                break;
            case "L":
                if (HSL_L_comparison != 0) return HSL_L_comparison;
                break;
            default:
                break;
        }
    }
    return 0;
}
Of course, what I get is unstable. Any thoughts?
TIA | https://www.csharp411.com/c-stable-sort/ | CC-MAIN-2021-39 | refinedweb | 1,113 | 57.06 |
Sortix issues (feed updated 2018-03-10T12:42:22Z)

SIGINFO (^T)
Jonas Termansen, 2018-03-10. Labels: Sortix 1.2, feature-addition, kernel.
BSD has SIGINFO that can be sent with `^T`. Programs like dd, and other long running programs, handle it and write out progress statistics. I think this is fairly useful and Sortix should implement it, though Linux doesn't.

Ability to not load /src in live environment
Jonas Termansen, 2018-03-04. Labels: Sortix 1.1, feature-addition, user-experience.
The bootloader configuration should offer you the ability to not load the system source code to keep the loaded system small.

libstdc++ is lacking thread support
Davin McCall, 2018-03-01.
libstdc++ does not have thread support enabled. Test program:

```
#include <mutex>
std::mutex m;
```

And then:

```
root@dragonsort $ g++ -std=c++11 -c mtest.cc
mtest.cc:3:6 error: `mutex' in namespace `std' does not name a type
```

Trivial C++ program crashes
Davin McCall, 2018-02-28.
The following program crashes if compiled and run with the current nightly build:

```
#include <iostream>
#include <sys/types.h>
#include <sys/socket.h>

void some_func(int filedes[])
{
    socketpair(AF_UNIX, SOCK_STREAM, 0, filedes);
}

int main(int argc, char **argv)
{
    return 0;
}
```

Note that the `#include <iostream>` is necessary to cause the crash even though nothing from it is used, and the call to `socketpair` is necessary to cause the crash even though it is never executed.

```
root@dragonsort ~/prog/simplecc # g++ testme.cc
root@dragonsort ~/prog/simplecc # ./a.out
The current process (pid 1195 `./a.out') crashed and was terminated:
Page fault exception at ip=0x0 (cr2=0x0, err_code=0x14)
Segmentation fault
```

Code of Conduct
Jonas Termansen, 2018-02-14. Labels: Sortix 1.1, awesome, security, user-experience, www.
To scale the Sortix community and make it a good community for everyone, we need a code of conduct that takes professional experience out there into account, provides clear paths of escalation (who to get in touch with if there's a problem, easily findable on a search engine), is enforceable, yet handles abusers abusing the CoC to abuse (or going exactly to the line), handles if the anti-abuse team itself is abusing (including the case where I am), and signals how competent this community is at handling such issues (including how incompetent we are, if that is the case). As well as other requirements that are clear after researching this topic and getting help from professionals or experienced people who are able to handle escalated situations such as harassment, doxxing, stalking, and other unpleasant situations up to and including violent crimes.

mandoc -Thtml .Bd -literal emits trailing whitespace
Jonas Termansen, 2018-02-07. Labels: Sortix 1.1, bug, documentation, needs-investigation, port, user-experience, userspace.
mandoc's html output (-Thtml) has trailing whitespace inside .Bd -literal environments. That's not supposed to be there, and it annoyingly gets rendered by the browser inside the `<pre>` and gets copied into shells when one copy-pastes. That's really annoying whenever a broken shell command inside a .Bd -literal ends in a backslash to escape the linebreak. A workaround could be to always trim trailing whitespace of the -Thtml output anyway, since it shouldn't be significant in any case I can think of / need.

Default kernel command line and GRUB configuration
Jonas Termansen, 2018-02-03. Labels: Sortix 1.1, feature-addition, user-experience, userspace.
GRUB's 10_sortix should support a default kernel command line configuration file. Either this file is read at the time 10_sortix runs, or it is read by the bootloader prior to invoking the multiboot command. Then update the advice for passing --disable-network-drivers by default in installation(7).

Supervisor Memory Protection
Jonas Termansen, 2018-02-02. Labels: Sortix 1.1, awesome, feature-addition, kernel, needs-investigation, security.
SMAP and SMEP are awesome. SMAP uses an EFLAGS bit (AC, set with stac and unset with clac) to control whether the kernel can access user-space pages. Set/unset it in CopyFromUser / CopyToUser. Unset AC in interrupt handlers. User-space can set AC. Use the stac instruction in the interrupt handler and change the IDT offset if it's unsupported. CopyFromUser/CopyToUser can be runtime patched with nops in case it's not supported, but they're not super fast on Sortix anyway, might as well branch. This should be simple to do and well worth it, and gives the insurance that all memory is properly copied in and out. The features appeared around Sandy Bridge for Intel and Ryzen on AMD. For more information see the Intel Manual and sortie's conversation with geist in #osdev on 2018-01-02.

Spectre and Meltdown
Jonas Termansen, 2018-01-10. Labels: Sortix 1.1, awesome, bug, documentation, feature-addition, kernel, needs-investigation, release-blocking, security, www.
Sortix is affected. It's pretty lightweight compared to other, worse Sortix security problems (Sortix 1.1 is only meant to not be remotely exploited; locally it is not secured yet), so it's not too important, but the documentation, website, and installer should at least mention the issue.

Symmetric multiprocessing (SMP)
Jonas Termansen, 2018-01-06. Labels: Sortix 1.2, awesome, feature-addition, kernel, release-goal.
SMP should be implemented so Sortix can make use of other cores. Prerequisite work:
- Power efficient idling (#235)
- Kernel mutexes poll instead of truly sleeping (#236)
- APIC
- LAPIC (/ IO APIC?)
- Per-CPU variables (%gs, swapgs)
- Spinlocks
- Some code must gain support for multiple CPUs: scheduler, interrupt worker, errno, whether signals are pending
- TLB shootdowns
- PCI IRQ routing

editor(1) crash when selecting an empty last line
heat, 2018-03-11. Labels: Sortix 1.1, bug, user-experience, userspace.
Steps to reproduce:
1. Open editor(1)
2. Press CTRL+Down arrow

Protect the kernel image's segments
heat, 2018-01-05. Labels: Sortix 1.2, awesome, kernel, security.
Right now, the kernel image's segments remain with full RWX permissions; this is very dangerous and allows for some bad exploits. Remap the kernel with the proper permissions after early boot. @sortie: This is another situation where a custom linker script could be useful (we need to get the start and end of .text, .data, and .rodata; creating custom symbols at the start and end of the sections is a very simple way to do it).

Randomize the stack canary
heat, 2018-01-05. Labels: Sortix 1.2, kernel, libc, security, userspace.
Right now the stack canary is hardcoded, which effectively makes the kernel vulnerable to buffer overflows, since it's rather easy to get around it when the canary's value is known. Randomize it at boot and at program load.

High precision timing support
heat, 2017-12-27. Labels: driver, feature-addition, kernel.
Right now the only sub-second timing source is the PIT, which works in milliseconds. This is not optimal, and the system should be adapted so that it doesn't necessarily need a timer that generates interrupts.

Port gdb
Jonas Termansen, 2017-12-16. Labels: Sortix 1.2, awesome, feature-addition, needs-investigation, port, release-goal.
Having a real debugger on Sortix would be awesome. The kernel needs a debug facility though.

Ability to lock down UDP socket
Jonas Termansen, 2017-12-12. Labels: Sortix 1.2, awesome, feature-addition, kernel, security.
Provide socket options for locking the local address / remote address / socket options, so you get a very limited file descriptor. You can then provide such a file descriptor to a sandboxed process and know it can't communicate with the wrong remote.

Sortix 1.0 directories have trailing slashes in tix manifests
Jonas Termansen, 2017-12-03. Labels: Sortix 1.1, cleanup, feature-addition, needs-investigation, regression, release-blocking.
Due to a bug in the kernel tar extractor, binary packages installed as kernel initrds have incorrect manifest files: the directories listed there contain trailing slashes, but the format doesn't have trailing slashes for directories (besides the root directory). This bug is now fixed. However, this may pose some complexity when Sortix 1.0 systems are updated to the next release, and the upgrade procedure should know about this problem and resolve it. See #516 for the need to build a proper manifest upgrade procedure.

PIPE_BUF
Jonas Termansen, 2017-12-05. Labels: Sortix 1.2, bug, cleanup, feature-addition, kernel.
Implement `PIPE_BUF` semantics where writes to pipes smaller than this will not be interleaved with other data.

Sortix ports website
Jonas Termansen, 2017-10-25. Labels: Sortix 1.1, awesome, documentation, feature-addition, needs-investigation, release-goal, user-experience.
I could imagine going to a ports page and seeing all the information about the port. Generate this page automatically from the ports wiki, the port tarballs themselves, etc. Have it be generated nightly. Patches, tixbuildinfo, sha256sums of the upstream tarball, etc., should be available here. Show policies for upstreaming from the Sortix project. Make it an easy one-stop solution for a given upstream to pull my stuff. "Port of Sortix -- Sourcing Tix since 2011"

Hardlink and symbolic link protections
Jonas Termansen, 2017-12-05. Labels: Sortix 1.2, awesome, feature-addition, kernel, needs-investigation, port, security.
See <>. Some of my thoughts on twitter <>.
I'm trying to parse an XML file to create JSON, but I couldn't find any JSON package.
I tried running
import json
json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
print json.dumps("\"foo\bar")
in the Modeler script window, but it throws an error:
Error: AEQMJ0132E: Script cannot load module json on line 447 column 1
Appreciate your help
Answer by Kenneth Jensen (IBM) (2577) | Apr 21, 2016 at 03:31 PM
There is an extension that will allow you to import data from a JSON array, but I do not know of one that will allow you to export data to JSON.
Since a JSON file is really just a text file that follows a particular format, you can utilize the "Report" output node to create a JSON formatted data file. I have done this myself in a number of projects that required an XML formatted output, but the concept would be the same for JSON.
Please take a look at the json-export.str sample stream that I have attached for a simple example of how I would do this.
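Since the trick is just emitting correctly escaped text, here is a rough Python sketch of the kind of JSON-by-hand formatting that the Report node approach relies on (this is independent of Modeler itself; the function names are mine):

```python
def json_escape(s):
    """Minimal JSON string escaping: backslash, quote, and control chars."""
    out = []
    for ch in s:
        if ch in ('"', '\\'):
            out.append('\\' + ch)
        elif ord(ch) < 0x20:
            out.append('\\u%04x' % ord(ch))
        else:
            out.append(ch)
    return '"' + ''.join(out) + '"'

def row_to_json(fields, values):
    """Format one data row as a JSON object literal."""
    pairs = ('%s: %s' % (json_escape(f), json_escape(str(v)))
             for f, v in zip(fields, values))
    return '{' + ', '.join(pairs) + '}'

print(row_to_json(["name", "age"], ['O\'Hara "Bob"', 23]))
```

The same escaping rules apply whether the text is produced by a script or by filling in a Report node template, which is why the Report node trick works at all.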
Thanks, Kenneth.
I got the job done via scripting, but your approach is clean and interesting too. Can the output be directly exported to a database too?
The solution does not immediately allow for exporting the JSON to a database as the 'Report' node only writes a file. You would have to first create the file using the 'Report' node and then read that file in using the 'Var. File' source node - making sure to read the entire file as a single value - and then export the contents to a database using the 'Database' export node. This definitely removes some of the elegance, cleanliness, and transparency, but it would work.
@Kenneth Jensen (IBM), this should work. But just for the sake of clarity, you are talking about the same stream, right? Will Modeler be able to synchronize the whole process by itself if this is done in a single stream?
Did I do anything wrong here?
I am pretty sure that my Thinger.io settings are correct, as I am able to run the same code on my LinkIt ONE and it works perfectly.
Thank you all.
Ivan
code:
#include <BridgeSSLClient.h>
#include <ThingerYun.h>
#define USERNAME "myusername"
#define DEVICE_ID "mydeviceid"
#define DEVICE_CREDENTIAL "mydevice_credential"
ThingerYun thing(USERNAME, DEVICE_ID, DEVICE_CREDENTIAL);
void setup() {
pinMode(LED_BUILTIN, OUTPUT);
// initialize bridge
Bridge.begin();
// pin control example (i.e. turning on/off a light, a relay, etc)
thing["led"] << digitalPin(LED_BUILTIN);
// resource output example (i.e. reading a sensor value, a variable, etc)
// more details at
}
void loop() {
thing.handle();
}
Seeeduino Cloud works with Thinger.io?
Editor's note: This article is the fifth in a series of articles on composite applications to be published on developerWorks Lotus over the next few months. See the previous developerWorks articles, "Designing composite applications: Unit testing," "Designing composite applications: Design patterns," "Designing composite applications: Component design," and "The Lead Manager application in IBM Lotus Notes V8: An Overview."
A component view is more than just a piece of UI. It is a programmatic unit that communicates and participates with the outside world. Through this communication capability, several component views are wired together to form a composite application. We now look at some features of the Lotus Expeditor and Lotus Notes platforms that allow for this.
- Topology handler. This feature lets you set static values at assembly time for a component view. The topology handler uses these values at runtime to lay out the component view within the Lotus Notes client. Other values may be component view specific and used at runtime to customize its presentation or function. Using the Composite Application Editor (CAE) in Lotus Notes, you can access properties for a component view through the Advanced Component Properties dialog box (see figure 1), allowing for assembly-time configuration of a component view.
Figure 1. Advanced Component Properties in the CAE
For example, the tag cloud component lets you set advanced component properties called drawHeader and drawFooter at assembly time. Then, at runtime, the component view checks the values of these properties and, if true, the corresponding section of UI is drawn.
- Property broker. This feature allows component-view-to-component-view connections to be created. A property is a source value that a component view can set to indicate a change of value. An action is a destination value that a component view can indicate that it consumes.
- Wiring. Wires can be established at assembly time to connect properties output by component views to actions in component views that process changes to those properties. In Lotus Notes, the CAE is used to wire components.
At runtime, a component view can indicate to the property broker that one of its property's values has changed. The property broker responds by passing the new values to the component views that have been connected with a wire. The new value of the property is passed as a parameter to the action selected. For example, the wiring in figure 2 shows how the tag cloud's FocusedEntity value is transmitted and set into the document viewer's Set Column Filter value at runtime when the tag cloud selection changes.
Figure 2. Properties, actions, and wires in the wiring tool in the CAE
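Mechanically, the broker is a routing table from a (component, property) pair to the wired actions. A language-neutral Python sketch of the runtime behavior just described (the names are mine, not the platform's API):

```python
class PropertyBroker:
    """Toy property broker: wires connect a source property to actions."""
    def __init__(self):
        self.wires = {}                      # (source, prop) -> [action, ...]

    def wire(self, source, prop, action):
        """Assembly time: connect a property to an action that consumes it."""
        self.wires.setdefault((source, prop), []).append(action)

    def changed(self, source, prop, value):
        """Runtime: a component reports a new value; wired actions receive it."""
        for action in self.wires.get((source, prop), []):
            action(value)

broker = PropertyBroker()
received = []
# Wire the tag cloud's FocusedEntity to a document viewer's column filter.
broker.wire("tagcloud", "FocusedEntity", received.append)
broker.changed("tagcloud", "FocusedEntity", "Java")
print(received)  # ['Java']
```

Components never call each other directly; they only publish and consume values, which is what lets the assembler rewire them without code changes.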
- Data model. Both properties and actions involve getting or setting the value of data elements in your component view. The conventional structure for storing collections of data elements in Java is a JavaBean, which, conveniently, also has a well defined mechanism for propagating changes. Note that there are many other methods to maintain your data model in Java besides JavaBeans. Eclipse-based component views are not required to use JavaBeans; however, we use them in this article as an example of a data model with which most Java programmers are familiar. You may apply these techniques to any data model you like.
We introduce a number of helper classes over the course of this example. Although not required for the simple component view we are making, they can be reused across components and are of great value when developing more complicated components.
An Eclipse component view is based on a View part, which is coupled with a data model. Each instance of the component view has its own instance of a data model. All additional work consists of coupling the pieces together. Your data model is coupled with your view to present the data to the user and to allow the user to interact with and change the data. On the backend, the data model is coupled with the property broker to broadcast changes to values in the data model and to listen for changes imposed from other component views. Last, the data model is coupled with the topology handler to read initial instance values into the data model. This strategy is easily compatible with the popular Model-View-Controller (MVC) design pattern (see figure 3).
Figure 3. MVC design pattern
For this component view, we create a tag cloud component view. We have an SWT control (see TagCloud.java in the com.ibm.cademo.sl.comp.cloud package) that we want to expose. We may use this to display the categories in a discussion database as shown in figure 4.
Figure 4. Composite application with a discussion database and a tag cloud
The first important decision to be made is which data elements to expose. For simplicity, we discuss two: PrimaryData as a HashMap of key-value pairs that drive the display model, and FocusedEntity, which represents the current tag under focus. For example, figure 4 shows a composite application with a discussion database Notes component view and a tag cloud. Here information about all the categories in the database is weighted by the number of articles and sent for display to the tag cloud. Additionally, categories can be selected in the tag cloud and that focus passed back to the discussion database component view. This uses view filtering to show only articles in that category.
For Lotus Expeditor 6.x and Lotus Notes 8, only strings can be sent as property values between Eclipse components and Lotus Notes components. A HashMap, though, is a complex type. The technique for dealing with this is to devise a form of serialization that represents the complex type in a simple string. Even though only base types of strings are supported, it is possible to add typing data to distinguish between types of strings semantically. Let's do that to ensure that only values of TagCloudDataString are passed in for the tag cloud. Because the FocusedEntity is just a key, we use the generic xsd:string type.
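The serialization technique itself is platform-independent. A hedged Python sketch of round-tripping a map through a single type-tagged string (the tag and delimiters here are arbitrary illustrative choices, not the sample's actual format):

```python
TYPE_TAG = "TagCloudData:"   # semantic type marker, illustrative only

def to_wire(mapping):
    """Serialize {tag: weight} into one tagged string."""
    body = ";".join("%s=%d" % (k, v) for k, v in sorted(mapping.items()))
    return TYPE_TAG + body

def from_wire(s):
    """Reject strings of the wrong semantic type, then rebuild the map."""
    if not s.startswith(TYPE_TAG):
        raise ValueError("not TagCloudData")
    body = s[len(TYPE_TAG):]
    return {k: int(v) for k, v in (p.split("=") for p in body.split(";") if p)}

wire = to_wire({"java": 3, "xml": 5})
print(wire)             # TagCloudData:java=3;xml=5
print(from_wire(wire))  # {'java': 3, 'xml': 5}
```

The leading tag plays the role of the semantic type: even though everything on the wire is a string, a receiver can refuse values that are not of the expected kind.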
The data model we use is a simple JavaBean, and to aid this we introduce our first helper class: PCSBean.java. This is a very simple class that uses the java.beans.PropertyChangeSupport class to create APIs to support the propagation of JavaBean property changes to property broker property changes.
Now we can create our bean as a subclass of this. In TagCloudViewBean.java (see listing 1) you can see where we've created the two data members, PrimaryData and FocusedEntity, and added setters and getters for them. Last, we extend the setters to call into the functions of the base class to propagate notification of property changes.
Listing 1. TagCloudViewBean.java
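A sketch of what the two classes might look like (the exact IBM sample differs; `PCSBean` here is a minimal stand-in built directly on `java.beans.PropertyChangeSupport`):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Minimal stand-in for the PCSBean helper described in the article.
class PCSBean {
  protected final PropertyChangeSupport pcs = new PropertyChangeSupport(this);

  public void addPropertyChangeListener(PropertyChangeListener listener) {
    pcs.addPropertyChangeListener(listener);
  }

  public void removePropertyChangeListener(PropertyChangeListener listener) {
    pcs.removePropertyChangeListener(listener);
  }

  protected void firePropertyChange(String name, Object oldValue, Object newValue) {
    pcs.firePropertyChange(name, oldValue, newValue);
  }
}

// Sketch of the data-model bean: two string properties whose setters
// broadcast JavaBean property-change notifications.
class TagCloudViewBean extends PCSBean {
  private String primaryData;
  private String focusedEntity;

  public String getPrimaryData() { return primaryData; }

  public void setPrimaryData(String value) {
    String old = primaryData;
    primaryData = value;
    firePropertyChange("PrimaryData", old, value);
  }

  public String getFocusedEntity() { return focusedEntity; }

  public void setFocusedEntity(String value) {
    String old = focusedEntity;
    focusedEntity = value;
    firePropertyChange("FocusedEntity", old, value);
  }
}
```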
The last part of the data model is the creation of the Web Services Description Language (WSDL) file that describes it to the property broker. The WSDL file describes the public interface to the component view; it is used, for example, by the CAE to present which properties and actions are available for wiring. The complete interface for this sample is contained in TagCloudView.wsdl, which you can use as a template for creating WSDL files for your own components.
A Property Broker Editor, which ships with Lotus Domino Designer, provides a user interface that lets you define properties, types, and actions without writing a WSDL file. An understanding of the structure of the WSDL file is helpful, though, even when you use the editor.
The Eclipse integrated development environment (IDE) framework also provides a graphical WSDL editor. That editor, though, uses WSDL-formatted files for defining Web services. Although the format is the same for composite applications, the specific usage is different enough that the Eclipse IDE editor is not always the best choice.
For each element you have in your data model, you must add to the WSDL file in a number of areas: <types>, <message>, <portType>, and <operation>.
Types determine the type of your property. In this implementation, they are all strings; however, we can add a semantic type to it in the WSDL file. These must be consistent from component to component. You can create wires only from properties to actions that are of the same semantic type. For the PrimaryData property, we decided to call the format TagCloudDataType. We declare it within the <types> element as shown in listing 2.
Listing 2. TagCloudDataType
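A plausible shape for the declaration, based on the description (the namespace URI and schema attributes are assumptions, not copied from the IBM sample):

```xml
<types>
  <xsd:schema
    <xsd:simpleType name="TagCloudDataType">
      <xsd:restriction base="xsd:string"/>
    </xsd:simpleType>
  </xsd:schema>
</types>
```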
For other types, duplicate the <xsd:simpleType> tag and replace TagCloudDataType with the desired type name.
NOTE: In addition to having the same semantic type, properties must also lie in the same namespace. Be sure to be consistent with your use of namespaces within your WSDL files across multiple components.
Messages are collections of parameters used as profiles in determining the input and output values for a function call. You must have one for each getter and one for each setter as shown in listing 3.
Listing 3. Example messages
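Sketched from the description (message and part names here are illustrative assumptions):

```xml
<message name="PrimaryData_SetMessage">
  <part name="PrimaryData_Input" type="tns:TagCloudDataType"/>
</message>
<message name="PrimaryData_GetMessage">
  <part name="PrimaryData_Output" type="tns:TagCloudDataType"/>
</message>
```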
For other elements, add these same two messages again, change PrimaryData to the base name of your property, and change TagCloudDataType to the type of your property. Or you can use xsd:string for the generic string type.
The port type links an operation to messages for its input and output parameters. Because we are dealing with simple accessor methods for beans, we have one <portType> for the setter containing a single input message, and one for the getter containing a single output message (see listing 4).
Listing 4. Example port type
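A sketch of the port type as described — one operation per accessor, with a single input or output message (operation and portType names are assumptions):

```xml
<portType name="TagCloudView_Service">
  <operation name="setPrimaryData">
    <input message="tns:PrimaryData_SetMessage"/>
  </operation>
  <operation name="getPrimaryData">
    <output message="tns:PrimaryData_GetMessage"/>
  </operation>
</portType>
```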
For other properties, duplicate the two <operation> tags, and change PrimaryData to the base name of your property.
Last, we create our operations in the <binding> element (see listing 5). These fully define the profile and refer to the previously defined constructs. We also declare the captions and descriptions used for these properties. This is what is used for display in the CAE. (Note that you can declare a Properties file along with your WSDL file, using the conventional naming scheme for localization. Prefixing the value in your caption or description with % indicates that the CAE should look up the value in the appropriate property file for the locale.)
Listing 5. Binding operation
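A hedged sketch of the binding. The portlet:* extension elements and their caption/description attributes follow the WebSphere/Expeditor property-broker vocabulary as best recalled; treat the exact element and attribute names as assumptions:

```xml
<binding name="TagCloudViewBinding" type="tns:TagCloudView_Service">
  <operation name="setPrimaryData">
    <portlet:action
    <input>
      <portlet:param
    </input>
  </operation>
  <operation name="getPrimaryData">
    <portlet:action
    <output>
      <portlet:param
    </output>
  </operation>
</binding>
```

Prefixing a caption or description value with % tells the CAE to look the value up in the locale's property file, as noted above.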
For other properties, duplicate the two <operation> tags, change PrimaryData to the base name of your property, and add appropriate descriptions.
Note that, in this example, we have exposed both the set and get access functions for our data type. For something like the FocusedEntity, it makes sense to have both. An outside component view can tell this component view where it wants the focus to be. If the user clicks in the right area, this component view can broadcast to other component views what the new selection is.
While it is clear that we want to be able to set the PrimaryData on this component view, the use case is not as clear for letting other component views get it. Because assembly takes place after component creation, and possibly by someone else in a completely different application, it is worth considering exposing such things, even if there is no immediate use for them. It is better to build in flexibility up front.
Now that we have created and defined our data model, we must hook it up programmatically to the property broker mechanism.
Helper class for properties
Here we introduce the PBBroadcast.java helper class for managing the broadcasting of property changes from our data model to the property broker (see listing 6). When constructed, we pass in the view with which it is associated and the data model bean. In the constructor, it attaches itself to the bean as a listener.
When a value in the bean changes, the propertyChange() method is called. To broadcast the changed value, we need to assemble a collection of property-value pairs and publish them to the broker. In this simple case, we deal only with single property changes. In a more specific implementation, you have the option of publishing several property changes at once.
Listing 6. PBBroadcast.java helper class example
In the first line of listing 6, we get the PropertyBroker Property class corresponding to the property changed from the name of the changed property in the event. In the second line, we create the array to store the changed values. In the third line, we create the PropertyValue object from the value of the changed property in the event. In the last line, we broadcast the changed value to the property broker. (Error checking has been removed for clarity of discussion. See the referenced source code for full details.)
Every component view must have a registered handler to receive notifications of property changes. We introduce the PBHandler.java helper class to help manage this. Because this is constructed by the Lotus Expeditor framework, we cannot initialize this specific instance with our data model. In fact, the same handler is used for all instances of a component view. The event notification, though, includes information about which Eclipse view is the target of the notification. We retrieve that Eclipse view, which is required to implement the IDataProvider helper interface, and through this interface we retrieve our data model. We then extract the name of the changed property and the new value, and pass them into a function that sets the value through reflection (see listing 7).
Listing 7. PBHandler.java helper class example
In the second line of listing 7, we cast the event to the PropertyChangeEvent specific to this operation. In the third line, we extract the wire from the event and the view ID from the wire, and we use SWTHelper to find the Eclipse view from the framework. In the fifth line, we call the setValue helper function with the data model (extracted from the view), the name of the changed property (extracted from the action definition in the event), and the new value (extracted from the event). (Error checking has been removed for clarity of discussion. See the referenced source code for full details.)
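The reflective set step can be sketched in plain Java. The class and method names here are illustrative, not the library's own:

```java
import java.lang.reflect.Method;

// Sketch of the reflective setter: given any bean, the property name as it
// appears in the WSDL file (e.g. "FocusedEntity"), and the new string value,
// invoke the matching JavaBean setter.
class ReflectiveSetter {
  static boolean setValue(Object bean, String propertyName, String value) {
    try {
      Method setter = bean.getClass().getMethod("set" + propertyName, String.class);
      setter.invoke(bean, value);
      return true;
    } catch (ReflectiveOperationException e) {
      return false; // no matching setter, or the setter itself failed
    }
  }
}

// A tiny bean to exercise the helper.
class FocusBean {
  private String focusedEntity;
  public String getFocusedEntity() { return focusedEntity; }
  public void setFocusedEntity(String value) { focusedEntity = value; }
}
```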
Helper class for the Topology Handler
The TopologyHelper class is provided to help set up initial values in your data model. It contains a single static function to do the initialization. It is passed the context of the plug-in, the data model, and the ID of the view. The function accesses the TopologyHandler, extracts the keys specifically set for this component view (identified by the view ID), finds the values for each of those keys, and attempts to set them into the data model (see listing 8).
Listing 8. TopologyHelper.java class example
Here, the first two lines retrieve the TopologyHandler from the context. The third and fourth lines retrieve the settings that are specific to this component view. We then loop through them. Within the loop, we extract the value for the iterated key, and then we attempt to set that value into the bean through reflection, using the helper function defined in the PBHandler class.
We use all these helper classes together with our data model to create our View class. The framework requires this class to extend ViewPart, and we need the class to implement the IDataProvider interface so that we can get the data model from it later:
public class TagCloudView extends ViewPart implements IDataProvider
When we construct it, after we create the data model, we create an instance of the PBBroadcast class. To this, we pass the view, the data model, and a list of the namespaces used. Because this links itself to the data model, we don't need to keep a reference to it (see listing 9).
Listing 9. PBBroadcast class instance
In the createPartControl() method, which we must implement for the ViewPart, we do four things as shown in listing 10.
Listing 10. createPartControl() method
First, in line 1 we create the actual control and place it in our view. Second, we establish the connection from our data model to the control by registering a listener for changes on the data model. When it receives one, it processes the change and passes that on to the control (changing the format if required).
Third, we establish the connection from the control to our data model. We do this in the same way as in the previous step, but this time propagating change back when detected. Last, we use some helper functions to do some setup. The registerData() method binds the data model to an internal map so that the action handler functions can find it. The initialize function on the TopologyHelper class and the setupNames() function help to set up the default values into our data model. Because we've already connected it up with the control, these automatically propagate.
Because each component view needs a unique handler, we must create a TagCloudHandler class, which can subclass the PBHandler helper class. It doesn't need to add any methods. We cannot use the PBHandler helper class directly because the calling of actions is determined by the class name of its handler. A trivial subclass fills the need of having a class with a unique name and leaves all the actual handling to be common.
Now that all the code is in place, we must let the property broker know of it by filling in an extension point. We create an extension under the com.ibm.rcp.propertybroker.PropertyBrokerAction extension point, filling out the class field to point at our handler. We then fill out the file field and point it at our WSDL file (see figure 5).
Figure 5. Setting the PropertyBrokerAction extension point in the plugin.xml file in the Eclipse IDE
Integrating Lotus Notes data
There are a number of circumstances in which an Eclipse component view can be used to surface Lotus Notes data without using traditional Lotus Notes UI elements. You can do this by accessing the Lotus Notes Java API. To make things easier, the Java API is packaged into a plug-in that is available in the environment. On the Dependencies tab of your plug-in editor, select the com.ibm.notes.java.api plug-in, and all the APIs are immediately available.
It is best to keep the access to Lotus Notes data asynchronous. When you want to look up data, use a NotesThread object to execute the lookup. When finished, if the data involves a UI update, remember to spawn off the update into a thread that is run with the UI thread. The code in listing 11 can be used as a template for this. You might also look at the com.ibm.cademo.sl.comp.leadbrow package included with the Lead Manager example.
Listing 11. NotesThread object template
If you want to go further
If you want to extend an existing component view, perform the following steps:
- Create the additional fields in your data model as well as the accessor functions. As part of this, you must ensure that the accessor functions broadcast changes through the JavaBean property-change mechanism. Also, add those values to your WSDL file with unique type definitions if required.
- Connect those fields to the ViewPart to visualize these new values and, if appropriate, to allow the user to change those values. No change is necessary to ensure that our new values are broadcast.
The PBBroadcast helper class listens for all changes to the JavaBean and as long as the names of the fields are in sync with the names defined in the WSDL file, they are appropriately broadcast. Similarly, no change is required to accept changes to those values from other component views. Adding their values to the WSDL file ensures they can be wired up, and the PBHandler class accepts those values and ties them to the data model through introspection.
- If you want to create another, entirely new component, there are other simple steps. The helper functions are not tied to a specific component; instead, they are provided in the OpenNTF library as a separate plug-in. Each component you produce that uses them has a dependency on that plug-in.
NOTE: It is recommended that you place only one component view in each plug-in. The WSDL file is associated with the component itself, meaning that all declared values are valid for all component views in a single plug-in. Placing a single component view in each plug-in resolves whatever confusion may result from having properties from several components display when referring to a single one in the CAE UI.
- Create the new plug-in, mark it to depend on the plug-in with the helper classes in it, and then create your data model (inheriting from PCSBean) and its corresponding WSDL file. Finally, create the ViewPart that instantiates the data model and implements the IDataProvider interface.
- In the constructor, the PBBroadcast class can be instantiated with a reference to the data model. In the createPartControl() function, a call can be made into the TopologyHelper to set initial values.
- Last, a facade helper class is created as a subclass from PBHandler, and that class along with the WSDL file are registered against the com.ibm.rcp.propertybroker.PropertyBrokerAction extension point.
You can repeat these steps for each component you want to add. All the Eclipse-based plug-ins in the OpenNTF library are done in this manner, forming a fertile ground of examples that can be copied and changed for your own needs.
Through this exercise, we introduced several helper functions for creating a component that can be deployed in a composite application. More than that, though, we showed you how to create a foundation on which other components can be created quickly and easily. These functions are not required, as simple components can encompass these steps in their actual code, or you may find other ways of establishing common usage of the APIs. They are presented here as a helping hand to get beyond simple components and onto your first suite of components.
First off, please don't hate on me; I know I'm a total noob! I keep getting an error when I try to compile this and I can't seem to find out why. I'm reviewing the book I purchased to help me, but I still can't find out what's going wrong. I know it's something small and dumb... Can ANYONE help?
If you could please point out what needs to be corrected and why, it would be extremely helpful to me. Thank you so much in advance.
Code:
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    int secret_number, guess, count;

    cout << "Let's play a guessing game!" << endl << endl;
    cout << "I'll pick a number from 0 to 9, inclusive." << endl;
    cout << "Good luck!" << endl << endl;

    secret_number = rand() % 10;
    count = 0;

    do {
        if (count)
            cout << "Sorry, that's not it!" << endl;
        cout << "enter your guess: ";
        cin >> guess;
    {
    while ( guess != secret_number ) && ( ++count <= 9 ) );

    if ( guess == secret_number )
    {
        cout << "You got it in " << count << "guesses!" < endl;
        cout << "Good work!" << endl;
    }
    else
    {
        cout << "Sorry, you used up all your guesses!" << endl;
        cout << "The secret number was: " << secret_number << endl;
        cout << "Better luck next time!" << endl;
    }

    system("PAUSE");
    return EXIT_SUCCESS;
}
public class ZoomEvent extends GestureEvent
The event is delivered to the top-most node picked on the gesture coordinates at the time of the gesture start - the whole gesture is delivered to the same node even if the coordinates change during the gesture.
The event provides two values: zoomFactor is the zooming amount of this event, and totalZoomFactor is the zooming amount of the whole gesture. The values work well when multiplied with the node's scale properties (values greater than 1 for zooming in).
As with all gestures, zooming can be direct (performed directly at the concrete coordinates, as on a touch screen - the center point among all the touches is usually used as the gesture coordinates) or indirect (performed indirectly, as on a track pad - the mouse cursor location is usually used as the gesture coordinates).
The gesture's ZOOM events are surrounded by ZOOM_STARTED and ZOOM_FINISHED events. If zooming inertia is active on the given platform, some ZOOM events with isInertia() returning true can come after ZOOM_FINISHED.
public static final EventType<ZoomEvent> ANY
public static final EventType<ZoomEvent> ZOOM
public static final EventType<ZoomEvent> ZOOM_STARTED
public static final EventType<ZoomEvent> ZOOM_FINISHED
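For illustration (this snippet is not part of the original page; it assumes a JavaFX application with node already in a live scene graph):

```java
import javafx.scene.Node;
import javafx.scene.input.ZoomEvent;

public class ZoomHandlerSketch {
    // Scale a node by the per-event zoom factor delivered with each ZOOM event.
    public static void attach(Node node) {
        node.setOnZoom((ZoomEvent event) -> {
            node.setScaleX(node.getScaleX() * event.getZoomFactor());
            node.setScaleY(node.getScaleY() * event.getZoomFactor());
            event.consume();
        });
    }
}
```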
public double getZoomFactor()
Returns the zooming amount of this event. The value works well when multiplied with the node's scale properties (values greater than 1 for zooming in, values between 0 and 1 for zooming out).
public double getTotalZoomFactor()
Returns the zooming amount of the whole gesture. The value works well when multiplied with the node's scale properties (values greater than 1 for zooming in, values between 0 and 1 for zooming out).
public java.lang.String toString()
Returns a string representation of this ZoomEvent object.
Overrides:
toString in class GestureEvent
Returns:
a string representation of this ZoomEvent object.
Copyright (c) 2008, 2014, Oracle and/or its affiliates. All rights reserved.
Core Text Tutorial for iOS: Making a Magazine App
Update note: This tutorial has been updated to Swift 4 and Xcode 9 by Lyndsey Scott. The original tutorial was written by Marin Todorov.
Core Text is a low-level text engine that when used alongside the Core Graphics/Quartz framework, gives you fine-grained control over layout and formatting.
With iOS 7, Apple released a high-level library called Text Kit, which stores, lays out and displays text with various typesetting characteristics. Although Text Kit is powerful and usually sufficient when laying out text, Core Text can provide more control. For example, if you need to work directly with Quartz, use Core Text. If you need to build your own layout engines, Core Text will help you generate “glyphs and position them relative to each other with all the features of fine typesetting.”
This tutorial takes you through the process of creating a very simple magazine application using Core Text… for Zombies!
Oh, and Zombie Monthly’s readership has kindly agreed not to eat your brains as long as you’re busy using them for this tutorial… So you may want to get started soon! *gulp*
Note: To get the most out of this tutorial, you need to know the basics of iOS development first. If you’re new to iOS development, you should check out some of the other tutorials on this site first.
Getting Started
Open Xcode, create a new Swift universal project with the Single View Application Template and name it CoreTextMagazine.
Next, add the Core Text framework to your project:
- Click the project file in the Project navigator (the strip on the left hand side)
- Under “General”, scroll down to “Linked Frameworks and Libraries” at the bottom
- Click the “+” and search for “CoreText”
- Select “CoreText.framework” and click the “Add” button. That’s it!
Now the project is setup, it’s time to start coding.
Adding a Core Text View
For starters, you’ll create a custom
UIView, which will use Core Text in its
draw(_:) method.
Create a new Cocoa Touch Class file named CTView subclassing
UIView .
Open CTView.swift, and add the following under import UIKit:
import CoreText
Next, set this new custom view as the main view in the application. Open Main.storyboard, open the Utilities menu on the right-hand side, then select the Identity Inspector icon in its top toolbar. In the left-hand menu of the Interface Builder, select View. The Class field of the Utilities menu should now say UIView. To subclass the main view controller’s view, type CTView into the Class field and hit Enter.
Next, open CTView.swift and replace the commented-out draw(_:) with the following:
//1
override func draw(_ rect: CGRect) {
  // 2
  guard let context = UIGraphicsGetCurrentContext() else { return }

  // 3
  let path = CGMutablePath()
  path.addRect(bounds)

  // 4
  let attrString = NSAttributedString(string: "Hello World")

  // 5
  let framesetter = CTFramesetterCreateWithAttributedString(attrString as CFAttributedString)

  // 6
  let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, attrString.length), path, nil)

  // 7
  CTFrameDraw(frame, context)
}
Let’s go over this step-by-step.
- Upon view creation, draw(_:) will run automatically to render the view's backing layer.
- Unwrap the current graphic context you'll use for drawing.
- Create a path which bounds the drawing area, the entire view's bounds in this case.
- In Core Text, you use NSAttributedString, as opposed to String or NSString, to hold the text and its attributes. Initialize "Hello World" as an attributed string.
- CTFramesetterCreateWithAttributedString creates a CTFramesetter with the supplied attributed string. CTFramesetter will manage your font references and your drawing frames.
- Create a CTFrame, by having CTFramesetterCreateFrame render the entire string within path.
- CTFrameDraw draws the CTFrame in the given context.
That’s all you need to draw some simple text! Build, run and see the result.
Uh-oh… That doesn’t seem right, does it? Like many of the low level APIs, Core Text uses a Y-flipped coordinate system. To make matters worse, the content is also flipped vertically!
Add the following code directly below the guard let context statement to fix the content orientation:
// Flip the coordinate system
context.textMatrix = .identity
context.translateBy(x: 0, y: bounds.size.height)
context.scaleBy(x: 1.0, y: -1.0)
This code flips the content by applying a transformation to the view’s context.
Build and run the app. Don’t worry about status bar overlap, you’ll learn how to fix this with margins later.
Congrats on your first Core Text app! The zombies are pleased with your progress.
The Core Text Object Model
If you’re a bit confused about the
CTFramesetter and the
CTFrame – that’s OK because it’s time for some clarification. :]
Here’s what the Core Text object model looks like:
When you create a CTFramesetter reference and provide it with an NSAttributedString, an instance of CTTypesetter is automatically created for you to manage your fonts. Next you use the CTFramesetter to create one or more frames in which you'll be rendering text.

When you create a frame, you provide it with the subrange of text to render inside its rectangle. Core Text automatically creates a CTLine for each line of text and a CTRun for each piece of text with the same formatting. For example, Core Text would create a CTRun if you had several words in a row colored red, then another CTRun for the following plain text, then another CTRun for a bold sentence, etc. Core Text creates CTRuns for you based on the attributes of the supplied NSAttributedString. Furthermore, each of these CTRun objects can adopt different attributes, so you have fine control over kerning, ligatures, width, height and more.
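You can see the run splitting for yourself by counting the lines and runs Core Text creates for a two-style string. This is a sketch, not part of the original tutorial; the rect and attribute choices are arbitrary:

```swift
let str = NSMutableAttributedString(string: "Hello Zombies")
str.addAttribute(.foregroundColor, value: UIColor.red, range: NSRange(location: 0, length: 5))

let framesetter = CTFramesetterCreateWithAttributedString(str as CFAttributedString)
let path = CGMutablePath()
path.addRect(CGRect(x: 0, y: 0, width: 300, height: 100))
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, str.length), path, nil)

// A single line of text produces one CTLine...
let lines = CTFrameGetLines(frame) as! [CTLine]
// ...and the attribute change after "Hello" typically splits it into two CTRuns.
let runs = CTLineGetGlyphRuns(lines[0]) as! [CTRun]
```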
Onto the Magazine App!
Download and unarchive the zombie magazine materials.
Drag the folder into your Xcode project. When prompted, make sure Copy items if needed and Create groups are selected.
To create the app, you’ll need to apply various attributes to the text. You’ll create a simple text markup parser which will use tags to set the magazine’s formatting.
Create a new Cocoa Touch Class file named MarkupParser, subclassing NSObject.
First things first, take a quick look at zombies.txt. See how it contains bracketed formatting tags throughout the text? The “img src” tags reference magazine images and the “font color/face” tags determine text color and font.
Open MarkupParser.swift and replace its contents with the following:
import UIKit
import CoreText

class MarkupParser: NSObject {

  // MARK: - Properties
  var color: UIColor = .black
  var fontName: String = "Arial"
  var attrString: NSMutableAttributedString!
  var images: [[String: Any]] = []

  // MARK: - Initializers
  override init() {
    super.init()
  }

  // MARK: - Internal
  func parseMarkup(_ markup: String) {
  }
}
Here you’ve added properties to hold the font and text color; set their defaults; created a variable to hold the attributed string produced by
parseMarkup(_:); and created an array which will eventually hold the dictionary information defining the size, location and filename of images found within the text.
Writing a parser is usually hard work, but this tutorial’s parser will be very simple and support only opening tags — meaning a tag will set the style of the text following it until a new tag is found. The text markup will look like this:
These are <font color="red">red<font color="black"> and <font color="blue">blue <font color="black">words.
and produce output like this:
These are red and blue words.
Lets’ get parsin’!
Add the following to parseMarkup(_:):
//1
attrString = NSMutableAttributedString(string: "")

//2
do {
  let regex = try NSRegularExpression(pattern: "(.*?)(<[^>]+>|\\Z)",
                                      options: [.caseInsensitive, .dotMatchesLineSeparators])
  //3
  let chunks = regex.matches(in: markup,
                             options: NSRegularExpression.MatchingOptions(rawValue: 0),
                             range: NSRange(location: 0, length: markup.characters.count))
} catch _ {
}
attrStringstarts out empty, but will eventually contain the parsed markup.
- This regular expression matches blocks of text with the tags that immediately follow them. It says, “Look through the string until you find an opening bracket, then look through the string until you hit a closing bracket (or the end of the document).”
- Search the entire range of the markup for
regexmatches, then produce an array of the resulting
NSTextCheckingResults.
Note: To learn more about regular expressions, check out NSRegularExpression Tutorial.
Now you’ve parsed all the text and formatting tags into
chunks, you’ll loop through
chunks to build the attributed string.
But before that, did you notice how
matches(in:options:range:) accepts an
NSRange as an argument? There’s going to be lots of
NSRange to
Range conversions as you apply
NSRegularExpression functions to your markup
String. Swift’s been a pretty good friend to us all, so it deserves a helping hand.
Still in MarkupParser.swift, add the following
extension to the end of the file:
// MARK: - String
extension String {
  func range(from range: NSRange) -> Range<String.Index>? {
    guard let from16 = utf16.index(utf16.startIndex,
                                   offsetBy: range.location,
                                   limitedBy: utf16.endIndex),
      let to16 = utf16.index(from16, offsetBy: range.length, limitedBy: utf16.endIndex),
      let from = String.Index(from16, within: self),
      let to = String.Index(to16, within: self) else {
        return nil
    }
    return from ..< to
  }
}
This function converts the String's starting and ending indices as represented by an NSRange to String.UTF16View.Index format, i.e. the positions in a string's collection of UTF-16 code units; then converts each String.UTF16View.Index to String.Index format; which, when combined, produces Swift's range format: Range. As long as the indices are valid, the method will return the Range representation of the original NSRange.
Your Swift is now chill. Time to head back to processing the text and tag chunks.
Inside parseMarkup(_:), add the following below let chunks (within the do block):
let defaultFont: UIFont = .systemFont(ofSize: UIScreen.main.bounds.size.height / 40)
//1
for chunk in chunks {
  //2
  guard let markupRange = markup.range(from: chunk.range) else { continue }
  //3
  let parts = markup[markupRange].components(separatedBy: "<")
  //4
  let font = UIFont(name: fontName, size: UIScreen.main.bounds.size.height / 40) ?? defaultFont
  //5
  let attrs = [NSAttributedStringKey.foregroundColor: color, NSAttributedStringKey.font: font] as [NSAttributedStringKey: Any]
  let text = NSMutableAttributedString(string: parts[0], attributes: attrs)
  attrString.append(text)
}
- Loop through chunks.
- Get the current NSTextCheckingResult's range, unwrap the Range<String.Index> and proceed with the block as long as it exists.
- Break chunk into parts separated by "<". The first part contains the magazine text and the second part contains the tag (if it exists).
- Create a font using fontName, currently "Arial" by default, and a size relative to the device screen. If fontName doesn't produce a valid UIFont, set font to the default font.
- Create a dictionary of the font format, apply it to parts[0] to create the attributed string, then append that string to the result string.
To process the "font" tag, insert the following after
attrString.append(text):
// 1
if parts.count <= 1 {
  continue
}
let tag = parts[1]
//2
if tag.hasPrefix("font") {
  let colorRegex = try NSRegularExpression(pattern: "(?<=color=\")\\w+",
                                           options: NSRegularExpression.Options(rawValue: 0))
  colorRegex.enumerateMatches(in: tag,
                              options: NSRegularExpression.MatchingOptions(rawValue: 0),
                              range: NSMakeRange(0, tag.characters.count)) { (match, _, _) in
    //3
    if let match = match,
      let range = tag.range(from: match.range) {
      let colorSel = NSSelectorFromString(tag[range]+"Color")
      color = UIColor.perform(colorSel).takeRetainedValue() as? UIColor ?? .black
    }
  }
  //5
  let faceRegex = try NSRegularExpression(pattern: "(?<=face=\")[^\"]+",
                                          options: NSRegularExpression.Options(rawValue: 0))
  faceRegex.enumerateMatches(in: tag,
                             options: NSRegularExpression.MatchingOptions(rawValue: 0),
                             range: NSMakeRange(0, tag.characters.count)) { (match, _, _) in
    if let match = match,
      let range = tag.range(from: match.range) {
      fontName = String(tag[range])
    }
  }
} //end of font parsing
- If less than two parts, skip the rest of the loop body. Otherwise, store that second part as tag.
- If tag starts with "font", create a regex to find the font's "color" value, then use that regex to enumerate through tag's matching "color" values. In this case, there should be only one matching color value.
- If enumerateMatches(in:options:range:using:) returns a valid match with a valid range in tag, find the indicated value (ex. <font color="red"> returns "red") and append "Color" to form a UIColor selector. Perform that selector, then set your class's color to the returned color if it exists, or to black if not.
- Similarly, create a regex to process the font's "face" value. If it finds a match, set fontName to that string.
Great job! Now parseMarkup(_:) can take markup and produce an NSAttributedString for Core Text.
It's time to feed your app to some zombies! I mean, feed some zombies to your app... zombies.txt, that is. ;]
It's actually the job of a UIView to display content given to it, not load content. Open CTView.swift and add the following above draw(_:):
// MARK: - Properties
var attrString: NSAttributedString!

// MARK: - Internal
func importAttrString(_ attrString: NSAttributedString) {
  self.attrString = attrString
}
Next, delete let attrString = NSAttributedString(string: "Hello World") from draw(_:).
Here you've created an instance variable to hold an attributed string and a method to set it from elsewhere in your app.
Next, open ViewController.swift and add the following to viewDidLoad():
// 1
guard let file = Bundle.main.path(forResource: "zombies", ofType: "txt") else { return }

do {
  let text = try String(contentsOfFile: file, encoding: .utf8)
  // 2
  let parser = MarkupParser()
  parser.parseMarkup(text)
  (view as? CTView)?.importAttrString(parser.attrString)
} catch _ {
}
Let’s go over this step-by-step.
- Load the text from the zombies.txt file into a String.
- Create a new parser, feed in the text, then pass the returned attributed string to ViewController's CTView.
Build and run the app!
That's awesome! Thanks to about 50 lines of parsing, you can simply use a text file to hold the contents of your magazine app.
A Basic Magazine Layout
If you thought a monthly magazine of Zombie news could possibly fit onto one measly page, you'd be very wrong! Luckily, Core Text becomes particularly useful when laying out columns, since CTFrameGetVisibleStringRange can tell you how much text will fit into a given frame. Meaning, you can create a column, then once it's full, create another column, and so on.
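The column-filling idea can be modeled without Core Text at all. Below is a minimal, hypothetical sketch: the fixed per-column capacity and the text length are made-up stand-ins for what CTFrameGetVisibleStringRange would actually measure, but it shows how repeatedly asking "how much fit?" naturally yields a column count.

```swift
// Simplified model of the column-filling loop. In the real app,
// CTFrameGetVisibleStringRange reports how many characters fit in a
// column; here a fake fixed capacity stands in for that measurement.
func columnsNeeded(textLength: Int, capacityPerColumn: Int) -> Int {
    var textPos = 0
    var columns = 0
    while textPos < textLength {
        // Pretend each column fits exactly `capacityPerColumn` characters.
        let visibleLength = min(capacityPerColumn, textLength - textPos)
        textPos += visibleLength
        columns += 1
    }
    return columns
}

print(columnsNeeded(textLength: 2500, capacityPerColumn: 800)) // 4
```

The real loop you'll build below has the same shape, except "how much fit" comes from Core Text and each iteration also creates a view.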
For this app, you'll have to print columns, then pages, then a whole magazine lest you offend the undead, so... time to turn your CTView subclass into a UIScrollView.
Open CTView.swift and change the class CTView line to:
class CTView: UIScrollView {
See that, zombies? The app can now support an eternity of undead adventures! Yep -- with one line, scrolling and paging are now available.
Up until now, you've created your framesetter and frame inside draw(_:), but since you'll have many columns with different formatting, it's better to create individual column instances instead.
Create a new Cocoa Touch Class file named CTColumnView subclassing UIView.
Open CTColumnView.swift and add the following starter code:
import UIKit
import CoreText

class CTColumnView: UIView {

  // MARK: - Properties
  var ctFrame: CTFrame!

  // MARK: - Initializers
  required init(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)!
  }

  required init(frame: CGRect, ctframe: CTFrame) {
    super.init(frame: frame)
    self.ctFrame = ctframe
    backgroundColor = .white
  }

  // MARK: - Life Cycle
  override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }

    context.textMatrix = .identity
    context.translateBy(x: 0, y: bounds.size.height)
    context.scaleBy(x: 1.0, y: -1.0)

    CTFrameDraw(ctFrame, context)
  }
}
This code renders a CTFrame just as you'd originally done in CTView. The custom initializer, init(frame:ctframe:), sets:
- The view's frame.
- The CTFrame to draw into the context.
- The view's background color to white.
Next, create a new Swift file named CTSettings.swift, which will hold your column settings.
Replace the contents of CTSettings.swift with the following:
import UIKit
import Foundation

class CTSettings {
  //1
  // MARK: - Properties
  let margin: CGFloat = 20
  var columnsPerPage: CGFloat!
  var pageRect: CGRect!
  var columnRect: CGRect!

  // MARK: - Initializers
  init() {
    //2
    columnsPerPage = UIDevice.current.userInterfaceIdiom == .phone ? 1 : 2
    //3
    pageRect = UIScreen.main.bounds.insetBy(dx: margin, dy: margin)
    //4
    columnRect = CGRect(x: 0,
                        y: 0,
                        width: pageRect.width / columnsPerPage,
                        height: pageRect.height).insetBy(dx: margin, dy: margin)
  }
}
- The properties will determine the page margin (default of 20 for this tutorial); the number of columns per page; the frame of each page containing the columns; and the frame size of each column per page.
- Since this magazine serves both iPhone and iPad carrying zombies, show two columns on iPad and one column on iPhone so the number of columns is appropriate for each screen size.
- Inset the entire bounds of the page by the size of the margin to calculate pageRect.
- Divide pageRect's width by the number of columns per page, and inset that new frame with the margin for columnRect.
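To make the geometry concrete, here's a quick arithmetic sketch using made-up numbers: a hypothetical 768-point-wide iPad screen (the real code reads UIScreen.main.bounds), the tutorial's margin of 20, and two columns per page.

```swift
let margin = 20.0
let columnsPerPage = 2.0
let screenWidth = 768.0  // hypothetical iPad width in points

// pageRect: the screen bounds inset by the margin on every side.
let pageWidth = screenWidth - 2 * margin                   // 728.0

// columnRect: one column's share of the page, inset by the margin again.
let columnWidth = pageWidth / columnsPerPage - 2 * margin  // 324.0

print(pageWidth, columnWidth)  // 728.0 324.0
```

So each column ends up 324 points wide under these assumed numbers, with margins both around the page and around each column.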
Open CTView.swift and replace the entire contents with the following:
import UIKit
import CoreText

class CTView: UIScrollView {

  //1
  func buildFrames(withAttrString attrString: NSAttributedString,
                   andImages images: [[String: Any]]) {
    //2
    isPagingEnabled = true
    //3
    let framesetter = CTFramesetterCreateWithAttributedString(attrString as CFAttributedString)
    //4
    var pageView = UIView()
    var textPos = 0
    var columnIndex: CGFloat = 0
    var pageIndex: CGFloat = 0
    let settings = CTSettings()
    //5
    while textPos < attrString.length {
    }
  }
}
- buildFrames(withAttrString:andImages:) will create CTColumnViews, then add them to the scroll view.
- Enable the scroll view's paging behavior: whenever the user stops scrolling, the scroll view snaps into place so exactly one entire page shows at a time.
- CTFramesetter framesetter will create each column's CTFrame of attributed text.
- UIView pageViews will serve as containers for each page's column subviews; textPos will keep track of the next character; columnIndex will keep track of the current column; pageIndex will keep track of the current page; and settings gives you access to the app's margin size, columns per page, page frame and column frame settings.
- You're going to loop through attrString and lay out the text column by column until the current text position reaches the end.
Time to start looping attrString. Add the following inside the while textPos < attrString.length { loop:
//1
if columnIndex.truncatingRemainder(dividingBy: settings.columnsPerPage) == 0 {
  columnIndex = 0
  pageView = UIView(frame: settings.pageRect.offsetBy(dx: pageIndex * bounds.width, dy: 0))
  addSubview(pageView)
  //2
  pageIndex += 1
}

//3
let columnXOrigin = pageView.frame.size.width / settings.columnsPerPage
let columnOffset = columnIndex * columnXOrigin
let columnFrame = settings.columnRect.offsetBy(dx: columnOffset, dy: 0)
- If the column index modulo the number of columns per page is 0, the column is the first on its page, so create a new page view to hold the columns. To set its frame, take the margined settings.pageRect and offset its x origin by the current page index multiplied by the width of the screen; within the paging scroll view, each magazine page sits to the right of the previous one.
- Increment pageIndex.
- Divide pageView's width by settings.columnsPerPage to get the first column's x origin; multiply that origin by the column index to get the column offset; then create the frame of the current column by taking the standard columnRect and offsetting its x origin by columnOffset.
Next, add the following below the columnFrame initialization:
//1
let path = CGMutablePath()
path.addRect(CGRect(origin: .zero, size: columnFrame.size))
let ctframe = CTFramesetterCreateFrame(framesetter, CFRangeMake(textPos, 0), path, nil)
//2
let column = CTColumnView(frame: columnFrame, ctframe: ctframe)
pageView.addSubview(column)
//3
let frameRange = CTFrameGetVisibleStringRange(ctframe)
textPos += frameRange.length
//4
columnIndex += 1
- Create a CGMutablePath the size of the column, then, starting from textPos, render a new CTFrame with as much text as can fit.
- Create a CTColumnView with a CGRect columnFrame and CTFrame ctframe, then add the column to pageView.
- Use CTFrameGetVisibleStringRange(_:) to calculate the range of text contained within the column, then increment textPos by that range's length to reflect the current text position.
- Increment the column index by 1 before looping to the next column.
Lastly set the scroll view's content size after the loop:
contentSize = CGSize(width: CGFloat(pageIndex) * bounds.size.width, height: bounds.size.height)
By setting the content size to the screen width times the number of pages, the zombies can now scroll through to the end.
Open ViewController.swift, and replace
(view as? CTView)?.importAttrString(parser.attrString)
with the following:
(view as? CTView)?.buildFrames(withAttrString: parser.attrString, andImages: parser.images)
Build and run the app on an iPad. Check that double column layout! Drag right and left to go between pages. Lookin' good. :]
You've got columns and formatted text, but you're missing images. Drawing images with Core Text isn't so straightforward - it's a text framework, after all - but with the help of the markup parser you've already created, adding images shouldn't be too bad.
Drawing Images in Core Text
Although Core Text can't draw images, as a layout engine it can leave empty spaces to make room for them. By setting a CTRun's delegate, you can determine that CTRun's ascent space, descent space and width.
When Core Text reaches a CTRun with a CTRunDelegate, it asks the delegate, "How much space should I leave for this chunk of data?" By setting these properties in the CTRunDelegate, you can leave holes in the text for your images.
First add support for the "img" tag. Open MarkupParser.swift and find "} //end of font parsing". Add the following immediately after:
//1
else if tag.hasPrefix("img") {

  var filename: String = ""
  let imageRegex = try NSRegularExpression(
    pattern: "(?<=src=\")[^\"]+",
    options: NSRegularExpression.Options(rawValue: 0))
  imageRegex.enumerateMatches(
    in: tag,
    options: NSRegularExpression.MatchingOptions(rawValue: 0),
    range: NSMakeRange(0, tag.characters.count)) { (match, _, _) in
      if let match = match,
        let range = tag.range(from: match.range) {
          filename = String(tag[range])
      }
  }
  //2
  let settings = CTSettings()
  var width: CGFloat = settings.columnRect.width
  var height: CGFloat = 0

  if let image = UIImage(named: filename) {
    height = width * (image.size.height / image.size.width)
    // 3
    if height > settings.columnRect.height - font.lineHeight {
      height = settings.columnRect.height - font.lineHeight
      width = height * (image.size.width / image.size.height)
    }
  }
}
- If tag starts with "img", use a regex to search for the image's "src" value, i.e. the filename.
- Set the image width to the width of the column, and set its height so the image maintains its aspect ratio.
- If the image is too tall for the column, set the height to fit the column and reduce the width to maintain the image's aspect ratio. Since the text following the image will carry the empty-space attribute, that text must fit within the same column as the image; so set the image height to settings.columnRect.height - font.lineHeight.
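The width and height bookkeeping above is plain aspect-ratio math. Here's a standalone sketch of it (pure arithmetic, with made-up column and image sizes) mirroring steps 2 and 3:

```swift
// Fit an image into a column: start at full column width, then shrink
// proportionally if the resulting height exceeds the available height.
func fittedSize(imageWidth: Double, imageHeight: Double,
                columnWidth: Double, maxHeight: Double) -> (width: Double, height: Double) {
    var width = columnWidth
    var height = width * (imageHeight / imageWidth)
    if height > maxHeight {
        height = maxHeight
        width = height * (imageWidth / imageHeight)
    }
    return (width, height)
}

// A wide 1000x500 image in a 324-point column: no clamping needed.
print(fittedSize(imageWidth: 1000, imageHeight: 500, columnWidth: 324, maxHeight: 600))
// (width: 324.0, height: 162.0)

// A tall 500x1000 image gets clamped to the 600-point height.
print(fittedSize(imageWidth: 500, imageHeight: 1000, columnWidth: 324, maxHeight: 600))
// (width: 300.0, height: 600.0)
```

Either branch preserves the image's width-to-height ratio; only the clamped dimension changes which side drives the other.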
Next, add the following immediately after the if let image block:
//1
images += [["width": NSNumber(value: Float(width)),
            "height": NSNumber(value: Float(height)),
            "filename": filename,
            "location": NSNumber(value: attrString.length)]]
//2
struct RunStruct {
  let ascent: CGFloat
  let descent: CGFloat
  let width: CGFloat
}

let extentBuffer = UnsafeMutablePointer<RunStruct>.allocate(capacity: 1)
extentBuffer.initialize(to: RunStruct(ascent: height, descent: 0, width: width))
//3
var callbacks = CTRunDelegateCallbacks(version: kCTRunDelegateVersion1, dealloc: { (pointer) in
}, getAscent: { (pointer) -> CGFloat in
  let d = pointer.assumingMemoryBound(to: RunStruct.self)
  return d.pointee.ascent
}, getDescent: { (pointer) -> CGFloat in
  let d = pointer.assumingMemoryBound(to: RunStruct.self)
  return d.pointee.descent
}, getWidth: { (pointer) -> CGFloat in
  let d = pointer.assumingMemoryBound(to: RunStruct.self)
  return d.pointee.width
})
//4
let delegate = CTRunDelegateCreate(&callbacks, extentBuffer)
//5
let attrDictionaryDelegate = [(kCTRunDelegateAttributeName as NSAttributedStringKey): (delegate as Any)]
attrString.append(NSAttributedString(string: " ", attributes: attrDictionaryDelegate))
- Append a Dictionary containing the image's size, filename and text location to images.
- Define RunStruct to hold the properties that delineate the empty spaces. Then initialize a pointer to a RunStruct with an ascent equal to the image height and a width equal to the image width.
- Create a CTRunDelegateCallbacks that returns the ascent, descent and width properties belonging to pointers of type RunStruct.
- Use CTRunDelegateCreate to create a delegate instance binding the callbacks and the data parameter together.
- Create an attributed dictionary containing the delegate instance, then append a single space to attrString, which holds the position and sizing information for the hole in the text.
Now that MarkupParser is handling "img" tags, you'll need to adjust CTColumnView and CTView to render them.
Open CTColumnView.swift. Add the following below var ctFrame: CTFrame! to hold the column's images and frames:
var images: [(image: UIImage, frame: CGRect)] = []
Next, add the following to the bottom of draw(_:):
for imageData in images {
  if let image = imageData.image.cgImage {
    let imgBounds = imageData.frame
    context.draw(image, in: imgBounds)
  }
}
Here you loop through each image and draw it into the context within its proper frame.
Next, open CTView.swift and add the following property to the top of the class:
// MARK: - Properties
var imageIndex: Int!
imageIndex will keep track of the current image index as you draw the CTColumnViews.
Next, add the following to the top of buildFrames(withAttrString:andImages:):
imageIndex = 0
This marks the first element of the images array.
Next, add attachImagesWithFrame(_:ctframe:margin:columnView:) below buildFrames(withAttrString:andImages:):
func attachImagesWithFrame(_ images: [[String: Any]],
                           ctframe: CTFrame,
                           margin: CGFloat,
                           columnView: CTColumnView) {
  //1
  let lines = CTFrameGetLines(ctframe) as NSArray
  //2
  var origins = [CGPoint](repeating: .zero, count: lines.count)
  CTFrameGetLineOrigins(ctframe, CFRangeMake(0, 0), &origins)
  //3
  var nextImage = images[imageIndex]
  guard var imgLocation = nextImage["location"] as? Int else { return }
  //4
  for lineIndex in 0..<lines.count {
    let line = lines[lineIndex] as! CTLine
    //5
    if let glyphRuns = CTLineGetGlyphRuns(line) as? [CTRun],
      let imageFilename = nextImage["filename"] as? String,
      let img = UIImage(named: imageFilename) {
        for run in glyphRuns {
        }
    }
  }
}
- Get an array of ctframe's CTLine objects.
- Use CTFrameGetLineOrigins to copy ctframe's line origins into the origins array. By setting a range with a length of 0, CTFrameGetLineOrigins will know to traverse the entire CTFrame.
- Set nextImage to contain the attributed data of the current image. If nextImage contains the image's location, unwrap it and continue; otherwise, return early.
- Loop through the text's lines.
- If the line's glyph runs, filename and image with that filename all exist, loop through the glyph runs of that line.
Next, add the following inside the glyph run for-loop:
// 1
let runRange = CTRunGetStringRange(run)
if runRange.location > imgLocation || runRange.location + runRange.length <= imgLocation {
  continue
}
//2
var imgBounds: CGRect = .zero
var ascent: CGFloat = 0
imgBounds.size.width = CGFloat(CTRunGetTypographicBounds(run, CFRangeMake(0, 0), &ascent, nil, nil))
imgBounds.size.height = ascent
//3
let xOffset = CTLineGetOffsetForStringIndex(line, CTRunGetStringRange(run).location, nil)
imgBounds.origin.x = origins[lineIndex].x + xOffset
imgBounds.origin.y = origins[lineIndex].y
//4
columnView.images += [(image: img, frame: imgBounds)]
//5
imageIndex! += 1
if imageIndex < images.count {
  nextImage = images[imageIndex]
  imgLocation = (nextImage["location"] as AnyObject).intValue
}
- If the range of the present run does not contain the next image's location, skip the rest of the loop body. Otherwise, render the image here.
- Calculate the image width using CTRunGetTypographicBounds and set the height to the returned ascent.
- Get the run's x offset within the line with CTLineGetOffsetForStringIndex, then add it to the line origin to form imgBounds' origin.
- Add the image and its frame to the current CTColumnView.
- Increment the image index. If there's an image at images[imageIndex], update nextImage and imgLocation so they refer to that next image.
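The guard in step 1 is just an interval-containment test. Here's a tiny standalone version with hypothetical integer values, showing which runs the image location falls into:

```swift
// A run covers string indices [location, location + length).
// The image should be attached to the run containing its location.
func runContains(location: Int, length: Int, imageLocation: Int) -> Bool {
    return !(location > imageLocation || location + length <= imageLocation)
}

// A run covering indices 10..<15:
print(runContains(location: 10, length: 5, imageLocation: 12)) // true
print(runContains(location: 10, length: 5, imageLocation: 15)) // false (one past the end)
print(runContains(location: 10, length: 5, imageLocation: 9))  // false
```

Note the half-open range: an image located exactly at location + length belongs to the next run, which is why the real code uses <= rather than < in its second comparison.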
OK! Great! Almost there - one final step.
Add the following right above pageView.addSubview(column) inside buildFrames(withAttrString:andImages:) to attach images if they exist:
if images.count > imageIndex {
  attachImagesWithFrame(images,
                        ctframe: ctframe,
                        margin: settings.margin,
                        columnView: column)
}
Build and run on both iPhone and iPad!
Congrats! As thanks for all that hard work, the zombies have spared your brains! :]
Where to Go From Here?
As mentioned in the intro, Text Kit can usually replace Core Text; so try writing this same tutorial with Text Kit to see how it compares. That said, this Core Text lesson won't be in vain! Text Kit offers toll-free bridging to Core Text, so you can easily cast between the frameworks as needed.
Have any questions, comments or suggestions? Join in the forum discussion below!
Team
Each tutorial is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are:
- Author
Lyndsey Scott
- Tech Editor
Michael Gazdich
- Final Pass Editor
Darren Ferguson
- Team Lead
Andy Obusek | https://www.raywenderlich.com/153591/core-text-tutorial-ios-making-magazine-app | CC-MAIN-2018-13 | refinedweb | 4,604 | 58.99 |
Michael Kay wrote:
> When elements from a source document are copied to a result document, all
> their in-scope namespaces are copied too. This is because it is in general
> impossible to tell which namespaces are used and which aren't.
That makes sense...
> XSLT 2.0
> provides a means to suppress this copying of namespace nodes using the
> attribute copy-namespaces=yes|no on xsl:copy and xsl:copy-of.
Aha, thank you :-) I'd missed that. I obviously need to spend some more
time with the 2.0 spec...
Thanks again,
~~
Daniel Neades
Araxis Ltd | http://sourceforge.net/p/saxon/mailman/saxon-help/?viewmonth=200405&viewday=8 | CC-MAIN-2015-18 | refinedweb | 129 | 83.56 |